[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] Linux version 3.10.0-7.9-debug (green@centos7-base) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) ) #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 0.000000] Command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffcdfff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000bffce000-0x00000000bfffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000013edfffff] usable
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 2.8 present.
[ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-1.fc38 04/01/2014
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] e820: last_pfn = 0x13ee00 max_arch_pfn = 0x400000000
[ 0.000000] PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC
[ 0.000000] e820: last_pfn = 0xbffce max_arch_pfn = 0x400000000
[ 0.000000] found SMP MP-table at [mem 0x000f5b30-0x000f5b3f] mapped at [ffffffffff200b30]
[ 0.000000] Using GB pages for direct mapping
[ 0.000000] RAMDISK: [mem 0xbc2e2000-0xbffbffff]
[ 0.000000] Early table checksum verification disabled
[ 0.000000] ACPI: RSDP 00000000000f5950 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 00000000bffe1bb7 00034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 00000000bffe1a53 00074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 00000000bffe0040 01A13 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACS 00000000bffe0000 00040
[ 0.000000] ACPI: APIC 00000000bffe1ac7 00090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: HPET 00000000bffe1b57 00038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: WAET 00000000bffe1b8f 00028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000013edfffff]
[ 0.000000] NODE_DATA(0) allocated [mem 0x13e5e3000-0x13e609fff]
[ 0.000000] Reserving 128MB of memory at 768MB for crashkernel (System RAM: 4077MB)
[ 0.000000] kvm-clock: cpu 0, msr 1:3e592001, primary cpu clock
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000] kvm-clock: using sched offset of 373272179 cycles
[ 0.000000] Zone ranges:
[ 0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[ 0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[ 0.000000]   Normal   [mem 0x100000000-0x13edfffff]
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000]   node   0: [mem 0x00001000-0x0009efff]
[ 0.000000]   node   0: [mem 0x00100000-0xbffcdfff]
[ 0.000000]   node   0: [mem 0x100000000-0x13edfffff]
[ 0.000000] Initmem setup node 0 [mem 0x00001000-0x13edfffff]
[ 0.000000] ACPI: PM-Timer IO Port: 0x608
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[ 0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xbffce000-0xbfffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[ 0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
[ 0.000000] Booting paravirtualized kernel on KVM
[ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
[ 0.000000] percpu: Embedded 38 pages/cpu s115176 r8192 d32280 u524288
[ 0.000000] KVM setup async PF for cpu 0
[ 0.000000] kvm-stealtime: cpu 0, msr 13e2135c0
[ 0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1027487
[ 0.000000] Policy zone: Normal
[ 0.000000] Kernel command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] audit: disabled (until reboot)
[ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
[ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form
[ 0.000000] Memory: 3820268k/5224448k available (8172k kernel code, 1049168k absent, 355012k reserved, 5773k data, 2532k init)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=4.
[ 0.000000] Offload RCU callbacks from all CPUs
[ 0.000000] Offload RCU callbacks from CPUs: 0-3.
[ 0.000000] NR_IRQS:327936 nr_irqs:456 0
[ 0.000000] Console: colour *CGA 80x25
[ 0.000000] console [ttyS1] enabled
[ 0.000000] allocated 25165824 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[ 0.000000] kmemleak: Kernel memory leak detector disabled
[ 0.000000] tsc: Detected 2399.998 MHz processor
[ 0.464283] Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
[ 0.467035] pid_max: default: 32768 minimum: 301
[ 0.468647] Security Framework initialized
[ 0.469920] SELinux: Initializing.
[ 0.472653] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
[ 0.477002] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
[ 0.480138] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.482274] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.484886] Initializing cgroup subsys memory
[ 0.486261] Initializing cgroup subsys devices
[ 0.487892] Initializing cgroup subsys freezer
[ 0.489263] Initializing cgroup subsys net_cls
[ 0.491193] Initializing cgroup subsys blkio
[ 0.493266] Initializing cgroup subsys perf_event
[ 0.497516] Initializing cgroup subsys hugetlb
[ 0.500315] Initializing cgroup subsys pids
[ 0.502509] Initializing cgroup subsys net_prio
[ 0.504197] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[ 0.507448] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.509234] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.511078] tlb_flushall_shift: 6
[ 0.512283] FEATURE SPEC_CTRL Present
[ 0.513402] FEATURE IBPB_SUPPORT Present
[ 0.514635] Spectre V2 : Enabling Indirect Branch Prediction Barrier
[ 0.516751] Spectre V2 : Vulnerable
[ 0.517852] Speculative Store Bypass: Vulnerable
[ 0.520244] debug: unmapping init [mem 0xffffffff82019000-0xffffffff8201ffff]
[ 0.528378] ACPI: Core revision 20130517
[ 0.531695] ACPI: All ACPI Tables successfully acquired
[ 0.533512] ftrace: allocating 30294 entries in 119 pages
[ 0.593123] Enabling x2apic
[ 0.594333] Enabled x2apic
[ 0.595696] Switched APIC routing to physical x2apic.
[ 0.599329] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.601337] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (fam: 06, model: 3e, stepping: 04)
[ 0.604857] Performance Events: IvyBridge events, full-width counters, Intel PMU driver.
[ 0.607921] ... version: 2
[ 0.609135] ... bit width: 48
[ 0.610825] ... generic registers: 4
[ 0.612155] ... value mask: 0000ffffffffffff
[ 0.613908] ... max period: 00007fffffffffff
[ 0.615446] ... fixed-purpose events: 3
[ 0.616628] ... event mask: 000000070000000f
[ 0.618182] KVM setup paravirtual spinlock
[ 0.621560] smpboot: Booting Node 0, Processors #1
[ 0.623481] kvm-clock: cpu 1, msr 1:3e592041, secondary cpu clock
[ 0.646131] KVM setup async PF for cpu 1
[ 0.647367] kvm-stealtime: cpu 1, msr 13e2935c0
 #2
[ 0.654031] kvm-clock: cpu 2, msr 1:3e592081, secondary cpu clock
 #3 OK
[ 0.665179] KVM setup async PF for cpu 2
[ 0.665186] kvm-stealtime: cpu 2, msr 13e3135c0
[ 0.668165] kvm-clock: cpu 3, msr 1:3e5920c1, secondary cpu clock
[ 0.678344] Brought up 4 CPUs
[ 0.679151] smpboot: Max logical packages: 1
[ 0.680357] KVM setup async PF for cpu 3
[ 0.680363] kvm-stealtime: cpu 3, msr 13e3935c0
[ 0.687531] smpboot: Total of 4 processors activated (19199.98 BogoMIPS)
[ 0.716907] devtmpfs: initialized
[ 0.718166] x86/mm: Memory block size: 128MB
[ 0.731149] EVM: security.selinux
[ 0.733283] EVM: security.ima
[ 0.734304] EVM: security.capability
[ 0.740860] atomic64 test passed for x86-64 platform with CX8 and with SSE
[ 0.745749] NET: Registered protocol family 16
[ 0.749691] cpuidle: using governor haltpoll
[ 0.751957] ACPI: bus type PCI registered
[ 0.753418] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.757782] PCI: Using configuration type 1 for base access
[ 0.759951] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on
[ 0.776938] ACPI: Added _OSI(Module Device)
[ 0.778821] ACPI: Added _OSI(Processor Device)
[ 0.781102] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.783006] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.785480] ACPI: Added _OSI(Linux-Dell-Video)
[ 0.791812] ACPI: Interpreter enabled
[ 0.792874] ACPI: (supports S0 S3 S4 S5)
[ 0.794152] ACPI: Using IOAPIC for interrupt routing
[ 0.795900] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 0.799415] ACPI: Enabled 2 GPEs in block 00 to 0F
[ 0.809657] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[ 0.811923] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[ 0.814636] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[ 0.816888] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ 0.825737] acpiphp: Slot [2] registered
[ 0.827365] acpiphp: Slot [3] registered
[ 0.828849] acpiphp: Slot [4] registered
[ 0.830104] acpiphp: Slot [5] registered
[ 0.831387] acpiphp: Slot [6] registered
[ 0.832822] acpiphp: Slot [7] registered
[ 0.834257] acpiphp: Slot [8] registered
[ 0.835729] acpiphp: Slot [9] registered
[ 0.837176] acpiphp: Slot [10] registered
[ 0.838794] acpiphp: Slot [11] registered
[ 0.840296] acpiphp: Slot [12] registered
[ 0.841940] acpiphp: Slot [13] registered
[ 0.847674] acpiphp: Slot [14] registered
[ 0.849316] acpiphp: Slot [15] registered
[ 0.850898] acpiphp: Slot [16] registered
[ 0.852609] acpiphp: Slot [17] registered
[ 0.854247] acpiphp: Slot [18] registered
[ 0.855611] acpiphp: Slot [19] registered
[ 0.858584] acpiphp: Slot [20] registered
[ 0.860231] acpiphp: Slot [21] registered
[ 0.864350] acpiphp: Slot [22] registered
[ 0.865896] acpiphp: Slot [23] registered
[ 0.867671] acpiphp: Slot [24] registered
[ 0.873597] acpiphp: Slot [25] registered
[ 0.875383] acpiphp: Slot [26] registered
[ 0.877035] acpiphp: Slot [27] registered
[ 0.878751] acpiphp: Slot [28] registered
[ 0.881003] acpiphp: Slot [29] registered
[ 0.882150] acpiphp: Slot [30] registered
[ 0.885774] acpiphp: Slot [31] registered
[ 0.886938] PCI host bridge to bus 0000:00
[ 0.888089] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
[ 0.891990] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
[ 0.894895] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[ 0.897418] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
[ 0.899618] pci_bus 0000:00: root bus resource [mem 0x140000000-0x1bfffffff window]
[ 0.901974] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 0.928272] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
[ 0.930234] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
[ 0.932228] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
[ 0.935520] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
[ 0.939240] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
[ 0.941843] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
[ 1.317119] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 1.325195] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[ 1.327277] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 1.331251] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 1.332996] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[ 1.338819] vgaarb: loaded
[ 1.340347] SCSI subsystem initialized
[ 1.342040] ACPI: bus type USB registered
[ 1.345454] usbcore: registered new interface driver usbfs
[ 1.347352] usbcore: registered new interface driver hub
[ 1.351923] usbcore: registered new device driver usb
[ 1.356525] PCI: Using ACPI for IRQ routing
[ 1.359536] NetLabel: Initializing
[ 1.360398] NetLabel: domain hash size = 128
[ 1.362499] NetLabel: protocols = UNLABELED CIPSOv4
[ 1.367273] NetLabel: unlabeled traffic allowed by default
[ 1.370258] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[ 1.372927] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[ 1.378950] amd_nb: Cannot enumerate AMD northbridges
[ 1.393937] Switched to clocksource kvm-clock
[ 1.451749] pnp: PnP ACPI init
[ 1.452839] ACPI: bus type PNP registered
[ 1.461210] pnp: PnP ACPI: found 6 devices
[ 1.464029] ACPI: bus type PNP unregistered
[ 1.498100] NET: Registered protocol family 2
[ 1.504235] TCP established hash table entries: 32768 (order: 6, 262144 bytes)
[ 1.506586] TCP bind hash table entries: 32768 (order: 8, 1048576 bytes)
[ 1.514482] TCP: Hash tables configured (established 32768 bind 32768)
[ 1.520187] TCP: reno registered
[ 1.521556] UDP hash table entries: 2048 (order: 5, 196608 bytes)
[ 1.527341] UDP-Lite hash table entries: 2048 (order: 5, 196608 bytes)
[ 1.532260] NET: Registered protocol family 1
[ 1.545024] RPC: Registered named UNIX socket transport module.
[ 1.546831] RPC: Registered udp transport module.
[ 1.552240] RPC: Registered tcp transport module.
[ 1.553528] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 1.562811] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 1.565913] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 1.571924] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 1.580298] Unpacking initramfs...
[ 1.612897] hrtimer: interrupt took 7025499 ns
[ 5.395453] debug: unmapping init [mem 0xffff8800bc2e2000-0xffff8800bffbffff]
[ 5.407674] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[ 5.421944] software IO TLB [mem 0xb82e2000-0xbc2e2000] (64MB) mapped at [ffff8800b82e2000-ffff8800bc2e1fff]
[ 5.425087] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer
[ 5.427574] RAPL PMU: hw unit of domain pp0-core 2^-0 Joules
[ 5.429492] RAPL PMU: hw unit of domain package 2^-0 Joules
[ 5.430955] RAPL PMU: hw unit of domain dram 2^-0 Joules
[ 5.453301] cryptomgr_test (52) used greatest stack depth: 14480 bytes left
[ 5.463976] futex hash table entries: 1024 (order: 4, 65536 bytes)
[ 5.464021] Initialise system trusted keyring
[ 5.552993] HugeTLB registered 1 GB page size, pre-allocated 0 pages
[ 5.556462] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 5.576214] zpool: loaded
[ 5.578151] zbud: loaded
[ 5.579481] VFS: Disk quotas dquot_6.6.0
[ 5.581077] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 5.586770] NFS: Registering the id_resolver key type
[ 5.589578] Key type id_resolver registered
[ 5.590858] Key type id_legacy registered
[ 5.592663] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[ 5.595965] Key type big_key registered
[ 5.604204] cryptomgr_test (58) used greatest stack depth: 14048 bytes left
[ 5.610306] cryptomgr_test (60) used greatest stack depth: 13664 bytes left
[ 5.611505] NET: Registered protocol family 38
[ 5.611517] Key type asymmetric registered
[ 5.611520] Asymmetric key parser 'x509' registered
[ 5.611645] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
[ 5.611732] io scheduler noop registered
[ 5.611736] io scheduler deadline registered (default)
[ 5.611801] io scheduler cfq registered
[ 5.611807] io scheduler mq-deadline registered
[ 5.611811] io scheduler kyber registered
[ 5.619281] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 5.619294] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 5.630531] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[ 5.632790] ACPI: Power Button [PWRF]
[ 5.634741] GHES: HEST is not enabled!
[ 5.725326] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[ 5.959510] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 11
[ 6.422967] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ 6.438993] tsc: Refined TSC clocksource calibration: 2399.986 MHz
[ 6.667371] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
[ 7.193969] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 7.274883] 00:03: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[ 7.367461] 00:04: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 7.382655] Non-volatile memory driver v1.3
[ 7.385853] Linux agpgart interface v0.103
[ 7.393327] crash memory driver: version 1.1
[ 7.397724] nbd: registered device at major 43
[ 7.453391] virtio_blk virtio1: [vda] 60784 512-byte logical blocks (31.1 MB/29.6 MiB)
[ 7.498465] virtio_blk virtio2: [vdb] 2097152 512-byte logical blocks (1.07 GB/1.00 GiB)
[ 7.544378] virtio_blk virtio3: [vdc] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[ 7.590342] virtio_blk virtio4: [vdd] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[ 7.627676] virtio_blk virtio5: [vde] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[ 7.657342] virtio_blk virtio6: [vdf] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[ 7.670033] rdac: device handler registered
[ 7.672089] hp_sw: device handler registered
[ 7.673604] emc: device handler registered
[ 7.674981] libphy: Fixed MDIO Bus: probed
[ 7.688046] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 7.694016] ehci-pci: EHCI PCI platform driver
[ 7.695459] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 7.705444] ohci-pci: OHCI PCI platform driver
[ 7.709971] uhci_hcd: USB Universal Host Controller Interface driver
[ 7.718045] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[ 7.740578] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 7.747649] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 7.753943] mousedev: PS/2 mouse device common for all mice
[ 7.770463] rtc_cmos 00:05: RTC can wake from S4
[ 7.777324] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0
[ 7.783989] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[ 7.787948] rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
[ 7.810153] hidraw: raw HID events driver (C) Jiri Kosina
[ 7.819311] usbcore: registered new interface driver usbhid
[ 7.823103] usbhid: USB HID core driver
[ 7.824369] drop_monitor: Initializing network drop monitor service
[ 7.846033] Netfilter messages via NETLINK v0.30.
[ 7.847549] TCP: cubic registered
[ 7.848472] Initializing XFRM netlink socket
[ 7.854038] NET: Registered protocol family 10
[ 7.861542] NET: Registered protocol family 17
[ 7.863005] Key type dns_resolver registered
[ 7.870223] mce: Using 10 MCE banks
[ 7.886282] Loading compiled-in X.509 certificates
[ 7.889543] Loaded X.509 cert 'Magrathea: Glacier signing key: e34d0e1b7fcf5b414cce75d36d8482945c781ed6'
[ 7.899180] registered taskstats version 1
[ 7.931800] modprobe (72) used greatest stack depth: 13456 bytes left
[ 7.958902] Key type trusted registered
[ 8.002314] Key type encrypted registered
[ 8.003691] IMA: No TPM chip found, activating TPM-bypass! (rc=-19)
[ 8.009694] BERT: Boot Error Record Table support is disabled. Enable it by using bert_enable as kernel parameter.
[ 8.015845] rtc_cmos 00:05: setting system clock to 2024-04-19 06:09:06 UTC (1713506946)
[ 8.021977] debug: unmapping init [mem 0xffffffff81da0000-0xffffffff82018fff]
[ 8.025434] Write protecting the kernel read-only data: 12288k
[ 8.029671] debug: unmapping init [mem 0xffff8800017fe000-0xffff8800017fffff]
[ 8.033303] debug: unmapping init [mem 0xffff880001b9b000-0xffff880001bfffff]
[ 8.049993] random: systemd: uninitialized urandom read (16 bytes read)
[ 8.053193] random: systemd: uninitialized urandom read (16 bytes read)
[ 8.055234] random: systemd: uninitialized urandom read (16 bytes read)
[ 8.058933] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[ 8.064580] systemd[1]: Detected virtualization kvm.
[ 8.066203] systemd[1]: Detected architecture x86-64.
[ 8.067908] systemd[1]: Running in initial RAM disk.

Welcome to CentOS Linux 7 (Core) dracut-033-572.el7 (Initramfs)!

[ 8.072418] systemd[1]: No hostname configured.
[ 8.073843] systemd[1]: Set hostname to .
[ 8.076087] random: systemd: uninitialized urandom read (16 bytes read)
[ 8.078442] systemd[1]: Initializing machine ID from random generator.
[ 8.316513] dracut-rootfs-g (86) used greatest stack depth: 13264 bytes left
[ 8.319746] random: systemd: uninitialized urandom read (16 bytes read)
[ 8.322123] random: systemd: uninitialized urandom read (16 bytes read)
[ 8.325427] random: systemd: uninitialized urandom read (16 bytes read)
[ 8.329427] random: systemd: uninitialized urandom read (16 bytes read)
[ 8.337356] random: systemd: uninitialized urandom read (16 bytes read)
[ 8.348452] random: systemd: uninitialized urandom read (16 bytes read)
[ 8.371758] systemd[1]: Reached target Local File Systems.
[ OK ] Reached target Local File Systems.
[ 8.389728] systemd[1]: Created slice Root Slice.
[ OK ] Created slice Root Slice.
[ 8.404246] systemd[1]: Listening on udev Control Socket.
[ OK ] Listening on udev Control Socket.
[ 8.412327] systemd[1]: Created slice System Slice.
[ OK ] Created slice System Slice.
[ 8.433350] systemd[1]: Reached target Slices.
[ OK ] Reached target Slices.
[ 8.456385] systemd[1]: Reached target Timers.
[ OK ] Reached target Timers.
[ 8.476487] systemd[1]: Listening on udev Kernel Socket.
[ OK ] Listening on udev Kernel Socket.
[ 8.487216] systemd[1]: Listening on Journal Socket.
[ OK ] Listening on Journal Socket.
[ 8.509437] systemd[1]: Reached target Sockets.
[ OK ] Reached target Sockets.
[ 8.523406] systemd[1]: Starting Setup Virtual Console...
Starting Setup Virtual Console...
[ 8.534310] systemd[1]: Starting Load Kernel Modules...
Starting Load Kernel Modules...
[ 8.550524] systemd[1]: Starting Create list of required static device nodes for the current kernel...
Starting Create list of required st... nodes for the current kernel...
[ 8.564509] systemd[1]: Starting dracut cmdline hook...
Starting dracut cmdline hook...
[ 8.575720] systemd[1]: Reached target Swap.
[ OK ] Reached target Swap.
[ 8.580688] systemd[1]: Starting Journal Service...
Starting Journal Service...
[ 8.630352] systemd[1]: Started Setup Virtual Console.
[ OK ] Started Setup Virtual Console.
[ 8.663950] systemd[1]: Started Load Kernel Modules.
[ OK ] Started Load Kernel Modules.
[ 8.693594] systemd[1]: Started Create list of required static device nodes for the current kernel.
[ OK ] Started Create list of required sta...ce nodes for the current kernel.
[ 8.720207] systemd[1]: Started Journal Service.
[ OK ] Started Journal Service.
Starting Create Static Device Nodes in /dev...
Starting Apply Kernel Variables...
[ OK ] Started Create Static Device Nodes in /dev.
[ OK ] Started Apply Kernel Variables.
[ 8.942537] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
[ OK ] Started dracut cmdline hook.
Starting dracut pre-udev hook...
[ OK ] Started dracut pre-udev hook.
Starting udev Kernel Device Manager...
[ OK ] Started udev Kernel Device Manager.
Starting dracut pre-trigger hook...
[ OK ] Started dracut pre-trigger hook.
Starting udev Coldplug all Devices...
Mounting Configuration File System...
[ OK ] Started udev Coldplug all Devices.
[ OK ] Mounted Configuration File System.
Starting Show Plymouth Boot Screen...
Starting dracut initqueue hook...
[ OK ] Reached target System Initialization.
[ OK ] Started Show Plymouth Boot Screen.
[ OK ] Started Forward Password Requests to Plymouth Directory Watch.
[ OK ] Reached target Paths.
[ OK ] Reached target Basic System.
[ 10.422166] random: fast init done
[ 10.633283] scsi host0: ata_piix
[ 10.669883] scsi host1: ata_piix
[ 10.675542] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc320 irq 14
[ 10.711342] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc328 irq 15
[ 11.030256] ip (323) used greatest stack depth: 13080 bytes left
[ 11.237247] ip (346) used greatest stack depth: 12336 bytes left
[ 13.343065] dracut-initqueue[278]: RTNETLINK answers: File exists
[ 14.802270] dracut-initqueue[278]: bs=4096, sz=32212254720 bytes
[ OK ] Started dracut initqueue hook.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
[ OK ] Reached target Initrd Root File System.
Starting Reload Configuration from the Real Root...
Mounting /sysroot...
[ 16.441422] EXT4-fs (nbd0): mounted filesystem with ordered data mode. Opts: (null)
[ OK ] Mounted /sysroot.
[ OK ] Started Reload Configuration from the Real Root.
[ OK ] Reached target Initrd File Systems.
[ OK ] Reached target Initrd Default Target.
Starting dracut pre-pivot and cleanup hook...
[ OK ] Started dracut pre-pivot and cleanup hook.
Starting Cleaning Up and Shutting Down Daemons...
[ OK ] Stopped dracut pre-pivot and cleanup hook.
[ OK ] Stopped target Initrd Default Target.
Starting Plymouth switch root service...
[ OK ] Stopped target Remote File Systems.
[ OK ] Stopped target Basic System.
[ OK ] Stopped target System Initialization.
[ OK ] Stopped Apply Kernel Variables.
[ OK ] Stopped Load Kernel Modules.
[ OK ] Stopped target Local File Systems.
[ OK ] Stopped target Paths.
[ OK ] Stopped target Slices.
[ OK ] Stopped target Remote File Systems (Pre).
[ OK ] Stopped target Sockets.
[ OK ] Stopped target Timers.
[ OK ] Stopped target Swap.
[ OK ] Stopped dracut initqueue hook.
[ OK ] Stopped udev Coldplug all Devices.
[ OK ] Stopped dracut pre-trigger hook.
Stopping udev Kernel Device Manager...
[ OK ] Started Cleaning Up and Shutting Down Daemons.
[ OK ] Stopped udev Kernel Device Manager.
[ OK ] Started Plymouth switch root service.
[ OK ] Stopped dracut pre-udev hook.
[ OK ] Stopped dracut cmdline hook.
[ OK ] Stopped Create Static Device Nodes in /dev.
[ OK ] Stopped Create list of required sta...ce nodes for the current kernel.
[ OK ] Closed udev Kernel Socket.
[ OK ] Closed udev Control Socket.
Starting Cleanup udevd DB...
[ OK ] Started Cleanup udevd DB.
[ OK ] Reached target Switch Root.
Starting Switch Root...
[ 18.057812] systemd-journald[108]: Received SIGTERM from PID 1 (systemd).
[ 18.758589] SELinux: Disabled at runtime.
[ 18.927249] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 18.935556] systemd[1]: Inserted module 'ip_tables'

Welcome to CentOS Linux 7 (Core)!

[ OK ] Stopped Switch Root.
[ OK ] Stopped Journal Service.
Starting Journal Service...
[ OK ] Listening on udev Control Socket.
Mounting Debug File System...
[ OK ] Created slice system-getty.slice.
Mounting POSIX Message Queue File System...
[ OK ] Reached target rpc_pipefs.target.
Mounting Huge Pages File System...
[ OK ] Reached target Local Encrypted Volumes.
[ OK ] Created slice system-selinux\x2dpol...grate\x2dlocal\x2dchanges.slice.
[ OK ] Listening on Delayed Shutdown Socket.
[ OK ] Started Forward Password Requests to Wall Directory Watch.
[ OK ] Listening on udev Kernel Socket.
Starting udev Coldplug all Devices...
[ OK ] Set up automount Arbitrary Executab...ats File System Automount Point.
Starting Create list of required st... nodes for the current kernel...
Starting Load Kernel Modules...
Starting Remount Root and Kernel File Systems...
[ OK ] Stopped target Switch Root.
[ OK ] Stopped target Initrd File Systems.
[ OK ] Stopped target Initrd Root File System.
Starting Read and set NIS domainname from /etc/sysconfig/network...
[ OK ] Created slice User and Session Slice.
[ OK ] Reached target Slices.
[ OK ] Created slice system-serial\x2dgetty.slice.
[ OK ] Listening on /dev/initctl Compatibility Named Pipe.
Starting Set Up Additional Binary Formats...
[ OK ] Started Create list of required sta...ce nodes for the current kernel.
[ OK ] Started Load Kernel Modules.
Starting Apply Kernel Variables...
Starting Create Static Device Nodes in /dev...
[ OK ] Mounted Huge Pages File System.
[ OK ] Mounted POSIX Message Queue File System.
[ OK ] Mounted Debug File System.
Mounting Arbitrary Executable File Formats File System...
[ OK ] Started Read and set NIS domainname from /etc/sysconfig/network.
[ OK ] Started Journal Service.
[ OK ] Started udev Coldplug all Devices.
[ OK ] Mounted Arbitrary Executable File Formats File System.
[ OK ] Started Apply Kernel Variables.
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
Starting Flush Journal to Persistent Storage...
Starting Configure read-only root support...
[ OK ] Started Set Up Additional Binary Formats.
[ OK ] Started Create Static Device Nodes in /dev.
[ OK ] Reached target Local File Systems (Pre).
Mounting /mnt...
Starting udev Kernel Device Manager...
[ OK ] Mounted /mnt.
[ 21.640463] systemd-journald[570]: Received request to flush runtime journal from PID 1
[ OK ] Started Flush Journal to Persistent Storage.
[ OK ] Started udev Kernel Device Manager.
[ OK ] Found device /dev/ttyS1.
[ 22.978274] input: PC Speaker as /devices/platform/pcspkr/input/input3
[ 23.013221] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
[ OK ] Found device /dev/ttyS0.
[ 23.399351] cryptd: max_cpu_qlen set to 1000
[ OK ] Found device /dev/vda.
[ OK ] Found device /dev/disk/by-label/SWAP.
Activating swap /dev/disk/by-label/SWAP...
[ 24.064115] Adding 1048572k swap on /dev/vdb. Priority:-2 extents:1 across:1048572k FS
Mounting /home/green/git/lustre-release...
[ OK ] Activated swap /dev/disk/by-label/SWAP.
[ OK ] Reached target Swap.
[ 24.447358] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[ OK ] Mounted /home/green/git/lustre-release.
[ 24.663220] AVX version of gcm_enc/dec engaged.
[ 24.664628] AES CTR mode by8 optimization enabled
[ 24.766395] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[ 24.782228] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni)
[ 25.910874] EDAC MC: Ver: 3.0.0
[ 26.061631] EDAC sbridge: Ver: 1.1.2
[*     ] A start job is running for Configur...nly root support (9s / no limit)
[**    ] A start job is running for Configur...ly root support (10s / no limit)
[***   ] A start job is running for Configur...ly root support (10s / no limit)
[ ***  ] A start job is running for Configur...ly root support (11s / no limit)
[  *** ] A start job is running for Configur...ly root support (11s / no limit)
[   ***] A start job is running for Configur...ly root support (12s / no limit)
[    **] A start job is running for Configur...ly root support (13s / no limit)
[     *] A start job is running for Configur...ly root support (13s / no limit)
[    **] A start job is running for Configur...ly root support (14s / no limit)
[   ***] A start job is running for Configur...ly root support (14s / no limit)
[  *** ] A start job is running for Configur...ly root support (15s / no limit)
[ ***  ] A start job is running for Configur...ly root support (16s / no limit)
[***   ] A start job is running for Configur...ly root support (16s / no limit)
[**    ] A start job is running for Configur...ly root support (17s / no limit)
[*     ] A start job is running for Configur...ly root support (17s / no limit)
[**    ] A start job is running for Configur...ly root support (18s / no limit)
[***   ] A start job is running for Configur...ly root support (18s / no limit)
[ ***  ] A start job is running for Configur...ly root support (19s / no limit)
[  *** ] A start job is running for Configur...ly root support (19s / no limit)
[   ***] A start job is running for Configur...ly root support (20s / no limit)
[    **] A start job is running for Configur...ly root support (20s / no limit)
[ 40.583327] mount.nfs (789) used greatest stack depth: 10704 bytes left
[ OK ] Started Configure read-only root support.
Starting Load/Save Random Seed...
[ OK ] Reached target Local File Systems.
Starting Preprocess NFS configuration...
Starting Mark the need to relabel after reboot...
Starting Tell Plymouth To Write Out Runtime Data...
Starting Rebuild Journal Catalog...
Starting Create Volatile Files and Directories...
[ OK ] Started Mark the need to relabel after reboot.
[FAILED] Failed to start Create Volatile Files and Directories.
See 'systemctl status systemd-tmpfiles-setup.service' for details.
Starting Update UTMP about System Boot/Shutdown...
[ OK ] Started Preprocess NFS configuration.
[ OK ] Started Load/Save Random Seed.
[ OK ] Started Tell Plymouth To Write Out Runtime Data.
[ OK ] Started Update UTMP about System Boot/Shutdown.
[FAILED] Failed to start Rebuild Journal Catalog.
See 'systemctl status systemd-journal-catalog-update.service' for details.
Starting Update is Completed...
[ OK ] Started Update is Completed.
[ OK ] Reached target System Initialization.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Timers.
[ OK ] Listening on RPCbind Server Activation Socket.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Reached target Sockets.
[ OK ] Started Flexible branding.
[ OK ] Reached target Paths.
[ OK ] Reached target Basic System.
Starting Login Service...
[ OK ] Started D-Bus System Message Bus.
Starting Network Manager...
Starting Dump dmesg to /var/log/dmesg...
Starting GSSAPI Proxy Daemon...
[ OK ] Started Login Service.
[ OK ] Started Dump dmesg to /var/log/dmesg.
[ OK ] Started GSSAPI Proxy Daemon.
[ OK ] Reached target NFS client services.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
Starting Permit User Sessions...
[ OK ] Started Permit User Sessions.
[ OK ] Started Network Manager.
[ OK ] Reached target Network.
Starting /etc/rc.d/rc.local Compatibility...
Starting OpenSSH server daemon...
Starting Network Manager Wait Online...
Starting Hostname Service...
[ OK ] Started OpenSSH server daemon.
[ OK ] Started /etc/rc.d/rc.local Compatibility.
[ OK ] Started Hostname Service.
Starting Network Manager Script Dispatcher Service...
Starting Wait for Plymouth Boot Screen to Quit...
Starting Terminate Plymouth Boot Screen...

CentOS Linux 7 (Core)
Kernel 3.10.0-7.9-debug on an x86_64

oleg340-server login:
[ 54.456274] device-mapper: uevent: version 1.0.3
[ 54.463657] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel@redhat.com
[ 64.175017] libcfs: loading out-of-tree module taints kernel.
[ 64.177882] libcfs: module verification failed: signature and/or required key missing - tainting kernel
[ 64.203069] LNet: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 1
[ 64.215259] alg: No test for adler32 (adler32-zlib)
[ 65.045187] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_hostid
[ 73.229305] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing load_modules_local
[ 74.169333] Lustre: Lustre: Build Version: 2.15.4_18_g9f02020
[ 74.645874] LNet: Added LNI 192.168.203.140@tcp [8/256/0/180]
[ 74.648081] LNet: Accept secure, port 988
[ 76.290048] Key type lgssc registered
[ 76.914024] Lustre: Echo OBD driver; http://www.lustre.org/
[ 83.316165] icp: module license 'CDDL' taints kernel.
[ 83.318010] Disabling lock debugging due to kernel taint
[ 87.108409] ZFS: Loaded module v0.8.6-1, ZFS pool version 5000, ZFS filesystem version 5
[ 93.850648] LDISKFS-fs (vdc): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 104.312573] LDISKFS-fs (vdd): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 110.312940] LDISKFS-fs (vde): file extents enabled, maximum tree depth=5
[ 110.328534] LDISKFS-fs (vde): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 116.548247] LDISKFS-fs (vdf): file extents enabled, maximum tree depth=5
[ 116.560986] LDISKFS-fs (vdf): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 124.001283] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing load_modules_local
[ 130.326431] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 130.360189] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 131.508752] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 131.523727] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61
[ 131.582543] Lustre: lustre-MDT0000: new disk, initializing
[ 131.623662] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 131.637805] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 131.695733] mount.lustre (6886) used greatest stack depth: 10064 bytes left
[ 133.507267] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 140.384921] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 140.432337] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 140.482351] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[ 140.504854] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space: rc = -61
[ 140.511228] Lustre: Skipped 1 previous similar message
[ 140.574474] Lustre: lustre-MDT0001: new disk, initializing
[ 140.612810] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180
[ 140.634001] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[ 140.640028] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt]
[ 142.152189] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 148.333636] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 148.342201] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 148.369859] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 148.378011] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 148.383445] Lustre: lustre-OST0000-osd: enabled 'large_dir' feature on device /dev/mapper/ost1_flakey
[ 148.537441] Lustre: lustre-OST0000: new disk, initializing
[ 148.539637] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61
[ 148.561167] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
[ 148.879032] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost
[ 148.882974] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost]
[ 150.481735] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 157.870727] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5
[ 157.877942] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 157.922810] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5
[ 157.932179] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 157.936741] Lustre: lustre-OST0001-osd: enabled 'large_dir' feature on device /dev/mapper/ost2_flakey
[ 158.000034] Lustre: lustre-OST0001: new disk, initializing
[ 158.005701] Lustre: srv-lustre-OST0001: No data found on store. Initialize space: rc = -61
[ 158.029714] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180
[ 158.737102] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost
[ 158.747434] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:1:ost]
[ 159.822308] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 161.579958] random: crng init done
[ 167.453709] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 172.515764] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[ 179.723779] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing check_logdir /tmp/testlogs/
[ 182.547208] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing yml_node
[ 185.057725] Lustre: DEBUG MARKER: Client: 2.15.4.18
[ 186.710724] Lustre: DEBUG MARKER: MDS: 2.15.4.18
[ 188.264082] Lustre: DEBUG MARKER: OSS: 2.15.4.18
[ 189.301505] Lustre: DEBUG MARKER: -----============= acceptance-small: sanity ============----- Fri Apr 19 02:12:06 EDT 2024
[ 193.203459] Lustre: DEBUG MARKER: excepting tests: 225 255 256 400a 42a 42b 42c 407
[ 194.181806] Lustre: DEBUG MARKER: skipping tests SLOW=no: 27m 60i 64b 68 71 115 135 136 230d 300o
[ 197.004310] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing check_config_client /mnt/lustre
[ 206.822698] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 208.969483] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[ 210.670298] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 214.592940] Lustre: DEBUG MARKER: == sanity test 60a: llog_test run from kernel module and test llog_reader ========================================================== 02:12:31 (1713507151)
[ 216.141107] Lustre: DEBUG MARKER: SKIP: sanity test_60a missing subtest run-llog.sh
[ 216.931507] Lustre: DEBUG MARKER: == sanity test 60b: limit repeated messages from CERROR/CWARN ========================================================== 02:12:34 (1713507154)
[ 220.211825] Lustre: DEBUG MARKER: == sanity test 60c: unlink file when mds full ============ 02:12:37 (1713507157)
[ 272.754441] Lustre: DEBUG MARKER: == sanity test 60d: test printk console message masking == 02:13:29 (1713507209)
[ 275.537836] Lustre: DEBUG MARKER: == sanity test 60e: no space while new llog is being created ========================================================== 02:13:33 (1713507213)
[ 276.125383] Lustre: *** cfs_fail_loc=15b, val=0***
[ 276.131483] Lustre: *** cfs_fail_loc=15b, val=0***
[ 278.852502] Lustre: DEBUG MARKER: == sanity test 60f: change debug_path works ============== 02:13:36 (1713507216)
[ 281.645038] Lustre: DEBUG MARKER: == sanity test 60g: transaction abort won't cause MDT hung ========================================================== 02:13:39 (1713507219)
[ 282.121254] Lustre: *** cfs_fail_loc=19a, val=0***
[ 282.188400] LustreError: 6937:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) lustre-MDT0001-osp-MDT0000: fail to cancel 1 llog-records: rc = -116
[ 282.194280] LustreError: 6937:0:(llog_cat.c:789:llog_cat_cancel_records()) lustre-MDT0001-osp-MDT0000: fail to cancel 1 of 1 llog-records: rc = -116
[ 282.816195] Lustre: *** cfs_fail_loc=19a, val=0***
[ 282.818910] Lustre: Skipped 1 previous similar message
[ 282.822159] LustreError: 8139:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) lustre-MDT0001-osd: fail to cancel 1 llog-records: rc = -5
[ 282.828829] LustreError: 8139:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) Skipped 1 previous similar message
[ 282.833312] LustreError: 8139:0:(llog_cat.c:789:llog_cat_cancel_records()) lustre-MDT0001-osd: fail to cancel 1 of 1 llog-records: rc = -5
[ 282.840862] LustreError: 8139:0:(llog_cat.c:789:llog_cat_cancel_records()) Skipped 1 previous similar message
[ 283.897287] Lustre: *** cfs_fail_loc=19a, val=0***
[ 283.899805] Lustre: Skipped 2 previous similar messages
[ 283.901683] LustreError: 8139:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) lustre-MDT0001-osd: fail to cancel 1 llog-records: rc = -5
[ 283.909399] LustreError: 8139:0:(llog_cat.c:789:llog_cat_cancel_records()) lustre-MDT0001-osd: fail to cancel 1 of 1 llog-records: rc = -5
[ 285.947927] Lustre: *** cfs_fail_loc=19a, val=0***
[ 285.949606] Lustre: Skipped 5 previous similar messages
[ 286.719100] LustreError: 6937:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) lustre-MDT0000-osd: fail to cancel 1 llog-records: rc = -5
[ 286.724809] LustreError: 6937:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) Skipped 1 previous similar message
[ 286.730136] LustreError: 6937:0:(llog_cat.c:789:llog_cat_cancel_records()) lustre-MDT0000-osd: fail to cancel 1 of 1 llog-records: rc = -5
[ 286.744752] LustreError: 6937:0:(llog_cat.c:789:llog_cat_cancel_records()) Skipped 1 previous similar message
[ 290.117422] Lustre: *** cfs_fail_loc=19a, val=0***
[ 290.118963] Lustre: Skipped 9 previous similar messages
[ 291.146283] LustreError: 6937:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) lustre-MDT0000-osd: fail to cancel 1 llog-records: rc = -5
[ 291.150751] LustreError: 6937:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) Skipped 2 previous similar messages
[ 291.154143] LustreError: 6937:0:(llog_cat.c:789:llog_cat_cancel_records()) lustre-MDT0000-osd: fail to cancel 1 of 1 llog-records: rc = -5
[ 291.157743] LustreError: 6937:0:(llog_cat.c:789:llog_cat_cancel_records()) Skipped 2 previous similar messages
[ 298.126288] Lustre: *** cfs_fail_loc=19a, val=0***
[ 298.127959] Lustre: Skipped 24 previous similar messages
[ 300.809789] LustreError: 6937:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) lustre-MDT0000-osd: fail to cancel 1 llog-records: rc = -5
[ 300.814680] LustreError: 6937:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) Skipped 12 previous similar messages
[ 300.819652] LustreError: 6937:0:(llog_cat.c:789:llog_cat_cancel_records()) lustre-MDT0000-osd: fail to cancel 1 of 1 llog-records: rc = -5
[ 300.822252] LustreError: 6937:0:(llog_cat.c:789:llog_cat_cancel_records()) Skipped 12 previous similar messages
[ 314.345417] Lustre: *** cfs_fail_loc=19a, val=0***
[ 314.347444] Lustre: Skipped 44 previous similar messages
[ 317.059849] LustreError: 6937:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) lustre-MDT0000-osd: fail to cancel 1 llog-records: rc = -5
[ 317.066187] LustreError: 6937:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) Skipped 21 previous similar messages
[ 317.072671] LustreError: 6937:0:(llog_cat.c:789:llog_cat_cancel_records()) lustre-MDT0000-osd: fail to cancel 1 of 1 llog-records: rc = -5
[ 317.076539] LustreError: 6937:0:(llog_cat.c:789:llog_cat_cancel_records()) Skipped 21 previous similar messages
[ 322.447859] Lustre: DEBUG MARKER: == sanity test 60h: striped directory with missing stripes can be accessed ========================================================== 02:14:19 (1713507259)
[ 322.869188] Lustre: *** cfs_fail_loc=188, val=0***
[ 324.304480] Lustre: *** cfs_fail_loc=189, val=0***
[ 327.585522] Lustre: DEBUG MARKER: SKIP: sanity test_60i skipping SLOW test 60i
[ 328.240950] Lustre: DEBUG MARKER: == sanity test 61a: mmap() writes don't make sync hang ========================================================================== 02:14:25 (1713507265)
[ 330.741209] Lustre: DEBUG MARKER: == sanity test 61b: mmap() of unstriped file is successful ========================================================== 02:14:28 (1713507268)
[ 332.841965] Lustre: DEBUG MARKER: == sanity test 63a: Verify oig_wait interruption does not crash ================================================================= 02:14:30 (1713507270)
[ 398.264356] Lustre: DEBUG MARKER: == sanity test 63b: async write errors should be returned to fsync ============================================================= 02:15:35 (1713507335)
[ 406.422282] Lustre: DEBUG MARKER: == sanity test 64a: verify filter grant calculations (in kernel) =============================================================== 02:15:43 (1713507343)
[ 409.926130] Lustre: DEBUG MARKER: SKIP: sanity test_64b skipping SLOW test 64b
[ 410.758483] Lustre: DEBUG MARKER: == sanity test 64c: verify grant shrink ================== 02:15:48 (1713507348)
[ 414.074123] Lustre: DEBUG MARKER: == sanity test 64d: check grant limit exceed ============= 02:15:51 (1713507351)
[ 448.219106] Lustre: DEBUG MARKER: == sanity test 64e: check grant consumption (no grant allocation) ========================================================== 02:16:25 (1713507385)
[ 450.683031] Lustre: *** cfs_fail_loc=725, val=0***
[ 453.071410] Lustre: *** cfs_fail_loc=725, val=0***
[ 457.535705] Lustre: DEBUG MARKER: == sanity test 64f: check grant consumption (with grant allocation) ========================================================== 02:16:34 (1713507394)
[ 463.505749] Lustre: DEBUG MARKER: == sanity test 64g: grant shrink on MDT ================== 02:16:40 (1713507400)
[ 534.841486] Lustre: DEBUG MARKER: == sanity test 64h: grant shrink on read ================= 02:17:52 (1713507472)
[ 548.042993] Lustre: DEBUG MARKER: == sanity test 64i: shrink on reconnect ================== 02:18:05 (1713507485)
[ 551.356074] Lustre: *** cfs_fail_loc=513, val=0***
[ 551.357478] LustreError: 9327:0:(service.c:2115:ptlrpc_server_handle_req_in()) drop incoming rpc opc 17, x1796742323760704
[ 552.400332] Lustre: Failing over lustre-OST0000
[ 552.417312] LustreError: 32078:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 552.450479] Lustre: server umount lustre-OST0000 complete
[ 554.425445] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 554.427951] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 554.442489] Lustre: Skipped 1 previous similar message
[ 555.306553] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 192.168.203.40@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 555.319972] LustreError: Skipped 1 previous similar message
[ 559.433412] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 559.446292] LustreError: Skipped 1 previous similar message
[ 564.442010] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 564.468389] LustreError: Skipped 2 previous similar messages
[ 568.131682] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 568.140562] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 568.305207] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[ 569.395179] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect
[ 570.931271] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 571.888744] Lustre: lustre-OST0000-osc-MDT0001: Connection restored to 192.168.203.140@tcp (at 0@lo)
[ 571.888755] Lustre: lustre-OST0000: Recovery over after 0:03, of 3 clients 3 recovered and 0 were evicted.
[ 571.935336] Lustre: lustre-OST0000: deleting orphan objects from 0x0:2519 to 0x0:2561
[ 577.702697] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 578.804175] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
[ 585.518728] Lustre: DEBUG MARKER: == sanity test 65a: directory with no stripe info ======== 02:18:42 (1713507522)
[ 588.999345] Lustre: DEBUG MARKER: == sanity test 65b: directory setstripe -S stripe_size*2 -i 0 -c 1 ========================================================== 02:18:46 (1713507526)
[ 592.953324] Lustre: DEBUG MARKER: == sanity test 65c: directory setstripe -S stripe_size*4 -i 1 -c 1 ========================================================== 02:18:50 (1713507530)
[ 596.847436] Lustre: DEBUG MARKER: == sanity test 65d: directory setstripe -S stripe_size -c stripe_count ========================================================== 02:18:54 (1713507534)
[ 601.193083] Lustre: DEBUG MARKER: == sanity test 65e: directory setstripe defaults ========= 02:18:58 (1713507538)
[ 605.229515] Lustre: DEBUG MARKER: == sanity test 65f: dir setstripe permission (should return error) ============================================================= 02:19:02 (1713507542)
[ 608.675654] Lustre: DEBUG MARKER: == sanity test 65g: directory setstripe -d =============== 02:19:06 (1713507546)
[ 611.762847] Lustre: DEBUG MARKER: == sanity test 65h: directory stripe info inherit ============================================================================== 02:19:09 (1713507549)
[ 615.086478] Lustre: DEBUG MARKER: == sanity test 65i: various tests to set root directory striping ========================================================== 02:19:12 (1713507552)
[ 618.253179] Lustre: DEBUG MARKER: == sanity test 65j: set default striping on root directory (bug 6367)=========================================================== 02:19:15 (1713507555)
[ 621.194601] Lustre: DEBUG MARKER: == sanity test 65k: validate manual striping works properly with deactivated OSCs ========================================================== 02:19:18 (1713507558)
[ 621.841712] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 621.847095] Lustre: lustre-OST0000: Client lustre-MDT0001-mdtlov_UUID (at 0@lo) reconnecting
[ 621.850756] Lustre: lustre-OST0000-osc-MDT0001: Connection restored to 192.168.203.140@tcp (at 0@lo)
[ 621.851093] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:35 to 0x280000400:65
[ 621.860302] Lustre: Skipped 1 previous similar message
[ 622.110167] Lustre: lustre-OST0000: deleting orphan objects from 0x0:2564 to 0x0:2593
[ 622.400396] Lustre: lustre-OST0001: Client lustre-MDT0001-mdtlov_UUID (at 0@lo) reconnecting
[ 622.403546] Lustre: Skipped 1 previous similar message
[ 622.405749] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:3 to 0x2c0000400:33
[ 622.791863] Lustre: lustre-OST0001: deleting orphan objects from 0x0:2514 to 0x0:2529
[ 629.905482] Lustre: setting import lustre-OST0000_UUID INACTIVE by administrator request
[ 635.115283] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 635.120368] Lustre: Skipped 3 previous similar messages
[ 635.122720] Lustre: lustre-OST0000: Client lustre-MDT0001-mdtlov_UUID (at 0@lo) reconnecting
[ 635.124717] Lustre: Skipped 1 previous similar message
[ 635.127440] LustreError: 167-0: lustre-OST0000-osc-MDT0001: This client was evicted by lustre-OST0000; in progress operations using this service will fail.
[ 635.132911] Lustre: lustre-OST0000-osc-MDT0001: Connection restored to 192.168.203.140@tcp (at 0@lo)
[ 635.133774] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:35 to 0x280000400:97
[ 635.141973] Lustre: Skipped 3 previous similar messages
[ 636.337748] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
[ 636.398574] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
[ 637.526800] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
[ 637.591567] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
[ 642.653487] Lustre: setting import lustre-OST0000_UUID INACTIVE by administrator request
[ 647.867852] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 647.873843] Lustre: lustre-OST0000: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting
[ 647.878058] LustreError: 167-0: lustre-OST0000-osc-MDT0000: This client was evicted by lustre-OST0000; in progress operations using this service will fail.
[ 647.884111] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.203.140@tcp (at 0@lo)
[ 647.886882] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3594 to 0x0:3617
[ 648.980049] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
[ 649.035823] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
[ 650.102655] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
[ 650.164444] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
[ 655.332096] Lustre: setting import lustre-OST0001_UUID INACTIVE by administrator request
[ 658.838715] Lustre: lustre-OST0001-osc-MDT0001: Connection to lustre-OST0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 658.843517] Lustre: lustre-OST0001: Client lustre-MDT0001-mdtlov_UUID (at 0@lo) reconnecting
[ 658.846454] LustreError: 167-0: lustre-OST0001-osc-MDT0001: This client was evicted by lustre-OST0001; in progress operations using this service will fail.
[ 658.851565] Lustre: lustre-OST0001-osc-MDT0001: Connection restored to 192.168.203.140@tcp (at 0@lo)
[ 658.852023] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:35 to 0x2c0000400:65
[ 659.747932] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 40
[ 659.807545] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
[ 660.837263] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 40
[ 660.890514] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
[ 664.493272] Lustre: setting import lustre-OST0001_UUID INACTIVE by administrator request
[ 667.426666] Lustre: lustre-OST0001-osc-MDT0000: Connection to lustre-OST0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 667.431594] Lustre: lustre-OST0001: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting
[ 667.434377] LustreError: 167-0: lustre-OST0001-osc-MDT0000: This client was evicted by lustre-OST0001; in progress operations using this service will fail.
[ 667.437744] Lustre: lustre-OST0001-osc-MDT0000: Connection restored to 192.168.203.140@tcp (at 0@lo)
[ 667.438262] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3532 to 0x0:3553
[ 668.247578] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 40
[ 668.290349] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
[ 669.076921] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 40
[ 669.119087] Lustre: DEBUG MARKER: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
[ 670.477651] Lustre: DEBUG MARKER: == sanity test 65l: lfs find on -1 stripe dir ================================================================================== 02:20:08 (1713507608)
[ 671.952331] Lustre: DEBUG MARKER: == sanity test 65m: normal user can't set filesystem default stripe ========================================================== 02:20:09 (1713507609)
[ 673.348467] Lustre: DEBUG MARKER: == sanity test 65n: don't inherit default layout from root for new subdirectories ========================================================== 02:20:11 (1713507611)
[ 677.740383] LustreError: 11772:0:(qmt_pool.c:1406:qmt_pool_add_rem()) add to: can't lustre-QMT0000 lustre-OST0000_UUID pool test_65n: rc = -17
[ 682.423427] LustreError: 12310:0:(qmt_pool.c:1406:qmt_pool_add_rem()) remove: can't lustre-QMT0000 lustre-OST0000_UUID pool test_65n: rc = -22
[ 682.427797] LustreError: 12310:0:(qmt_pool.c:1406:qmt_pool_add_rem()) Skipped 1 previous similar message
[ 683.589883] LustreError: 12387:0:(qmt_pool.c:1406:qmt_pool_add_rem()) remove: can't lustre-QMT0000 lustre-OST0001_UUID pool test_65n: rc = -22
[ 686.406407] Lustre: DEBUG MARKER: == sanity test 66: update inode blocks count on client ========================================================================= 02:20:24 (1713507624)
[ 689.190432] Lustre: DEBUG MARKER: == sanity test 69: verify oa2dentry return -ENOENT doesn't LBUG ================================================================ 02:20:26 (1713507626)
[ 689.640511] Lustre: *** cfs_fail_loc=217, val=0***
[ 690.436317] Lustre: *** cfs_fail_loc=217, val=0***
[ 690.438089] Lustre: Skipped 1 previous similar message
[ 692.066076] Lustre: DEBUG MARKER: SKIP: sanity test_71 skipping SLOW test 71
[ 692.434454] Lustre: DEBUG MARKER: == sanity test 72a: Test that remove suid works properly (bug5695) ============================================================== 02:20:30 (1713507630)
[ 693.919218] Lustre: DEBUG MARKER: == sanity test 72b: Test that we keep mode setting if without file data changed (bug 24226) ========================================================== 02:20:31 (1713507631)
[ 695.613423] Lustre: DEBUG MARKER: == sanity test 73: multiple MDC requests (should not deadlock) ========================================================== 02:20:33 (1713507633)
[ 723.248419] Lustre: DEBUG MARKER: == sanity test 74a: ldlm_enqueue freed-export error path, ls (shouldn't LBUG) ========================================================== 02:21:00 (1713507660)
[ 724.842388] Lustre: DEBUG MARKER: == sanity test 74b: ldlm_enqueue freed-export error path, touch (shouldn't LBUG) ========================================================== 02:21:02 (1713507662)
[ 726.770680] Lustre: DEBUG MARKER: == sanity test 74c: ldlm_lock_create error path, (shouldn't LBUG) ========================================================== 02:21:04 (1713507664)
[ 728.338158] Lustre: DEBUG MARKER: == sanity test 76a: confirm clients recycle inodes properly ============================================================== 02:21:06 (1713507666)
[ 737.398436] Lustre: DEBUG MARKER: == sanity test 76b: confirm clients recycle directory inodes properly ============================================================== 02:21:14 (1713507674)
[ 747.770630] Lustre: DEBUG MARKER: == sanity test 77a: normal checksum read/write operation ========================================================== 02:21:25 (1713507685)
[ 749.735514] Lustre: DEBUG MARKER: == sanity test 77b: checksum error on client write, read ========================================================== 02:21:27 (1713507687)
[ 749.881724] LustreError: 168-f: lustre-OST0000: BAD WRITE CHECKSUM: from 12345-192.168.203.40@tcp inode [0x200000407:0xc14:0x0] object 0x0:3883 extent [0-4194303]: client csum bfcb1704, server csum bfcb1703
[ 751.328589] Lustre: DEBUG MARKER: set checksum type to crc32, rc = 0
[ 752.432190] LustreError: 132-0: lustre-OST0000: BAD READ CHECKSUM: should have changed on the client or in transit: from 192.168.203.40@tcp inode [0x200000407:0xc14:0x0] object 0x0:3883 extent [0-4194303], client returned csum 108fafec (type 1), server csum f029fa32 (type 1)
[ 753.015630] Lustre: DEBUG MARKER: set checksum type to adler, rc = 0
[ 754.115562] LustreError: 132-0: lustre-OST0000: BAD READ CHECKSUM: should have changed on the client or in transit: from 192.168.203.40@tcp inode [0x200000407:0xc14:0x0] object 0x0:3883 extent [0-4194303], client returned csum a9ef0f67 (type 2), server csum 2d440f46 (type 2)
[ 754.710850] Lustre: DEBUG MARKER: set checksum type to crc32c, rc = 0
[ 755.800820] LustreError: 132-0: lustre-OST0000: BAD READ CHECKSUM: should have changed on the client or in transit: from 192.168.203.40@tcp inode [0x200000407:0xc14:0x0] object 0x0:3883 extent [0-4194303], client returned csum 4a433c23 (type 4), server csum bfcb1703 (type 4)
[ 756.370195] Lustre: DEBUG MARKER: set checksum type to t10ip512, rc = 0
[ 758.055962] Lustre: DEBUG MARKER: set checksum type to t10ip4K, rc = 0
[ 759.141182] LustreError: 132-0: lustre-OST0000: BAD READ CHECKSUM: should have changed on the client or in transit: from 192.168.203.40@tcp inode [0x200000407:0xc14:0x0] object 0x0:3883 extent [0-4194303], client returned csum 142efbab (type 20), server csum 13ccfccb (type 20)
[ 759.154642] LustreError: Skipped 1 previous similar message
[ 759.816904] Lustre: DEBUG MARKER: set checksum type to t10crc512, rc = 0
[ 761.659634] Lustre: DEBUG MARKER: set checksum type to t10crc4K, rc = 0
[ 763.389323] Lustre: DEBUG MARKER: set checksum type to crc32c, rc = 0
[ 765.182360] Lustre: DEBUG MARKER: == sanity test 77c: checksum error on client read with debug ========================================================== 02:21:42 (1713507702)
[ 767.318650] Lustre: 9332:0:(tgt_handler.c:1903:dump_all_bulk_pages()) dumping checksum data to /tmp/lustre-log-checksum_dump-ost-[0x200000407:0xc15:0x0]:[0-1048575]-9d2b8904-cb739cfc
[ 767.328618] LustreError: dumping log to /tmp/lustre-log.1713507705.9332
[ 767.359391] LustreError: 132-0: lustre-OST0000: BAD READ CHECKSUM: should have changed on the client or in transit: from 192.168.203.40@tcp inode [0x200000407:0xc15:0x0] object 0x0:3884 extent [0-1048575], client returned csum 9d2b8904 (type 4), server csum cb739cfc (type 4)
[ 767.371656] LustreError: Skipped 2 previous similar messages
[ 775.961730] Lustre: DEBUG MARKER: == sanity test 77d: checksum error on OST direct write, read ========================================================== 02:21:53 (1713507713)
[ 776.074082] LustreError: 168-f: lustre-OST0001: BAD WRITE CHECKSUM: from 12345-192.168.203.40@tcp inode [0x200000407:0xc17:0x0] object 0x0:3817 extent [0-4194303]: client csum de2bf73f, server csum de2bf73e
[ 778.377218] LustreError: 132-0: lustre-OST0001: BAD READ CHECKSUM: should have changed on the client or in transit: from 192.168.203.40@tcp inode [0x200000407:0xc17:0x0] object 0x0:3817 extent [0-4194303], client returned csum f5a99216 (type 4), server csum de2bf73e (type 4)
[ 779.858433] Lustre: DEBUG MARKER: == sanity test 77f: repeat checksum error on write (expect error) ========================================================== 02:21:57 (1713507717)
[ 780.235755] Lustre: DEBUG MARKER: set checksum type to crc32, rc = 0
[ 780.318368] LustreError: 168-f: lustre-OST0000: BAD WRITE CHECKSUM: from 12345-192.168.203.40@tcp inode [0x200000407:0xc18:0x0] object 0x0:3885 extent [0-4194303]: client csum 8b19b060, server csum 8b19b05f
[ 783.396227] LustreError: 168-f: lustre-OST0000: BAD WRITE CHECKSUM: from 12345-192.168.203.40@tcp inode [0x200000407:0xc18:0x0] object 0x0:3885 extent [4194304-8388607]: client csum 8b19b060, server csum 8b19b05f
[ 783.401339] LustreError: Skipped 4 previous similar messages
[ 790.503982] LustreError: 168-f: lustre-OST0000: BAD WRITE CHECKSUM: from 12345-192.168.203.40@tcp inode [0x200000407:0xc18:0x0] object 0x0:3885 extent [0-4194303]: client csum 8b19b060, server csum 8b19b05f
[ 790.513574] LustreError: Skipped 3 previous similar messages
[ 801.657069] LustreError: 168-f: lustre-OST0000: BAD WRITE CHECKSUM: from 12345-192.168.203.40@tcp inode [0x200000407:0xc18:0x0] object 0x0:3885 extent [4194304-8388607]: client csum 8b19b060, server csum 8b19b05f
[ 801.661864] LustreError: Skipped 2 previous similar messages
[ 819.760189] LustreError: 168-f: lustre-OST0000: BAD WRITE CHECKSUM: from 12345-192.168.203.40@tcp inode [0x200000407:0xc18:0x0] object 0x0:3885 extent [4194304-8388607]: client csum 8b19b060, server csum 8b19b05f
[ 819.770224] LustreError: Skipped 4 previous similar messages
[ 826.296872] Lustre: DEBUG MARKER: set checksum type to adler, rc = 0
[ 853.815432] LustreError: 168-f: lustre-OST0000: BAD WRITE CHECKSUM: from 12345-192.168.203.40@tcp inode [0x200000407:0xc18:0x0] object 0x0:3885 extent [4194304-8388607]: client csum 7d33b9a0, server csum 7d33b99f
[ 853.822982] LustreError: Skipped 17 previous similar messages
[ 872.496868] Lustre: DEBUG MARKER: set checksum type to crc32c, rc = 0
[ 918.070081] LustreError: 168-f: lustre-OST0000: BAD WRITE CHECKSUM: from 12345-192.168.203.40@tcp inode [0x200000407:0xc18:0x0] object 0x0:3885 extent [0-4194303]: client csum de2bf73f, server csum de2bf73e
[ 918.073452] LustreError: Skipped 27 previous similar messages
[ 918.467896] Lustre: DEBUG MARKER: set checksum type to t10ip512, rc = 0
[ 964.485030] Lustre: DEBUG MARKER: set checksum type to t10ip4K, rc = 0
[ 1009.747509] Lustre: DEBUG MARKER: set checksum type to t10crc512, rc = 0
[ 1048.132635] LustreError: 168-f: lustre-OST0000: BAD WRITE CHECKSUM: from 12345-192.168.203.40@tcp inode [0x200000407:0xc18:0x0] object 0x0:3885 extent [0-4194303]: client csum a30822f0, server csum a30822ef
[ 1048.141519] LustreError: Skipped 63 previous similar messages
[ 1054.807356] Lustre: DEBUG MARKER: set checksum type to t10crc4K, rc = 0
[ 1101.241723] Lustre: DEBUG MARKER: set checksum type to crc32c, rc = 0
[ 1103.499919] Lustre: DEBUG MARKER: == sanity test 77g: checksum error on OST write, read ==== 02:27:21 (1713508041)
[ 1103.998950] Lustre: *** cfs_fail_loc=21a, val=0***
[ 1106.136051] Lustre: *** cfs_fail_loc=21b, val=0***
[ 1110.061080] Lustre: DEBUG MARKER: == sanity test 77k: enable/disable checksum correctly ==== 02:27:27 (1713508047)
[ 1110.405595] Lustre: Setting parameter lustre.osc.lustre*.checksums in log params
[ 1111.189441] Lustre: Modifying parameter lustre.osc.lustre*.checksums in log params
[ 1113.999476] Lustre: Disabling parameter lustre.osc.lustre*.checksums in log params
[ 1117.148484] Lustre: Setting parameter lustre.osc.lustre*.checksums in log params
[ 1118.825665] Lustre: DEBUG MARKER: == sanity test 77l: preferred checksum type is remembered after reconnected ========================================================== 02:27:36 (1713508056)
[ 1119.350645] Lustre: DEBUG MARKER: set checksum type to invalid, rc = 22
[ 1119.787451] Lustre: DEBUG MARKER: set checksum type to crc32, rc = 0
[ 1121.806521] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1135.635655] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in IDLE state after 13 sec
[ 1137.462609] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1138.007338] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in FULL state after 0 sec
[ 1138.587037] Lustre: DEBUG MARKER: set checksum type to adler, rc = 0
[ 1140.528304] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1161.536362] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in IDLE state after 20 sec
[ 1163.578809] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1164.021634] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in FULL state after 0 sec
[ 1164.602125] Lustre: DEBUG MARKER: set checksum type to crc32c, rc = 0
[ 1166.654216] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1186.648133] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in IDLE state after 19 sec
[ 1188.327496] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1188.740146] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in FULL state after 0 sec
[ 1189.252459] Lustre: DEBUG MARKER: set checksum type to t10ip512, rc = 0
[ 1191.159889] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1211.129945] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in IDLE state after 19 sec
[ 1213.172391] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1213.654344] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in FULL state after 0 sec
[ 1214.123503] Lustre: DEBUG MARKER: set checksum type to t10ip4K, rc = 0
[ 1215.997029] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1235.901248] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in IDLE state after 19 sec
[ 1237.610325] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1238.143022] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in FULL state after 0 sec
[ 1238.724389] Lustre: DEBUG MARKER: set checksum type to t10crc512, rc = 0
[ 1240.642548] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1261.546424] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in IDLE state after 20 sec
[ 1263.570301] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1264.103976] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in FULL state after 0 sec
[ 1264.679428] Lustre: DEBUG MARKER: set checksum type to t10crc4K, rc = 0
[ 1266.709794] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1286.693066] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in IDLE state after 19 sec
[ 1288.727378] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid 40
[ 1289.254035] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88007d091000.ost_server_uuid in FULL state after 0 sec
[ 1291.478069] Lustre: DEBUG MARKER: set checksum type to crc32c, rc = 0
[ 1292.051872] Lustre: DEBUG MARKER: == sanity test 77m: Verify checksum_speed is correctly read ========================================================== 02:30:29 (1713508229)
[ 1294.259589] Lustre: DEBUG MARKER: == sanity test 77n: Verify read from a hole inside contiguous blocks with T10PI ========================================================== 02:30:31 (1713508231)
[ 1294.914595] Lustre: DEBUG MARKER: set checksum type to t10ip512, rc = 0
[ 1295.457731] Lustre: DEBUG MARKER: set checksum type to t10ip4K, rc = 0
[ 1296.028368] Lustre: DEBUG MARKER: set checksum type to t10crc512, rc = 0
[ 1296.595804] Lustre: DEBUG MARKER: set checksum type to t10crc4K, rc = 0
[ 1298.845785] Lustre: DEBUG MARKER: set checksum type to crc32c, rc = 0
[ 1299.434607] Lustre: DEBUG MARKER: == sanity test 77o: Verify checksum_type for server (mdt and ofd(obdfilter)) ========================================================== 02:30:36 (1713508236)
[ 1303.246047] Lustre: DEBUG MARKER: == sanity test 78: handle large O_DIRECT writes correctly ====================================================================== 02:30:40 (1713508240)
[ 1306.011190] Lustre: DEBUG MARKER: == sanity test 79: df report consistency check ================================================================================= 02:30:43 (1713508243)
[ 1318.200011] Lustre: DEBUG MARKER: == sanity test 80: Page eviction is equally fast at high offsets too ========================================================== 02:30:55 (1713508255)
[ 1321.655634] Lustre: DEBUG MARKER: == sanity test 81a: OST should retry write when get -ENOSPC ========================================================================= 02:30:59 (1713508259)
[ 1322.106750] Lustre: *** cfs_fail_loc=228, val=0***
[ 1324.443452] Lustre: DEBUG MARKER: == sanity test 81b: OST should return -ENOSPC when retry still fails ================================================================= 02:31:01 (1713508261)
[ 1324.872113] Lustre: *** cfs_fail_loc=228, val=0***
[ 1327.168306] Lustre: DEBUG MARKER: == sanity test 99: cvs strange file/directory operations ========================================================== 02:31:04 (1713508264)
[ 1333.447311] Lustre: DEBUG MARKER: == sanity test 100: check local port using privileged port ===================================================================== 02:31:10 (1713508270)
[ 1336.707379] Lustre: DEBUG MARKER: == sanity test 101a: check read-ahead for random reads === 02:31:14 (1713508274)
[ 1362.420873] Lustre: DEBUG MARKER: == sanity test 101b: check stride-io mode read-ahead =========================================================================== 02:31:39 (1713508299)
[ 1367.482396] Lustre: DEBUG MARKER: == sanity test 101c: check stripe_size aligned read-ahead ========================================================== 02:31:44 (1713508304)
[ 1377.856607] Lustre: DEBUG MARKER: == sanity test 101d: file read with and without read-ahead enabled ========================================================== 02:31:55 (1713508315)
[ 1427.838731] Lustre: DEBUG MARKER: == sanity test 101e: check read-ahead for small read(1k) for small files(500k) ========================================================== 02:32:45 (1713508365)
[ 1435.310223] Lustre: DEBUG MARKER: == sanity test 101f: check mmap read performance ========= 02:32:52 (1713508372)
[ 1437.936926] Lustre: DEBUG MARKER: == sanity test 101g: Big bulk(4/16 MiB) readahead ======== 02:32:55 (1713508375)
[ 1447.124513] Lustre: DEBUG MARKER: == sanity test 101h: Readahead should cover current read window ========================================================== 02:33:04 (1713508384)
[ 1449.995153] Lustre: DEBUG MARKER: == sanity test 101i: allow current readahead to exceed reservation ========================================================== 02:33:07 (1713508387)
[ 1451.994643] Lustre: DEBUG MARKER: == sanity test 101j: A complete read block should be submitted when no RA ========================================================== 02:33:09 (1713508389)
[ 1461.739915] Lustre: DEBUG MARKER: == sanity test 102a: user xattr test ============================================================================================ 02:33:19 (1713508399)
[ 1463.908991] Lustre: DEBUG MARKER: == sanity test 102b: getfattr/setfattr for trusted.lov EAs ========================================================== 02:33:21 (1713508401)
[ 1466.702238] Lustre: DEBUG MARKER: == sanity test 102c: non-root getfattr/setfattr for lustre.lov EAs ===================================================================== 02:33:24 (1713508404)
[ 1469.202782] Lustre: DEBUG MARKER: == sanity test 102d: tar restore stripe info from tarfile,not keep osts ========================================================== 02:33:26 (1713508406)
[ 1474.064504] Lustre: DEBUG MARKER: == sanity test 102f: tar copy files, not keep osts ======= 02:33:31 (1713508411)
[ 1478.376321] Lustre: DEBUG MARKER: == sanity test 102h: grow xattr from inside inode to external block ========================================================== 02:33:35 (1713508415)
[ 1478.862116] Lustre: DEBUG MARKER: save trusted.big on /mnt/lustre/f102h.sanity
[ 1479.208415] Lustre: DEBUG MARKER: save trusted.sml on /mnt/lustre/f102h.sanity
[ 1479.581113] Lustre: DEBUG MARKER: grow trusted.sml on /mnt/lustre/f102h.sanity
[ 1480.158619] Lustre: DEBUG MARKER: trusted.big still valid after growing trusted.sml
[ 1482.428877] Lustre: DEBUG MARKER: == sanity test 102ha: grow xattr from inside inode to external inode ========================================================== 02:33:39 (1713508419)
[ 1483.433703] Lustre: DEBUG MARKER: save trusted.big on /mnt/lustre/f102ha.sanity
[ 1484.001347] Lustre: DEBUG MARKER: save trusted.sml on /mnt/lustre/f102ha.sanity
[ 1484.551229] Lustre: DEBUG MARKER: grow trusted.sml on /mnt/lustre/f102ha.sanity
[ 1485.158963] Lustre: DEBUG MARKER: trusted.big still valid after growing trusted.sml
[ 1485.797408] Lustre: DEBUG MARKER: save trusted.big on /mnt/lustre/f102ha.sanity
[ 1488.083517] Lustre: DEBUG MARKER: == sanity test 102i: lgetxattr test on symbolic link ====================================================================== 02:33:45 (1713508425)
[ 1490.475746] Lustre: DEBUG MARKER: == sanity test 102j: non-root tar restore stripe info from tarfile, not keep osts ============================================================= 02:33:47 (1713508427)
[ 1495.094145] Lustre: DEBUG MARKER: == sanity test 102k: setfattr without parameter of value shouldn't cause a crash ========================================================== 02:33:52 (1713508432)
[ 1497.630975] Lustre: DEBUG MARKER: == sanity test 102l: listxattr size test ============================================================================================ 02:33:55 (1713508435)
[ 1500.037165] Lustre: DEBUG MARKER: == sanity test 102m: Ensure listxattr fails on small bufffer ================================================================== 02:33:57 (1713508437)
[ 1502.429409] Lustre: DEBUG MARKER: == sanity test 102n: silently ignore setxattr on internal trusted xattrs ========================================================== 02:33:59 (1713508439)
[ 1505.104576] Lustre: DEBUG MARKER: == sanity test 102p: check setxattr(2) correctly fails without permission ========================================================== 02:34:02 (1713508442)
[ 1507.539692] Lustre: DEBUG MARKER: == sanity test 102q: flistxattr should not return trusted.link EAs for orphans ========================================================== 02:34:04 (1713508444)
[ 1509.920075] Lustre: DEBUG MARKER: == sanity test 102r: set EAs with empty values =========== 02:34:07 (1713508447)
[ 1512.523034] Lustre: DEBUG MARKER: == sanity test 102s: getting nonexistent xattrs should fail ========================================================== 02:34:09 (1713508449)
[ 1515.053673] Lustre: DEBUG MARKER: == sanity test 102t: zero length xattr values handled correctly ========================================================== 02:34:12 (1713508452)
[ 1517.463663] Lustre: DEBUG MARKER: == sanity test 103a: acl test ============================ 02:34:15 (1713508455)
[ 1562.191650] Lustre: DEBUG MARKER: == sanity test 103b: umask lfs setstripe ================= 02:34:59 (1713508499)
[ 1577.048878] Lustre: DEBUG MARKER: == sanity test 103c: 'cp -rp' won't set empty acl ======== 02:35:14 (1713508514)
[ 1579.378915] Lustre: DEBUG MARKER: == sanity test 103e: inheritance of big amount of default ACLs ========================================================== 02:35:16 (1713508516)
[ 1818.401201] Lustre: 15649:0:(osd_handler.c:1948:osd_trans_start()) lustre-MDT0000: credits 64189 > trans_max 3200
[ 1818.403161] Lustre: 15649:0:(osd_handler.c:1877:osd_trans_dump_creds()) create: 1000/4000/0, destroy: 1/4/0
[ 1818.405021] Lustre: 15649:0:(osd_handler.c:1884:osd_trans_dump_creds()) attr_set: 3/3/0, xattr_set: 1004/148/0
[ 1818.406970] Lustre: 15649:0:(osd_handler.c:1894:osd_trans_dump_creds()) write: 5001/43010/0, punch: 0/0/0, quota 0/0/0
[ 1818.408899] Lustre: 15649:0:(osd_handler.c:1901:osd_trans_dump_creds()) insert: 1001/17016/0, delete: 2/5/0
[ 1818.411028] Lustre: 15649:0:(osd_handler.c:1908:osd_trans_dump_creds()) ref_add: 1/1/0, ref_del: 2/2/0
[ 1818.412904] Pid: 15649, comm: mdt00_005 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 1818.414573] Call Trace:
[ 1818.415152] [<0>] libcfs_call_trace+0x90/0xf0 [libcfs]
[ 1818.416218] [<0>] libcfs_debug_dumpstack+0x26/0x30 [libcfs]
[ 1818.417404] [<0>] osd_trans_start+0x5ba/0x5e0 [osd_ldiskfs]
[ 1818.418613] [<0>] top_trans_start+0x763/0xa10 [ptlrpc]
[ 1818.419675] [<0>] lod_trans_start+0x34/0x40 [lod]
[ 1818.420652] [<0>] mdd_trans_start+0x14/0x20 [mdd]
[ 1818.421608] [<0>] mdd_unlink+0x5e3/0xdb0 [mdd]
[ 1818.422518] [<0>] mdt_reint_unlink+0xe32/0x1df0 [mdt]
[ 1818.423524] [<0>] mdt_reint_rec+0x87/0x240 [mdt]
[ 1818.424482] [<0>] mdt_reint_internal+0x76c/0xb50 [mdt]
[ 1818.425463] [<0>] mdt_reint+0x67/0x150 [mdt]
[ 1818.426388] [<0>] tgt_request_handle+0x93a/0x19c0 [ptlrpc]
[ 1818.427578] [<0>] ptlrpc_server_handle_request+0x250/0xc30 [ptlrpc]
[ 1818.428778] [<0>] ptlrpc_main+0xbd9/0x15f0 [ptlrpc]
[ 1818.429735] [<0>] kthread+0xe4/0xf0
[ 1818.430456] [<0>] ret_from_fork_nospec_begin+0x7/0x21
[ 1818.431578] [<0>] 0xfffffffffffffffe
[ 1818.432486] Lustre: 15649:0:(osd_internal.h:1333:osd_trans_exec_op()) lustre-MDT0000: opcode 7: before 3200 < left 43010, rollback = 7
[ 1820.808109] Lustre: DEBUG MARKER: == sanity test 103f: changelog doesn't interfere with default ACLs buffers ========================================================== 02:39:18 (1713508758)
[ 1822.102793] Lustre: lustre-MDD0000: changelog on
[ 1823.044440] Lustre: lustre-MDD0001: changelog on
[ 1825.542265] Lustre: lustre-MDD0001: changelog off
[ 1826.533978] Lustre: lustre-MDD0000: changelog off
[ 1827.545440] Lustre: DEBUG MARKER: == sanity test 104a: lfs df [-ih] [path] test =================================================================================== 02:39:24 (1713508764)
[ 1827.653226] Lustre: lustre-OST0000: Client 78e4b081-fdc0-44d7-a58a-cb919bcc4886 (at 192.168.203.40@tcp) reconnecting
[ 1829.607277] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0000-osc-ffff880076c3d000.ost_server_uuid 40
[ 1830.072345] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff880076c3d000.ost_server_uuid in FULL state after 0 sec
[ 1832.515778] Lustre: DEBUG MARKER: == sanity test 104b: runas -u 500 -g 500 lfs check servers test ============================================================================== 02:39:29 (1713508769)
[ 1834.973320] Lustre: DEBUG MARKER: == sanity test 104c: Verify df vs lfs_df stays same after recordsize change ========================================================== 02:39:32 (1713508772)
[ 1835.540851] Lustre: DEBUG MARKER: SKIP: sanity test_104c zfs only test
[ 1836.132735] Lustre: DEBUG MARKER: == sanity test 105a: flock when mounted without -o flock test ================================================================== 02:39:33 (1713508773)
[ 1838.614094] Lustre: DEBUG MARKER: == sanity test 105b: fcntl when mounted without -o flock test ================================================================== 02:39:36 (1713508776)
[ 1841.057382] Lustre: DEBUG MARKER: == sanity test 105c: lockf when mounted without -o flock test ========================================================== 02:39:38 (1713508778)
[ 1843.423569] Lustre: DEBUG MARKER: == sanity test 105d: flock race (should not freeze) ================================================================== 02:39:40 (1713508780)
[ 1855.925404] Lustre: DEBUG MARKER: == sanity test 105e: Two conflicting flocks from same process ========================================================== 02:39:53 (1713508793)
[ 1858.287519] Lustre: DEBUG MARKER: == sanity test 106: attempt exec of dir followed by chown of that dir ========================================================== 02:39:55 (1713508795)
[ 1860.758341] Lustre: DEBUG MARKER: == sanity test 107: Coredump on SIG ====================== 02:39:58 (1713508798)
[ 1864.705742] Lustre: DEBUG MARKER: == sanity test 110: filename length checking ============= 02:40:02 (1713508802)
[ 1867.352887] Lustre: DEBUG MARKER: SKIP: sanity test_115 skipping SLOW test 115
[ 1867.880351] Lustre: DEBUG MARKER: == sanity test 116a: stripe QOS: free space balance ============================================================================= 02:40:05 (1713508805)
[ 1927.458330] Lustre: DEBUG MARKER: == sanity test 116b: QoS shouldn't LBUG if not enough OSTs found on the 2nd pass ========================================================== 02:41:04 (1713508864)
[ 1928.580586] Lustre: *** cfs_fail_loc=147, val=0***
[ 1931.568565] Lustre: DEBUG MARKER: == sanity test 117: verify osd extend ==================== 02:41:09 (1713508869)
[ 1934.081759] Lustre: DEBUG MARKER: == sanity test 118a: verify O_SYNC works ================= 02:41:11 (1713508871)
[ 1936.591522] Lustre: DEBUG MARKER: == sanity test 118b: Reclaim dirty pages on fatal error ==================================================================== 02:41:14 (1713508874)
[ 1937.232469] Lustre: *** cfs_fail_loc=217, val=0***
[ 1937.234803] Lustre: Skipped 9 previous similar messages
[ 1939.928501] Lustre: DEBUG MARKER: == sanity test 118c: Fsync blocks on EROFS until dirty pages are flushed ==================================================================== 02:41:17 (1713508877)
[ 1940.557020] LustreError: 13636:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 216 sleeping for 1000ms
[ 1941.562008] LustreError: 13636:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 216 awake
[ 1945.053655] Lustre: DEBUG MARKER: == sanity test 118d: Fsync validation inject a delay of the bulk ==================================================================== 02:41:22 (1713508882)
[ 1945.703983] LustreError: 9332:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 214 sleeping for 5000ms
[ 1950.709094] LustreError: 9332:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 214 awake
[ 1953.562852] Lustre: DEBUG MARKER: == sanity test 118f: Simulate unrecoverable OSC side error ==================================================================== 02:41:31 (1713508891)
[ 1956.181588] Lustre: DEBUG MARKER: == sanity test 118g: Don't stay in wait if we got local -ENOMEM ==================================================================== 02:41:33 (1713508893)
[ 1958.628433] Lustre: DEBUG MARKER: == sanity test 118h: Verify timeout in handling recoverables errors ==================================================================== 02:41:36 (1713508896)
[ 1959.229803] Lustre: *** cfs_fail_loc=20e, val=0***
[ 1960.239975] Lustre: *** cfs_fail_loc=20e, val=0***
[ 1962.465348] Lustre: *** cfs_fail_loc=20e, val=0***
[ 1965.471280] Lustre: *** cfs_fail_loc=20e, val=0***
[ 1969.592151] Lustre: *** cfs_fail_loc=20e, val=0***
[ 1971.865953] Lustre: DEBUG MARKER: == sanity test 118i: Fix error before timeout in recoverable error ==================================================================== 02:41:49 (1713508909)
[ 1981.052524] Lustre: DEBUG MARKER: == sanity test 118j: Simulate unrecoverable OST side error ==================================================================== 02:41:58 (1713508918)
[ 1981.668787] Lustre: *** cfs_fail_loc=220, val=0***
[ 1981.672206] Lustre: Skipped 3 previous similar messages
[ 1984.695304] Lustre: DEBUG MARKER: == sanity test 118k: bio alloc -ENOMEM and IO TERM handling =================================================================== 02:42:02 (1713508922)
[ 1996.194845] Lustre: DEBUG MARKER: == sanity test 118l: fsync dir =========================== 02:42:13 (1713508933)
[ 1998.391725] Lustre: DEBUG MARKER: == sanity test 118m: fdatasync dir ======================= 02:42:16 (1713508936)
[ 2000.084153] Lustre: DEBUG MARKER: == sanity test 118n: statfs() sends OST_STATFS requests in parallel ========================================================== 02:42:17 (1713508937)
[ 2004.336433] Lustre: DEBUG MARKER: == sanity test 119a: Short directIO read must return actual read amount ========================================================== 02:42:22 (1713508942)
[ 2006.719234] Lustre: DEBUG MARKER: == sanity test 119b: Sparse directIO read must return actual read amount ========================================================== 02:42:24 (1713508944)
[ 2009.263288] Lustre: DEBUG MARKER: == sanity test 119c: Testing for direct read hitting hole ========================================================== 02:42:26 (1713508946)
[ 2011.769110] Lustre: DEBUG MARKER: == sanity test 119d: The DIO path should try to send a new rpc once one is completed ========================================================== 02:42:29 (1713508949)
[ 2013.225857] Lustre: DEBUG MARKER: the DIO writes have completed, now wait for the reads (should not block very long)
[ 2017.755139] Lustre: DEBUG MARKER: == sanity test 120a: Early Lock Cancel: mkdir test ======= 02:42:35 (1713508955)
[ 2021.162300] Lustre: DEBUG MARKER: == sanity test 120b: Early Lock Cancel: create test ====== 02:42:38 (1713508958)
[ 2024.479125] Lustre: DEBUG MARKER: == sanity test 120c: Early Lock Cancel: link test ======== 02:42:41 (1713508961)
[ 2027.863856] Lustre: DEBUG MARKER: == sanity test 120d: Early Lock Cancel: setattr test ===== 02:42:45 (1713508965)
[ 2031.049500] Lustre: DEBUG MARKER: == sanity test 120e: Early Lock Cancel: unlink test ====== 02:42:48 (1713508968)
[ 2041.245503] Lustre: DEBUG MARKER: == sanity test 120f: Early Lock Cancel: rename test ====== 02:42:58 (1713508978)
[ 2051.675390] Lustre: DEBUG MARKER: == sanity test 120g: Early Lock Cancel: performance test ========================================================== 02:43:09 (1713508989)
[ 2140.467462] Lustre: DEBUG MARKER: == sanity test 121: read cancel race ===================== 02:44:37 (1713509077)
[ 2143.037870] Lustre: DEBUG MARKER: == sanity test 123aa: verify statahead work ============== 02:44:40 (1713509080)
[ 2144.715411] Lustre: DEBUG MARKER: ls -l 100 files without statahead: 1 sec
[ 2145.522266] Lustre: DEBUG MARKER: ls -l 100 files with statahead: 1 sec
[ 2155.171311] Lustre: DEBUG MARKER: ls -l 1000 files without statahead: 4 sec
[ 2156.163133] Lustre: DEBUG MARKER: ls -l 1000 files with statahead: 0 sec
[ 2212.391733] Lustre: DEBUG MARKER: ls -l 10000 files without statahead: 26 sec
[ 2216.516284] Lustre: DEBUG MARKER: ls -l 10000 files with statahead: 3 sec
[ 2216.960706] Lustre: DEBUG MARKER: ls -l done
[ 2231.412586] Lustre: DEBUG MARKER: rm -r /mnt/lustre/d123aa.sanity/: 15 seconds
[ 2231.763520] Lustre: DEBUG MARKER: rm done
[ 2233.453292] Lustre: DEBUG MARKER: == sanity test 123ab: verify statahead work by using statx ========================================================== 02:46:11 (1713509171)
[ 2233.872917] Lustre: DEBUG MARKER: SKIP: sanity test_123ab Test must be statx() syscall supported
[ 2234.294505] Lustre: DEBUG MARKER: == sanity test 123ac: verify statahead work by using statx without glimpse RPCs ========================================================== 02:46:11 (1713509171)
[ 2234.664789] Lustre: DEBUG MARKER: SKIP: sanity test_123ac Test must be statx() syscall supported
[ 2235.096127] Lustre: DEBUG MARKER: == sanity test 123b: not panic with network error in statahead enqueue (bug 15027) ========================================================== 02:46:12 (1713509172)
[ 2241.460130] Lustre: DEBUG MARKER: ls done
[ 2244.468551] Lustre: DEBUG MARKER: == sanity test 123c: Can not initialize inode warning on DNE statahead ========================================================== 02:46:22 (1713509182)
[ 2246.610225] Lustre: DEBUG MARKER: == sanity test 124a: lru resize ================================================================================================= 02:46:24 (1713509184)
[ 2247.028379] Lustre: DEBUG MARKER: create 2000 files at /mnt/lustre/d124a.sanity
[ 2253.157039] Lustre: DEBUG MARKER: NSDIR=ldlm.namespaces.lustre-MDT0000-mdc-ffff88007d00c800
[ 2253.514417] Lustre: DEBUG MARKER: NS=ldlm.namespaces.lustre-MDT0000-mdc-ffff88007d00c800
[ 2253.863378] Lustre: DEBUG MARKER: LRU=1004
[ 2254.233595] Lustre: DEBUG MARKER: LIMIT=46624
[ 2254.580527] Lustre: DEBUG MARKER: LVF=5572500
[ 2254.916788] Lustre: DEBUG MARKER: OLD_LVF=100
[ 2255.280605] Lustre: DEBUG MARKER: Sleep 50 sec
[ 2305.899729] Lustre: DEBUG MARKER: Dropped 402 locks in 50s
[ 2306.445231] Lustre: DEBUG MARKER: unlink 2000 files at /mnt/lustre/d124a.sanity
[ 2314.185280] Lustre: DEBUG MARKER: == sanity test 124b: lru resize (performance test) ================================================================================= 02:47:31 (1713509251)
[ 2344.736488] Lustre: DEBUG MARKER: doing ls -la /mnt/lustre/d124b.sanity/disable_lru_resize 3 times
[ 2361.675858] Lustre: DEBUG MARKER: ls -la time: 17 seconds
[ 2362.266821] Lustre: DEBUG MARKER: lru_size = 400
[ 2419.202855] Lustre: DEBUG MARKER: doing ls -la /mnt/lustre/d124b.sanity/enable_lru_resize 3 times
[ 2425.407360] Lustre: DEBUG MARKER: ls -la time: 5 seconds
[ 2425.981829] Lustre: DEBUG MARKER: lru_size = 4006
[ 2426.549525] Lustre: DEBUG MARKER: ls -la is 70% faster with lru resize enabled
[ 2441.250789] Lustre: DEBUG MARKER: == sanity test 124c: LRUR cancel very aged locks ========= 02:49:38 (1713509378)
[ 2464.571717] Lustre: DEBUG MARKER: == sanity test 124d: cancel very aged locks if lru-resize diasbaled ========================================================== 02:50:01 (1713509401)
[ 2488.179479] Lustre: DEBUG MARKER: == sanity test 125: don't return EPROTO when a dir has a non-default striping and ACLs ========================================================== 02:50:25 (1713509425)
[ 2490.520539] Lustre: DEBUG MARKER: == sanity test 126: check that the fsgid provided by the client is taken into account ========================================================== 02:50:28 (1713509428)
[ 2493.043228] Lustre: DEBUG MARKER: == sanity test 127a: verify the client stats are sane ==== 02:50:30 (1713509430)
[ 2495.485073] Lustre: DEBUG MARKER: == sanity test 127b: verify the llite client stats are sane ========================================================== 02:50:33 (1713509433)
[ 2498.084762] Lustre: DEBUG MARKER: == sanity test 127c: test llite extent stats with regular
[ 2503.930535] Lustre: DEBUG MARKER: == sanity test 128: interactive lfs for 2 consecutive find's ========================================================== 02:50:41 (1713509441)
[ 2506.407639] Lustre: DEBUG MARKER: == sanity test 129: test directory size limit ================================================================================== 02:50:43 (1713509443)
[ 2511.934214] Lustre: 6908:0:(osd_handler.c:586:osd_ldiskfs_add_entry()) lustre-MDT0000: directory (inode: 1805, FID: [0x20000040b:0x23ac:0x0]) is approaching max size limit
[ 2515.576710] Lustre: 6909:0:(osd_handler.c:582:osd_ldiskfs_add_entry()) lustre-MDT0000: directory (inode: 1805, FID: [0x20000040b:0x23ac:0x0]) has reached max size limit
[ 2527.164700] Lustre: DEBUG MARKER: == sanity test 130a: FIEMAP (1-stripe file) ============== 02:51:04 (1713509464)
[ 2529.864008] Lustre: DEBUG MARKER: == sanity test 130b: FIEMAP (2-stripe file) ============== 02:51:07 (1713509467)
[ 2532.535997] Lustre: DEBUG MARKER: == sanity test 130c: FIEMAP (2-stripe file with hole) ==== 02:51:09 (1713509469)
[ 2535.019839] Lustre: DEBUG MARKER: == sanity test 130d: FIEMAP (N-stripe file) ============== 02:51:12 (1713509472)
[ 2535.503080] Lustre: DEBUG MARKER: SKIP: sanity test_130d needs >= 3 OSTs
[ 2536.192165] Lustre: DEBUG MARKER: == sanity test 130e: FIEMAP (test continuation FIEMAP calls) ========================================================== 02:51:13 (1713509473)
[ 2545.895369] Lustre: DEBUG MARKER: == sanity test 130f: FIEMAP (unstriped file) ============= 02:51:23 (1713509483)
[ 2548.401596] Lustre: DEBUG MARKER: == sanity test 130g: FIEMAP (overstripe file) ============ 02:51:25 (1713509485)
[ 2550.924692] Lustre: 24490:0:(osd_handler.c:1948:osd_trans_start()) lustre-MDT0000: credits 12989 > trans_max 3200
[ 2550.929531] Lustre: 24490:0:(osd_handler.c:1877:osd_trans_dump_creds()) create: 200/800/0, destroy: 1/4/0
[ 2550.933963] Lustre: 24490:0:(osd_handler.c:1877:osd_trans_dump_creds()) Skipped 4001 previous similar messages
[ 2550.939097] Lustre: 24490:0:(osd_handler.c:1884:osd_trans_dump_creds()) attr_set: 3/3/0, xattr_set: 204/148/0
[ 2550.944234] Lustre: 24490:0:(osd_handler.c:1884:osd_trans_dump_creds()) Skipped 4001 previous similar messages
[ 2550.948898] Lustre: 24490:0:(osd_handler.c:1894:osd_trans_dump_creds()) write: 1001/8610/0, punch: 0/0/0, quota 0/0/0
[ 2550.954223] Lustre: 24490:0:(osd_handler.c:1894:osd_trans_dump_creds()) Skipped 4001 previous similar messages
[ 2550.958983] Lustre: 24490:0:(osd_handler.c:1901:osd_trans_dump_creds()) insert: 201/3416/0, delete: 2/5/0
[ 2550.963799] Lustre: 24490:0:(osd_handler.c:1901:osd_trans_dump_creds()) Skipped 4001 previous similar messages
[ 2550.968753] Lustre: 24490:0:(osd_handler.c:1908:osd_trans_dump_creds()) ref_add: 1/1/0, ref_del: 2/2/0
[ 2550.973396] Lustre: 24490:0:(osd_handler.c:1908:osd_trans_dump_creds()) Skipped 4001 previous similar messages
[ 2550.978261] Pid: 24490, comm: mdt00_006 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 2550.982412] Call Trace:
[ 2550.983802] [<0>] libcfs_call_trace+0x90/0xf0 [libcfs]
[ 2550.986496] [<0>] libcfs_debug_dumpstack+0x26/0x30 [libcfs]
[ 2550.989436] [<0>] osd_trans_start+0x5ba/0x5e0 [osd_ldiskfs]
[ 2550.992096] [<0>] top_trans_start+0x763/0xa10 [ptlrpc]
[ 2550.994209] [<0>] lod_trans_start+0x34/0x40 [lod]
[ 2550.996322] [<0>] mdd_trans_start+0x14/0x20 [mdd]
[ 2550.998461] [<0>] mdd_unlink+0x5e3/0xdb0 [mdd]
[ 2551.000451] [<0>] mdt_reint_unlink+0xe32/0x1df0 [mdt]
[ 2551.002524] [<0>] mdt_reint_rec+0x87/0x240 [mdt]
[ 2551.004685] [<0>] mdt_reint_internal+0x76c/0xb50 [mdt]
[ 2551.007029] [<0>] mdt_reint+0x67/0x150 [mdt]
[ 2551.008809] [<0>] tgt_request_handle+0x93a/0x19c0 [ptlrpc]
[ 2551.011077] [<0>] ptlrpc_server_handle_request+0x250/0xc30 [ptlrpc]
[ 2551.013682] [<0>] ptlrpc_main+0xbd9/0x15f0 [ptlrpc]
[ 2551.015549] [<0>] kthread+0xe4/0xf0
[ 2551.017080] [<0>] ret_from_fork_nospec_begin+0x7/0x21
[ 2551.018862] [<0>] 0xfffffffffffffffe
[ 2551.020354] Lustre: 24490:0:(osd_internal.h:1333:osd_trans_exec_op()) lustre-MDT0000: opcode 7: before 3203 < left 8610, rollback = 7
[ 2551.024326] Lustre: 24490:0:(osd_internal.h:1333:osd_trans_exec_op()) Skipped 4000 previous similar messages
[ 2553.547003] Lustre: DEBUG MARKER: == sanity test 131a: test iov's crossing stripe boundary for writev/readv ========================================================== 02:51:30 (1713509490)
[ 2555.847500] Lustre: DEBUG MARKER: == sanity test 131b: test append writev ================== 02:51:33 (1713509493)
[ 2558.451854] Lustre: DEBUG MARKER: == sanity test 131c: test read/write on file w/o objects ========================================================== 02:51:35 (1713509495)
[ 2560.580298] Lustre: DEBUG MARKER: == sanity test 131d: test short read ===================== 02:51:38 (1713509498)
[ 2563.150474] Lustre: DEBUG MARKER: == sanity test 131e: test read hitting hole ============== 02:51:40 (1713509500)
[ 2565.422851] Lustre: DEBUG MARKER: == sanity test 133a: Verifying MDT stats ================================================================================================== 02:51:42 (1713509502)
[ 2572.434200] Lustre: DEBUG MARKER: == sanity test 133b: Verifying extra MDT stats ============================================================================================ 02:51:49 (1713509509)
[ 2576.407359] Lustre: DEBUG MARKER: == sanity test 133c: Verifying OST stats ================================================================================================== 02:51:53 (1713509513)
[ 2599.706306] Lustre: DEBUG MARKER: == sanity test 133d: Verifying rename_stats ================================================================================================== 02:52:17 (1713509537)
[ 2608.796567] Lustre: DEBUG MARKER: == sanity test 133e: Verifying OST read_bytes write_bytes nid stats =========================================================================== 02:52:26 (1713509546)
[ 2612.727539] Lustre: DEBUG MARKER: == sanity test 133f: Check reads/writes of client lustre proc files with bad area io ========================================================== 02:52:30 (1713509550)
[ 2619.654666] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 2619.660522] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 2619.669193] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 2620.566591] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 2620.570610] Lustre: Skipped 3 previous similar messages
[ 2622.452022] LustreError: 9933:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 2622.456222] LustreError: 9933:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 7 previous similar messages
[ 2622.530385] Lustre: server umount lustre-MDT0000 complete
[ 2625.310869] LustreError: 17308:0:(ldlm_lockd.c:2521:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713509563 with bad export cookie 15516961531680374286
[ 2625.312278] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 2625.322875] LustreError: 17308:0:(ldlm_lockd.c:2521:ldlm_cancel_handler()) Skipped 3 previous similar messages
[ 2625.361156] LustreError: 10534:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 2625.365068] LustreError: 10534:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 42 previous similar messages
[ 2625.452951] Lustre: server umount lustre-MDT0001 complete
[ 2634.594989] Lustre: 11130:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713509567/real 1713509567] req@ffff8800844a04c0 x1796742337199744/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713509573 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
[ 2634.614399] LustreError: 11130:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 2634.619230] LustreError: 11130:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 22 previous similar messages
[ 2634.644147] Lustre: server umount lustre-OST0000 complete
[ 2643.119081] Lustre: 11729:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713509575/real 1713509575] req@ffff88009dde3dc0 x1796742337200192/t0(0) o39->lustre-MDT0001-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713509581 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
[ 2643.139022] LustreError: 11729:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 2643.143875] LustreError: 11729:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 6 previous similar messages
[ 2643.258807] Lustre: server umount lustre-OST0001 complete
[ 2645.294475] device-mapper: core: cleaned up
[ 2648.438091] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing unload_modules_local
[ 2649.156794] Key type lgssc unregistered
[ 2649.210171] LNet: 12665:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[ 2649.212162] LNet: Removed LNI 192.168.203.140@tcp
[ 2654.516362] LNet: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 1
[ 2654.523279] alg: No test for adler32 (adler32-zlib)
[ 2655.295645] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing load_modules_local
[ 2655.481741] Lustre: Lustre: Build Version: 2.15.4_18_g9f02020
[ 2655.527962] LNet: Added LNI 192.168.203.140@tcp [8/256/0/180]
[ 2655.529704] LNet: Accept secure, port 988
[ 2657.062063] Key type lgssc registered
[ 2657.399744] Lustre: Echo OBD driver; http://www.lustre.org/
[ 2662.157135] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing load_modules_local
[ 2663.674059] device-mapper: uevent: version 1.0.3
[ 2663.677532] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel@redhat.com
[ 2666.140626] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 2667.274989] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 2667.289982] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 2668.616093] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 2672.288678] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 2672.596455] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 2672.683635] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180
[ 2673.961599] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 2678.228404] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 2678.234302] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 2678.362330] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
[ 2678.384264] mount.lustre (17321) used greatest stack depth: 10048 bytes left
[ 2679.770457] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 2682.361082] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 2682.362642] Lustre: lustre-OST0000: deleting orphan objects from 0x0:16719 to 0x0:16737
[ 2684.060204] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5
[ 2684.066686] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 2684.135967] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180
[ 2685.254627] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 2688.137157] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:10915 to 0x2c0000400:10945
[ 2688.138373] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:11081 to 0x280000400:11105
[ 2688.140170] Lustre: lustre-OST0001: deleting orphan objects from 0x0:16527 to 0x0:16545
[ 2692.693703] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 2693.472424] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[ 2699.918031] Lustre: DEBUG MARKER: == sanity test 133g: Check reads/writes of server lustre proc files with bad area io ========================================================== 02:53:57 (1713509637)
[ 2712.742272] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 2712.744821] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 2712.748076] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 2717.535404] LustreError: 22305:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 2717.558255] Lustre: server umount lustre-MDT0000 complete
[ 2718.182311] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 2718.182318] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 2718.190159] LustreError: Skipped 2 previous similar messages
[ 2719.503558] LustreError: 15036:0:(ldlm_lockd.c:2521:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713509657 with bad export cookie 11343816438795114697
[ 2719.504261] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 2719.508819] LustreError: 15036:0:(ldlm_lockd.c:2521:ldlm_cancel_handler()) Skipped 4 previous similar messages
[ 2719.527627] LustreError: 22904:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 2719.529379] LustreError: 22904:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 2 previous similar messages
[ 2719.590316] Lustre: server umount lustre-MDT0001 complete
[ 2727.559990] Lustre: 23496:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713509660/real 1713509660] req@ffff8801371363c0 x1796745036135232/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713509666 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
[ 2727.567695] LustreError: 23496:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 2727.569445] LustreError: 23496:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 1 previous similar message
[ 2727.577594] Lustre: server umount lustre-OST0000 complete
[ 2735.509014] Lustre: 24097:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713509667/real 1713509667] req@ffff88007ca5da40 x1796745036135744/t0(0) o39->lustre-MDT0001-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713509673 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
[ 2735.519412] LustreError: 24097:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 2735.569814] Lustre: server umount lustre-OST0001 complete
[ 2737.118115] device-mapper: core: cleaned up
[ 2739.423724] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing unload_modules_local
[ 2739.803051] Key type lgssc unregistered
[ 2739.850152] LNet: 25031:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[ 2739.852148] LNet: Removed LNI 192.168.203.140@tcp
[ 2743.917597] LNet: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 1
[ 2743.920778] alg: No test for adler32 (adler32-zlib)
[ 2744.691127] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing load_modules_local
[ 2744.854133] Lustre: Lustre: Build Version: 2.15.4_18_g9f02020
[ 2744.891487] LNet: Added LNI 192.168.203.140@tcp [8/256/0/180]
[ 2744.892809] LNet: Accept secure, port 988
[ 2746.416000] Key type lgssc registered
[ 2746.587001] Lustre: Echo OBD driver; http://www.lustre.org/
[ 2750.143299] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing load_modules_local
[ 2751.149896] device-mapper: uevent: version 1.0.3
[ 2751.151127] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel@redhat.com
[ 2752.863153] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 2753.981542] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 2753.998918] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2754.759585] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2757.411939] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2757.494582] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 2758.290059] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2760.923078] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 2760.925806] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2760.995972] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2761.767658] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2761.995600] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2762.000000] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:11081 to 0x280000400:11137 [ 2764.372914] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5 [ 2764.376185] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 2764.413205] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180 [ 2765.173401] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2767.375571] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:10915 to 0x2c0000400:10977 [ 2769.375546] Lustre: lustre-OST0000: deleting orphan objects from 0x0:16719 to 0x0:16769 [ 2769.377766] Lustre: lustre-OST0001: deleting orphan objects from 0x0:16527 to 0x0:16577 [ 2772.578654] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 2773.343316] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 2779.913820] Lustre: DEBUG MARKER: == sanity test 133h: Proc files should end with newlines ========================================================== 02:55:17 (1713509717) [ 2957.739407] Lustre: DEBUG MARKER: == sanity test 134a: Server reclaims locks when reaching lock_reclaim_threshold ========================================================== 02:58:15 (1713509895) [ 2963.330759] Lustre: *** cfs_fail_loc=327, val=0*** [ 2977.795422] Lustre: DEBUG MARKER: == sanity test 134b: Server rejects lock request when reaching lock_limit_mb ========================================================== 02:58:35 (1713509915) [ 2980.012689] Lustre: *** cfs_fail_loc=328, val=0*** [ 2980.015075] Lustre: Skipped 512 previous similar messages [ 2981.888010] Lustre: *** cfs_fail_loc=328, val=0*** [ 2981.889727] Lustre: Skipped 252 previous similar messages [ 2983.902344] Lustre: *** cfs_fail_loc=328, val=0*** [ 2983.904251] Lustre: Skipped 1 previous similar message [ 2988.278511] Lustre: *** cfs_fail_loc=328, val=0*** [ 2988.279677] Lustre: Skipped 125 previous similar messages [ 2996.443573] Lustre: *** cfs_fail_loc=328, val=0*** [ 2996.444678] Lustre: Skipped 93 previous similar messages [ 2999.838179] LustreError: 19587:0:(ldlm_resource.c:133:seq_watermark_write()) Failed to 
set lock_reclaim_threshold_mb, rc = -22. [ 3002.622595] Lustre: DEBUG MARKER: SKIP: sanity test_135 skipping SLOW test 135 [ 3002.988945] Lustre: DEBUG MARKER: SKIP: sanity test_136 skipping SLOW test 136 [ 3003.350971] Lustre: DEBUG MARKER: == sanity test 140: Check reasonable stack depth (shouldn't LBUG) ============================================================== 02:59:01 (1713509941) [ 3011.186458] Lustre: DEBUG MARKER: == sanity test 150a: truncate/append tests =============== 02:59:08 (1713509948) [ 3022.335302] Lustre: DEBUG MARKER: == sanity test 150b: Verify fallocate (prealloc) functionality ========================================================== 02:59:19 (1713509959) [ 3032.429813] Lustre: DEBUG MARKER: == sanity test 150bb: Verify fallocate modes both zero space ========================================================== 02:59:29 (1713509969) [ 3042.962591] Lustre: DEBUG MARKER: == sanity test 150c: Verify fallocate Size and Blocks ==== 02:59:40 (1713509980) [ 3052.990494] Lustre: DEBUG MARKER: == sanity test 150d: Verify fallocate Size and Blocks - Non zero start ========================================================== 02:59:50 (1713509990) [ 3057.896100] Lustre: DEBUG MARKER: == sanity test 150e: Verify 60% of available OST space consumed by fallocate ========================================================== 02:59:55 (1713509995) [ 3074.239716] Lustre: DEBUG MARKER: == sanity test 150f: Verify fallocate punch functionality ========================================================== 03:00:11 (1713510011) [ 3087.116689] Lustre: DEBUG MARKER: == sanity test 150g: Verify fallocate punch on large range ========================================================== 03:00:24 (1713510024) [ 3098.298008] Lustre: DEBUG MARKER: == sanity test 151: test cache on oss and controls ========================================================================================= 03:00:35 (1713510035) [ 3103.940267] bash (28017): drop_caches: 1 [ 3108.385402] Lustre: DEBUG MARKER: == sanity test 152: test read/write with enomem ====================================================================================== 03:00:45 (1713510045) [ 3110.990996] Lustre: DEBUG MARKER: == sanity test 153: test if fdatasync does not crash ================================================================================= 03:00:48 (1713510048) [ 3113.425504] Lustre: DEBUG MARKER: == sanity test 154A: lfs path2fid and fid2path basic checks ========================================================== 03:00:50 (1713510050) [ 3115.308684] Lustre: DEBUG MARKER: == sanity test 154B: verify the ll_decode_linkea tool ==== 03:00:52 (1713510052) [ 3117.123097] Lustre: DEBUG MARKER: == sanity test 154a: Open-by-FID ========================= 03:00:54 (1713510054) [ 3117.394196] LustreError: 12523:0:(fld_handler.c:263:fld_server_lookup()) srv-lustre-MDT0000: Cannot find sequence 0xf00000400: rc = -2 [ 3120.297453] Lustre: DEBUG MARKER: == sanity test 154b: Open-by-FID for remote directory ==== 03:00:57 (1713510057) [ 3120.495596] LustreError: 27412:0:(fld_handler.c:263:fld_server_lookup()) srv-lustre-MDT0000: Cannot find sequence 0xf00000400: rc = -2 [ 3120.497865] LustreError: 27412:0:(fld_handler.c:263:fld_server_lookup()) Skipped 3 previous similar messages [ 3122.883421] Lustre: DEBUG MARKER: == sanity test 154c: lfs path2fid and fid2path multiple arguments ========================================================== 03:01:00 (1713510060) [ 3125.128619] Lustre: DEBUG MARKER: == sanity test 154d: Verify open file fid 
================ 03:01:02 (1713510062) [ 3127.305827] Lustre: DEBUG MARKER: == sanity test 154e: .lustre is not returned by readdir == 03:01:04 (1713510064) [ 3129.062299] Lustre: DEBUG MARKER: == sanity test 154f: get parent fids by reading link ea == 03:01:06 (1713510066) [ 3130.864189] Lustre: DEBUG MARKER: == sanity test 154g: various llapi FID tests ============= 03:01:08 (1713510068) [ 3237.212809] Lustre: DEBUG MARKER: == sanity test 155a: Verify small file correctness: read cache:on write_cache:on ========================================================== 03:02:54 (1713510174) [ 3240.425254] Lustre: DEBUG MARKER: == sanity test 155b: Verify small file correctness: read cache:on write_cache:off ========================================================== 03:02:58 (1713510178) [ 3245.200801] Lustre: DEBUG MARKER: == sanity test 155c: Verify small file correctness: read cache:off write_cache:on ========================================================== 03:03:02 (1713510182) [ 3250.269016] Lustre: DEBUG MARKER: == sanity test 155d: Verify small file correctness: read cache:off write_cache:off ========================================================== 03:03:07 (1713510187) [ 3255.225938] Lustre: DEBUG MARKER: == sanity test 155e: Verify big file correctness: read cache:on write_cache:on ========================================================== 03:03:12 (1713510192) [ 3268.140967] Lustre: DEBUG MARKER: == sanity test 155f: Verify big file correctness: read cache:on write_cache:off ========================================================== 03:03:25 (1713510205) [ 3278.076219] Lustre: DEBUG MARKER: == sanity test 155g: Verify big file correctness: read cache:off write_cache:on ========================================================== 03:03:35 (1713510215) [ 3287.835682] Lustre: DEBUG MARKER: == sanity test 155h: Verify big file correctness: read cache:off write_cache:off ========================================================== 03:03:45 (1713510225) [ 3298.623343] Lustre: DEBUG MARKER: == sanity test 156: Verification of tunables ============= 03:03:56 (1713510236) [ 3301.996381] Lustre: DEBUG MARKER: Turn on read and write cache [ 3303.531827] Lustre: DEBUG MARKER: Write data and read it back. [ 3303.869140] Lustre: DEBUG MARKER: Read should be satisfied from the cache. [ 3304.943086] Lustre: DEBUG MARKER: cache hits: before: 65581, after: 65584 [ 3305.495159] Lustre: DEBUG MARKER: Read again; it should be satisfied from the cache. [ 3306.607114] Lustre: DEBUG MARKER: cache hits:: before: 65584, after: 65587 [ 3307.160243] Lustre: DEBUG MARKER: Turn off the read cache and turn on the write cache [ 3308.701416] Lustre: DEBUG MARKER: Read again; it should be satisfied from the cache. [ 3310.089779] Lustre: DEBUG MARKER: cache hits:: before: 65587, after: 65590 [ 3310.516986] Lustre: DEBUG MARKER: Write data and read it back. [ 3311.049270] Lustre: DEBUG MARKER: Read should be satisfied from the cache. [ 3312.552417] Lustre: DEBUG MARKER: cache hits:: before: 65590, after: 65593 [ 3312.931680] Lustre: DEBUG MARKER: Turn off read and write cache [ 3314.383662] Lustre: DEBUG MARKER: Write data and read it back [ 3314.923890] Lustre: DEBUG MARKER: It should not be satisfied from the cache. 
[ 3316.341417] Lustre: DEBUG MARKER: cache hits:: before: 65593, after: 65593 [ 3316.851740] Lustre: DEBUG MARKER: Turn on the read cache and turn off the write cache [ 3318.301609] Lustre: DEBUG MARKER: Write data and read it back [ 3318.769729] Lustre: DEBUG MARKER: It should not be satisfied from the cache. [ 3320.423468] Lustre: DEBUG MARKER: cache hits:: before: 65593, after: 65593 [ 3320.950667] Lustre: DEBUG MARKER: Read again; it should be satisfied from the cache. [ 3322.482296] Lustre: DEBUG MARKER: cache hits:: before: 65593, after: 65596 [ 3325.400698] Lustre: DEBUG MARKER: == sanity test 160a: changelog sanity ==================== 03:04:22 (1713510262) [ 3326.469008] Lustre: lustre-MDD0000: changelog on [ 3327.453245] Lustre: lustre-MDD0001: changelog on [ 3332.895367] Lustre: Failing over lustre-MDT0000 [ 3333.398630] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3333.403294] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3333.410552] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3335.286621] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3335.286839] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3335.296990] Lustre: Skipped 2 previous similar messages [ 3337.169141] Lustre: lustre-MDT0000: Not available for connect from 192.168.203.40@tcp (stopping) [ 3337.173535] Lustre: Skipped 3 previous similar messages [ 3338.517458] LustreError: 10373:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 3338.559442] Lustre: server umount lustre-MDT0000 complete [ 3340.294482] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3340.306349] LustreError: Skipped 3 previous similar messages [ 3341.736895] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3341.770266] LustreError: 11-0: MGC192.168.203.140@tcp: operation mgs_target_reg to node 0@lo failed: rc = -107 [ 3341.770449] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3341.770747] Lustre: Evicted from MGS (at 192.168.203.140@tcp) after server handle changed from 0xb530840c99a21232 to 0xb530840c99aaca6e [ 3341.779579] Lustre: MGC192.168.203.140@tcp: Connection restored to 192.168.203.140@tcp (at 0@lo) [ 3341.842932] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3341.854978] Lustre: lustre-MDD0000: changelog on [ 3341.858172] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3342.168427] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 3343.189423] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3346.855598] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 3346.887339] Lustre: lustre-MDT0000: Recovery over after 0:05, of 2 clients 2 recovered and 0 were evicted. 
[ 3346.903305] Lustre: lustre-OST0000: deleting orphan objects from 0x0:17596 to 0x0:17633 [ 3346.903327] Lustre: lustre-OST0001: deleting orphan objects from 0x0:17406 to 0x0:17441 [ 3347.108438] Lustre: lustre-MDD0000: changelog off [ 3347.664162] Lustre: lustre-MDD0001: changelog off [ 3352.008036] Lustre: DEBUG MARKER: == sanity test 160b: Verify that very long rename doesn't crash in changelog ========================================================== 03:04:49 (1713510289) [ 3352.901543] Lustre: lustre-MDD0000: changelog on [ 3356.307772] Lustre: lustre-MDD0001: changelog off [ 3358.304334] Lustre: DEBUG MARKER: == sanity test 160c: verify that changelog log catch the truncate event ========================================================== 03:04:55 (1713510295) [ 3359.391279] Lustre: lustre-MDD0000: changelog on [ 3359.392923] Lustre: Skipped 1 previous similar message [ 3363.490749] Lustre: lustre-MDD0001: changelog off [ 3363.493686] Lustre: Skipped 1 previous similar message [ 3365.115723] Lustre: DEBUG MARKER: == sanity test 160d: verify that changelog log catch the migrate event ========================================================== 03:05:02 (1713510302) [ 3369.213909] Lustre: lustre-MDD0001: changelog off [ 3369.218391] Lustre: Skipped 1 previous similar message [ 3370.901373] Lustre: DEBUG MARKER: == sanity test 160e: changelog negative testing (should return errors) ========================================================== 03:05:08 (1713510308) [ 3371.824286] Lustre: lustre-MDD0000: changelog on [ 3371.826073] Lustre: Skipped 3 previous similar messages [ 3377.660226] Lustre: DEBUG MARKER: == sanity test 160f: changelog garbage collect (timestamped users) ========================================================== 03:05:15 (1713510315) [ 3382.535950] Lustre: DEBUG MARKER: 1713510320: creating first files [ 3398.515565] Lustre: *** cfs_fail_loc=1313, val=0*** [ 3398.517959] Lustre: 27412:0:(mdd_dir.c:895:mdd_changelog_store()) lustre-MDD0000: starting changelog garbage collection [ 3398.522791] Lustre: 18910:0:(mdd_trans.c:160:mdd_chlg_garbage_collect()) lustre-MDD0000: force deregister of changelog user cl7 idle for 18s with 4 unprocessed records [ 3404.990442] Lustre: lustre-MDD0001: changelog off [ 3404.992515] Lustre: Skipped 3 previous similar messages [ 3406.916166] Lustre: DEBUG MARKER: == sanity test 160g: changelog garbage collect on idle records ========================================================== 03:05:44 (1713510344) [ 3407.873387] Lustre: lustre-MDD0000: changelog on [ 3407.875491] Lustre: Skipped 3 previous similar messages [ 3415.069180] Lustre: 27414:0:(mdd_dir.c:895:mdd_changelog_store()) lustre-MDD0000: starting changelog garbage collection [ 3415.071203] Lustre: 27414:0:(mdd_dir.c:895:mdd_changelog_store()) Skipped 1 previous similar message [ 3415.073522] Lustre: 22205:0:(mdd_trans.c:160:mdd_chlg_garbage_collect()) lustre-MDD0000: force deregister of changelog user cl9 idle for 5s with 4 unprocessed records [ 3415.076967] Lustre: 22205:0:(mdd_trans.c:160:mdd_chlg_garbage_collect()) Skipped 1 previous similar message [ 3421.598556] Lustre: lustre-MDD0000: changelog off [ 3421.602560] Lustre: Skipped 2 previous similar messages [ 3422.549257] Lustre: DEBUG MARKER: == sanity test 160h: changelog gc thread stop upon umount, orphan records delete ========================================================== 03:05:59 (1713510359) [ 3442.951943] Lustre: *** cfs_fail_loc=1316, val=0*** [ 3442.954183] Lustre: 27413:0:(mdd_dir.c:895:mdd_changelog_store()) 
lustre-MDD0001: simulate starting changelog garbage collection [ 3442.959374] Lustre: 27413:0:(mdd_dir.c:895:mdd_changelog_store()) Skipped 1 previous similar message [ 3442.964119] Lustre: 25499:0:(mdd_trans.c:160:mdd_chlg_garbage_collect()) lustre-MDD0001: force deregister of changelog user cl11 idle for 16s with 3 unprocessed records [ 3442.973534] Lustre: 25499:0:(mdd_trans.c:160:mdd_chlg_garbage_collect()) Skipped 2 previous similar messages [ 3443.452602] Lustre: Failing over lustre-MDT0000 [ 3443.465892] LustreError: 11-0: lustre-MDT0000-lwp-MDT0001: operation mds_disconnect to node 0@lo failed: rc = -107 [ 3443.476439] LustreError: 25811:0:(osp_dev.c:494:osp_disconnect()) lustre-MDT0001-osp-MDT0000: can't disconnect: rc = -19 [ 3443.480005] LustreError: 25811:0:(lod_dev.c:261:lod_sub_process_config()) lustre-MDT0000-mdtlov: error cleaning up LOD index 1: cmd 0xcf031 : rc = -19 [ 3443.503014] LustreError: 25813:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 3443.505783] LustreError: 25813:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 49 previous similar messages [ 3445.065785] Lustre: server umount lustre-MDT0001 complete [ 3447.962702] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3448.015985] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3448.020583] Lustre: Evicted from MGS (at 192.168.203.140@tcp) after server handle changed from 0xb530840c99aaca6e to 0xb530840c99aadb3f [ 3448.025424] Lustre: MGC192.168.203.140@tcp: Connection restored to (at 0@lo) [ 3448.030292] Lustre: Skipped 3 previous similar messages [ 3448.136388] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3448.140277] LustreError: Skipped 4 previous similar messages [ 3448.150852] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3448.161980] Lustre: lustre-MDD0000: changelog on [ 3448.163576] Lustre: Skipped 3 previous similar messages [ 3448.165566] Lustre: 27050:0:(mdd_device.c:618:mdd_changelog_llog_init()) lustre-MDD0000 : orphan changelog records found, starting from index 24 to index 25, being cleared now [ 3448.172859] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3449.303803] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3452.346375] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 3452.384832] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 192.168.203.40@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3452.413681] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3452.489513] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_connect to node 0@lo failed: rc = -114 [ 3452.489578] Lustre: lustre-MDT0001-lwp-OST0000: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3452.495638] LustreError: Skipped 2 previous similar messages [ 3452.497728] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo) [ 3452.509193] Lustre: 28023:0:(mdd_device.c:618:mdd_changelog_llog_init()) lustre-MDD0001 : orphan changelog records found, starting from index 22 to index 23, being cleared now [ 3452.513680] Lustre: lustre-MDT0001: in recovery but waiting for the first client to connect [ 3453.662725] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3454.001382] Lustre: lustre-MDT0001: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 3454.142016] Lustre: 25550:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713510385/real 1713510385] req@ffff8800a5034740 x1796745129734656/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713510392 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:0.0' [ 3455.500012] Lustre: 25551:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713510386/real 1713510386] req@ffff880136bbb900 x1796745129734912/t0(0) o400->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713510393 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:0.0' [ 3455.505439] Lustre: 25551:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 3457.512967] Lustre: lustre-MDT0001-lwp-OST0000: Connection restored to (at 0@lo) [ 3457.518838] Lustre: Skipped 1 previous similar message [ 3458.171190] Lustre: lustre-MDT0000: Recovery over after 0:06, of 2 clients 2 recovered and 0 were evicted. 
[ 3458.187410] Lustre: lustre-OST0000: deleting orphan objects from 0x0:17596 to 0x0:17665 [ 3458.187456] Lustre: lustre-OST0001: deleting orphan objects from 0x0:17443 to 0x0:17473 [ 3458.221825] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:10985 to 0x2c0000400:11009 [ 3458.221827] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:11145 to 0x280000400:11169 [ 3463.765109] Lustre: lustre-MDD0001: changelog off [ 3465.821541] Lustre: DEBUG MARKER: == sanity test 160i: changelog user register/unregister race ========================================================== 03:06:43 (1713510403) [ 3469.781501] LustreError: 30652:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 1315 sleeping [ 3472.056211] LustreError: 30825:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 1315 waking [ 3472.060146] LustreError: 30652:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 1315 awake: rc=2725 [ 3472.744573] LustreError: 31078:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 1315 waking [ 3480.581703] Lustre: DEBUG MARKER: == sanity test 160j: client can be umounted while its changelog is being used ========================================================== 03:06:58 (1713510418) [ 3487.185123] Lustre: DEBUG MARKER: == sanity test 160k: Verify that changelog records are not lost ========================================================== 03:07:04 (1713510424) [ 3489.431123] LustreError: 27061:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 15d sleeping for 3000ms [ 3492.435003] LustreError: 27061:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 15d awake [ 3499.304478] Lustre: DEBUG MARKER: == sanity test 160l: Verify that MTIME changelog records contain the parent FID ========================================================== 03:07:16 (1713510436) [ 3505.930679] Lustre: DEBUG MARKER: == sanity test 160m: Changelog clear race ================ 03:07:23 (1713510443) [ 3509.708897] LustreError: 27061:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 15f sleeping [ 3511.710271] LustreError: 29441:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 15f waking [ 3511.712080] LustreError: 29441:0:(libcfs_fail.h:180:cfs_race()) Skipped 1 previous similar message [ 3511.713805] LustreError: 27061:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 15f awake: rc=2997 [ 3516.362609] Lustre: DEBUG MARKER: == sanity test 160n: Changelog destroy race ============== 03:07:34 (1713510454) [ 3517.177944] Lustre: lustre-MDD0000: changelog on [ 3517.180503] Lustre: Skipped 13 previous similar messages [ 3869.807978] LustreError: 6456:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 16c sleeping [ 3871.811547] LustreError: 28014:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 16c waking [ 3871.815834] LustreError: 6456:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 16c awake: rc=2997 [ 3874.693567] Lustre: lustre-MDD0001: changelog off [ 3874.695948] Lustre: Skipped 13 previous similar messages [ 3876.334804] Lustre: DEBUG MARKER: == sanity test 160o: changelog user name and mask ======== 03:13:33 (1713510813) [ 3877.455945] Lustre: lustre-MDD0000: changelog on [ 3877.458847] Lustre: Skipped 1 previous similar message [ 3878.681263] LustreError: 8834:0:(mdd_device.c:1704:mdd_changelog_name_check()) lustre-MDD0000: wrong char '#' in name 'Tt3_-#': rc = -22 [ 3879.026535] Lustre: 8898:0:(mdd_device.c:1721:mdd_changelog_name_check()) lustre-MDD0000: changelog name test_160o exists already: rc = -17 [ 3879.382436] LustreError: 8962:0:(mdd_device.c:1713:mdd_changelog_name_check()) 
lustre-MDD0000: name 'test_160toolongname' is over 16 symbols limit: rc = -36 [ 3890.572669] Lustre: DEBUG MARKER: == sanity test 160p: Changelog orphan cleanup with no users ========================================================== 03:13:48 (1713510828) [ 3893.911738] Lustre: Failing over lustre-MDT0000 [ 3893.914554] Lustre: Skipped 1 previous similar message [ 3893.971659] LustreError: 11283:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 3893.974967] LustreError: 11283:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 19 previous similar messages [ 3894.007266] Lustre: server umount lustre-MDT0000 complete [ 3894.009932] Lustre: Skipped 1 previous similar message [ 3895.044180] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.203.40@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3897.489978] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3897.542286] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3897.542340] LustreError: 11-0: MGC192.168.203.140@tcp: operation mgs_target_reg to node 0@lo failed: rc = -107 [ 3897.542356] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3897.542824] Lustre: Evicted from MGS (at 192.168.203.140@tcp) after server handle changed from 0xb530840c99aadb3f to 0xb530840c99ee518f [ 3897.564003] Lustre: Skipped 6 previous similar messages [ 3897.566950] Lustre: MGC192.168.203.140@tcp: Connection restored to (at 0@lo) [ 3897.571086] Lustre: Skipped 3 previous similar messages [ 3897.643906] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3897.647350] Lustre: Skipped 1 previous similar message [ 3897.711116] Lustre: 11943:0:(mdd_device.c:618:mdd_changelog_llog_init()) lustre-MDD0000 : orphan changelog records found, starting from index 90148 to index 18446744073709551615, being cleared now [ 3897.718062] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3899.067992] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3900.040726] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 3902.655068] Lustre: lustre-MDT0000: Recovery over after 0:02, of 2 clients 2 recovered and 0 were evicted. 
[ 3902.662328] Lustre: Skipped 1 previous similar message [ 3902.691449] Lustre: lustre-OST0000: deleting orphan objects from 0x0:17667 to 0x0:17697 [ 3902.691473] Lustre: lustre-OST0001: deleting orphan objects from 0x0:17475 to 0x0:17505 [ 3906.857240] Lustre: DEBUG MARKER: == sanity test 160q: changelog effective mask is DEFMASK if not set ========================================================== 03:14:04 (1713510844) [ 3910.294787] Lustre: DEBUG MARKER: == sanity test 160s: changelog garbage collect on idle records anaconda-ks.cfg stress.sh time ========================================================== 03:14:07 (1713510847) [ 3916.266127] Lustre: 27059:0:(mdd_dir.c:895:mdd_changelog_store()) lustre-MDD0000: starting changelog garbage collection [ 3916.270644] Lustre: 27059:0:(mdd_dir.c:895:mdd_changelog_store()) Skipped 1 previous similar message [ 3916.276538] Lustre: 14588:0:(mdd_trans.c:160:mdd_chlg_garbage_collect()) lustre-MDD0000: force deregister of changelog user cl2 idle for 864005s with 500000004 unprocessed records [ 3922.830372] Lustre: DEBUG MARKER: == sanity test 161a: link ea sanity ====================== 03:14:20 (1713510860) [ 3932.750576] Lustre: DEBUG MARKER: == sanity test 161b: link ea sanity under remote directory ========================================================== 03:14:30 (1713510870) [ 3945.871259] Lustre: DEBUG MARKER: == sanity test 161c: check CL_RENME[UNLINK] changelog record flags ========================================================== 03:14:43 (1713510883) [ 3953.491603] Lustre: DEBUG MARKER: == sanity test 161d: create with concurrent .lustre/fid access ========================================================== 03:14:50 (1713510890) [ 3962.360467] Lustre: DEBUG MARKER: == sanity test 162a: path lookup sanity ================== 03:14:59 (1713510899) [ 3965.278648] Lustre: DEBUG MARKER: == sanity test 162b: striped directory path lookup sanity ========================================================== 03:15:02 (1713510902) [ 3968.064116] Lustre: DEBUG MARKER: == sanity test 162c: fid2path works with paths 100 or more directories deep ========================================================== 03:15:05 (1713510905) [ 3988.512149] Lustre: DEBUG MARKER: == sanity test 165a: ofd access log discovery ============ 03:15:25 (1713510925) [ 3994.511655] Lustre: Failing over lustre-OST0000 [ 3994.515759] LustreError: 20078:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 3994.517489] LustreError: 20078:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 11 previous similar messages [ 3994.525440] Lustre: server umount lustre-OST0000 complete [ 3995.202272] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 192.168.203.40@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3995.213362] LustreError: Skipped 4 previous similar messages [ 3997.798517] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3997.805317] Lustre: Skipped 1 previous similar message [ 4000.528900] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 4000.532720] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4000.580680] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 4000.588066] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 4001.868480] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4001.962194] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 4002.404706] Lustre: lustre-OST0000: Recovery over after 0:01, of 3 clients 3 recovered and 0 were evicted. [ 4002.404716] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.203.140@tcp (at 0@lo) [ 4002.404726] Lustre: Skipped 4 previous similar messages [ 4002.407926] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:11174 to 0x280000400:11201 [ 4002.409066] Lustre: lustre-OST0000: deleting orphan objects from 0x0:17703 to 0x0:17729 [ 4003.168147] Lustre: DEBUG MARKER: == sanity test 165b: ofd access log entries are produced and consumed ========================================================== 03:15:40 (1713510940) [ 4027.336417] Lustre: Failing over lustre-OST0000 [ 4027.340587] LustreError: 22527:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 4027.343613] LustreError: 22527:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 4 previous similar messages [ 4027.354114] Lustre: server umount lustre-OST0000 complete [ 4027.446176] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 4027.446188] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4027.446375] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4027.446376] LustreError: Skipped 3 previous similar messages [ 4027.456467] LustreError: Skipped 1 previous similar message [ 4030.086353] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 4030.089217] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4030.125852] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 4030.255876] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 4031.072037] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4031.935818] Lustre: lustre-OST0000: Recovery over after 0:02, of 3 clients 3 recovered and 0 were evicted. 
[ 4031.938927] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:11174 to 0x280000400:11233 [ 4031.939643] Lustre: lustre-OST0000: deleting orphan objects from 0x0:17731 to 0x0:17761 [ 4032.349968] Lustre: DEBUG MARKER: == sanity test 165c: full ofd access logs do not block IOs ========================================================== 03:16:09 (1713510969) [ 4043.570854] Lustre: Failing over lustre-OST0000 [ 4043.578461] LustreError: 24436:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 4043.592872] Lustre: server umount lustre-OST0000 complete [ 4045.142568] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4045.149351] Lustre: Skipped 1 previous similar message [ 4046.640620] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 4046.646432] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4046.715142] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 4047.702848] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4048.334945] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 4048.591399] Lustre: lustre-OST0000: Recovery over after 0:01, of 3 clients 3 recovered and 0 were evicted. [ 4048.591490] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.203.140@tcp (at 0@lo) [ 4048.591492] Lustre: Skipped 3 previous similar messages [ 4048.592780] Lustre: lustre-OST0000: deleting orphan objects from 0x0:17829 to 0x0:17857 [ 4048.594612] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:11295 to 0x280000400:11329 [ 4048.887690] Lustre: DEBUG MARKER: == sanity test 165d: ofd_access_log mask works =========== 03:16:26 (1713510986) [ 4072.000986] Lustre: Failing over lustre-OST0000 [ 4072.006458] LustreError: 27444:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 4072.021991] Lustre: server umount lustre-OST0000 complete [ 4073.638365] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 4073.642985] LustreError: Skipped 1 previous similar message [ 4074.751853] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 4074.757517] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4074.822286] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 4074.826707] Lustre: Skipped 2 previous similar messages [ 4076.124694] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4076.301353] Lustre: lustre-OST0000: Recovery over after 0:01, of 3 clients 3 recovered and 0 were evicted. 
[ 4076.303261] Lustre: lustre-OST0000: deleting orphan objects from 0x0:17829 to 0x0:17889 [ 4076.305444] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:11331 to 0x280000400:11361 [ 4077.453697] Lustre: DEBUG MARKER: == sanity test 165e: ofd_access_log MDT index filter works ========================================================== 03:16:54 (1713511014) [ 4089.846528] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4089.856315] Lustre: Skipped 3 previous similar messages [ 4092.256769] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 4092.262318] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4092.329480] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 4092.333398] Lustre: Skipped 1 previous similar message [ 4093.541236] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 4093.547343] Lustre: Skipped 1 previous similar message [ 4093.579655] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4094.204984] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:11363 to 0x280000400:11393 [ 4094.205481] Lustre: lustre-OST0000: deleting orphan objects from 0x0:17891 to 0x0:17921 [ 4094.893956] Lustre: DEBUG MARKER: == sanity test 165f: ofd_access_log_reader --exit-on-close works ========================================================== 03:17:12 (1713511032) [ 4102.343161] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4102.350443] LustreError: Skipped 10 previous similar messages [ 4107.056662] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 4107.062567] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc [ 4108.201774] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4108.540683] Lustre: lustre-OST0000: Recovery over after 0:01, of 3 clients 3 recovered and 0 were evicted. 
[ 4108.543453] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:11363 to 0x280000400:11425 [ 4108.544011] Lustre: lustre-OST0000: deleting orphan objects from 0x0:17891 to 0x0:17953 [ 4108.557125] Lustre: Skipped 1 previous similar message [ 4109.494311] Lustre: DEBUG MARKER: == sanity test 169: parallel read and truncate should not deadlock ========================================================== 03:17:26 (1713511046) [ 4110.089229] Lustre: DEBUG MARKER: creating a 10 Mb file [ 4110.757120] Lustre: DEBUG MARKER: starting reads [ 4111.363112] Lustre: DEBUG MARKER: truncating the file [ 4111.990998] Lustre: DEBUG MARKER: killing dd [ 4112.576794] Lustre: DEBUG MARKER: removing the temporary file [ 4115.027538] Lustre: DEBUG MARKER: == sanity test 170: test lctl df to handle corrupted log =============================================================================== 03:17:32 (1713511052) [ 4117.625287] Lustre: DEBUG MARKER: == sanity test 171: test libcfs_debug_dumplog_thread stuck in do_exit() ================================================================ 03:17:35 (1713511055) [ 4123.104481] Lustre: DEBUG MARKER: == sanity test 180a: test obdecho on osc ================= 03:17:40 (1713511060) [ 4125.161757] Lustre: DEBUG MARKER: == sanity test 180b: test obdecho directly on obdfilter == 03:17:42 (1713511062) [ 4125.787119] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing load_module obdecho/obdecho [ 4130.298756] Lustre: DEBUG MARKER: == sanity test 180c: test huge bulk I/O size on obdfilter, don't LASSERT ========================================================== 03:17:47 (1713511067) [ 4131.114519] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing load_module obdecho/obdecho [ 4131.154711] Lustre: Echo OBD driver; http://www.lustre.org/ [ 4136.569741] Lustre: DEBUG MARKER: == sanity test 181: Test open-unlinked dir ================================================================================== 03:17:54 (1713511074) [ 4151.337000] Lustre: DEBUG MARKER: == sanity test 182: Test parallel modify metadata operations ========================================================================== 03:18:08 (1713511088) [ 4160.184290] Lustre: DEBUG MARKER: == sanity test 183: No crash or request leak in case of strange dispositions ================================================================== 03:18:17 (1713511097) [ 4160.469718] Lustre: *** cfs_fail_loc=148, val=0*** [ 4162.230609] Lustre: DEBUG MARKER: == sanity test 184a: Basic layout swap =================== 03:18:19 (1713511099) [ 4164.853922] Lustre: DEBUG MARKER: == sanity test 184b: Forbidden layout swap (will generate errors) ========================================================== 03:18:22 (1713511102) [ 4166.612505] Lustre: DEBUG MARKER: == sanity test 184c: Concurrent write and layout swap ==== 03:18:24 (1713511104) [ 4172.920789] Lustre: DEBUG MARKER: == sanity test 184d: allow stripeless layouts swap ======= 03:18:30 (1713511110) [ 4176.267568] Lustre: DEBUG MARKER: == sanity test 184e: Recreate layout after stripeless layout swaps ========================================================== 03:18:33 (1713511113) [ 4180.122015] Lustre: DEBUG MARKER: == sanity test 184f: IOC_MDC_GETFILEINFO for files with long names but no striping ========================================================== 03:18:37 (1713511117) [ 4181.916050] Lustre: DEBUG MARKER: == sanity test 185: Volatile file support ================ 03:18:39 (1713511119) [ 4184.974311] Lustre: DEBUG MARKER: == sanity test 185a: Volatile file 
creation in .lustre/fid/ ========================================================== 03:18:42 (1713511122) [ 4191.588997] Lustre: DEBUG MARKER: == sanity test 187a: Test data version change ============ 03:18:49 (1713511129) [ 4194.384692] Lustre: DEBUG MARKER: == sanity test 187b: Test data version change on volatile file ========================================================== 03:18:51 (1713511131) [ 4197.030018] Lustre: DEBUG MARKER: == sanity test 200: OST pools ============================ 03:18:54 (1713511134) [ 4200.536548] LustreError: 12037:0:(qmt_pool.c:1406:qmt_pool_add_rem()) add to: can't lustre-QMT0000 lustre-OST0000_UUID pool cea1: rc = -17 [ 4203.583476] LustreError: 12252:0:(qmt_pool.c:1406:qmt_pool_add_rem()) remove: can't lustre-QMT0000 lustre-OST0000_UUID pool cea1: rc = -22 [ 4211.551520] Lustre: DEBUG MARKER: == sanity test 204a: Print default stripe attributes ===== 03:19:09 (1713511149) [ 4213.892762] Lustre: DEBUG MARKER: == sanity test 204b: Print default stripe size and offset ========================================================== 03:19:11 (1713511151) [ 4215.955647] Lustre: DEBUG MARKER: == sanity test 204c: Print default stripe count and offset ========================================================== 03:19:13 (1713511153) [ 4217.697662] Lustre: DEBUG MARKER: == sanity test 204d: Print default stripe count and size ========================================================== 03:19:15 (1713511155) [ 4219.382092] Lustre: DEBUG MARKER: == sanity test 204e: Print raw stripe attributes ========= 03:19:16 (1713511156) [ 4221.152795] Lustre: DEBUG MARKER: == sanity test 204f: Print raw stripe size and offset ==== 03:19:18 (1713511158) [ 4222.862905] Lustre: DEBUG MARKER: == sanity test 204g: Print raw stripe count and offset === 03:19:20 (1713511160) [ 4224.541165] Lustre: DEBUG MARKER: == sanity test 204h: Print raw stripe count and size ===== 03:19:22 (1713511162) [ 4226.214199] Lustre: DEBUG MARKER: == sanity test 205a: Verify job stats ==================== 03:19:23 (1713511163) [ 4229.355814] Lustre: lustre-MDD0000: changelog on [ 4229.357309] Lustre: Skipped 10 previous similar messages [ 4230.932237] Lustre: DEBUG MARKER: Test: /home/green/git/lustre-release/lustre/utils/lfs mkdir -i 0 -c 1 /mnt/lustre/d205a.sanity [ 4231.311495] Lustre: DEBUG MARKER: Using JobID environment nodelocal=id.205a.lfs.25792 [ 4231.963546] Lustre: DEBUG MARKER: Test: rmdir /mnt/lustre/d205a.sanity [ 4232.354959] Lustre: DEBUG MARKER: Using JobID environment nodelocal=id.205a.rmdir.13684 [ 4232.985167] Lustre: DEBUG MARKER: Test: lfs mkdir -i 1 /mnt/lustre/d205a.sanity.remote [ 4233.371008] Lustre: DEBUG MARKER: Using JobID environment nodelocal=id.205a.lfs.22669 [ 4234.034232] Lustre: DEBUG MARKER: Test: mknod /mnt/lustre/f205a.sanity c 1 3 [ 4234.415061] Lustre: DEBUG MARKER: Using JobID environment nodelocal=id.205a.mknod.8353 [ 4235.058030] Lustre: DEBUG MARKER: Test: rm -f /mnt/lustre/f205a.sanity [ 4235.449094] Lustre: DEBUG MARKER: Using JobID environment nodelocal=id.205a.rm.15005 [ 4236.081067] Lustre: DEBUG MARKER: Test: /home/green/git/lustre-release/lustre/utils/lfs setstripe -i 0 -c 1 /mnt/lustre/f205a.sanity [ 4236.476302] Lustre: DEBUG MARKER: Using JobID environment nodelocal=id.205a.lfs.13876 [ 4237.122660] Lustre: DEBUG MARKER: Test: touch /mnt/lustre/f205a.sanity [ 4237.518018] Lustre: DEBUG MARKER: Using JobID environment nodelocal=id.205a.touch.5422 [ 4238.384073] Lustre: DEBUG MARKER: Test: dd if=/dev/zero of=/mnt/lustre/f205a.sanity bs=1M count=1 oflag=sync [ 
4238.768208] Lustre: DEBUG MARKER: Using JobID environment nodelocal=id.205a.dd.8057 [ 4239.419661] Lustre: DEBUG MARKER: Test: dd if=/mnt/lustre/f205a.sanity of=/dev/null bs=1M count=1 iflag=direct [ 4239.807754] Lustre: DEBUG MARKER: Using JobID environment nodelocal=id.205a.dd.463 [ 4240.440514] Lustre: DEBUG MARKER: Test: /home/green/git/lustre-release/lustre/tests/truncate /mnt/lustre/f205a.sanity 0 [ 4240.818758] Lustre: DEBUG MARKER: Using JobID environment nodelocal=id.205a.truncate.25159 [ 4241.681172] Lustre: DEBUG MARKER: Test: mv -f /mnt/lustre/f205a.sanity /mnt/lustre/d205a.sanity.rename [ 4242.065454] Lustre: DEBUG MARKER: Using JobID environment nodelocal=id.205a.mv.21214 [ 4242.687715] Lustre: DEBUG MARKER: Test: /home/green/git/lustre-release/lustre/utils/lfs mkdir -i 0 -c 1 /mnt/lustre/d205a.sanity.expire [ 4243.065372] Lustre: DEBUG MARKER: Using JobID environment nodelocal=id.205a.lfs.28945 [ 4246.442434] Lustre: DEBUG MARKER: Test: touch /mnt/lustre/f205a.sanity [ 4246.837149] Lustre: DEBUG MARKER: Using JobID environment USER=S.root.touch.0.oleg340-client.v [ 4247.465474] Lustre: DEBUG MARKER: Test: touch /mnt/lustre/f205a.sanity [ 4247.852239] Lustre: DEBUG MARKER: Using JobID environment USER=S.root.touch.0.oleg340-client.E [ 4248.489272] Lustre: DEBUG MARKER: Test: touch /mnt/lustre/f205a.sanity [ 4248.887136] Lustre: DEBUG MARKER: Using JobID environment session=S.root.touch.0.oleg340-client.v [ 4251.065434] Lustre: lustre-MDD0001: changelog off [ 4251.066494] Lustre: Skipped 11 previous similar messages [ 4252.643286] Lustre: DEBUG MARKER: == sanity test 205b: Verify job stats jobid and output format ========================================================== 03:19:50 (1713511190) [ 4255.430336] Lustre: DEBUG MARKER: == sanity test 205c: Verify client stats format ========== 03:19:53 (1713511193) [ 4257.034447] Lustre: DEBUG MARKER: == sanity test 206: fail lov_init_raid0() doesn't lbug === 03:19:54 (1713511194) [ 4258.682288] Lustre: DEBUG MARKER: == sanity test 207a: can refresh layout at glimpse ======= 03:19:56 (1713511196) [ 4260.322583] Lustre: DEBUG MARKER: == sanity test 207b: can refresh layout at open ========== 03:19:57 (1713511197) [ 4261.956629] Lustre: DEBUG MARKER: == sanity test 208: Exclusive open ======================= 03:19:59 (1713511199) [ 4268.544458] Lustre: Failing over lustre-MDT0000 [ 4268.546124] Lustre: Skipped 2 previous similar messages [ 4268.577636] LustreError: 21239:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 4268.579568] LustreError: 21239:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 1 previous similar message [ 4268.616952] Lustre: server umount lustre-MDT0000 complete [ 4268.618320] Lustre: Skipped 2 previous similar messages [ 4272.390331] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4272.390568] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 4272.390570] LustreError: Skipped 1 previous similar message [ 4272.398440] Lustre: Skipped 5 previous similar messages [ 4278.813996] Lustre: 25550:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713511210/real 1713511210] req@ffff88008a6be3c0 x1796745131075712/t0(0) o400->MGC192.168.203.140@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1713511217 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:3.0' [ 4278.822061] Lustre: 25550:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 4278.823776] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4280.916644] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4284.830603] Lustre: Evicted from MGS (at 192.168.203.140@tcp) after server handle changed from 0xb530840c99ee518f to 0xb530840c9a02ec24 [ 4284.834293] Lustre: MGC192.168.203.140@tcp: Connection restored to 192.168.203.140@tcp (at 0@lo) [ 4284.836637] Lustre: Skipped 7 previous similar messages [ 4284.897536] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4284.900057] Lustre: Skipped 2 previous similar messages [ 4284.908957] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 4284.911190] Lustre: Skipped 1 previous similar message [ 4285.516279] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 4285.519568] Lustre: Skipped 1 previous similar message [ 4285.802724] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4289.916002] Lustre: lustre-MDT0000: Recovery over after 0:05, of 2 clients 2 recovered and 0 were evicted. [ 4289.933718] Lustre: lustre-OST0001: deleting orphan objects from 0x0:21057 to 0x0:21089 [ 4289.933727] Lustre: lustre-OST0000: deleting orphan objects from 0x0:21430 to 0x0:21473 [ 4291.258491] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4291.635688] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4306.933937] Lustre: 25551:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713511238/real 1713511238] req@ffff88011e60e880 x1796745131087808/t0(0) o400->MGC192.168.203.140@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1713511245 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:0.0' [ 4306.940597] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4307.757216] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4312.958727] Lustre: Evicted from MGS (at 192.168.203.140@tcp) after server handle changed from 0xb530840c9a02ec24 to 0xb530840c9a02f125 [ 4312.963243] LustreError: 23413:0:(ldlm_resource.c:1126:ldlm_resource_complain()) MGC192.168.203.140@tcp: namespace resource [0x65727473756c:0x5:0x0].0x0 (ffff8801344c2600) refcount nonzero (1) after lock cleanup; forcing cleanup. 
[ 4313.913068] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4318.056015] Lustre: lustre-OST0000: deleting orphan objects from 0x0:21430 to 0x0:21505 [ 4318.058613] Lustre: lustre-OST0001: deleting orphan objects from 0x0:21057 to 0x0:21121 [ 4319.344532] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4319.699321] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4322.540560] Lustre: DEBUG MARKER: == sanity test 209: read-only open/close requests should be freed promptly ========================================================== 03:21:00 (1713511260) [ 4330.754089] Lustre: DEBUG MARKER: == sanity test 210: lfs getstripe does not break leases == 03:21:08 (1713511268) [ 4334.380914] Lustre: DEBUG MARKER: == sanity test 212: Sendfile test ====================================================================================================== 03:21:12 (1713511272) [ 4336.380465] Lustre: DEBUG MARKER: == sanity test 213: OSC lock completion and cancel race don't crash - bug 18829 ========================================================== 03:21:14 (1713511274) [ 4348.036209] Lustre: DEBUG MARKER: == sanity test 214: hash-indexed directory test - bug 20133 ========================================================== 03:21:25 (1713511285) [ 4352.123353] Lustre: DEBUG MARKER: == sanity test 215: lnet exists and has proper content - bugs 18102, 21079, 21517 ========================================================== 03:21:29 (1713511289) [ 4353.806581] Lustre: DEBUG MARKER: == sanity test 216: check lockless direct write updates file size and kms correctly ========================================================== 03:21:31 (1713511291) [ 4359.025535] Lustre: DEBUG MARKER: == sanity test 217: check lctl ping for hostnames with hyphen ('-') ========================================================== 03:21:36 (1713511296) [ 4361.025076] Lustre: DEBUG MARKER: == sanity test 218: parallel read and truncate should not deadlock ========================================================== 03:21:38 (1713511298) [ 4361.407658] Lustre: DEBUG MARKER: creating a 10 Mb file [ 4361.834456] Lustre: DEBUG MARKER: starting reads [ 4362.228993] Lustre: DEBUG MARKER: truncating the file [ 4362.649348] Lustre: DEBUG MARKER: killing dd [ 4363.039598] Lustre: DEBUG MARKER: removing the temporary file [ 4364.683341] Lustre: DEBUG MARKER: == sanity test 219: LU-394: Write partial won't cause uncontiguous pages vec at LND ========================================================== 03:21:42 (1713511302) [ 4366.388545] Lustre: DEBUG MARKER: == sanity test 220: preallocated MDS objects still used if ENOSPC from OST ========================================================== 03:21:44 (1713511304) [ 4368.134191] Lustre: *** cfs_fail_loc=229, val=0*** [ 4368.966239] Lustre: *** cfs_fail_loc=229, val=0*** [ 4368.968057] Lustre: Skipped 1 previous similar message [ 4370.487028] LustreError: 29017:0:(qmt_pool.c:1406:qmt_pool_add_rem()) add to: can't lustre-QMT0000 lustre-OST0000_UUID pool test_220: rc = -17 [ 4371.427274] Lustre: *** cfs_fail_loc=229, val=0*** [ 4371.428358] Lustre: Skipped 1 previous similar message [ 4374.089624] LustreError: 29353:0:(qmt_pool.c:1406:qmt_pool_add_rem()) remove: can't lustre-QMT0000 lustre-OST0000_UUID pool test_220: rc = -22 [ 4378.737438] Lustre: DEBUG MARKER: == sanity test 221: 
make sure fault and truncate race to not cause OOM ========================================================== 03:21:56 (1713511316) [ 4380.785646] Lustre: DEBUG MARKER: == sanity test 222a: AGL for ls should not trigger CLIO lock failure ========================================================== 03:21:58 (1713511318) [ 4382.555716] Lustre: DEBUG MARKER: == sanity test 222b: AGL for rmdir should not trigger CLIO lock failure ========================================================== 03:22:00 (1713511320) [ 4384.318024] Lustre: DEBUG MARKER: == sanity test 223: osc reenqueue if without AGL lock granted ================================================================================= 03:22:01 (1713511321) [ 4386.069447] Lustre: DEBUG MARKER: == sanity test 224a: Don't panic on bulk IO failure ====== 03:22:03 (1713511323) [ 4386.115051] Lustre: lustre-OST0001: Client 31917e59-0617-48e6-93c1-45652f57aeb4 (at 192.168.203.40@tcp) reconnecting [ 4388.819770] Lustre: DEBUG MARKER: == sanity test 224b: Don't panic on bulk IO failure ====== 03:22:06 (1713511326) [ 4390.020725] LustreError: 23785:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 515 sleeping for 3000ms [ 4393.024038] LustreError: 23785:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 515 awake [ 4395.344665] Lustre: DEBUG MARKER: == sanity test 224c: Don't hang if one of md lost during large bulk RPC ========================================================== 03:22:12 (1713511332) [ 4400.230086] Lustre: *** cfs_fail_loc=520, val=57344*** [ 4400.231214] LNet: *** cfs_fail_loc=e000, val=2147483648*** [ 4405.237735] Lustre: lustre-MDT0000: Client 31917e59-0617-48e6-93c1-45652f57aeb4 (at 192.168.203.40@tcp) reconnecting [ 4405.242223] Lustre: 27062:0:(mdt_recovery.c:200:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a8f70050 x1796745154227968/t34359738611(0) o35->31917e59-0617-48e6-93c1-45652f57aeb4@192.168.203.40@tcp:0/0 lens 392/456 e 0 to 0 dl 1713511348 ref 1 fl Interpret:/2/0 rc 0/0 job:'' [ 4412.238829] Lustre: DEBUG MARKER: == sanity test 224d: Don't corrupt data on bulk IO timeout ========================================================== 03:22:29 (1713511349) [ 4413.399874] LustreError: 23785:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 515 sleeping for 22000ms [ 4433.291176] Lustre: lustre-OST0000: Client 31917e59-0617-48e6-93c1-45652f57aeb4 (at 192.168.203.40@tcp) reconnecting [ 4434.601958] LustreError: 23785:0:(fail.c:144:__cfs_fail_timeout_set()) cfs_fail_timeout interrupted [ 4434.605185] LustreError: 23785:0:(ldlm_lib.c:3494:target_bulk_io()) @@@ bulk READ failed: rc = -107 req@ffff88008a6bb440 x1796745154234304/t0(0) o3->31917e59-0617-48e6-93c1-45652f57aeb4@192.168.203.40@tcp:0/0 lens 488/440 e 0 to 0 dl 1713511371 ref 1 fl Interpret:/0/0 rc 0/0 job:'' [ 4434.614366] Lustre: 23785:0:(service.c:2333:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/2s); client may timeout req@ffff88008a6bb440 x1796745154234304/t0(0) o3->31917e59-0617-48e6-93c1-45652f57aeb4@192.168.203.40@tcp:0/0 lens 488/440 e 0 to 0 dl 1713511371 ref 1 fl Complete:/0/0 rc 0/0 job:'' [ 4436.634557] Lustre: DEBUG MARKER: SKIP: sanity test_225a skipping excluded test 225a (base 225) [ 4437.085986] Lustre: DEBUG MARKER: SKIP: sanity test_225b skipping excluded test 225b (base 225) [ 4437.517048] Lustre: DEBUG MARKER: == sanity test 226a: call path2fid and fid2path on files of all type ========================================================== 03:22:55 (1713511375) [ 4439.346296] Lustre: DEBUG 
MARKER: == sanity test 226b: call path2fid and fid2path on files of all type under remote dir ========================================================== 03:22:56 (1713511376) [ 4441.180512] Lustre: DEBUG MARKER: == sanity test 226c: call path2fid and fid2path under remote dir with subdir mount ========================================================== 03:22:58 (1713511378) [ 4441.279094] Lustre: lustre-MDT0000: subdir mount '/d226c.sanity' is remote and may be slow [ 4442.959674] Lustre: DEBUG MARKER: == sanity test 227: running truncated executable does not cause OOM ========================================================== 03:23:00 (1713511380) [ 4444.612753] Lustre: DEBUG MARKER: == sanity test 228a: try to reuse idle OI blocks ========= 03:23:02 (1713511382) [ 4493.112524] Lustre: DEBUG MARKER: == sanity test 228b: idle OI blocks can be reused after MDT restart ========================================================== 03:23:50 (1713511430) [ 4529.132299] Lustre: Failing over lustre-MDT0000 [ 4529.133920] Lustre: Skipped 1 previous similar message [ 4529.289024] LustreError: 3982:0:(ldlm_resource.c:1126:ldlm_resource_complain()) mdt-lustre-MDT0000_UUID: namespace resource [0x200003ab1:0x24cf:0x0].0x0 (ffff880093831e00) refcount nonzero (1) after lock cleanup; forcing cleanup. [ 4529.319440] LustreError: 3982:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 4529.321259] LustreError: 3982:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 3 previous similar messages [ 4529.344799] Lustre: server umount lustre-MDT0000 complete [ 4529.346123] Lustre: Skipped 1 previous similar message [ 4531.438458] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.203.40@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4531.444538] LustreError: Skipped 48 previous similar messages [ 4531.545765] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4531.572987] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4531.573014] LustreError: 11-0: MGC192.168.203.140@tcp: operation mgs_target_reg to node 0@lo failed: rc = -107 [ 4531.573181] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4531.573183] Lustre: Skipped 5 previous similar messages [ 4531.584405] Lustre: Evicted from MGS (at 192.168.203.140@tcp) after server handle changed from 0xb530840c9a02f125 to 0xb530840c9a196ed6 [ 4531.589507] Lustre: MGC192.168.203.140@tcp: Connection restored to (at 0@lo) [ 4531.591653] Lustre: Skipped 9 previous similar messages [ 4531.656355] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 4531.658433] Lustre: Skipped 1 previous similar message [ 4532.681127] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4536.440487] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 4536.443199] Lustre: Skipped 1 previous similar message [ 4536.666383] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. 
[ 4536.668854] Lustre: Skipped 1 previous similar message
[ 4536.682455] Lustre: lustre-OST0001: deleting orphan objects from 0x0:26128 to 0x0:26145
[ 4536.682523] Lustre: lustre-OST0000: deleting orphan objects from 0x0:26558 to 0x0:26593
[ 4549.254323] Lustre: DEBUG MARKER: == sanity test 228c: NOT shrink the last entry in OI index node to recycle idle leaf ========================================================== 03:24:46 (1713511486)
[ 4626.352622] Lustre: DEBUG MARKER: == sanity test 229: getstripe/stat/rm/attr changes work on released files ========================================================== 03:26:03 (1713511563)
[ 4628.005466] Lustre: DEBUG MARKER: == sanity test 230a: Create remote directory and files under the remote directory ========================================================== 03:26:05 (1713511565)
[ 4629.753370] Lustre: DEBUG MARKER: == sanity test 230b: migrate directory =================== 03:26:07 (1713511567)
[ 4639.648285] Lustre: DEBUG MARKER: == sanity test 230c: check directory accessiblity if migration failed ========================================================== 03:26:17 (1713511577)
[ 4640.082053] Lustre: *** cfs_fail_loc=1801, val=0***
[ 4640.083100] Lustre: Skipped 1 previous similar message
[ 4640.337524] LustreError: 29441:0:(mdd_dir.c:4234:mdd_migrate_cmd_check()) lustre-MDD0000: 'migrate_dir' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush migrate_dir' to finish migration.
[ 4642.124117] Lustre: DEBUG MARKER: SKIP: sanity test_230d skipping SLOW test 230d
[ 4642.550487] Lustre: DEBUG MARKER: == sanity test 230e: migrate mulitple local link files === 03:26:20 (1713511580)
[ 4644.420896] Lustre: DEBUG MARKER: == sanity test 230f: migrate mulitple remote link files == 03:26:22 (1713511582)
[ 4646.411596] Lustre: DEBUG MARKER: == sanity test 230g: migrate dir to non-exist MDT ======== 03:26:24 (1713511584)
[ 4648.014301] Lustre: DEBUG MARKER: == sanity test 230h: migrate .. and root ================= 03:26:25 (1713511585)
[ 4649.638145] Lustre: DEBUG MARKER: == sanity test 230i: lfs migrate -m tolerates trailing slashes ========================================================== 03:26:27 (1713511587)
[ 4651.303762] Lustre: DEBUG MARKER: == sanity test 230j: DoM file data not changed after dir migration ========================================================== 03:26:28 (1713511588)
[ 4652.965831] Lustre: DEBUG MARKER: == sanity test 230k: file data not changed after dir migration ========================================================== 03:26:30 (1713511590)
[ 4653.355722] Lustre: DEBUG MARKER: SKIP: sanity test_230k needs >= 4 MDTs
[ 4653.810449] Lustre: DEBUG MARKER: == sanity test 230l: readdir between MDTs won't crash ==== 03:26:31 (1713511591)
[ 4677.945871] Lustre: DEBUG MARKER: == sanity test 230m: xattrs not changed after dir migration ========================================================== 03:26:55 (1713511615)
[ 4680.860520] Lustre: DEBUG MARKER: == sanity test 230n: Dir migration with mirrored file ==== 03:26:58 (1713511618)
[ 4682.626261] Lustre: DEBUG MARKER: == sanity test 230o: dir split =========================== 03:27:00 (1713511620)
[ 4693.795708] Lustre: DEBUG MARKER: == sanity test 230p: dir merge =========================== 03:27:11 (1713511631)
[ 4699.378973] Lustre: DEBUG MARKER: == sanity test 230q: dir auto split ====================== 03:27:17 (1713511637)
[ 4709.673048] Lustre: DEBUG MARKER: == sanity test 230r: migrate with too many local locks === 03:27:27 (1713511647)
[ 4711.561952] Lustre: DEBUG MARKER: == sanity test 230s: lfs mkdir should return -EEXIST if target exists ========================================================== 03:27:29 (1713511649)
[ 4715.351839] Lustre: DEBUG MARKER: == sanity test 230t: migrate directory with project ID set ========================================================== 03:27:32 (1713511652)
[ 4717.942011] Lustre: DEBUG MARKER: == sanity test 230u: migrate directory by QOS ============ 03:27:35 (1713511655)
[ 4718.511124] Lustre: DEBUG MARKER: SKIP: sanity test_230u needs >= 4 MDTs
[ 4719.212141] Lustre: DEBUG MARKER: == sanity test 230v: subdir migrated to the MDT where its parent is located ========================================================== 03:27:36 (1713511656)
[ 4719.819434] Lustre: DEBUG MARKER: SKIP: sanity test_230v needs >= 4 MDTs
[ 4720.518506] Lustre: DEBUG MARKER: == sanity test 230w: non-recursive mode dir migration ==== 03:27:37 (1713511657)
[ 4722.975261] Lustre: DEBUG MARKER: == sanity test 230y: unlink dir with bad hash type ======= 03:27:40 (1713511660)
[ 4724.432138] Lustre: *** cfs_fail_loc=1802, val=0***
[ 4724.939856] Lustre: *** cfs_fail_loc=1802, val=0***
[ 4724.941464] Lustre: Skipped 46 previous similar messages
[ 4728.289758] Lustre: DEBUG MARKER: == sanity test 230z: resume dir migration with bad hash type ========================================================== 03:27:45 (1713511665)
[ 4729.813286] Lustre: *** cfs_fail_loc=1802, val=0***
[ 4729.815081] Lustre: Skipped 21 previous similar messages
[ 4738.704630] Lustre: DEBUG MARKER: == sanity test 231a: checking that reading/writing of BRW RPC size results in one RPC ========================================================== 03:27:56 (1713511676)
[ 4742.202257] Lustre: DEBUG MARKER: == sanity test 231b: must not assert on fully utilized OST request buffer ========================================================== 03:27:59 (1713511679)
[ 4747.091117] Lustre: DEBUG MARKER: == sanity test 232a: failed lock should not block umount ========================================================== 03:28:04 (1713511684)
[ 4747.403827] Lustre: *** cfs_fail_loc=31c, val=0***
[ 4748.477192] Lustre: Failing over lustre-OST0000
[ 4748.481260] LustreError: 16278:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 4748.483515] LustreError: 16278:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 12 previous similar messages
[ 4748.506222] Lustre: server umount lustre-OST0000 complete
[ 4749.638261] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107
[ 4750.718546] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 4750.722981] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4750.781660] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
[ 4750.784762] Lustre: Skipped 2 previous similar messages
[ 4751.828835] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:32098 to 0x280000400:32129
[ 4751.828943] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28154 to 0x0:28193
[ 4751.853357] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 4754.133094] Lustre: DEBUG MARKER: == sanity test 232b: failed data version lock should not block umount ========================================================== 03:28:11 (1713511691)
[ 4754.443092] Lustre: *** cfs_fail_loc=31c, val=0***
[ 4754.445463] LustreError: 24419:0:(ldlm_request.c:490:ldlm_cli_enqueue_local()) ### delayed lvb init failed (rc -12) ns: filter-lustre-OST0000_UUID lock: ffff8800727ac6c0/0xb530840c9a381c12 lrc: 2/0,0 mode: --/PR res: [0x6e22:0x0:0x0].0x0 rrc: 2 type: EXT [0->0] (req 0->0) gid 0 flags: 0x40000000000000 nid: local remote: 0x0 expref: -99 pid: 24419 timeout: 0 lvb_type: 0
[ 4758.086004] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 4758.090424] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 4758.988728] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 4759.934976] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28195 to 0x0:28225
[ 4759.938608] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:32098 to 0x280000400:32161
[ 4761.107318] Lustre: DEBUG MARKER: == sanity test 233a: checking that OBF of the FS root succeeds ========================================================== 03:28:18 (1713511698)
[ 4762.733150] Lustre: DEBUG MARKER: == sanity test 233b: checking that OBF of the FS .lustre succeeds ========================================================== 03:28:20 (1713511700)
[ 4764.382313] Lustre: DEBUG MARKER: == sanity test 234: xattr cache should not crash on ENOMEM ========================================================== 03:28:22 (1713511702)
[ 4766.136293] Lustre: DEBUG MARKER: == sanity test 235: LU-1715: flock deadlock detection does not work properly ========================================================== 03:28:23 (1713511703)
[ 4769.773076] Lustre: DEBUG MARKER: == sanity test 236: Layout swap on open unlinked file ==== 03:28:27 (1713511707)
[ 4771.510169] Lustre: DEBUG MARKER: == sanity test 238: Verify linkea consistency ============ 03:28:29 (1713511709)
[ 4773.130839] Lustre: DEBUG MARKER: == sanity test 239A: osp_sync test ======================= 03:28:30 (1713511710)
[ 4790.470622] Lustre: DEBUG MARKER: == sanity test 239a: process invalid osp sync record correctly ========================================================== 03:28:48 (1713511728)
[ 4791.318587] Lustre: *** cfs_fail_loc=2100, val=0***
[ 4798.380344] Lustre: DEBUG MARKER: == sanity test 239b: process osp sync record with ENOMEM error correctly ========================================================== 03:28:56 (1713511736)
[ 4803.891081] Lustre: DEBUG MARKER: == sanity test 240: race between ldlm enqueue and the connection RPC (no ASSERT) ========================================================== 03:29:01 (1713511741)
[ 4804.443005] Lustre: *** cfs_fail_loc=713, val=0***
[ 4804.443010] Lustre: *** cfs_fail_loc=713, val=0***
[ 4809.463178] Lustre: DEBUG MARKER: == sanity test 241a: bio vs dio ========================== 03:29:07 (1713511747)
[ 4826.292809] Lustre: DEBUG MARKER: == sanity test 241b: dio vs dio ========================== 03:29:23 (1713511763)
[ 4832.266961] Lustre: DEBUG MARKER: == sanity test 242: mdt_readpage failure should not cause directory unreadable ========================================================== 03:29:29 (1713511769)
[ 4832.656619] Lustre: *** cfs_fail_loc=105, val=0***
[ 4834.753001] Lustre: DEBUG MARKER: == sanity test 243: various group lock tests ============= 03:29:32 (1713511772)
[ 4846.139253] Lustre: DEBUG MARKER: == sanity test 244a: sendfile with group lock tests ====== 03:29:43 (1713511783)
[ 4856.110512] Lustre: DEBUG MARKER: == sanity test 244b: multi-threaded write with group lock ========================================================== 03:29:53 (1713511793)
[ 4858.276820] Lustre: DEBUG MARKER: == sanity test 245: check mdc connection flag/data: multiple modify RPCs ========================================================== 03:29:55 (1713511795)
[ 4860.224575] Lustre: DEBUG MARKER: == sanity test 247a: mount subdir as fileset ============= 03:29:57 (1713511797)
[ 4862.498325] Lustre: DEBUG MARKER: == sanity test 247b: mount subdir that dose not exist ==== 03:30:00 (1713511800)
[ 4864.446559] Lustre: DEBUG MARKER: == sanity test 247c: running fid2path outside subdirectory root ========================================================== 03:30:01 (1713511801)
[ 4864.805778] Lustre: lustre-MDT0000: subdir mount '/d247c.sanity' is remote and may be slow
[ 4867.211986] Lustre: DEBUG MARKER: == sanity test 247d: running fid2path inside subdirectory root ========================================================== 03:30:04 (1713511804)
[ 4867.425255] Lustre: lustre-MDT0000: subdir mount '/d247d.sanity' is remote and may be slow
[ 4869.655019] Lustre: DEBUG MARKER: == sanity test 247e: mount .. as fileset ================= 03:30:07 (1713511807)
[ 4871.590629] Lustre: DEBUG MARKER: == sanity test 247f: mount striped or remote directory as fileset ========================================================== 03:30:09 (1713511809)
[ 4872.741341] Lustre: lustre-MDT0000: subdir mount '/d247f.sanity/remote' refused because 'enable_remote_subdir_mount=0': rc = -66
[ 4875.043769] Lustre: lustre-MDT0000: subdir mount '/d247f.sanity/remote' is remote and may be slow
[ 4877.328809] Lustre: DEBUG MARKER: == sanity test 247g: mount striped directory as fileset caches ROOT lookup lock ========================================================== 03:30:14 (1713511814)
[ 4877.732570] Lustre: DEBUG MARKER: SKIP: sanity test_247g needs >= 4 MDTs
[ 4878.235927] Lustre: DEBUG MARKER: == sanity test 248a: fast read verification ============== 03:30:15 (1713511815)
[ 4885.145630] Lustre: DEBUG MARKER: == sanity test 248b: test short_io read and write for both small and large sizes ========================================================== 03:30:22 (1713511822)
[ 4893.514927] Lustre: DEBUG MARKER: == sanity test 249: Write above 2T file size ============= 03:30:31 (1713511831)
[ 4895.548242] Lustre: DEBUG MARKER: == sanity test 250: Write above 16T limit ================ 03:30:33 (1713511833)
[ 4897.363865] Lustre: DEBUG MARKER: == sanity test 251: Handling short read and write correctly ========================================================== 03:30:34 (1713511834)
[ 4899.037927] Lustre: DEBUG MARKER: == sanity test 252: check lr_reader tool ================= 03:30:36 (1713511836)
[ 4901.964000] Lustre: DEBUG MARKER: == sanity test 253: Check object allocation limit ======== 03:30:39 (1713511839)
[ 4913.402323] LustreError: 29985:0:(qmt_pool.c:1406:qmt_pool_add_rem()) add to: can't lustre-QMT0000 lustre-OST0000_UUID pool test_253: rc = -17
[ 4949.872851] LustreError: 7745:0:(lod_qos.c:1362:lod_ost_alloc_specific()) can't lstripe objid [0x200004283:0x30:0x0]: have 0 want 1
[ 4991.109004] LustreError: 32453:0:(qmt_pool.c:1406:qmt_pool_add_rem()) remove: can't lustre-QMT0000 lustre-OST0000_UUID pool test_253: rc = -22
[ 4995.238863] Lustre: DEBUG MARKER: == sanity test 254: Check changelog size ================= 03:32:12 (1713511932)
[ 4996.233854] Lustre: lustre-MDD0000: changelog on
[ 4996.234946] Lustre: Skipped 1 previous similar message
[ 4999.186581] Lustre: lustre-MDD0001: changelog off
[ 4999.188092] Lustre: Skipped 1 previous similar message
[ 5000.539052] Lustre: DEBUG MARKER: SKIP: sanity test_255a skipping excluded test 255a (base 255)
[ 5000.999675] Lustre: DEBUG MARKER: SKIP: sanity test_255b skipping excluded test 255b (base 255)
[ 5001.464027] Lustre: DEBUG MARKER: SKIP: sanity test_255c skipping excluded test 255c (base 255)
[ 5001.923084] Lustre: DEBUG MARKER: SKIP: sanity test_256 skipping excluded test 256
[ 5002.337684] Lustre: DEBUG MARKER: == sanity test 257: xattr locks are not lost ============= 03:32:19 (1713511939)
[ 5002.741546] Lustre: *** cfs_fail_loc=161, val=0***
[ 5003.526332] Lustre: lustre-MDT0001-lwp-OST0001: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 5003.529780] Lustre: Skipped 9 previous similar messages
[ 5005.517138] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5005.611245] Lustre: lustre-MDT0001: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[ 5005.617442] Lustre: lustre-MDT0001: in recovery but waiting for the first client to connect
[ 5005.619132] Lustre: Skipped 2 previous similar messages
[ 5006.539863] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5007.001441] Lustre: lustre-MDT0001: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 5007.003302] Lustre: Skipped 2 previous similar messages
[ 5007.951031] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475
[ 5010.615652] Lustre: lustre-MDT0001-lwp-OST0000: Connection restored to (at 0@lo)
[ 5010.617813] Lustre: Skipped 8 previous similar messages
[ 5010.623419] Lustre: lustre-MDT0001: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted.
[ 5010.625954] Lustre: Skipped 2 previous similar messages
[ 5010.641154] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:34247 to 0x2c0000400:34273
[ 5010.641160] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:34662 to 0x280000400:34689
[ 5015.165445] Lustre: DEBUG MARKER: == sanity test 258a: verify i_mutex security behavior when suid attributes is set ========================================================== 03:32:32 (1713511952)
[ 5017.330359] Lustre: DEBUG MARKER: == sanity test 258b: verify i_mutex security behavior ==== 03:32:34 (1713511954)
[ 5019.680621] Lustre: DEBUG MARKER: == sanity test 259: crash at delayed truncate ============ 03:32:37 (1713511957)
[ 5035.307255] Lustre: *** cfs_fail_loc=2301, val=0***
[ 5036.201017] Lustre: Failing over lustre-OST0000
[ 5036.202476] Lustre: Skipped 2 previous similar messages
[ 5036.206011] LustreError: 5476:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 5036.207995] LustreError: 5476:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 52 previous similar messages
[ 5036.216907] Lustre: server umount lustre-OST0000 complete
[ 5036.218035] Lustre: Skipped 2 previous similar messages
[ 5039.268235] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5039.274156] LDISKFS-fs (dm-2): 1 truncate cleaned up
[ 5039.275735] LDISKFS-fs (dm-2): recovery complete
[ 5039.278666] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5039.325557] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[ 5040.427107] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5041.231263] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:34662 to 0x280000400:34721
[ 5041.231676] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28253 to 0x0:28289
[ 5045.208908] Lustre: DEBUG MARKER: == sanity test 260: Check mdc_close fail ================= 03:33:02 (1713511982)
[ 5047.019803] Lustre: DEBUG MARKER: == sanity test 270a: DoM: basic functionality tests ====== 03:33:04 (1713511984)
[ 5051.202201] Lustre: DEBUG MARKER: == sanity test 270b: DoM: maximum size overflow checks for DoM-only file ========================================================== 03:33:08 (1713511988)
[ 5053.035112] Lustre: DEBUG MARKER: == sanity test 270c: DoM: DoM EA inheritance tests ======= 03:33:10 (1713511990)
[ 5055.487354] Lustre: DEBUG MARKER: == sanity test 270d: DoM: change striping from DoM to RAID0 ========================================================== 03:33:13 (1713511993)
[ 5057.711114] Lustre: DEBUG MARKER: == sanity test 270e: DoM: lfs find with DoM files test === 03:33:15 (1713511995)
[ 5059.587416] Lustre: DEBUG MARKER: == sanity test 270f: DoM: maximum DoM stripe size checks ========================================================== 03:33:17 (1713511997)
[ 5061.247083] Lustre: Increasing provided stripe size to a minimum value 64
[ 5064.465950] Lustre: DEBUG MARKER: == sanity test 270g: DoM: default DoM stripe size depends on free space ========================================================== 03:33:22 (1713512002)
[ 5066.587532] Lustre: *** cfs_fail_loc=168, val=0***
[ 5067.530190] Lustre: *** cfs_fail_loc=168, val=0***
[ 5067.531354] Lustre: Skipped 5 previous similar messages
[ 5068.974654] Lustre: *** cfs_fail_loc=168, val=0***
[ 5068.976230] Lustre: Skipped 8 previous similar messages
[ 5071.458954] Lustre: DEBUG MARKER: == sanity test 270h: DoM: DoM stripe removal when disabled on server ========================================================== 03:33:28 (1713512008)
[ 5073.898830] Lustre: DEBUG MARKER: == sanity test 270i: DoM: setting invalid DoM striping should fail ========================================================== 03:33:31 (1713512011)
[ 5075.637436] Lustre: DEBUG MARKER: == sanity test 271a: DoM: data is cached for read after write ========================================================== 03:33:33 (1713512013)
[ 5077.837515] Lustre: DEBUG MARKER: == sanity test 271b: DoM: no glimpse RPC for stat (DoM only file) ========================================================== 03:33:35 (1713512015)
[ 5079.602446] Lustre: DEBUG MARKER: == sanity test 271ba: DoM: no glimpse RPC for stat (combined file) ========================================================== 03:33:37 (1713512017)
[ 5081.504640] Lustre: DEBUG MARKER: == sanity test 271c: DoM: IO lock at open saves enqueue RPCs ========================================================== 03:33:39 (1713512019)
[ 5108.343206] Lustre: DEBUG MARKER: == sanity test 271d: DoM: read on open (1K file in reply buffer) ========================================================== 03:34:05 (1713512045)
[ 5111.239696] Lustre: DEBUG MARKER: == sanity test 271f: DoM: read on open (200K file and read tail) ========================================================== 03:34:08 (1713512048)
[ 5113.757449] Lustre: DEBUG MARKER: == sanity test 271g: Discard DoM data vs client flush race ========================================================== 03:34:11 (1713512051)
[ 5116.961443] Lustre: DEBUG MARKER: == sanity test 272a: DoM migration: new layout with the same DOM component ========================================================== 03:34:14 (1713512054)
[ 5119.156233] Lustre: DEBUG MARKER: == sanity test 272b: DoM migration: DOM file to the OST-striped file (plain) ========================================================== 03:34:16 (1713512056)
[ 5123.044711] Lustre: DEBUG MARKER: == sanity test 272c: DoM migration: DOM file to the OST-striped file (composite) ========================================================== 03:34:20 (1713512060)
[ 5126.329862] Lustre: DEBUG MARKER: == sanity test 272d: DoM mirroring: OST-striped mirror to DOM file ========================================================== 03:34:23 (1713512063)
[ 5130.009108] Lustre: DEBUG MARKER: == sanity test 272e: DoM mirroring: DOM mirror to the OST-striped file ========================================================== 03:34:27 (1713512067)
[ 5132.862946] Lustre: DEBUG MARKER: == sanity test 272f: DoM migration: OST-striped file to DOM file ========================================================== 03:34:30 (1713512070)
[ 5135.574978] Lustre: DEBUG MARKER: == sanity test 273a: DoM: layout swapping should fail with DOM ========================================================== 03:34:32 (1713512072)
[ 5138.004638] Lustre: DEBUG MARKER: == sanity test 273b: DoM: race writeback and object destroy ========================================================== 03:34:35 (1713512075)
[ 5138.398121] LustreError: 27075:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 16b sleeping for 2000ms
[ 5140.402059] LustreError: 27075:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 16b awake
[ 5142.682363] Lustre: DEBUG MARKER: == sanity test 275: Read on a canceled duplicate lock ==== 03:34:40 (1713512080)
[ 5146.379597] Lustre: DEBUG MARKER: == sanity test 276: Race between mount and obd_statfs ==== 03:34:43 (1713512083)
[ 5149.510671] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 5149.514658] LustreError: Skipped 13 previous similar messages
[ 5149.999490] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5150.004952] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5150.042967] LustreError: 19344:0:(obd_class.h:1061:obd_statfs()) Device 22 not setup
[ 5150.060023] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[ 5151.239646] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5151.599574] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28302 to 0x0:28321
[ 5151.608683] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:34741 to 0x280000400:34785
[ 5152.435350] LustreError: 20495:0:(obd_class.h:1061:obd_statfs()) Device 22 not setup
[ 5155.145201] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5155.150869] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5155.212440] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[ 5156.171795] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5156.510435] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:34741 to 0x280000400:34817
[ 5156.513157] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28302 to 0x0:28353
[ 5157.139022] LustreError: 22974:0:(obd_class.h:1061:obd_statfs()) Device 22 not setup
[ 5159.726810] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5159.733354] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5159.779215] LustreError: 24262:0:(obd_class.h:1061:obd_statfs()) Device 22 not setup
[ 5159.795970] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[ 5160.968903] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5161.627257] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:34741 to 0x280000400:34849
[ 5161.631373] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28302 to 0x0:28385
[ 5164.647793] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5164.651650] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5164.689967] LustreError: 26750:0:(obd_class.h:1061:obd_statfs()) Device 22 not setup
[ 5164.693387] LustreError: 26750:0:(obd_class.h:1061:obd_statfs()) Skipped 1 previous similar message
[ 5165.830606] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5166.466610] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:34741 to 0x280000400:34881
[ 5166.471624] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28302 to 0x0:28417
[ 5169.384319] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5169.387827] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5169.435772] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[ 5169.438191] Lustre: Skipped 1 previous similar message
[ 5170.556066] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5171.092920] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28302 to 0x0:28449
[ 5171.095121] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:34741 to 0x280000400:34913
[ 5174.298261] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5174.301564] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5175.228698] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5176.124181] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28302 to 0x0:28481
[ 5176.128665] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:34741 to 0x280000400:34945
[ 5176.328110] LustreError: 436:0:(obd_class.h:1061:obd_statfs()) Device 22 not setup
[ 5176.332067] LustreError: 436:0:(obd_class.h:1061:obd_statfs()) Skipped 2 previous similar messages
[ 5178.452168] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5178.456800] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5179.481779] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5179.850252] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:34741 to 0x280000400:34977
[ 5179.852841] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28302 to 0x0:28513
[ 5182.631262] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5182.634120] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5183.611795] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5184.539745] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:34741 to 0x280000400:35009
[ 5184.540332] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28302 to 0x0:28545
[ 5187.368300] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5187.373709] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5187.423981] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[ 5187.428481] Lustre: Skipped 3 previous similar messages
[ 5188.320294] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5189.082908] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28302 to 0x0:28577
[ 5189.085228] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:34741 to 0x280000400:35041
[ 5192.328702] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5192.332001] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5193.277427] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5194.211422] LustreError: 10526:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-OST0000: Aborting recovery
[ 5194.212489] LustreError: 10533:0:(obd_class.h:1061:obd_statfs()) Device 22 not setup
[ 5194.212491] LustreError: 10533:0:(obd_class.h:1061:obd_statfs()) Skipped 4 previous similar messages
[ 5194.221497] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 5194.221524] Lustre: 9340:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 5194.222605] Lustre: 9340:0:(ofd_obd.c:554:ofd_postrecov()) lustre-OST0000: auto trigger paused LFSCK failed: rc = -6
[ 5194.232835] Lustre: Skipped 1 previous similar message
[ 5196.910007] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5196.913872] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5197.835101] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5198.812118] LustreError: 13017:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-OST0000: Aborting recovery
[ 5198.814087] Lustre: 11828:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 5198.815974] Lustre: 11828:0:(ldlm_lib.c:2289:target_recovery_overseer()) Skipped 2 previous similar messages
[ 5198.818440] Lustre: 11828:0:(ofd_obd.c:554:ofd_postrecov()) lustre-OST0000: auto trigger paused LFSCK failed: rc = -6
[ 5201.374406] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5201.379239] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5202.476525] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5203.729467] LustreError: 15460:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-OST0000: Aborting recovery
[ 5203.737952] Lustre: 14337:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 5203.743060] Lustre: 14337:0:(ldlm_lib.c:2289:target_recovery_overseer()) Skipped 2 previous similar messages
[ 5203.748104] Lustre: 14337:0:(ofd_obd.c:554:ofd_postrecov()) lustre-OST0000: auto trigger paused LFSCK failed: rc = -6
[ 5206.139648] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5206.144765] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5207.308580] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5208.586167] LustreError: 17930:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-OST0000: Aborting recovery
[ 5208.590138] Lustre: 16804:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 5208.594656] Lustre: 16804:0:(ldlm_lib.c:2289:target_recovery_overseer()) Skipped 2 previous similar messages
[ 5208.599591] Lustre: 16804:0:(ofd_obd.c:554:ofd_postrecov()) lustre-OST0000: auto trigger paused LFSCK failed: rc = -6
[ 5211.361224] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5211.367572] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5212.463005] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5213.584562] LustreError: 20367:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-OST0000: Aborting recovery
[ 5213.592659] Lustre: 19218:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 5213.597204] Lustre: 19218:0:(ldlm_lib.c:2289:target_recovery_overseer()) Skipped 2 previous similar messages
[ 5213.601990] Lustre: 19218:0:(ofd_obd.c:554:ofd_postrecov()) lustre-OST0000: auto trigger paused LFSCK failed: rc = -6
[ 5216.360062] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5216.363047] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5217.236863] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5220.666713] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5220.671160] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5220.718372] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[ 5220.721478] Lustre: Skipped 6 previous similar messages
[ 5221.748840] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5222.644765] LustreError: 25318:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-OST0000: Aborting recovery
[ 5222.647080] LustreError: 25318:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) Skipped 1 previous similar message
[ 5222.649517] Lustre: 24160:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 5222.655619] Lustre: 24160:0:(ldlm_lib.c:2289:target_recovery_overseer()) Skipped 5 previous similar messages
[ 5222.660001] Lustre: 24160:0:(ofd_obd.c:554:ofd_postrecov()) lustre-OST0000: auto trigger paused LFSCK failed: rc = -6
[ 5222.665896] Lustre: 24160:0:(ofd_obd.c:554:ofd_postrecov()) Skipped 1 previous similar message
[ 5225.355760] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5225.363260] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5226.471824] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5227.723376] LustreError: 27779:0:(obd_class.h:1061:obd_statfs()) Device 22 not setup
[ 5227.728988] LustreError: 27779:0:(obd_class.h:1061:obd_statfs()) Skipped 33 previous similar messages
[ 5230.672702] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5230.680781] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5231.807636] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5235.910252] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5235.918131] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5237.235143] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5241.028987] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5241.032161] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 5241.974974] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5244.721216] Lustre: DEBUG MARKER: == sanity test 277: Direct IO shall drop page cache ====== 03:36:22 (1713512182)
[ 5245.068066] Lustre: 25547:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713512141/real 1713512141] req@ffff8800a5922600 x1796745137594176/t0(0) o400->lustre-OST0000-osc-MDT0001@0@lo:28/4 lens 224/224 e 0 to 1 dl 1713512183 ref 1 fl Rpc:XQr/c0/ffffffff rc 0/-1 job:'ldlm_lock_repla.0'
[ 5247.294136] Lustre: DEBUG MARKER: == sanity test 278: Race starting MDS between MDTs stop/start ========================================================== 03:36:24 (1713512184)
[ 5248.368096] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_disconnect to node 0@lo failed: rc = -107
[ 5248.372119] LustreError: 25547:0:(import.c:692:ptlrpc_connect_import_locked()) can't connect to a closed import
[ 5248.419667] LustreError: 3389:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 60c sleeping
[ 5251.291837] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5251.340305] LustreError: 3967:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 60c waking
[ 5251.340800] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 5251.341075] Lustre: Evicted from MGS (at 192.168.203.140@tcp) after server handle changed from 0xb530840c9a196ed6 to 0xb530840c9a42fd08
[ 5251.349627] LustreError: 3389:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 60c awake: rc=2074
[ 5253.392268] Lustre: lustre-OST0000: Received new MDS connection from 0@lo, remove former export from same NID
[ 5253.395330] Lustre: lustre-OST0000: Denying connection for new client lustre-MDT0000-mdtlov_UUID (at 0@lo), waiting for 3 known clients (1 recovered, 0 in progress, and 1 evicted) to recover in 1:31
[ 5254.554193] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5257.847799] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5257.933869] Lustre: lustre-OST0000: Denying connection for new client lustre-MDT0000-mdtlov_UUID (at 0@lo), waiting for 3 known clients (1 recovered, 0 in progress, and 1 evicted) to recover in 1:26
[ 5257.936355] Lustre: lustre-OST0000: Received new MDS connection from 0@lo, remove former export from same NID
[ 5257.940715] Lustre: Skipped 1 previous similar message
[ 5258.963302] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5260.360393] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475
[ 5262.966753] Lustre: lustre-OST0000: Denying connection for new client lustre-MDT0000-mdtlov_UUID (at 0@lo), waiting for 3 known clients (0 recovered, 0 in progress, and 2 evicted) to recover in 1:21
[ 5262.975447] Lustre: Skipped 1 previous similar message
[ 5262.997416] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:34282 to 0x2c0000400:34305
[ 5263.010418] Lustre: lustre-OST0001: deleting orphan objects from 0x0:27736 to 0x0:27777
[ 5267.606563] Lustre: DEBUG MARKER: == sanity test 280: Race between MGS umount and client llog processing ========================================================== 03:36:45 (1713512205)
[ 5267.974282] Lustre: lustre-OST0000: Denying connection for new client lustre-MDT0000-mdtlov_UUID (at 0@lo), waiting for 3 known clients (0 recovered, 0 in progress, and 2 evicted) to recover in 1:16
[ 5267.978478] Lustre: Skipped 1 previous similar message
[ 5268.188981] LustreError: 3975:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 15e sleeping
[ 5269.983372] LustreError: 6324:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 15e waking
[ 5269.985928] LustreError: 3975:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 15e awake: rc=3204
[ 5270.993558] LustreError: 6324:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 15e waking
[ 5272.982284] Lustre: lustre-OST0000: Denying connection for new client lustre-MDT0001-mdtlov_UUID (at 0@lo), waiting for 3 known clients (0 recovered, 0 in progress, and 2 evicted) to recover in 1:11
[ 5273.850977] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5273.886506] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 5273.891026] Lustre: Evicted from MGS (at 192.168.203.140@tcp) after server handle changed from 0xb530840c9a42fd08 to 0xb530840c9a430209
[ 5273.898872] LustreError: 6925:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 15e waking
[ 5274.974413] LustreError: 6925:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 15e waking
[ 5275.980802] LustreError: 6925:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 15e waking
[ 5277.992506] LustreError: 6925:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 15e waking
[ 5277.994733] LustreError: 6925:0:(libcfs_fail.h:180:cfs_race()) Skipped 1 previous similar message
[ 5281.002791] Lustre: lustre-OST0000: Denying connection for new client lustre-MDT0001-mdtlov_UUID (at 0@lo), waiting for 3 known clients (0 recovered, 0 in progress, and 2 evicted) to recover in 1:03
[ 5281.007966] Lustre: Skipped 6 previous similar messages
[ 5282.003619] LustreError: 6954:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 15e waking
[ 5282.007493] LustreError: 6954:0:(libcfs_fail.h:180:cfs_race()) Skipped 3 previous similar messages
[ 5284.017308] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 5284.021164] Lustre: Skipped 3 previous similar messages
[ 5285.327482] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 5286.034261] Lustre: lustre-OST0001: deleting orphan objects from 0x0:27736 to 0x0:27809
[ 5290.115848] LustreError: 6925:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 15e waking
[ 5290.117501] LustreError: 6925:0:(libcfs_fail.h:180:cfs_race()) Skipped 5 previous similar messages
[ 5294.381883] Lustre: DEBUG MARKER: == sanity test 300a: basic striped dir sanity test ======= 03:37:11 (1713512231)
[ 5296.634329] Lustre: DEBUG MARKER: == sanity test 300b: check ctime/mtime for striped dir === 03:37:14 (1713512234)
[ 5297.144185] Lustre: lustre-OST0000: Denying connection for new client 2d502242-cb93-4bed-b22e-3e67a666c0c6 (at 192.168.203.40@tcp), waiting for 3 known clients (0 recovered, 0 in progress, and 2 evicted) to recover in 0:47
[ 5297.148202] Lustre: Skipped 7 previous similar messages
[ 5319.088114] Lustre: DEBUG MARKER: == sanity test 300c: chown
[ 5331.078283] Lustre: lustre-OST0000: Denying connection for new client lustre-MDT0001-mdtlov_UUID (at 0@lo), waiting for 3 known clients (0 recovered, 0 in progress, and 2 evicted) to recover in 0:13
[ 5331.083236] Lustre: Skipped 19 previous similar messages
[ 5340.554505] Lustre: DEBUG MARKER: == sanity test 300d: check default stripe under striped directory ========================================================== 03:37:58 (1713512278)
[ 5341.154551] Lustre: DEBUG MARKER: sanity test_300d: @@@@@@ FAIL: wrong stripe 1 for /mnt/lustre/d300d.sanity/striped_dir/f2
[ 5342.558657] Lustre: DEBUG MARKER: == sanity test 300e: check rename under striped directory ========================================================== 03:38:00 (1713512280)
[ 5344.421049] Lustre: DEBUG MARKER: == sanity test 300f: check rename cross striped directory ========================================================== 03:38:02 (1713512282)
[ 5344.759003] Lustre: lustre-OST0000: recovery is timed out, evict stale exports
[ 5344.761727] Lustre: lustre-OST0000: disconnecting 1 stale clients
[ 5346.104107] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:34741 to 0x280000400:35073
[ 5346.104473] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28302 to 0x0:28609
[ 5346.219714] Lustre: DEBUG MARKER: == sanity test 300g: check default striped directory for normal directory ========================================================== 03:38:03 (1713512283)
[ 5348.992276] Lustre: DEBUG MARKER: == sanity test 300h: check default striped directory for striped directory ========================================================== 03:38:06 (1713512286)
[ 5352.371989] Lustre: DEBUG MARKER: == sanity test 300i: client handle unknown hash type striped directory ========================================================== 03:38:09 (1713512289)
[ 5355.891768] Lustre: DEBUG MARKER: == sanity test 300j: test large update record ============ 03:38:13 (1713512293)
[ 5358.172453] Lustre: DEBUG MARKER: == sanity test 300k: test large striped directory ======== 03:38:15 (1713512295)
[ 5360.877021] Lustre: DEBUG MARKER: == sanity test 300l: non-root user to create dir under striped dir with stale layout ========================================================== 03:38:18 (1713512298)
[ 5363.118103] Lustre: DEBUG MARKER: == sanity test 300m: setstriped directory on single MDT FS ========================================================== 03:38:20 (1713512300)
[ 5363.651551] Lustre: DEBUG MARKER: SKIP: sanity test_300m Only for single MDT
[ 5364.363024] Lustre: DEBUG MARKER: == sanity test 300n: non-root user to create dir under striped dir with default EA ========================================================== 03:38:21 (1713512301)
[ 5368.600290] Lustre: DEBUG MARKER: SKIP: sanity test_300o skipping SLOW test 300o
[ 5368.995190] Lustre: DEBUG MARKER: == sanity test 300p: create striped directory without space ========================================================== 03:38:26 (1713512306)
[ 5369.288479] Lustre: *** cfs_fail_loc=1704, val=0***
[ 5369.289606] LustreError: 7372:0:(out_handler.c:910:out_tx_end()) lustre-MDT0001-osd: undo for /home/green/git/lustre-release/lustre/ptlrpc/../../lustre/target/out_handler.c:445: rc = -524
[ 5370.835445] Lustre: DEBUG MARKER: == sanity test 300q: create remote directory under orphan directory ========================================================== 03:38:28 (1713512308)
[ 5372.455028] Lustre: DEBUG MARKER: == sanity test 300r: test -1 striped directory =========== 03:38:30 (1713512310)
[ 5372.488392] LustreError: 6967:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) lustre-MDT0001-osp-MDT0000: fail to cancel 1 llog-records: rc = -2
[ 5372.493330] LustreError: 6967:0:(llog_cat.c:789:llog_cat_cancel_records()) lustre-MDT0001-osp-MDT0000: fail to cancel 1 of 1 llog-records: rc = -2
[ 5374.083633] Lustre: DEBUG MARKER: == sanity test 300s: test lfs mkdir -c without -i ======== 03:38:31 (1713512311)
[ 5375.953483] Lustre: DEBUG MARKER: == sanity test 300t: test max_mdt_stripecount ============ 03:38:33 (1713512313)
[ 5378.422717] Lustre: DEBUG MARKER: == sanity test 310a: open unlink remote file ============= 03:38:36 (1713512316)
[ 5380.340708] Lustre: DEBUG MARKER: == sanity test 310b: unlink remote file with multiple links while open ========================================================== 03:38:37 (1713512317)
[ 5382.649677] Lustre: DEBUG MARKER: == sanity test 310c: open-unlink remote file with multiple links ========================================================== 03:38:40 (1713512320)
[ 5383.263977] Lustre: DEBUG MARKER: SKIP: sanity test_310c needs >= 4 MDTs
[ 5383.963653] Lustre: DEBUG MARKER: == sanity test 311: disable OSP precreate, and unlink should destroy objs ========================================================== 03:38:41 (1713512321)
[ 5399.260270] Lustre: DEBUG MARKER: == sanity test 312: make sure ZFS adjusts its block size by write pattern ========================================================== 03:38:56 (1713512336)
[ 5399.809263] Lustre: DEBUG MARKER: SKIP: sanity test_312 the test only applies to zfs
[ 5400.231121] Lustre: DEBUG MARKER: == sanity test 313: io should fail after last_rcvd update fail ========================================================== 03:38:57 (1713512337)
[ 5400.582682] Lustre: *** cfs_fail_loc=720, val=0***
[ 5400.585045] LustreError: 23785:0:(osd_handler.c:2091:osd_trans_stop()) lustre-OST0000: failed in transaction hook: rc = -5
[ 5400.589486] LustreError: 23785:0:(ofd_objects.c:987:ofd_object_punch()) lustre-OST0000: failed to stop transaction: rc = -5
[ 5402.561076] Lustre: DEBUG MARKER: == sanity test 314: OSP shouldn't fail after last_rcvd update failure ========================================================== 03:39:00 (1713512340)
[ 5403.317809] Lustre: *** cfs_fail_loc=720, val=0***
[ 5403.317874] LustreError: 24420:0:(osd_handler.c:2091:osd_trans_stop()) lustre-OST0001: failed in transaction hook: rc = -5
[ 5403.317899] LustreError: 24420:0:(ofd_objects.c:1054:ofd_destroy()) lustre-OST0001 failed to stop transaction: -5
[ 5403.317943] LustreError: 24420:0:(ofd_dev.c:1837:ofd_destroy_hdl()) lustre-OST0001: error destroying object [0x100010000:0x76d1:0x0]: -5
[ 5403.325845] Lustre: Skipped 1 previous similar message
[ 5411.097632] Lustre: *** cfs_fail_loc=720, val=0***
[ 5411.100081] Lustre: Skipped 1 previous similar message
[ 5411.102945] LustreError: 23785:0:(osd_handler.c:2091:osd_trans_stop()) lustre-OST0001: failed in transaction hook: rc = -5
[ 5411.109936] LustreError: 23785:0:(osd_handler.c:2091:osd_trans_stop()) Skipped 2 previous similar messages
[ 5415.366205] Lustre: DEBUG MARKER: == sanity test 315: read should be accounted ============= 03:39:12 (1713512352)
[ 5419.959656] Lustre: DEBUG MARKER: == sanity test 316: lfs migrate of file with large_xattr enabled ========================================================== 03:39:17 (1713512357)
[ 5422.302336] Lustre: DEBUG MARKER: == sanity test 317: Verify blocks get correctly update after truncate ========================================================== 03:39:19 (1713512359)
[ 5425.208104] Lustre: DEBUG MARKER: == sanity test 318: Verify async readahead tunables ====== 03:39:22 (1713512362)
[ 5427.748051] Lustre: DEBUG MARKER: == sanity test 319: lost lease lock on migrate error ===== 03:39:25 (1713512365)
[ 5435.277588] Lustre: DEBUG MARKER: == sanity test 398a: direct IO should cancel lock otherwise lockless ========================================================== 03:39:32 (1713512372)
[ 5437.995410] Lustre: DEBUG MARKER: == sanity test 398b: DIO and buffer IO race ============== 03:39:35 (1713512375)
[ 5442.543296] Lustre: DEBUG MARKER: == sanity test 398c: run fio to test AIO ================= 03:39:40 (1713512380)
[ 5452.622965] Lustre: DEBUG MARKER: == sanity test 398d: run aiocp to verify block size > stripe size ========================================================== 03:39:50 (1713512390)
[ 5456.968456] Lustre: DEBUG MARKER: == sanity test 398e: O_Direct open cleared by fcntl doesn't cause hang ========================================================== 03:39:54 (1713512394)
[ 5458.655708] Lustre: DEBUG MARKER: == sanity test 398f: verify aio handles ll_direct_rw_pages errors correctly ========================================================== 03:39:56 (1713512396)
[ 5460.765228] Lustre: DEBUG MARKER: == sanity test 398g: verify parallel dio async RPC submission ========================================================== 03:39:58 (1713512398)
[ 5461.208281] LustreError: 23784:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 214 sleeping for 2000ms
[ 5461.212242] LustreError: 23784:0:(fail.c:138:__cfs_fail_timeout_set()) Skipped 7 previous similar messages
[ 5463.209930] LustreError: 18217:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 214 awake
[ 5463.209934] LustreError: 21354:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 214 awake
[ 5463.219084] LustreError: 18217:0:(fail.c:149:__cfs_fail_timeout_set()) Skipped 1 previous similar message
[ 5463.434095] LustreError: 29689:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 214 sleeping for 2000ms
[ 5465.436020] LustreError: 29689:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 214 awake
[ 5465.437707] LustreError: 29689:0:(fail.c:149:__cfs_fail_timeout_set()) Skipped 4 previous similar messages
[ 5465.447791] LustreError: 29689:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 214 sleeping for 2000ms
[ 5469.557023] LustreError: 29689:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 214 awake
[ 5469.560305] LustreError: 29689:0:(fail.c:149:__cfs_fail_timeout_set()) Skipped 1 previous similar message
[ 5469.571307] LustreError: 29689:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 214 sleeping for 2000ms
[ 5469.574975] LustreError: 29689:0:(fail.c:138:__cfs_fail_timeout_set()) Skipped 1 previous similar message
[ 5477.898992] LustreError: 29689:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 214 awake
[ 5477.900750] LustreError: 29689:0:(fail.c:149:__cfs_fail_timeout_set()) Skipped 3 previous similar messages
[ 5477.909577] LustreError: 29689:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 214 sleeping for 2000ms
[ 5477.911440] LustreError: 29689:0:(fail.c:138:__cfs_fail_timeout_set()) Skipped 3 previous similar messages
[ 5482.166400] Lustre: DEBUG MARKER: == sanity test 398h: verify correctness of read
[ 5486.114708] Lustre: DEBUG MARKER: == sanity test 398i: verify parallel dio handles ll_direct_rw_pages errors correctly ========================================================== 03:40:23 (1713512423)
[ 5489.030317] Lustre: DEBUG MARKER: == sanity test 398j: test parallel dio where stripe size > rpc_size ========================================================== 03:40:26 (1713512426)
[ 5492.673122] Lustre: DEBUG MARKER: == sanity test 398k: test enospc on first stripe ========= 03:40:30 (1713512430)
[ 5506.682696] Lustre: DEBUG MARKER: SKIP: sanity test_398k 7027128 > 600000 skipping out-of-space test on OST0
[ 5507.282111] Lustre: DEBUG MARKER: == sanity test 398l: test enospc on intermediate stripe/RPC ========================================================== 03:40:44 (1713512444)
[ 5512.685649] Lustre: DEBUG MARKER: SKIP: sanity test_398l 7004072 > 600000 skipping out-of-space test on OST0
[ 5513.386062] Lustre: DEBUG MARKER: == sanity test 398m: test RPC failures with parallel dio ========================================================== 03:40:50 (1713512450)
[ 5513.868690] Lustre: *** cfs_fail_loc=20e, val=0***
[ 5513.871092] Lustre: Skipped 3 previous similar messages
[ 5514.889789] Lustre: *** cfs_fail_loc=20e, val=0***
[ 5514.893963] Lustre: Skipped 7 previous similar messages
[ 5516.891297] Lustre: *** cfs_fail_loc=20e, val=0***
[ 5516.891303] Lustre: *** cfs_fail_loc=20e, val=0***
[ 5516.891307] Lustre: Skipped 1 previous similar message
[ 5523.467296] Lustre: *** cfs_fail_loc=20e, val=0***
[ 5523.468831] Lustre: Skipped 8 previous similar messages
[ 5534.479149] Lustre: *** cfs_fail_loc=20e, val=0***
[ 5534.480624] Lustre: Skipped 7 previous similar messages
[ 5558.494202] Lustre: *** cfs_fail_loc=20e, val=0***
[ 5558.495933] Lustre: Skipped 11 previous similar messages
[ 5590.752991] Lustre: *** cfs_fail_loc=20f, val=0***
[ 5590.755250] Lustre: Skipped 35 previous similar messages
[ 5661.416151] Lustre: *** cfs_fail_loc=20e, val=0***
[ 5661.417410] Lustre: Skipped 52 previous similar messages
[ 5740.457322] Lustre: DEBUG MARKER: == sanity test 398n: test append with parallel DIO ======= 03:44:37 (1713512677)
[ 5744.667554] Lustre: DEBUG MARKER: == sanity test 399a: fake write should not be slower than normal write ========================================================== 03:44:42 (1713512682)
[ 5760.653886] Lustre: *** cfs_fail_loc=238, val=0***
[ 5778.946594] Lustre: DEBUG MARKER: sanity test_399a: @@@@@@ IGNORE (env=kvm): fake write is slower
[ 5782.761235] Lustre: DEBUG MARKER: == sanity test 399b: fake read should not be slower than normal read ========================================================== 03:45:20 (1713512720)
[ 5789.435866] Lustre: DEBUG MARKER: SKIP: sanity test_400a skipping excluded test 400a
[ 5790.005672] Lustre: DEBUG MARKER: == sanity test 400b: packaged headers can be compiled ==== 03:45:27 (1713512727)
[ 5792.085484] Lustre: DEBUG MARKER: == sanity test 401a: Verify if 'lctl list_param -R' can list parameters recursively ========================================================== 03:45:29 (1713512729)
[ 5794.613372] Lustre: DEBUG MARKER: == sanity test 401b: Verify 'lctl get_param' set_param' continue after error ========================================================== 03:45:32 (1713512732)
[ 5797.061731] Lustre: DEBUG MARKER: == sanity test 401c: Verify 'lctl set_param' without value fails in either format.
========================================================== 03:45:34 (1713512734) [ 5798.867560] Lustre: DEBUG MARKER: == sanity test 401d: Verify 'lctl set_param' accepts values containing '=' ========================================================== 03:45:36 (1713512736) [ 5800.650666] Lustre: DEBUG MARKER: == sanity test 401e: verify 'lctl get_param' works with NID in parameter ========================================================== 03:45:38 (1713512738) [ 5802.281392] Lustre: DEBUG MARKER: == sanity test 402: Return ENOENT to lod_generate_and_set_lovea ========================================================== 03:45:39 (1713512739) [ 5802.591097] Lustre: *** cfs_fail_loc=15c, val=0*** [ 5802.592543] Lustre: Skipped 2 previous similar messages [ 5802.594231] LustreError: 28014:0:(lod_lov.c:816:lod_gen_component_ea()) lustre-MDT0000-mdtlov: Can not locate [0x100010000:0x76e9:0x0]: rc = -2 [ 5804.197666] Lustre: DEBUG MARKER: == sanity test 403: i_nlink should not drop to zero due to aliasing ========================================================== 03:45:41 (1713512741) [ 5806.603125] Lustre: DEBUG MARKER: == sanity test 404: validate manual {de}activated works properly for OSPs ========================================================== 03:45:44 (1713512744) [ 5807.266084] Lustre: setting import lustre-OST0000_UUID INACTIVE by administrator request [ 5807.906586] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5807.916902] Lustre: Skipped 30 previous similar messages [ 5807.920636] Lustre: lustre-OST0000: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting [ 5807.926578] LustreError: 167-0: lustre-OST0000-osc-MDT0000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. [ 5807.932917] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to (at 0@lo) [ 5807.933252] Lustre: lustre-OST0000: deleting orphan objects from 0x0:28684 to 0x0:28705 [ 5807.942334] Lustre: Skipped 34 previous similar messages [ 5808.503058] Lustre: setting import lustre-OST0001_UUID INACTIVE by administrator request [ 5809.140704] Lustre: lustre-OST0001: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting [ 5809.148479] LustreError: 167-0: lustre-OST0001-osc-MDT0000: This client was evicted by lustre-OST0001; in progress operations using this service will fail. [ 5809.155831] Lustre: lustre-OST0001: deleting orphan objects from 0x0:30442 to 0x0:30465 [ 5809.740412] Lustre: setting import lustre-OST0000_UUID INACTIVE by administrator request [ 5810.270552] Lustre: lustre-OST0000: Client lustre-MDT0001-mdtlov_UUID (at 0@lo) reconnecting [ 5810.275156] LustreError: 167-0: lustre-OST0000-osc-MDT0001: This client was evicted by lustre-OST0000; in progress operations using this service will fail. 
[ 5810.286065] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:36090 to 0x280000400:36129
[ 5811.474325] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:36787 to 0x2c0000400:36833
[ 5813.487946] Lustre: DEBUG MARKER: == sanity test 405: Various layout swap lock tests ======= 03:45:51 (1713512751)
[ 5817.745520] Lustre: DEBUG MARKER: SKIP: sanity test_405 layout swap does not support DOM files so far
[ 5818.381354] Lustre: DEBUG MARKER: == sanity test 406: DNE support fs default striping ====== 03:45:55 (1713512755)
[ 5821.916826] LustreError: 28463:0:(qmt_pool.c:1406:qmt_pool_add_rem()) add to: can't lustre-QMT0000 lustre-OST0000_UUID pool test_406: rc = -17
[ 5825.860088] LustreError: 28680:0:(qmt_pool.c:1406:qmt_pool_add_rem()) remove: can't lustre-QMT0000 lustre-OST0000_UUID pool test_406: rc = -22
[ 5825.865800] LustreError: 28680:0:(qmt_pool.c:1406:qmt_pool_add_rem()) Skipped 1 previous similar message
[ 5835.949744] Lustre: DEBUG MARKER: SKIP: sanity test_407 skipping ALWAYS excluded test 407
[ 5836.390037] Lustre: DEBUG MARKER: == sanity test 408: drop_caches should not hang due to page leaks ========================================================== 03:46:14 (1713512774)
[ 5838.726352] Lustre: DEBUG MARKER: == sanity test 409: Large amount of cross-MDTs hard links on the same file ========================================================== 03:46:16 (1713512776)
[ 5851.834424] Lustre: DEBUG MARKER: == sanity test 410: Test inode number returned from kernel thread ========================================================== 03:46:29 (1713512789)
[ 5854.037531] Lustre: DEBUG MARKER: == sanity test 411: Slab allocation error with cgroup does not LBUG ========================================================== 03:46:31 (1713512791)
[ 5856.711089] Lustre: DEBUG MARKER: == sanity test 412: mkdir on specific MDTs =============== 03:46:34 (1713512794)
[ 5859.179661] Lustre: DEBUG MARKER: == sanity test 413a: QoS mkdir with 'lfs mkdir -i -1' ==== 03:46:36 (1713512796)
[ 5966.444777] Lustre: DEBUG MARKER: == sanity test 413b: QoS mkdir under dir whose default LMV starting MDT offset is -1 ========================================================== 03:48:24 (1713512904)
[ 5986.816261] Lustre: DEBUG MARKER: == sanity test 413c: mkdir with default LMV max inherit rr ========================================================== 03:48:44 (1713512924)
[ 6006.329504] Lustre: DEBUG MARKER: == sanity test 413d: inherit ROOT default LMV ============ 03:49:03 (1713512943)
[ 6010.323041] Lustre: DEBUG MARKER: == sanity test 413e: check default max-inherit value ===== 03:49:07 (1713512947)
[ 6012.047791] Lustre: DEBUG MARKER: == sanity test 413f: lfs getdirstripe -D list ROOT default LMV if it's not set on dir ========================================================== 03:49:09 (1713512949)
[ 6013.864671] Lustre: DEBUG MARKER: == sanity test 413z: 413 test cleanup ==================== 03:49:11 (1713512951)
[ 6020.159885] Lustre: DEBUG MARKER: == sanity test 414: simulate ENOMEM in ptlrpc_register_bulk() ========================================================== 03:49:17 (1713512957)
[ 6021.889598] Lustre: DEBUG MARKER: == sanity test 415: lock revoke is not missing =========== 03:49:19 (1713512959)
[ 6029.165194] Lustre: DEBUG MARKER: == sanity test 416: transaction start failure won't cause system hung ========================================================== 03:49:26 (1713512966)
[ 6029.407822] Lustre: *** cfs_fail_loc=19a, val=0***
[ 6029.510303] LustreError: 6948:0:(llog_cat.c:753:llog_cat_cancel_arr_rec()) lustre-OST0001-osc-MDT0000: fail to cancel 1 llog-records: rc = -5
[ 6029.513325] LustreError: 6948:0:(llog_cat.c:789:llog_cat_cancel_records()) lustre-OST0001-osc-MDT0000: fail to cancel 1 of 1 llog-records: rc = -5
[ 6029.516815] LustreError: 6948:0:(osp_sync.c:1088:osp_sync_process_committed()) lustre-OST0001-osc-MDT0000: can't cancel record: rc = -5
[ 6030.976986] Lustre: DEBUG MARKER: == sanity test 417: disable remote dir, striped dir and dir migration ========================================================== 03:49:28 (1713512968)
[ 6036.765276] Lustre: DEBUG MARKER: == sanity test 418: df and lfs df outputs match ========== 03:49:34 (1713512974)
[ 6059.679596] Lustre: DEBUG MARKER: == sanity test 419: Verify open file by name doesn't crash kernel ========================================================== 03:49:57 (1713512997)
[ 6061.560609] Lustre: DEBUG MARKER: == sanity test 420: clear SGID bit on non-directories for non-members ========================================================== 03:49:59 (1713512999)
[ 6063.904460] Lustre: DEBUG MARKER: == sanity test 421a: simple rm by fid ==================== 03:50:01 (1713513001)
[ 6066.031440] Lustre: DEBUG MARKER: == sanity test 421b: rm by fid on open file ============== 03:50:03 (1713513003)
[ 6068.236137] Lustre: DEBUG MARKER: == sanity test 421c: rm by fid against hardlinked files == 03:50:05 (1713513005)
[ 6071.529149] Lustre: DEBUG MARKER: == sanity test 421d: rmfid en masse ====================== 03:50:09 (1713513009)
[ 6087.029215] Lustre: DEBUG MARKER: == sanity test 421e: rmfid in DNE ======================== 03:50:24 (1713513024)
[ 6091.505419] Lustre: DEBUG MARKER: == sanity test 421f: rmfid checks permissions ============ 03:50:29 (1713513029)
[ 6093.763483] Lustre: DEBUG MARKER: == sanity test 421g: rmfid to return errors properly ===== 03:50:31 (1713513031)
[ 6097.962362] Lustre: DEBUG MARKER: == sanity test 422: kill a process with RPC in progress == 03:50:35 (1713513035)
[ 6098.970503] LustreError: 27059:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 50a sleeping for 50000ms
[ 6118.988048] Lustre: lustre-MDT0001: Client a59cd3f3-d032-473b-8305-c75468fa43cf (at 192.168.203.40@tcp) reconnecting
[ 6118.992915] Lustre: Skipped 1 previous similar message
[ 6139.006742] Lustre: lustre-MDT0001: Client a59cd3f3-d032-473b-8305-c75468fa43cf (at 192.168.203.40@tcp) reconnecting
[ 6139.009450] Lustre: Skipped 1 previous similar message
[ 6139.093937] Lustre: mdt00_000: service thread pid 27059 was inactive for 40.123 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[ 6139.098573] Pid: 27059, comm: mdt00_000 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 6139.100799] Call Trace:
[ 6139.101948] [<0>] __cfs_fail_timeout_set+0xe1/0x200 [libcfs]
[ 6139.103285] [<0>] ptlrpc_server_handle_request+0x1fc/0xc30 [ptlrpc]
[ 6139.104846] [<0>] ptlrpc_main+0xbd9/0x15f0 [ptlrpc]
[ 6139.105979] [<0>] kthread+0xe4/0xf0
[ 6139.107107] [<0>] ret_from_fork_nospec_begin+0x7/0x21
[ 6139.108791] [<0>] 0xfffffffffffffffe
[ 6141.398021] Lustre: mdt00_004: service thread pid 29441 was inactive for 40.122 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[ 6141.404260] Pid: 29441, comm: mdt00_004 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 6141.407079] Call Trace:
[ 6141.408275] [<0>] __cfs_fail_timeout_set+0xe1/0x200 [libcfs]
[ 6141.409895] [<0>] mdd_rename+0x159/0x1be0 [mdd]
[ 6141.410852] [<0>] mdt_reint_rename+0x1bec/0x2880 [mdt]
[ 6141.411938] [<0>] mdt_reint_rec+0x87/0x240 [mdt]
[ 6141.412987] [<0>] mdt_reint_internal+0x76c/0xb50 [mdt]
[ 6141.415135] [<0>] mdt_reint+0x67/0x150 [mdt]
[ 6141.416552] [<0>] tgt_request_handle+0x93a/0x19c0 [ptlrpc]
[ 6141.417793] [<0>] ptlrpc_server_handle_request+0x250/0xc30 [ptlrpc]
[ 6141.419522] [<0>] ptlrpc_main+0xbd9/0x15f0 [ptlrpc]
[ 6141.420666] [<0>] kthread+0xe4/0xf0
[ 6141.421592] [<0>] ret_from_fork_nospec_begin+0x7/0x21
[ 6141.423137] [<0>] 0xfffffffffffffffe
[ 6148.374927] LustreError: 29441:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 722 awake
[ 6148.376681] LustreError: 29441:0:(fail.c:149:__cfs_fail_timeout_set()) Skipped 1 previous similar message
[ 6159.018190] Lustre: lustre-MDT0001: Client a59cd3f3-d032-473b-8305-c75468fa43cf (at 192.168.203.40@tcp) reconnecting
[ 6159.020330] Lustre: Skipped 1 previous similar message
[ 6161.435899] Lustre: DEBUG MARKER: == sanity test 423: statfs should return a right data ==== 03:51:39 (1713513099)
[ 6165.073755] Lustre: DEBUG MARKER: == sanity test 424: simulate ENOMEM in ptl_send_rpc bulk reply ME attach ========================================================== 03:51:42 (1713513102)
[ 6167.234224] Lustre: DEBUG MARKER: == sanity test 425: lock count should not exceed lru size ========================================================== 03:51:44 (1713513104)
[ 6176.652023] Lustre: DEBUG MARKER: == sanity test 426: splice test on Lustre ================ 03:51:54 (1713513114)
[ 6178.735023] Lustre: DEBUG MARKER: == sanity test 427: Failed DNE2 update request shouldn't corrupt updatelog ========================================================== 03:51:56 (1713513116)
[ 6179.423357] LustreError: 5146:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 1708 sleeping
[ 6179.431442] LustreError: 6456:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 1708 waking
[ 6179.435187] LustreError: 5146:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 1708 awake: rc=4993
[ 6179.437859] LustreError: 5146:0:(libcfs_fail.h:180:cfs_race()) cfs_fail_race id 1708 waking
[ 6179.535041] LustreError: 6456:0:(llog_cat.c:604:llog_cat_add_rec()) llog_write_rec -116: lh=ffff88012ef16400
[ 6179.538645] LustreError: 6456:0:(update_trans.c:1062:top_trans_stop()) lustre-MDT0000-osp-MDT0001: write updates failed: rc = -116
[ 6181.904834] Lustre: Failing over lustre-MDT0001
[ 6181.906044] Lustre: Skipped 23 previous similar messages
[ 6182.438749] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping)
[ 6182.440859] Lustre: Skipped 2 previous similar messages
[ 6184.057399] Lustre: lustre-MDT0001: Not available for connect from 192.168.203.40@tcp (stopping)
[ 6187.446381] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping)
[ 6187.448652] Lustre: Skipped 2 previous similar messages
[ 6187.500610] LustreError: 16454:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 6187.505537] LustreError: 16454:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 68 previous similar messages
[ 6187.540544] Lustre: server umount lustre-MDT0001 complete
[ 6187.541638] Lustre: Skipped 23 previous similar messages
[ 6189.064624] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 192.168.203.40@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 6189.071336] LustreError: Skipped 30 previous similar messages
[ 6200.344547] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6200.441735] Lustre: lustre-MDT0001: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[ 6200.447749] Lustre: Skipped 4 previous similar messages
[ 6200.455092] Lustre: lustre-MDT0001: in recovery but waiting for the first client to connect
[ 6200.460710] Lustre: Skipped 38 previous similar messages
[ 6201.470232] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 6201.676090] Lustre: lustre-MDT0001: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 6201.679151] Lustre: Skipped 17 previous similar messages
[ 6205.452066] Lustre: lustre-MDT0001: Recovery over after 0:04, of 2 clients 2 recovered and 0 were evicted.
[ 6205.454892] Lustre: Skipped 17 previous similar messages
[ 6205.468164] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:38591 to 0x280000400:38625
[ 6207.024125] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid
[ 6207.395101] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 6209.222579] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 40
[ 6211.328470] Lustre: DEBUG MARKER: == sanity test 428: large block size IO should not hang == 03:52:28 (1713513148)
[ 6215.502854] Lustre: DEBUG MARKER: == sanity test 429: verify if opencache flag on client side does work ========================================================== 03:52:33 (1713513153)
[ 6217.329035] Lustre: DEBUG MARKER: == sanity test 430a: lseek: SEEK_DATA/SEEK_HOLE basic functionality ========================================================== 03:52:34 (1713513154)
[ 6220.059055] Lustre: DEBUG MARKER: == sanity test 430b: lseek: SEEK_DATA/SEEK_HOLE special cases ========================================================== 03:52:37 (1713513157)
[ 6302.787489] Lustre: DEBUG MARKER: == sanity test 430c: lseek: external tools check ========= 03:54:00 (1713513240)
[ 6304.617789] Lustre: DEBUG MARKER: == sanity test 431: Restart transaction for IO =========== 03:54:02 (1713513242)
[ 6304.972874] Lustre: *** cfs_fail_loc=251, val=0***
[ 6308.138576] Lustre: DEBUG MARKER: == sanity test 432: mv dir from outside Lustre =========== 03:54:05 (1713513245)
[ 6353.710376] Lustre: DEBUG MARKER: == sanity test 434: Client should not send RPCs for security.selinux with SElinux disabled ========================================================== 03:54:51 (1713513291)
[ 6356.741183] Lustre: DEBUG MARKER: == sanity test 801a: write barrier user interfaces and stat machine ========================================================== 03:54:54 (1713513294)
[ 6357.785785] LustreError: 22123:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 2202 sleeping for 5000ms
[ 6357.788847] LustreError: 22123:0:(fail.c:138:__cfs_fail_timeout_set()) Skipped 1 previous similar message
[ 6360.291949] LustreError: 22123:0:(fail.c:144:__cfs_fail_timeout_set()) cfs_fail_timeout interrupted
[ 6384.690029] LustreError: 22716:0:(fail.c:144:__cfs_fail_timeout_set()) cfs_fail_timeout interrupted
[ 6385.516204] Lustre: *** cfs_fail_loc=2203, val=0***
[ 6385.518803] Lustre: Skipped 1 previous similar message
[ 6388.762132] Lustre: DEBUG MARKER: == sanity test 801b: modification will be blocked by write barrier ========================================================== 03:55:26 (1713513326)
[ 6399.096539] Lustre: DEBUG MARKER: == sanity test 801c: rescan barrier bitmap =============== 03:55:36 (1713513336)
[ 6400.400933] Lustre: Failing over lustre-MDT0001
[ 6400.439106] LustreError: 24792:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 6400.441533] LustreError: 24792:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 21 previous similar messages
[ 6400.475062] Lustre: server umount lustre-MDT0001 complete
[ 6400.854800] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_statfs to node 0@lo failed: rc = -107
[ 6400.858941] LustreError: Skipped 40 previous similar messages
[ 6405.259060] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6405.386796] Lustre: lustre-MDT0001: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180
[ 6405.398481] Lustre: lustre-MDT0001: in recovery but waiting for the first client to connect
[ 6405.407621] mount.lustre (25855) used greatest stack depth: 9616 bytes left
[ 6406.434857] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 6406.675059] Lustre: lustre-MDT0001: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 6409.891137] Lustre: DEBUG MARKER: == sanity test 802a: simulate readonly device ============ 03:55:47 (1713513347)
[ 6410.391718] Lustre: lustre-MDT0001-lwp-OST0000: Connection restored to 192.168.203.140@tcp (at 0@lo)
[ 6410.395188] Lustre: Skipped 6 previous similar messages
[ 6410.401645] Lustre: lustre-MDT0001: Recovery over after 0:04, of 2 clients 2 recovered and 0 were evicted.
[ 6410.406997] Lustre: DEBUG MARKER: SKIP: sanity test_802a ZFS specific test
[ 6410.420537] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:38640 to 0x280000400:38657
[ 6410.421225] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:39287 to 0x2c0000400:39329
[ 6411.116474] Lustre: DEBUG MARKER: == sanity test 802b: be able to set MDTs to readonly ===== 03:55:48 (1713513348)
[ 6416.290607] Lustre: DEBUG MARKER: == sanity test 803a: verify agent object for remote object ========================================================== 03:55:53 (1713513353)
[ 6431.735295] Lustre: DEBUG MARKER: == sanity test 803b: remote object can getattr from cache ========================================================== 03:56:09 (1713513369)
[ 6434.399809] Lustre: DEBUG MARKER: == sanity test 804: verify agent entry for remote entry == 03:56:12 (1713513372)
[ 6436.473084] Lustre: lustre-MDT0000: Not available for connect from 192.168.203.40@tcp (stopping)
[ 6440.406553] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 6440.411580] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 6440.417410] Lustre: Skipped 9 previous similar messages
[ 6441.480668] Lustre: lustre-MDT0000: Not available for connect from 192.168.203.40@tcp (stopping)
[ 6441.484497] Lustre: Skipped 4 previous similar messages
[ 6444.842246] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6444.889545] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 6444.893280] Lustre: Evicted from MGS (at 192.168.203.140@tcp) after server handle changed from 0xb530840c9a430209 to 0xb530840c9a638eb0
[ 6444.965953] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 6446.304527] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 6450.005997] Lustre: lustre-OST0000: deleting orphan objects from 0x0:31047 to 0x0:31073
[ 6450.006076] Lustre: lustre-OST0001: deleting orphan objects from 0x0:32804 to 0x0:32833
[ 6454.827104] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6456.289595] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 6458.731320] Lustre: DEBUG MARKER: == sanity test 805: ZFS can remove from full fs ========== 03:56:36 (1713513396)
[ 6459.596017] Lustre: DEBUG MARKER: SKIP: sanity test_805 ZFS specific test
[ 6459.980710] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:39287 to 0x2c0000400:39361
[ 6459.981623] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:38659 to 0x280000400:38689
[ 6460.176246] Lustre: DEBUG MARKER: == sanity test 806: Verify Lazy Size on MDS ============== 03:56:37 (1713513397)
[ 6463.230703] Lustre: DEBUG MARKER: == sanity test 807: verify LSOM syncing tool ============= 03:56:40 (1713513400)
[ 6464.119449] Lustre: lustre-MDD0000: changelog on
[ 6464.122370] Lustre: Skipped 1 previous similar message
[ 6467.138999] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing cancel_lru_locks osc
[ 6473.069183] Lustre: lustre-MDD0000: changelog off
[ 6473.070822] Lustre: Skipped 1 previous similar message
[ 6477.055452] Lustre: DEBUG MARKER: == sanity test 808: Check trusted.som xattr not logged in Changelogs ========================================================== 03:56:54 (1713513414)
[ 6483.827613] Lustre: DEBUG MARKER: == sanity test 809: Verify no SOM xattr store for DoM-only files ========================================================== 03:57:01 (1713513421)
[ 6486.289074] Lustre: DEBUG MARKER: == sanity test 810: partial page writes on ZFS (LU-11663) ========================================================== 03:57:03 (1713513423)
[ 6489.592838] Lustre: DEBUG MARKER: set checksum type to crc32c, rc = 0
[ 6490.149960] Lustre: DEBUG MARKER: == sanity test 812a: do not drop reqs generated when imp is going to idle (LU-11951) ========================================================== 03:57:07 (1713513427)
[ 6492.139555] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid 40
[ 6492.586359] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid in FULL state after 0 sec
[ 6494.654121] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state CONNECTING osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid 40
[ 6506.584833] LustreError: 13366:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 245 sleeping for 8000ms
[ 6506.593337] LustreError: 13366:0:(fail.c:138:__cfs_fail_timeout_set()) Skipped 1 previous similar message
[ 6514.600107] LustreError: 13366:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 245 awake
[ 6514.604442] LustreError: 13366:0:(fail.c:149:__cfs_fail_timeout_set()) Skipped 1 previous similar message
[ 6517.477576] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid in CONNECTING state after 22 sec
[ 6517.799999] LustreError: 13366:0:(fail.c:144:__cfs_fail_timeout_set()) cfs_fail_timeout interrupted
[ 6519.557044] Lustre: DEBUG MARKER: == sanity test 812b: do not drop no resend request for idle connect ========================================================== 03:57:37 (1713513457)
[ 6521.063275] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid 40
[ 6521.407570] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid in FULL state after 0 sec
[ 6522.948522] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state CONNECTING osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid 40
[ 6542.748268] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid in CONNECTING state after 19 sec
[ 6543.039966] LustreError: 19283:0:(fail.c:144:__cfs_fail_timeout_set()) cfs_fail_timeout interrupted
[ 6544.752844] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid 40
[ 6567.733484] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid in IDLE state after 22 sec
[ 6569.735666] Lustre: DEBUG MARKER: == sanity test 812c: idle import vs lock enqueue race ==== 03:58:27 (1713513507)
[ 6571.524175] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid 40
[ 6572.018480] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid in FULL state after 0 sec
[ 6589.702551] Lustre: DEBUG MARKER: == sanity test 813: File heat verfication ================ 03:58:47 (1713513527)
[ 6717.940264] Lustre: DEBUG MARKER: == sanity test 814: sparse cp works as expected (LU-12361) ========================================================== 04:00:55 (1713513655)
[ 6720.232691] Lustre: DEBUG MARKER: == sanity test 815: zero byte tiny write doesn't hang (LU-12382) ========================================================== 04:00:57 (1713513657)
[ 6722.291016] Lustre: DEBUG MARKER: == sanity test 816: do not reset lru_resize on idle reconnect ========================================================== 04:00:59 (1713513659)
[ 6723.827691] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid 40
[ 6724.328415] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid in FULL state after 0 sec
[ 6726.286344] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid 40
[ 6748.228882] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff880097e2a800.ost_server_uuid in IDLE state after 21 sec
[ 6750.204765] Lustre: DEBUG MARKER: == sanity test 817: nfsd won't cache write lock for exec file ========================================================== 04:01:27 (1713513687)
[ 6752.636817] Lustre: DEBUG MARKER: == sanity test 818: unlink with failed llog ============== 04:01:30 (1713513690)
[ 6753.311487] Lustre: Failing over lustre-MDT0000
[ 6753.313092] Lustre: Skipped 2 previous similar messages
[ 6753.343049] LustreError: 8340:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 6753.345491] LustreError: 8340:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 58 previous similar messages
[ 6753.371100] Lustre: server umount lustre-MDT0000 complete
[ 6753.372136] Lustre: Skipped 2 previous similar messages
[ 6755.414371] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 6755.417517] LustreError: Skipped 1 previous similar message
[ 6756.401230] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6756.456289] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 6756.462299] Lustre: Evicted from MGS (at 192.168.203.140@tcp) after server handle changed from 0xb530840c9a638eb0 to 0xb530840c9a63abcf
[ 6756.527820] Lustre: *** cfs_fail_loc=2105, val=0***
[ 6756.548121] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 6756.551120] Lustre: Skipped 2 previous similar messages
[ 6757.364162] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 6757.368085] Lustre: Skipped 2 previous similar messages
[ 6757.751733] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 6761.529939] LustreError: 9016:0:(osp_sync.c:1236:osp_sync_thread()) can't get appropriate context
[ 6761.547301] Lustre: lustre-MDT0000: Recovery over after 0:04, of 2 clients 2 recovered and 0 were evicted.
[ 6761.552686] Lustre: Skipped 2 previous similar messages
[ 6761.568646] Lustre: lustre-OST0000: deleting orphan objects from 0x0:31084 to 0x0:31105
[ 6761.568655] Lustre: lustre-OST0001: deleting orphan objects from 0x0:32839 to 0x0:32865
[ 6761.570721] LustreError: 27060:0:(osp_sync.c:341:osp_sync_declare_add()) logging isn't available, run LFSCK
[ 6773.558090] Lustre: 25550:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713513705/real 1713513705] req@ffff88009885aac0 x1796745142271936/t0(0) o400->MGC192.168.203.140@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1713513712 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:2.0'
[ 6773.572952] Lustre: 25550:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 1 previous similar message
[ 6773.576966] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 6777.357501] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6779.599740] Lustre: Evicted from MGS (at 192.168.203.140@tcp) after server handle changed from 0xb530840c9a63abcf to 0xb530840c9a63b08a
[ 6780.818413] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 6784.726702] Lustre: lustre-OST0000: deleting orphan objects from 0x0:31084 to 0x0:31137
[ 6784.726710] Lustre: lustre-OST0001: deleting orphan objects from 0x0:32839 to 0x0:32897
[ 6786.556539] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 6787.100599] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 6788.914910] Lustre: DEBUG MARKER: == sanity test 819a: too big niobuf in read ============== 04:02:06 (1713513726)
[ 6789.192910] Lustre: *** cfs_fail_loc=248, val=0***
[ 6790.998375] Lustre: DEBUG MARKER: == sanity test 819b: too big niobuf in write ============= 04:02:08 (1713513728)
[ 6791.226081] Lustre: *** cfs_fail_loc=248, val=0***
[ 6791.230785] LustreError: 18217:0:(sec.c:2543:sptlrpc_svc_unwrap_bulk()) @@@ truncated bulk GET 1048576(1052672) req@ffff8800988597c0 x1796745199387520/t0(0) o4->a59cd3f3-d032-473b-8305-c75468fa43cf@192.168.203.40@tcp:465/0 lens 488/448 e 0 to 0 dl 1713513735 ref 1 fl Interpret:/0/0 rc 0/0 job:'dd.0'
[ 6791.236257] Lustre: lustre-OST0000: Bulk IO write error with a59cd3f3-d032-473b-8305-c75468fa43cf (at 192.168.203.40@tcp), client will retry: rc = -110
[ 6798.237331] Lustre: lustre-OST0000: Client a59cd3f3-d032-473b-8305-c75468fa43cf (at 192.168.203.40@tcp) reconnecting
[ 6801.803995] Lustre: DEBUG MARKER: == sanity test 820: update max EA from open intent ======= 04:02:19 (1713513739)
[ 6811.750007] Lustre: 25550:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713513743/real 1713513743] req@ffff88008c7ef6c0 x1796745142286464/t0(0) o13->lustre-OST0001-osc-MDT0000@0@lo:7/4 lens 224/368 e 0 to 1 dl 1713513750 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'osp-pre-1-0.0'
[ 6812.664746] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 6812.670180] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 6812.763025] Lustre: 25549:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713513744/real 1713513744] req@ffff8800693db900 x1796745142287808/t0(0) o13->lustre-OST0001-osc-MDT0001@0@lo:7/4 lens 224/368 e 0 to 1 dl 1713513751 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'osp-pre-1-1.0'
[ 6812.771821] Lustre: 25549:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 1 previous similar message
[ 6812.775514] Lustre: lustre-OST0000: Not available for connect from 0@lo (not set up)
[ 6812.778878] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 6812.783705] LustreError: Skipped 49 previous similar messages
[ 6814.016793] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 6816.962835] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5
[ 6816.966612] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 6818.009615] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:38691 to 0x280000400:38721
[ 6818.010125] Lustre: lustre-OST0000: deleting orphan objects from 0x0:31139 to 0x0:31169
[ 6818.010539] Lustre: lustre-OST0001: deleting orphan objects from 0x0:32899 to 0x0:32929
[ 6818.020435] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:39363 to 0x2c0000400:39393
[ 6818.081957] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 6821.996848] Lustre: DEBUG MARKER: == sanity test 822: test precreate failure =============== 04:02:39 (1713513759)
[ 6829.881969] Lustre: *** cfs_fail_loc=532, val=2147492104***
[ 6829.883547] Lustre: 10506:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1713513768/real 1713513768] req@ffff8800a22a5f00 x1796745142299520/t0(0) o5->lustre-OST0000-osc-MDT0000@0@lo:28/4 lens 432/432 e 0 to 1 dl 1713513811 ref 2 fl Rpc:ReXNQ/0/ffffffff rc 0/-1 job:'osp-pre-0-0.0'
[ 6829.889346] Lustre: 10506:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 1 previous similar message
[ 6829.891532] LustreError: 10506:0:(osp_precreate.c:677:osp_precreate_send()) lustre-OST0000-osc-MDT0000: can't precreate: rc = -5
[ 6829.891850] Lustre: lustre-OST0000: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting
[ 6829.895984] LustreError: 10506:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 2108 sleeping for 2000ms
[ 6829.898001] LustreError: 10506:0:(fail.c:138:__cfs_fail_timeout_set()) Skipped 2 previous similar messages
[ 6830.498974] LustreError: 10506:0:(fail.c:144:__cfs_fail_timeout_set()) cfs_fail_timeout interrupted
[ 6830.503991] Lustre: lustre-OST0000: deleting orphan objects from 0x0:31202 to 0x0:31233
[ 6832.571809] Lustre: DEBUG MARKER: == sanity test 823: Setting create_count > OST_MAX_PRECREATE is lowered to maximum ========================================================== 04:02:50 (1713513770)
[ 6834.380349] Lustre: DEBUG MARKER: setting create_count to 100200:
[ 6834.753117] Lustre: DEBUG MARKER: -result- count: 9984 with max: 20000, expecting: 9984
[ 6837.100699] Lustre: DEBUG MARKER: == sanity test 831: throttling unlink/setattr queuing on OSP ========================================================== 04:02:54 (1713513774)
[ 6842.818369] Lustre: 5866:0:(osp_sync.c:318:osp_sync_declare_add()) lustre-OST0000-osc-MDT0000: queued changes counter exceeds limit 101 > 100
[ 6843.826227] Lustre: 5866:0:(osp_sync.c:318:osp_sync_declare_add()) lustre-OST0000-osc-MDT0000: queued changes counter exceeds limit 101 > 100
[ 6845.835604] Lustre: 28014:0:(osp_sync.c:318:osp_sync_declare_add()) lustre-OST0000-osc-MDT0000: queued changes counter exceeds limit 101 > 100
[ 6849.010610] Lustre: 28014:0:(osp_sync.c:318:osp_sync_declare_add()) lustre-OST0000-osc-MDT0000: queued changes counter exceeds limit 101 > 100
[ 6849.478043] Lustre: 25548:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713513743/real 1713513743] req@ffff88009d1184c0 x1796745142286080/t0(0) o400->lustre-OST0001-osc-MDT0001@0@lo:28/4 lens 224/224 e 0 to 1 dl 1713513787 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:1.0'
[ 6849.493029] Lustre: 25548:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 3 previous similar messages
[ 6855.151132] Lustre: 5831:0:(osp_sync.c:318:osp_sync_declare_add()) lustre-OST0000-osc-MDT0000: queued changes counter exceeds limit 101 > 100
[ 6855.156480] Lustre: 5831:0:(osp_sync.c:318:osp_sync_declare_add()) Skipped 2 previous similar messages
[ 6864.190355] Lustre: 29441:0:(osp_sync.c:318:osp_sync_declare_add()) lustre-OST0000-osc-MDT0000: queued changes counter exceeds limit 101 > 100
[ 6864.192960] Lustre: 29441:0:(osp_sync.c:318:osp_sync_declare_add()) Skipped 4 previous similar messages
[ 6882.764700] Lustre: 5831:0:(osp_sync.c:318:osp_sync_declare_add()) lustre-OST0000-osc-MDT0000: queued changes counter exceeds limit 101 > 100
[ 6882.767016] Lustre: 5831:0:(osp_sync.c:318:osp_sync_declare_add()) Skipped 8 previous similar messages
[ 6914.843249] Lustre: DEBUG MARKER: == sanity test 900: umount should not race with any mgc requeue thread ========================================================== 04:04:12 (1713513852)
[ 6919.926404] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 6919.929143] LustreError: Skipped 2 previous similar messages
[ 6927.046147] Lustre: 25550:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713513858/real 1713513858] req@ffff88012bdb5580 x1796745142380608/t0(0) o400->MGC192.168.203.140@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1713513865 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:2.0'
[ 6927.059163] Lustre: 25550:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 6 previous similar messages
[ 6927.063474] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 6930.710296] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 6933.086675] Lustre: Evicted from MGS (at 192.168.203.140@tcp) after server handle changed from 0xb530840c9a63b08a to 0xb530840c9a64d54f
[ 6934.020260] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 6938.192818] Lustre: lustre-OST0001: deleting orphan objects from 0x0:32899 to 0x0:32961
[ 6938.192828] Lustre: lustre-OST0000: deleting orphan objects from 0x0:32234 to 0x0:41249
[ 6939.867928] Lustre: DEBUG MARKER: oleg340-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 6940.368981] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 6998.262849] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 6998.266363] Lustre: Skipped 2 previous similar messages
[ 7005.261940] LustreError: 12030:0:(ldlm_lockd.c:2521:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713513943 with bad export cookie 13056080509411841359
[ 7005.263854] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 7005.280330] LustreError: 12030:0:(ldlm_lockd.c:2521:ldlm_cancel_handler()) Skipped 4 previous similar messages
[ 7014.623031] Lustre: 24510:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713513947/real 1713513947] req@ffff88009885da40 x1796745142416128/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713513953 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
[ 7014.646261] LustreError: 24510:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 7014.652161] LustreError: 24510:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 67 previous similar messages
[ 7014.729234] Lustre: server umount lustre-OST0000 complete
[ 7014.732773] Lustre: Skipped 6 previous similar messages
[ 7026.495945] device-mapper: core: cleaned up
[ 7029.924822] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing unload_modules_local
[ 7030.663742] Key type lgssc unregistered
[ 7030.733291] LNet: 26053:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[ 7030.738254] LNet: Removed LNI 192.168.203.140@tcp
[ 7036.661696] LNet: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 1
[ 7036.668000] alg: No test for adler32 (adler32-zlib)
[ 7037.446458] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing load_modules_local
[ 7037.726101] Lustre: Lustre: Build Version: 2.15.4_18_g9f02020
[ 7037.879687] LNet: Added LNI 192.168.203.140@tcp [8/256/0/180]
[ 7037.881176] LNet: Accept secure, port 988
[ 7039.429065] Key type lgssc registered
[ 7039.875254] Lustre: Echo OBD driver; http://www.lustre.org/
[ 7045.829117] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing load_modules_local
[ 7047.479120] device-mapper: uevent: version 1.0.3
[ 7047.481568] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel@redhat.com
[ 7050.196016] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 7051.339281] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 7051.354508] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 7052.659327] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 7056.335098] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 7056.352643] LustreError: 137-5: lustre-MDT0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 7056.449055] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180
[ 7057.835997] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 7061.333384] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 7061.336699] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 7061.418851] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
[ 7062.411435] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 7062.417590] LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 7062.419028] Lustre: lustre-OST0000: deleting orphan objects from 0x280000400:38691 to 0x280000400:48737
[ 7065.746035] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5
[ 7065.748577] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro,no_mbcache,nodelalloc
[ 7065.784250] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180
[ 7066.785258] Lustre: lustre-OST0000: deleting orphan objects from 0x0:32234 to 0x0:41281
[ 7066.790308] Lustre: lustre-OST0001: deleting orphan objects from 0x0:32899 to 0x0:32993
[ 7066.794090] Lustre: lustre-OST0001: deleting orphan objects from 0x2c0000400:39363 to 0x2c0000400:49409
[ 7066.824932] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
[ 7070.688806] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 7072.096111] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[ 7079.746734] Lustre: DEBUG MARKER: == sanity test 901: don't leak a mgc lock on client umount ========================================================== 04:06:57 (1713514017)
[ 7083.834419] Lustre: DEBUG MARKER: == sanity test 902: test short write doesn't hang lustre ========================================================== 04:07:01 (1713514021)
[ 7086.373270] Lustre: DEBUG MARKER: == sanity test 903: Test long page discard does not cause evictions ========================================================== 04:07:03 (1713514023)
[ 7131.606084] Lustre: ll_ost00_004: service thread pid 32153 was inactive for 40.142 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[ 7131.614363] Pid: 32153, comm: ll_ost00_004 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 7131.618156] Call Trace:
[ 7131.619602] [<0>] ldlm_completion_ast+0x7f7/0xa50 [ptlrpc]
[ 7131.622236] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc]
[ 7131.624938] [<0>] ofd_destroy_by_fid+0x1d1/0x520 [ofd]
[ 7131.627338] [<0>] ofd_destroy_hdl+0x20f/0xa80 [ofd]
[ 7131.629723] [<0>] tgt_request_handle+0x93a/0x19c0 [ptlrpc]
[ 7131.632345] [<0>] ptlrpc_server_handle_request+0x250/0xc30 [ptlrpc]
[ 7131.635320] [<0>] ptlrpc_main+0xbd9/0x15f0 [ptlrpc]
[ 7131.637649] [<0>] kthread+0xe4/0xf0
[ 7131.639363] [<0>] ret_from_fork_nospec_begin+0x7/0x21
[ 7131.641735] [<0>] 0xfffffffffffffffe
[ 7225.454203] Lustre: DEBUG MARKER: == sanity test 904: virtual project ID xattr ============= 04:09:22 (1713514162)
[ 7230.104924] Lustre: DEBUG MARKER: == sanity test 907: verify the format of some stats files ========================================================== 04:09:27 (1713514167)
[ 7234.008400] Lustre: DEBUG MARKER: == sanity test complete, duration 7044 sec =============== 04:09:31 (1713514171)
[ 7311.846389] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 7311.850231] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 7311.855655] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 7314.135959] LustreError: 8803:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 7314.188277] Lustre: server umount lustre-MDT0000 complete
[ 7317.190548] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 7317.202603] LustreError: Skipped 3 previous similar messages
[ 7317.235836] LustreError: 8674:0:(ldlm_lockd.c:2521:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713514255 with bad export cookie 3393130111947232691
[ 7317.237231] LustreError: 166-1: MGC192.168.203.140@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 7317.247272] LustreError: 8674:0:(ldlm_lockd.c:2521:ldlm_cancel_handler()) Skipped 4 previous similar messages
[ 7317.279243] LustreError: 9404:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 7317.281643] LustreError: 9404:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 13 previous similar messages
[ 7317.355507] Lustre: server umount lustre-MDT0001 complete
[ 7326.470025] Lustre: 9997:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713514258/real 1713514258] req@ffff88007ab0bdc0 x1796749632608192/t0(0) o39->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713514264 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
[ 7326.478328] LustreError: 9997:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 7326.480191] LustreError: 9997:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 5 previous similar messages
[ 7326.489035] Lustre: server umount lustre-OST0000 complete
[ 7334.397019] Lustre: 10597:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713514266/real 1713514266] req@ffff880071f7a600 x1796749632608704/t0(0) o39->lustre-MDT0001-lwp-OST0001@0@lo:12/10 lens 224/368 e 0 to 1 dl 1713514272 ref 2 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'umount.0'
[ 7334.413625] LustreError: 10597:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
[ 7334.417688] LustreError: 10597:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 3 previous similar messages
[ 7334.494107] Lustre: server umount lustre-OST0001 complete
[ 7336.687867] device-mapper: core: cleaned up
[ 7339.820665] Lustre: DEBUG MARKER: oleg340-server.virtnet: executing unload_modules_local
[ 7340.425143] Key type lgssc unregistered
[ 7340.491322] LNet: 11535:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[ 7340.495760] LNet: Removed LNI 192.168.203.140@tcp