[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.10.0-7.9-debug (green@centos7-base) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) ) #1 SMP Sat Mar 26 23:28:42 EDT 2022
[    0.000000] Command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffcdfff] usable
[    0.000000] BIOS-e820: [mem 0x00000000bffce000-0x00000000bfffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000013edfffff] usable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 3.0.0 present.
[    0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-1.fc39 04/01/2014
[    0.000000] Hypervisor detected: KVM
[    0.000000] e820: last_pfn = 0x13ee00 max_arch_pfn = 0x400000000
[    0.000000] PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC
[    0.000000] e820: last_pfn = 0xbffce max_arch_pfn = 0x400000000
[    0.000000] found SMP MP-table at [mem 0x000f53f0-0x000f53ff] mapped at [ffffffffff2003f0]
[    0.000000] Using GB pages for direct mapping
[    0.000000] RAMDISK: [mem 0xbc2e2000-0xbffbffff]
[    0.000000] Early table checksum verification disabled
[    0.000000] ACPI: RSDP 00000000000f5200 00014 (v00 BOCHS )
[    0.000000] ACPI: RSDT 00000000bffe1d87 00034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.000000] ACPI: FACP 00000000bffe1c23 00074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.000000] ACPI: DSDT 00000000bffe0040 01BE3 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.000000] ACPI: FACS 00000000bffe0000 00040
[    0.000000] ACPI: APIC 00000000bffe1c97 00090 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.000000] ACPI: HPET 00000000bffe1d27 00038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.000000] ACPI: WAET 00000000bffe1d5f 00028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.000000] No NUMA configuration found
[    0.000000] Faking a node at [mem 0x0000000000000000-0x000000013edfffff]
[    0.000000] NODE_DATA(0) allocated [mem 0x13e5e3000-0x13e609fff]
[    0.000000] Reserving 128MB of memory at 768MB for crashkernel (System RAM: 4077MB)
[    0.000000] kvm-clock: cpu 0, msr 1:3e592001, primary cpu clock
[    0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[    0.000000] kvm-clock: using sched offset of 268865728 cycles
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   [mem 0x100000000-0x13edfffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x0009efff]
[    0.000000]   node   0: [mem 0x00100000-0xbffcdfff]
[    0.000000]   node   0: [mem 0x100000000-0x13edfffff]
[    0.000000] Initmem setup node 0 [mem 0x00001000-0x13edfffff]
[    0.000000] ACPI: PM-Timer IO Port: 0x608
[    0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[    0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
[    0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
[    0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[    0.000000] PM: Registered nosave memory: [mem 0xbffce000-0xbfffffff]
[    0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
[    0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
[    0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
[    0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[    0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on KVM
[    0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
[    0.000000] percpu: Embedded 38 pages/cpu s115176 r8192 d32280 u524288
[    0.000000] KVM setup async PF for cpu 0
[    0.000000] kvm-stealtime: cpu 0, msr 13e2135c0
[    0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 1027487
[    0.000000] Policy zone: Normal
[    0.000000] Kernel command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[    0.000000] audit: disabled (until reboot)
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
[    0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form
[    0.000000] Memory: 3820268k/5224448k available (8172k kernel code, 1049168k absent, 355012k reserved, 5773k data, 2532k init)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000] 	RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=4.
[    0.000000] 	Offload RCU callbacks from all CPUs
[    0.000000] 	Offload RCU callbacks from CPUs: 0-3.
[    0.000000] NR_IRQS:327936 nr_irqs:456 0
[    0.000000] Console: colour *CGA 80x25
[    0.000000] console [ttyS1] enabled
[    0.000000] allocated 25165824 bytes of page_cgroup
[    0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[    0.000000] kmemleak: Kernel memory leak detector disabled
[    0.000000] tsc: Detected 2399.998 MHz processor
[    0.382716] Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
[    0.384285] pid_max: default: 32768 minimum: 301
[    0.385343] Security Framework initialized
[    0.386182] SELinux:  Initializing.
[    0.388133] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
[    0.390933] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    0.392678] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes)
[    0.394030] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes)
[    0.395937] Initializing cgroup subsys memory
[    0.396766] Initializing cgroup subsys devices
[    0.397576] Initializing cgroup subsys freezer
[    0.398365] Initializing cgroup subsys net_cls
[    0.399157] Initializing cgroup subsys blkio
[    0.399929] Initializing cgroup subsys perf_event
[    0.400745] Initializing cgroup subsys hugetlb
[    0.401557] Initializing cgroup subsys pids
[    0.402347] Initializing cgroup subsys net_prio
[    0.403416] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[    0.405439] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[    0.406813] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
[    0.408035] tlb_flushall_shift: 6
[    0.408754] FEATURE SPEC_CTRL Present
[    0.409439] FEATURE IBPB_SUPPORT Present
[    0.410281] Spectre V2 : Enabling Indirect Branch Prediction Barrier
[    0.411554] Spectre V2 : Vulnerable
[    0.412295] Speculative Store Bypass: Vulnerable
[    0.413770] debug: unmapping init [mem 0xffffffff82019000-0xffffffff8201ffff]
[    0.419877] ACPI: Core revision 20130517
[    0.422517] ACPI: All ACPI Tables successfully acquired
[    0.424271] ftrace: allocating 30294 entries in 119 pages
[    0.468780] Enabling x2apic
[    0.469376] Enabled x2apic
[    0.470162] Switched APIC routing to physical x2apic.
[    0.472881] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.473960] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (fam: 06, model: 3e, stepping: 04)
[    0.476020] Performance Events: IvyBridge events, full-width counters, Intel PMU driver.
[    0.478611] ... version:                2
[    0.479714] ... bit width:              48
[    0.480505] ... generic registers:      4
[    0.481634] ... value mask:             0000ffffffffffff
[    0.482577] ... max period:             00007fffffffffff
[    0.484073] ... fixed-purpose events:   3
[    0.484972] ... event mask:             000000070000000f
[    0.485967] KVM setup paravirtual spinlock
[    0.488546] smpboot: Booting Node   0, Processors  #1
[    0.489690] kvm-clock: cpu 1, msr 1:3e592041, secondary cpu clock
[    0.492045] KVM setup async PF for cpu 1
[    0.492948] kvm-stealtime: cpu 1, msr 13e2935c0
 #2
[    0.494737] kvm-clock: cpu 2, msr 1:3e592081, secondary cpu clock
[    0.497279] KVM setup async PF for cpu 2
[    0.497786] kvm-clock: cpu 3, msr 1:3e5920c1, secondary cpu clock
 #3 OK
[    0.499696] kvm-stealtime: cpu 2, msr 13e3135c0
[    0.500941] Brought up 4 CPUs
[    0.500956] KVM setup async PF for cpu 3
[    0.500962] kvm-stealtime: cpu 3, msr 13e3935c0
[    0.502884] smpboot: Max logical packages: 1
[    0.503669] smpboot: Total of 4 processors activated (19199.98 BogoMIPS)
[    0.506454] devtmpfs: initialized
[    0.507206] x86/mm: Memory block size: 128MB
[    0.510486] EVM: security.selinux
[    0.511154] EVM: security.ima
[    0.511652] EVM: security.capability
[    0.513724] atomic64 test passed for x86-64 platform with CX8 and with SSE
[    0.515111] NET: Registered protocol family 16
[    0.516081] cpuidle: using governor haltpoll
[    0.517086] ACPI: bus type PCI registered
[    0.517817] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    0.519052] PCI: Using configuration type 1 for base access
[    0.520120] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on
[    0.526499] ACPI: Added _OSI(Module Device)
[    0.527775] ACPI: Added _OSI(Processor Device)
[    0.529029] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.530389] ACPI: Added _OSI(Processor Aggregator Device)
[    0.531917] ACPI: Added _OSI(Linux-Dell-Video)
[    0.536850] ACPI: Interpreter enabled
[    0.537941] ACPI: (supports S0 S3 S4 S5)
[    0.539018] ACPI: Using IOAPIC for interrupt routing
[    0.540463] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.543033] ACPI: Enabled 2 GPEs in block 00 to 0F
[    0.550167] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    0.551981] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[    0.553940] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[    0.555936] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[    0.559996] acpiphp: Slot [2] registered
[    0.561282] acpiphp: Slot [5] registered
[    0.562539] acpiphp: Slot [6] registered
[    0.563715] acpiphp: Slot [7] registered
[    0.565058] acpiphp: Slot [8] registered
[    0.566216] acpiphp: Slot [9] registered
[    0.567450] acpiphp: Slot [10] registered
[    0.568777] acpiphp: Slot [3] registered
[    0.569996] acpiphp: Slot [4] registered
[    0.571292] acpiphp: Slot [11] registered
[    0.572768] acpiphp: Slot [12] registered
[    0.573993] acpiphp: Slot [13] registered
[    0.575229] acpiphp: Slot [14] registered
[    0.576433] acpiphp: Slot [15] registered
[    0.577612] acpiphp: Slot [16] registered
[    0.578789] acpiphp: Slot [17] registered
[    0.579999] acpiphp: Slot [18] registered
[    0.581352] acpiphp: Slot [19] registered
[    0.582606] acpiphp: Slot [20] registered
[    0.583820] acpiphp: Slot [21] registered
[    0.585060] acpiphp: Slot [22] registered
[    0.586508] acpiphp: Slot [23] registered
[    0.587654] acpiphp: Slot [24] registered
[    0.588745] acpiphp: Slot [25] registered
[    0.589944] acpiphp: Slot [26] registered
[    0.591170] acpiphp: Slot [27] registered
[    0.592393] acpiphp: Slot [28] registered
[    0.593615] acpiphp: Slot [29] registered
[    0.594773] acpiphp: Slot [30] registered
[    0.595991] acpiphp: Slot [31] registered
[    0.597169] PCI host bridge to bus 0000:00
[    0.598486] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
[    0.600371] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[    0.602344] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[    0.604992] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
[    0.607391] pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38007fffffff window]
[    0.610079] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.623049] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
[    0.625079] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
[    0.626916] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
[    0.629560] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
[    0.632375] pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
[    0.634454] pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
[    0.777904] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[    0.779119] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[    0.780236] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[    0.782879] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[    0.784012] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[    0.786229] vgaarb: loaded
[    0.786913] SCSI subsystem initialized
[    0.787689] ACPI: bus type USB registered
[    0.788352] usbcore: registered new interface driver usbfs
[    0.789295] usbcore: registered new interface driver hub
[    0.790345] usbcore: registered new device driver usb
[    0.791381] PCI: Using ACPI for IRQ routing
[    0.792495] NetLabel: Initializing
[    0.793178] NetLabel:  domain hash size = 128
[    0.793898] NetLabel:  protocols = UNLABELED CIPSOv4
[    0.794707] NetLabel:  unlabeled traffic allowed by default
[    0.795764] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[    0.796667] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[    0.800974] amd_nb: Cannot enumerate AMD northbridges
[    0.801965] Switched to clocksource kvm-clock
[    0.814071] pnp: PnP ACPI init
[    0.814652] ACPI: bus type PNP registered
[    0.816009] pnp: PnP ACPI: found 6 devices
[    0.816745] ACPI: bus type PNP unregistered
[    0.825119] NET: Registered protocol family 2
[    0.826367] TCP established hash table entries: 32768 (order: 6, 262144 bytes)
[    0.828143] TCP bind hash table entries: 32768 (order: 8, 1048576 bytes)
[    0.830058] TCP: Hash tables configured (established 32768 bind 32768)
[    0.831114] TCP: reno registered
[    0.831787] UDP hash table entries: 2048 (order: 5, 196608 bytes)
[    0.832897] UDP-Lite hash table entries: 2048 (order: 5, 196608 bytes)
[    0.834723] NET: Registered protocol family 1
[    0.836303] RPC: Registered named UNIX socket transport module.
[    0.837825] RPC: Registered udp transport module.
[    0.838971] RPC: Registered tcp transport module.
[    0.840120] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    0.841737] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    0.843557] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[    0.845512] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[    0.847648] Unpacking initramfs...
[    2.173391] debug: unmapping init [mem 0xffff8800bc2e2000-0xffff8800bffbffff]
[    2.176327] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    2.177417] software IO TLB [mem 0xb82e2000-0xbc2e2000] (64MB) mapped at [ffff8800b82e2000-ffff8800bc2e1fff]
[    2.179756] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer
[    2.181367] RAPL PMU: hw unit of domain pp0-core 2^-0 Joules
[    2.182421] RAPL PMU: hw unit of domain package 2^-0 Joules
[    2.183809] RAPL PMU: hw unit of domain dram 2^-0 Joules
[    2.186395] cryptomgr_test (51) used greatest stack depth: 14128 bytes left
[    2.186864] futex hash table entries: 1024 (order: 4, 65536 bytes)
[    2.186927] Initialise system trusted keyring
[    2.214803] HugeTLB registered 1 GB page size, pre-allocated 0 pages
[    2.216067] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    2.220554] zpool: loaded
[    2.221297] zbud: loaded
[    2.222305] VFS: Disk quotas dquot_6.6.0
[    2.223961] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    2.226985] NFS: Registering the id_resolver key type
[    2.228001] Key type id_resolver registered
[    2.228902] Key type id_legacy registered
[    2.230277] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[    2.232851] Key type big_key registered
[    2.235603] cryptomgr_test (57) used greatest stack depth: 14048 bytes left
[    2.238146] cryptomgr_test (58) used greatest stack depth: 13968 bytes left
[    2.240220] NET: Registered protocol family 38
[    2.241756] Key type asymmetric registered
[    2.243275] Asymmetric key parser 'x509' registered
[    2.245100] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
[    2.248093] io scheduler noop registered
[    2.249444] io scheduler deadline registered (default)
[    2.251372] io scheduler cfq registered
[    2.252691] io scheduler mq-deadline registered
[    2.254304] io scheduler kyber registered
[    2.258398] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[    2.260294] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[    2.262916] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[    2.265571] ACPI: Power Button [PWRF]
[    2.267216] GHES: HEST is not enabled!
[    2.324684] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[    2.375452] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 11
[    2.473182] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[    2.526698] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
[    2.640538] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    2.666608] 00:03: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[    2.692675] 00:04: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[    2.695637] Non-volatile memory driver v1.3
[    2.697172] Linux agpgart interface v0.103
[    2.698453] crash memory driver: version 1.1
[    2.700417] nbd: registered device at major 43
[    2.710336] virtio_blk virtio1: [vda] 67344 512-byte logical blocks (34.4 MB/32.8 MiB)
[    2.720440] virtio_blk virtio2: [vdb] 2097152 512-byte logical blocks (1.07 GB/1.00 GiB)
[    2.729198] virtio_blk virtio3: [vdc] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[    2.741230] virtio_blk virtio4: [vdd] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[    2.751619] virtio_blk virtio5: [vde] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[    2.760401] virtio_blk virtio6: [vdf] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[    2.763980] rdac: device handler registered
[    2.765546] hp_sw: device handler registered
[    2.766299] emc: device handler registered
[    2.767089] libphy: Fixed MDIO Bus: probed
[    2.769217] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    2.770389] ehci-pci: EHCI PCI platform driver
[    2.771215] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[    2.772270] ohci-pci: OHCI PCI platform driver
[    2.773109] uhci_hcd: USB Universal Host Controller Interface driver
[    2.774247] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[    2.776328] serio: i8042 KBD port at 0x60,0x64 irq 1
[    2.777176] serio: i8042 AUX port at 0x60,0x64 irq 12
[    2.778361] mousedev: PS/2 mouse device common for all mice
[    2.779701] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[    2.782386] rtc_cmos 00:05: RTC can wake from S4
[    2.785664] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0
[    2.789373] rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
[    2.793432] hidraw: raw HID events driver (C) Jiri Kosina
[    2.795486] usbcore: registered new interface driver usbhid
[    2.797381] usbhid: USB HID core driver
[    2.798629] drop_monitor: Initializing network drop monitor service
[    2.800762] Netfilter messages via NETLINK v0.30.
[    2.802380] TCP: cubic registered
[    2.803413] Initializing XFRM netlink socket
[    2.804649] NET: Registered protocol family 10
[    2.805891] NET: Registered protocol family 17
[    2.807395] Key type dns_resolver registered
[    2.808929] mce: Using 10 MCE banks
[    2.810566] Loading compiled-in X.509 certificates
[    2.813284] Loaded X.509 cert 'Magrathea: Glacier signing key: e34d0e1b7fcf5b414cce75d36d8482945c781ed6'
[    2.816469] registered taskstats version 1
[    2.820458] modprobe (71) used greatest stack depth: 13456 bytes left
[    2.824836] Key type trusted registered
[    2.828332] Key type encrypted registered
[    2.829113] IMA: No TPM chip found, activating TPM-bypass! (rc=-19)
[    2.831651] BERT: Boot Error Record Table support is disabled. Enable it by using bert_enable as kernel parameter.
[    2.833666] rtc_cmos 00:05: setting system clock to 2024-04-18 07:22:01 UTC (1713424921)
[    2.835289] debug: unmapping init [mem 0xffffffff81da0000-0xffffffff82018fff]
[    2.836648] Write protecting the kernel read-only data: 12288k
[    2.837731] debug: unmapping init [mem 0xffff8800017fe000-0xffff8800017fffff]
[    2.838892] debug: unmapping init [mem 0xffff880001b9b000-0xffff880001bfffff]
[    2.844836] random: systemd: uninitialized urandom read (16 bytes read)
[    2.847219] random: systemd: uninitialized urandom read (16 bytes read)
[    2.848516] random: systemd: uninitialized urandom read (16 bytes read)
[    2.851686] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[    2.856604] systemd[1]: Detected virtualization kvm.
[    2.857711] systemd[1]: Detected architecture x86-64.
[    2.858661] systemd[1]: Running in initial RAM disk.

Welcome to CentOS Linux 7 (Core) dracut-033-572.el7 (Initramfs)!

[    2.864194] systemd[1]: No hostname configured.
[    2.866906] systemd[1]: Set hostname to .
[    2.870453] random: systemd: uninitialized urandom read (16 bytes read)
[    2.873429] systemd[1]: Initializing machine ID from random generator.
[    2.904923] dracut-rootfs-g (86) used greatest stack depth: 13264 bytes left
[    2.907217] random: systemd: uninitialized urandom read (16 bytes read)
[    2.908695] random: systemd: uninitialized urandom read (16 bytes read)
[    2.909872] random: systemd: uninitialized urandom read (16 bytes read)
[    2.911069] random: systemd: uninitialized urandom read (16 bytes read)
[    2.913290] random: systemd: uninitialized urandom read (16 bytes read)
[    2.914855] random: systemd: uninitialized urandom read (16 bytes read)
[    2.922199] systemd[1]: Reached target Swap.
[  OK  ] Reached target Swap.
[    2.925239] systemd[1]: Created slice Root Slice.
[  OK  ] Created slice Root Slice.
[    2.927276] systemd[1]: Listening on udev Kernel Socket.
[  OK  ] Listening on udev Kernel Socket.
[    2.929376] systemd[1]: Listening on udev Control Socket.
[  OK  ] Listening on udev Control Socket.
[    2.931480] systemd[1]: Reached target Timers.
[  OK  ] Reached target Timers.
[    2.933340] systemd[1]: Reached target Local File Systems.
[  OK  ] Reached target Local File Systems.
[    2.936203] systemd[1]: Created slice System Slice.
[  OK  ] Created slice System Slice.
[    2.938143] systemd[1]: Reached target Slices.
[  OK  ] Reached target Slices.
[    2.940058] systemd[1]: Listening on Journal Socket.
[  OK  ] Listening on Journal Socket.
[    2.943319] systemd[1]: Starting Load Kernel Modules...
         Starting Load Kernel Modules...
[    2.946240] systemd[1]: Starting Create list of required static device nodes for the current kernel...
         Starting Create list of required st... nodes for the current kernel...
[    2.950221] systemd[1]: Starting Journal Service...
         Starting Journal Service...
[    2.953073] systemd[1]: Starting dracut cmdline hook...
         Starting dracut cmdline hook...
[    2.956416] systemd[1]: Starting Setup Virtual Console...
         Starting Setup Virtual Console...
[    2.958465] systemd[1]: Reached target Sockets.
[  OK  ] Reached target Sockets.
[    2.961143] systemd[1]: Started Load Kernel Modules.
[  OK  ] Started Load Kernel Modules.
[    2.965459] systemd[1]: Started Create list of required static device nodes for the current kernel.
[  OK  ] Started Create list of required sta...ce nodes for the current kernel.
[    2.969349] systemd[1]: Started Setup Virtual Console.
[  OK  ] Started Setup Virtual Console.
[    2.972957] systemd[1]: Starting Create Static Device Nodes in /dev...
         Starting Create Static Device Nodes in /dev...
[    2.976400] systemd[1]: Starting Apply Kernel Variables...
         Starting Apply Kernel Variables...
[    2.980252] systemd[1]: Started Journal Service.
[  OK  ] Started Journal Service.
[  OK  ] Started Create Static Device Nodes in /dev.
[  OK  ] Started Apply Kernel Variables.
[  OK  ] Started dracut cmdline hook.
         Starting dracut pre-udev hook...
[    3.186286] random: fast init done
[    3.188012] tsc: Refined TSC clocksource calibration: 2399.969 MHz
[  OK  ] Started dracut pre-udev hook.
         Starting udev Kernel Device Manager...
[  OK  ] Started udev Kernel Device Manager.
         Starting dracut pre-trigger hook...
[  OK  ] Started dracut pre-trigger hook.
         Starting udev Coldplug all Devices...
         Mounting Configuration File System...
[  OK  ] Mounted Configuration File System.
[  OK  ] Started udev Coldplug all Devices.
         Starting Show Plymouth Boot Screen...
[  OK  ] Reached target System Initialization.
         Starting dracut initqueue hook...
[    3.337192] scsi host0: ata_piix
[    3.341174] scsi host1: ata_piix
[    3.341832] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc320 irq 14
[    3.343072] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc328 irq 15
[  OK  ] Started Show Plymouth Boot Screen.
[  OK  ] Reached target Paths.
[  OK  ] Started Forward Password Requests to Plymouth Directory Watch.
[  OK  ] Reached target Basic System.
[    3.383540] ip (311) used greatest stack depth: 13080 bytes left
[    3.422634] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[    3.424530] ip (344) used greatest stack depth: 12464 bytes left
[    3.618143] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
[    3.770117] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[    5.687321] dracut-initqueue[277]: RTNETLINK answers: File exists
[    5.922773] dracut-initqueue[277]: bs=4096, sz=32212254720 bytes
[  OK  ] Started dracut initqueue hook.
[  OK  ] Reached target Initrd Root File System.
         Starting Reload Configuration from the Real Root...
         Mounting /sysroot...
[  OK  ] Reached target Remote File Systems (Pre).
[  OK  ] Reached target Remote File Systems.
[    6.472015] EXT4-fs (nbd0): mounted filesystem with ordered data mode. Opts: (null)
[  OK  ] Mounted /sysroot.
[  OK  ] Started Reload Configuration from the Real Root.
[  OK  ] Reached target Initrd File Systems.
[  OK  ] Reached target Initrd Default Target.
         Starting dracut pre-pivot and cleanup hook...
[  OK  ] Started dracut pre-pivot and cleanup hook.
         Starting Cleaning Up and Shutting Down Daemons...
         Starting Plymouth switch root service...
[  OK  ] Stopped target Timers.
[  OK  ] Stopped dracut pre-pivot and cleanup hook.
[  OK  ] Stopped target Remote File Systems.
[  OK  ] Stopped target Remote File Systems (Pre).
[  OK  ] Stopped target Initrd Default Target.
[  OK  ] Stopped target Basic System.
[  OK  ] Stopped target Paths.
[  OK  ] Stopped target System Initialization.
[  OK  ] Stopped target Swap.
[  OK  ] Stopped Apply Kernel Variables.
[  OK  ] Stopped target Local File Systems.
[  OK  ] Stopped target Slices.
[  OK  ] Stopped target Sockets.
[  OK  ] Stopped Load Kernel Modules.
[  OK  ] Stopped dracut initqueue hook.
[  OK  ] Stopped udev Coldplug all Devices.
[  OK  ] Stopped dracut pre-trigger hook.
         Stopping udev Kernel Device Manager...
[  OK  ] Started Cleaning Up and Shutting Down Daemons.
[  OK  ] Stopped udev Kernel Device Manager.
[  OK  ] Stopped dracut pre-udev hook.
[  OK  ] Stopped dracut cmdline hook.
[  OK  ] Stopped Create Static Device Nodes in /dev.
[  OK  ] Stopped Create list of required sta...ce nodes for the current kernel.
[  OK  ] Closed udev Kernel Socket.
[  OK  ] Closed udev Control Socket.
         Starting Cleanup udevd DB...
[  OK  ] Started Plymouth switch root service.
[  OK  ] Started Cleanup udevd DB.
[  OK  ] Reached target Switch Root.
         Starting Switch Root...
[    6.913930] systemd-journald[104]: Received SIGTERM from PID 1 (systemd).
[    7.164745] SELinux:  Disabled at runtime.
[    7.243303] ip_tables: (C) 2000-2006 Netfilter Core Team
[    7.247466] systemd[1]: Inserted module 'ip_tables'

Welcome to CentOS Linux 7 (Core)!

[  OK  ] Stopped Switch Root.
[  OK  ] Stopped Journal Service.
         Starting Journal Service...
         Starting Read and set NIS domainname from /etc/sysconfig/network...
         Starting Create list of required st... nodes for the current kernel...
[  OK  ] Created slice system-selinux\x2dpol...grate\x2dlocal\x2dchanges.slice.
[  OK  ] Listening on udev Control Socket.
         Mounting POSIX Message Queue File System...
[  OK  ] Listening on Delayed Shutdown Socket.
[  OK  ] Started Forward Password Requests to Wall Directory Watch.
[  OK  ] Listening on udev Kernel Socket.
         Starting udev Coldplug all Devices...
[  OK  ] Reached target Local Encrypted Volumes.
[  OK  ] Created slice User and Session Slice.
         Mounting Debug File System...
[  OK  ] Stopped target Switch Root.
[  OK  ] Stopped target Initrd Root File System.
         Mounting Huge Pages File System...
[  OK  ] Stopped target Initrd File Systems.
[  OK  ] Reached target Slices.
[  OK  ] Created slice system-serial\x2dgetty.slice.
[  OK  ] Listening on /dev/initctl Compatibility Named Pipe.
[  OK  ] Set up automount Arbitrary Executab...ats File System Automount Point.
         Starting Remount Root and Kernel File Systems...
         Starting Load Kernel Modules...
         Starting Set Up Additional Binary Formats...
[  OK  ] Reached target rpc_pipefs.target.
[  OK  ] Created slice system-getty.slice.
[  OK  ] Started Create list of required sta...ce nodes for the current kernel.
         Starting Create Static Device Nodes in /dev...
[  OK  ] Mounted Huge Pages File System.
[  OK  ] Mounted POSIX Message Queue File System.
[  OK  ] Mounted Debug File System.
[  OK  ] Started Load Kernel Modules.
         Mounting Arbitrary Executable File Formats File System...
         Starting Apply Kernel Variables...
[  OK  ] Started Journal Service.
[  OK  ] Started Read and set NIS domainname from /etc/sysconfig/network.
[  OK  ] Mounted Arbitrary Executable File Formats File System.
[  OK  ] Started Apply Kernel Variables.
[  OK  ] Started udev Coldplug all Devices.
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
[  OK  ] Started Create Static Device Nodes in /dev.
         Starting udev Kernel Device Manager...
[  OK  ] Reached target Local File Systems (Pre).
         Mounting /mnt...
         Starting Configure read-only root support...
         Starting Flush Journal to Persistent Storage...
[  OK  ] Started Set Up Additional Binary Formats.
[  OK  ] Mounted /mnt.
[    7.632511] systemd-journald[568]: Received request to flush runtime journal from PID 1
[  OK  ] Started Flush Journal to Persistent Storage.
[  OK  ] Started udev Kernel Device Manager.
[    7.728635] input: PC Speaker as /devices/platform/pcspkr/input/input3
[    7.748505] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
[  OK  ] Found device /dev/ttyS1.
[  OK  ] Found device /dev/ttyS0.
[  OK  ] Found device /dev/vda.
[    7.784566] cryptd: max_cpu_qlen set to 1000
         Mounting /home/green/git/lustre-release...
[    7.806855] AVX version of gcm_enc/dec engaged.
[    7.809096] AES CTR mode by8 optimization enabled
[    7.819475] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[  OK  ] Found device /dev/disk/by-label/SWAP.
[  OK  ] Mounted /home/green/git/lustre-release.
         Activating swap /dev/disk/by-label/SWAP...
[    7.853160] Adding 1048572k swap on /dev/vdb.  Priority:-2 extents:1 across:1048572k FS
[    7.859160] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[  OK  ] Activated swap /dev/disk/by-label/SWAP.
[  OK  ] Reached target Swap.
[    7.866057] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni)
[    7.960683] EDAC MC: Ver: 3.0.0
[    7.965978] EDAC sbridge:  Ver: 1.1.2
[    9.889449] mount.nfs (771) used greatest stack depth: 10704 bytes left
[  OK  ] Started Configure read-only root support.
         Starting Load/Save Random Seed...
[  OK  ] Reached target Local File Systems.
         Starting Tell Plymouth To Write Out Runtime Data...
         Starting Preprocess NFS configuration...
         Starting Mark the need to relabel after reboot...
         Starting Create Volatile Files and Directories...
         Starting Rebuild Journal Catalog...
[  OK  ] Started Load/Save Random Seed.
[  OK  ] Started Mark the need to relabel after reboot.
[FAILED] Failed to start Create Volatile Files and Directories.
See 'systemctl status systemd-tmpfiles-setup.service' for details.
         Starting Update UTMP about System Boot/Shutdown...
[  OK  ] Started Preprocess NFS configuration.
[  OK  ] Started Tell Plymouth To Write Out Runtime Data.
[FAILED] Failed to start Rebuild Journal Catalog.
See 'systemctl status systemd-journal-catalog-update.service' for details.
         Starting Update is Completed...
[  OK  ] Started Update UTMP about System Boot/Shutdown.
[  OK  ] Started Update is Completed.
[  OK  ] Reached target System Initialization.
[  OK  ] Started Flexible branding.
[  OK  ] Reached target Paths.
[  OK  ] Listening on RPCbind Server Activation Socket.
[  OK  ] Listening on D-Bus System Message Bus Socket.
[  OK  ] Reached target Sockets.
[  OK  ] Reached target Basic System.
         Starting Login Service...
         Starting GSSAPI Proxy Daemon...
         Starting Dump dmesg to /var/log/dmesg...
[  OK  ] Started D-Bus System Message Bus.
         Starting Network Manager...
[  OK  ] Started Daily Cleanup of Temporary Directories.
[  OK  ] Reached target Timers.
[  OK  ] Started GSSAPI Proxy Daemon.
[ OK ] Started Dump dmesg to /var/log/dmesg. [ OK ] Reached target NFS client services. [ OK ] Reached target Remote File Systems (Pre). [ OK ] Reached target Remote File Systems. Starting Permit User Sessions... [ OK ] Started Login Service. [ OK ] Started Permit User Sessions. [ OK ] Started Network Manager. Starting Network Manager Wait Online... [ OK ] Reached target Network. Starting OpenSSH server daemon... Starting /etc/rc.d/rc.local Compatibility... Starting Hostname Service... [ OK ] Started OpenSSH server daemon. [ OK ] Started /etc/rc.d/rc.local Compatibility. [ OK ] Started Hostname Service. Starting Network Manager Script Dispatcher Service... Starting Wait for Plymouth Boot Screen to Quit... Starting Terminate Plymouth Boot Screen... [ OK ] Started Network Manager Script Dispatcher Service. CentOS Linux 7 (Core) Kernel 3.10.0-7.9-debug on an x86_64 oleg136-server login: [ 18.453177] device-mapper: uevent: version 1.0.3 [ 18.454537] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel@redhat.com [ 22.461632] libcfs: loading out-of-tree module taints kernel. [ 22.463121] libcfs: module verification failed: signature and/or required key missing - tainting kernel [ 22.487320] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing set_hostid [ 27.049389] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 27.228985] alg: No test for adler32 (adler32-zlib) [ 27.980017] libcfs: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 1 [ 28.090750] Lustre: Lustre: Build Version: 2.15.62_23_gb559b30 [ 28.241026] LNet: Added LNI 192.168.201.136@tcp [8/256/0/180] [ 28.242403] LNet: Accept secure, port 988 [ 29.782170] Key type lgssc registered [ 30.056568] Lustre: Echo OBD driver; http://www.lustre.org/ [ 32.714156] icp: module license 'CDDL' taints kernel. 
[ 32.715354] Disabling lock debugging due to kernel taint [ 35.199117] ZFS: Loaded module v0.8.6-1, ZFS pool version 5000, ZFS filesystem version 5 [ 38.324928] LDISKFS-fs (vdc): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 42.560219] LDISKFS-fs (vdd): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 44.544404] LDISKFS-fs (vde): file extents enabled, maximum tree depth=5 [ 44.547366] LDISKFS-fs (vde): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 46.642444] LDISKFS-fs (vdf): file extents enabled, maximum tree depth=5 [ 46.646155] LDISKFS-fs (vdf): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 49.692251] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 52.671316] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 52.685058] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 52.692207] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 53.767044] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 53.774437] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space. [ 53.807312] Lustre: lustre-MDT0000: new disk, initializing [ 53.824240] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 53.830255] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 53.856441] mount.lustre (6908) used greatest stack depth: 10144 bytes left [ 54.581901] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 58.697459] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 58.717428] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 58.738418] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 58.745462] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space. [ 58.748475] Lustre: Skipped 1 previous similar message [ 58.792026] Lustre: lustre-MDT0001: new disk, initializing [ 58.807621] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 58.815505] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 58.817886] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 59.642929] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 64.920656] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 64.926676] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 64.953934] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 64.960219] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 65.039670] Lustre: lustre-OST0000: new disk, initializing [ 65.041643] Lustre: srv-lustre-OST0000: No data found on store. Initialize space. 
[ 65.054810] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 66.793504] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 70.717498] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 70.721782] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 70.732397] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 71.441175] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5 [ 71.444421] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 71.464249] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5 [ 71.467065] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 71.492215] Lustre: lustre-OST0001: new disk, initializing [ 71.493661] Lustre: srv-lustre-OST0001: No data found on store. Initialize space. 
[ 71.504405] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180 [ 72.699012] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 76.845790] random: crng init done [ 77.444383] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 77.448249] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:1:ost] [ 77.458063] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x2c0000401 [ 77.461785] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 81.766367] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 87.387015] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing check_logdir /tmp/testlogs/ [ 88.161070] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing yml_node [ 89.048356] Lustre: DEBUG MARKER: Client: 2.15.62.23 [ 89.685686] Lustre: DEBUG MARKER: MDS: 2.15.62.23 [ 90.976083] Lustre: DEBUG MARKER: OSS: 2.15.62.23 [ 92.048155] Lustre: DEBUG MARKER: -----============= acceptance-small: sanityn ============----- Thu Apr 18 03:23:29 EDT 2024 [ 94.881676] Lustre: DEBUG MARKER: excepting tests: 27 28 [ 95.234013] Lustre: DEBUG MARKER: skipping tests SLOW=no: 33a [ 95.962423] Lustre: DEBUG MARKER: oleg136-client.virtnet: executing check_config_client /mnt/lustre [ 100.770044] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 101.607861] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 102.190382] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 104.056522] Lustre: DEBUG MARKER: == sanityn test 0: do_node_vp() and do_facet_vp() do the right thing ========================================================== 03:23:41 (1713425021) [ 107.448437] Lustre: DEBUG MARKER: == sanityn test 1: Check attribute updates on 2 mount 
points ========================================================== 03:23:44 (1713425024) [ 110.424353] Lustre: DEBUG MARKER: == sanityn test 2a: check cached attribute updates on 2 mtpt's ================================================================== 03:23:47 (1713425027) [ 113.304580] Lustre: DEBUG MARKER: == sanityn test 2b: check cached attribute updates on 2 mtpt's ================================================================== 03:23:50 (1713425030) [ 116.177011] Lustre: DEBUG MARKER: == sanityn test 2c: check cached attribute updates on 2 mtpt's root ============================================================= 03:23:53 (1713425033) [ 119.078696] Lustre: DEBUG MARKER: == sanityn test 2d: check cached attribute updates on 2 mtpt's root ============================================================= 03:23:56 (1713425036) [ 121.962437] Lustre: DEBUG MARKER: == sanityn test 2e: check chmod on root is propagated to others ========================================================== 03:23:59 (1713425039) [ 124.908582] Lustre: DEBUG MARKER: == sanityn test 2f: check attr/owner updates on DNE with 2 mtpt's ========================================================== 03:24:02 (1713425042) [ 127.959113] Lustre: DEBUG MARKER: == sanityn test 2g: check blocks update on sync write ==== 03:24:05 (1713425045) [ 131.024261] Lustre: DEBUG MARKER: == sanityn test 3: symlink on one mtpt, readlink on another ===================================================================== 03:24:08 (1713425048) [ 133.975972] Lustre: DEBUG MARKER: == sanityn test 4: fstat validation on multiple mount points ==================================================================== 03:24:11 (1713425051) [ 137.931290] Lustre: DEBUG MARKER: == sanityn test 5: create a file on one mount, truncate it on the other ========================================================== 03:24:15 (1713425055) [ 140.948841] Lustre: DEBUG MARKER: == sanityn test 6: remove of open file on other node 
============================================================================ 03:24:18 (1713425058) [ 143.952803] Lustre: DEBUG MARKER: == sanityn test 7: remove of open directory on other node ======================================================================= 03:24:21 (1713425061) [ 146.947911] Lustre: DEBUG MARKER: == sanityn test 8: remove of open special file on other node ==================================================================== 03:24:24 (1713425064) [ 149.864932] Lustre: DEBUG MARKER: == sanityn test 9a: append of file with sub-page size on multiple mounts ========================================================== 03:24:27 (1713425067) [ 152.861205] Lustre: DEBUG MARKER: == sanityn test 9b: append to striped sparse file ======== 03:24:30 (1713425070) [ 155.895296] Lustre: DEBUG MARKER: == sanityn test 10a: write of file with sub-page size on multiple mounts ========================================================== 03:24:33 (1713425073) [ 159.084770] Lustre: DEBUG MARKER: == sanityn test 10b: write of file with sub-page size on multiple mounts ========================================================== 03:24:36 (1713425076) [ 162.060736] Lustre: DEBUG MARKER: == sanityn test 11: execution of file opened for write should return error ============================================================== 03:24:39 (1713425079) [ 165.136864] Lustre: DEBUG MARKER: == sanityn test 12: test lock ordering (link, stat, unlink) ========================================================== 03:24:42 (1713425082) [ 313.899361] Lustre: DEBUG MARKER: == sanityn test 13: test directory page revocation ======= 03:27:11 (1713425231) [ 317.042509] Lustre: DEBUG MARKER: == sanityn test 14aa: execution of file open for write returns -ETXTBSY ========================================================== 03:27:14 (1713425234) [ 319.970104] Lustre: DEBUG MARKER: == sanityn test 14ab: open(RDWR) of executing file returns -ETXTBSY 
========================================================== 03:27:17 (1713425237) [ 322.889505] Lustre: DEBUG MARKER: == sanityn test 14b: truncate of executing file returns -ETXTBSY ================================================================ 03:27:20 (1713425240) [ 325.882691] Lustre: DEBUG MARKER: == sanityn test 14c: open(O_TRUNC) of executing file return -ETXTBSY ============================================================ 03:27:23 (1713425243) [ 328.865228] Lustre: DEBUG MARKER: == sanityn test 14d: chmod of executing file is still possible ================================================================== 03:27:26 (1713425246) [ 329.304803] Lustre: DEBUG MARKER: chmod [ 332.232268] Lustre: DEBUG MARKER: == sanityn test 15: test out-of-space with multiple writers ===================================================================== 03:27:29 (1713425249) [ 335.975606] Lustre: DEBUG MARKER: SKIP: oos2 test_15 oos2.sh: 7207420kB free gt MAXFREE 800000kB, increase 800000 (or reduce test fs size) to proceed [ 339.404699] Lustre: DEBUG MARKER: == sanityn test 16a: 2500 iterations of dual-mount fsx === 03:27:36 (1713425256) [ 389.900553] Lustre: DEBUG MARKER: == sanityn test 16b: 2500 iterations of dual-mount fsx at small size ========================================================== 03:28:27 (1713425307) [ 417.209751] Lustre: DEBUG MARKER: == sanityn test 16c: verify data consistency on ldiskfs with cache disabled (b=17397) ========================================================== 03:28:54 (1713425334) [ 462.947084] Lustre: DEBUG MARKER: == sanityn test 16d: Verify DIO and buffer IO with two clients ========================================================== 03:29:40 (1713425380) [ 475.736382] Lustre: DEBUG MARKER: == sanityn test 16e: Verify size consistency for O_DIRECT write ========================================================== 03:29:53 (1713425393) [ 479.938228] Lustre: DEBUG MARKER: == sanityn test 16f: rw sequential consistency vs drop_caches 
========================================================== 03:29:57 (1713425397) [ 503.407514] Lustre: DEBUG MARKER: == sanityn test 16g: mmap rw sequential consistency vs drop_caches ========================================================== 03:30:20 (1713425420) [ 526.522650] Lustre: DEBUG MARKER: == sanityn test 16h: mmap read after truncate file ======= 03:30:44 (1713425444) [ 530.004089] Lustre: DEBUG MARKER: == sanityn test 16i: read after truncate file ============ 03:30:47 (1713425447) [ 533.804707] Lustre: DEBUG MARKER: == sanityn test 16j: race dio with buffered i/o ========== 03:30:51 (1713425451) [ 542.378770] Lustre: DEBUG MARKER: == sanityn test 16k: Parallel FSX and drop caches should not panic ========================================================== 03:30:59 (1713425459) [ 564.008693] Lustre: DEBUG MARKER: == sanityn test 17: resource creation/LVB creation race ========================================================================= 03:31:21 (1713425481) [ 564.404750] LustreError: 6931:0:(ldlm_resource.c:1561:ldlm_resource_get()) cfs_fail_timeout id 30a sleeping for 2000ms [ 566.407063] LustreError: 6931:0:(ldlm_resource.c:1561:ldlm_resource_get()) cfs_fail_timeout id 30a awake [ 569.622236] Lustre: DEBUG MARKER: == sanityn test 18: mmap sanity check =========================================================================================== 03:31:27 (1713425487) [ 579.477325] Lustre: DEBUG MARKER: == sanityn test 19: test concurrent uncached read races ========================================================================= 03:31:37 (1713425497) [ 581.701373] Lustre: DEBUG MARKER: loop 5 [ 582.982414] Lustre: DEBUG MARKER: loop 10 [ 584.335476] Lustre: DEBUG MARKER: loop 15 [ 585.725264] Lustre: DEBUG MARKER: loop 20 [ 589.535129] Lustre: DEBUG MARKER: == sanityn test 20: test extra readahead page left in cache ============================================================== 03:31:47 (1713425507) [ 593.140117] Lustre: DEBUG MARKER: == sanityn test 
21: Try to remove mountpoint on another dir ============================================================== 03:31:50 (1713425510) [ 596.669665] Lustre: DEBUG MARKER: == sanityn test 23: others should see updated atime while another read============================================================== 03:31:54 (1713425514) [ 662.483096] Lustre: DEBUG MARKER: == sanityn test 24a: lfs df [-ih] [path] test =================================================================================== 03:32:59 (1713425579) [ 662.653377] Lustre: lustre-OST0000: Client 92018d71-5236-41f8-a99f-7b8aea033388 (at 192.168.201.36@tcp) reconnecting [ 667.250786] Lustre: DEBUG MARKER: == sanityn test 24b: lfs df should show both filesystems ========================================================================= 03:33:04 (1713425584) [ 671.573627] Lustre: DEBUG MARKER: == sanityn test 25a: change ACL on one mountpoint be seen on another ============================================================= 03:33:09 (1713425589) [ 676.490497] Lustre: DEBUG MARKER: == sanityn test 25b: change ACL under remote dir on one mountpoint be seen on another ========================================================== 03:33:13 (1713425593) [ 681.096711] Lustre: DEBUG MARKER: == sanityn test 26a: allow mtime to get older ============ 03:33:18 (1713425598) [ 686.225282] Lustre: DEBUG MARKER: == sanityn test 26b: sync mtime between ost and mds ====== 03:33:23 (1713425603) [ 691.367248] Lustre: DEBUG MARKER: SKIP: sanityn test_27 skipping excluded test 27 [ 691.751725] Lustre: DEBUG MARKER: SKIP: sanityn test_28 skipping ALWAYS excluded test 28 [ 693.737373] Lustre: DEBUG MARKER: == sanityn test 30: recreate file race =================== 03:33:31 (1713425611) [ 700.283025] Lustre: DEBUG MARKER: == sanityn test 31a: voluntary cancel / blocking ast race======================================================================== 03:33:37 (1713425617) [ 705.804991] Lustre: DEBUG MARKER: == sanityn test 31b: voluntary OST cancel 
/ blocking ast race======================================================================== 03:33:43 (1713425623) [ 710.072214] Lustre: *** cfs_fail_loc=316, val=0*** [ 714.961030] Lustre: DEBUG MARKER: == sanityn test 31r: open-rename(replace) race =========== 03:33:52 (1713425632) [ 720.949348] Lustre: DEBUG MARKER: SKIP: sanityn test_33a skipping SLOW test 33a [ 723.000803] Lustre: DEBUG MARKER: == sanityn test 33b: COS: cross create/delete, 2 clients, benchmark under remote dir ========================================================== 03:34:00 (1713425640) [ 723.608136] Lustre: DEBUG MARKER: SKIP: sanityn test_33b Need two or more clients, have 1 [ 726.466291] Lustre: DEBUG MARKER: == sanityn test 33c: Cancel cross-MDT lock should trigger Sync-on-Lock-Cancel ========================================================== 03:34:03 (1713425643) [ 727.176549] Lustre: Failing over lustre-MDT0000 [ 727.230687] Lustre: server umount lustre-MDT0000 complete [ 728.484666] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 728.484669] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 728.484876] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 728.508043] Lustre: Skipped 1 previous similar message [ 730.255260] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 730.304463] LustreError: 166-1: MGC192.168.201.136@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 730.382611] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 730.393657] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 731.392454] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 731.466605] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 735.397950] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to (at 0@lo) [ 735.408171] Lustre: lustre-MDT0000: Recovery over after 0:04, of 3 clients 3 recovered and 0 were evicted. [ 735.432690] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:52 to 0x280000401:97) [ 735.432697] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:49 to 0x2c0000401:65) [ 746.792016] Lustre: DEBUG MARKER: == sanityn test 33d: dependent transactions should trigger COS ========================================================== 03:34:24 (1713425664) [ 764.454773] Lustre: DEBUG MARKER: == sanityn test 33e: independent transactions shouldn't trigger COS ========================================================== 03:34:41 (1713425681) [ 772.182221] Lustre: DEBUG MARKER: == sanityn test 34: no lock timeout under IO ============= 03:34:49 (1713425689) [ 773.141793] Lustre: *** cfs_fail_loc=512, val=0*** [ 773.144839] Lustre: Skipped 2 previous similar messages [ 773.147653] LustreError: 6916:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 512 sleeping for 4000ms [ 775.444449] Lustre: *** cfs_fail_loc=512, val=0*** [ 775.445768] Lustre: Skipped 2 previous similar messages [ 775.448890] Lustre: *** 
cfs_fail_loc=512, val=0*** [ 775.453075] LustreError: 8070:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 512 sleeping for 4000ms [ 775.456068] LustreError: 8070:0:(service.c:2281:ptlrpc_server_handle_request()) Skipped 1 previous similar message [ 776.450129] Lustre: *** cfs_fail_loc=512, val=0*** [ 776.462865] Lustre: *** cfs_fail_loc=512, val=0*** [ 776.465422] Lustre: Skipped 325 previous similar messages [ 777.153145] LustreError: 6916:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 512 awake [ 777.452121] Lustre: *** cfs_fail_loc=512, val=0*** [ 777.546709] LustreError: 6930:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 512 sleeping for 4000ms [ 777.551995] LustreError: 6930:0:(service.c:2281:ptlrpc_server_handle_request()) Skipped 17 previous similar messages [ 778.580545] Lustre: *** cfs_fail_loc=512, val=0*** [ 778.582913] Lustre: Skipped 5 previous similar messages [ 779.456113] Lustre: *** cfs_fail_loc=512, val=0*** [ 779.458036] LustreError: 8070:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 512 awake [ 779.458041] LustreError: 8070:0:(service.c:2281:ptlrpc_server_handle_request()) Skipped 1 previous similar message [ 779.469707] Lustre: Skipped 1 previous similar message [ 781.556051] LustreError: 6930:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 512 awake [ 781.561269] LustreError: 6930:0:(service.c:2281:ptlrpc_server_handle_request()) Skipped 16 previous similar messages [ 781.569748] LustreError: 6930:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 512 sleeping for 4000ms [ 781.574989] LustreError: 6930:0:(service.c:2281:ptlrpc_server_handle_request()) Skipped 18 previous similar messages [ 783.604603] Lustre: *** cfs_fail_loc=512, val=0*** [ 783.607824] Lustre: Skipped 28 previous similar messages [ 785.571093] LustreError: 6929:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 512 awake [ 
785.576579] LustreError: 6929:0:(service.c:2281:ptlrpc_server_handle_request()) Skipped 18 previous similar messages [ 785.583170] Lustre: *** cfs_fail_loc=512, val=0*** [ 789.690045] LustreError: 10522:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 512 sleeping for 4000ms [ 789.693505] LustreError: 10522:0:(service.c:2281:ptlrpc_server_handle_request()) Skipped 19 previous similar messages [ 791.892447] Lustre: *** cfs_fail_loc=512, val=0*** [ 791.895062] Lustre: Skipped 54 previous similar messages [ 793.695054] LustreError: 10522:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 512 awake [ 793.700142] LustreError: 10522:0:(service.c:2281:ptlrpc_server_handle_request()) Skipped 19 previous similar messages [ 806.336340] Lustre: *** cfs_fail_loc=512, val=0*** [ 806.339031] Lustre: Skipped 4 previous similar messages [ 806.343600] LustreError: 8070:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 512 sleeping for 4000ms [ 806.348560] LustreError: 8070:0:(service.c:2281:ptlrpc_server_handle_request()) Skipped 66 previous similar messages [ 808.660237] Lustre: *** cfs_fail_loc=512, val=0*** [ 808.661613] Lustre: Skipped 84 previous similar messages [ 810.353096] LustreError: 8070:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 512 awake [ 810.357738] LustreError: 8070:0:(service.c:2281:ptlrpc_server_handle_request()) Skipped 63 previous similar messages [ 821.156182] LustreError: 6921:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 1s: evicting client at 192.168.201.36@tcp ns: filter-lustre-OST0000_UUID lock: ffff8800a188e880/0x505e36402bd0b4a0 lrc: 3/0,0 mode: PR/PR res: [0x280000401:0x95:0x0].0x0 rrc: 4 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400030020 nid: 192.168.201.36@tcp remote: 0xbe437fb360d0ab99 expref: 8 pid: 29844 timeout: 820 lvb_type: 0 [ 832.033596] Lustre: *** cfs_fail_loc=511, val=0*** [ 832.036015] 
Lustre: Skipped 13 previous similar messages [ 833.036240] LustreError: 6921:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 1s: evicting client at 192.168.201.36@tcp ns: filter-lustre-OST0001_UUID lock: ffff8800a0e246c0/0x505e36402bd0b51e lrc: 3/0,0 mode: PW/PW res: [0x2c0000401:0x76:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400000020 nid: 192.168.201.36@tcp remote: 0xbe437fb360d0abbc expref: 12 pid: 28640 timeout: 832 lvb_type: 0 [ 833.054173] LustreError: 6921:0:(ldlm_lockd.c:261:expired_lock_main()) Skipped 1 previous similar message [ 840.564509] LustreError: 29842:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 511 sleeping for 4000ms [ 840.570097] LustreError: 29842:0:(service.c:2281:ptlrpc_server_handle_request()) Skipped 131 previous similar messages [ 841.269833] Lustre: *** cfs_fail_loc=511, val=0*** [ 841.272960] Lustre: Skipped 195 previous similar messages [ 844.575002] LustreError: 29842:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout id 511 awake [ 844.579124] LustreError: 29842:0:(service.c:2281:ptlrpc_server_handle_request()) Skipped 115 previous similar messages [ 846.172052] LustreError: 29844:0:(service.c:2281:ptlrpc_server_handle_request()) cfs_fail_timeout interrupted [ 846.177293] LustreError: 29844:0:(service.c:2281:ptlrpc_server_handle_request()) Skipped 9 previous similar messages [ 847.058368] Lustre: DEBUG MARKER: oleg136-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0000-osc-ffff8800b6ccf800.ost_server_uuid,osc.lustre-OST0000-osc-ffff88012c479800.ost_server_uuid 50 [ 847.591772] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff8800b6ccf800.ost_server_uuid in IDLE state after 0 sec [ 848.133737] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88012c479800.ost_server_uuid in IDLE state after 0 sec [ 849.039114] Lustre: DEBUG MARKER: oleg136-client.virtnet: executing wait_import_state (FULL|IDLE) 
osc.lustre-OST0001-osc-ffff8800b6ccf800.ost_server_uuid,osc.lustre-OST0001-osc-ffff88012c479800.ost_server_uuid 50 [ 849.593322] Lustre: DEBUG MARKER: osc.lustre-OST0001-osc-ffff8800b6ccf800.ost_server_uuid in FULL state after 0 sec [ 850.139335] Lustre: DEBUG MARKER: osc.lustre-OST0001-osc-ffff88012c479800.ost_server_uuid in FULL state after 0 sec [ 851.539946] Lustre: DEBUG MARKER: oleg136-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0000-osc-ffff8800b6ccf800.ost_server_uuid,osc.lustre-OST0000-osc-ffff88012c479800.ost_server_uuid 50 [ 852.096761] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff8800b6ccf800.ost_server_uuid in IDLE state after 0 sec [ 852.647985] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88012c479800.ost_server_uuid in IDLE state after 0 sec [ 853.563053] Lustre: DEBUG MARKER: oleg136-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0001-osc-ffff8800b6ccf800.ost_server_uuid,osc.lustre-OST0001-osc-ffff88012c479800.ost_server_uuid 50 [ 854.086301] Lustre: DEBUG MARKER: osc.lustre-OST0001-osc-ffff8800b6ccf800.ost_server_uuid in FULL state after 0 sec [ 854.627798] Lustre: DEBUG MARKER: osc.lustre-OST0001-osc-ffff88012c479800.ost_server_uuid in FULL state after 0 sec [ 857.410499] Lustre: DEBUG MARKER: oleg136-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0000-osc-ffff8800b6ccf800.ost_server_uuid,osc.lustre-OST0000-osc-ffff88012c479800.ost_server_uuid 50 [ 857.919753] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff8800b6ccf800.ost_server_uuid in IDLE state after 0 sec [ 858.450722] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-ffff88012c479800.ost_server_uuid in IDLE state after 0 sec [ 859.211271] Lustre: DEBUG MARKER: oleg136-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0001-osc-ffff8800b6ccf800.ost_server_uuid,osc.lustre-OST0001-osc-ffff88012c479800.ost_server_uuid 50 [ 859.715329] Lustre: DEBUG MARKER: 
osc.lustre-OST0001-osc-ffff8800b6ccf800.ost_server_uuid in IDLE state after 0 sec [ 860.238634] Lustre: DEBUG MARKER: osc.lustre-OST0001-osc-ffff88012c479800.ost_server_uuid in FULL state after 0 sec [ 863.084816] Lustre: DEBUG MARKER: == sanityn test 35: -EINTR cp_ast vs. bl_ast race does not evict client ========================================================== 03:36:20 (1713425780) [ 863.909168] Lustre: DEBUG MARKER: Race attempt 0 [ 865.496603] Lustre: DEBUG MARKER: Wait for 32626 32766 for 60 sec... [ 928.560124] Lustre: DEBUG MARKER: == sanityn test 36: handle ESTALE/open-unlink correctly == 03:37:26 (1713425846) [ 1101.039620] Lustre: DEBUG MARKER: == sanityn test 37: check i_size is not updated for directory on close (bug 18695) ======================================================================== 03:40:18 (1713426018) [ 1126.318541] Lustre: DEBUG MARKER: == sanityn test 39a: file mtime does not change after rename ========================================================== 03:40:43 (1713426043) [ 1131.167318] Lustre: DEBUG MARKER: == sanityn test 39b: file mtime the same on clients with/out lock ========================================================== 03:40:48 (1713426048) [ 1136.758550] Lustre: DEBUG MARKER: == sanityn test 39c: check truncate mtime update ================================================================================ 03:40:54 (1713426054) [ 1142.293882] Lustre: DEBUG MARKER: == sanityn test 39d: sync write should update mtime ====== 03:40:59 (1713426059) [ 1146.622414] Lustre: DEBUG MARKER: == sanityn test 40a: pdirops: create vs others ======================================================================== 03:41:04 (1713426064) [ 1148.054188] LustreError: 29840:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout id 145 sleeping for 15000ms [ 1148.057623] LustreError: 29840:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) Skipped 15 previous similar messages [ 1152.862088] LustreError: 
29840:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout interrupted [ 1157.244342] Lustre: DEBUG MARKER: == sanityn test 40b: pdirops: open|create and others ======================================================================== 03:41:14 (1713426074) [ 1163.508109] LustreError: 6930:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout interrupted [ 1167.950677] Lustre: DEBUG MARKER: == sanityn test 40c: pdirops: link and others ======================================================================== 03:41:25 (1713426085) [ 1174.297045] LustreError: 6929:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout interrupted [ 1178.605036] Lustre: DEBUG MARKER: == sanityn test 40d: pdirops: unlink and others ======================================================================== 03:41:35 (1713426095) [ 1184.829049] LustreError: 11250:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout interrupted [ 1189.476274] Lustre: DEBUG MARKER: == sanityn test 40e: pdirops: rename and others ======================================================================== 03:41:46 (1713426106) [ 1194.961065] LustreError: 6944:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout interrupted [ 1199.488818] Lustre: DEBUG MARKER: == sanityn test 41a: pdirops: create vs mkdir ======================================================================== 03:41:56 (1713426116) [ 1207.365325] Lustre: DEBUG MARKER: == sanityn test 41b: pdirops: create vs create ======================================================================== 03:42:04 (1713426124) [ 1214.874717] Lustre: DEBUG MARKER: == sanityn test 41c: pdirops: create vs link ======================================================================== 03:42:12 (1713426132) [ 1217.843007] LustreError: 29842:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout interrupted [ 1217.844820] LustreError: 29842:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) Skipped 2 previous similar messages [ 
1222.587827] Lustre: DEBUG MARKER: == sanityn test 41d: pdirops: create vs unlink ======================================================================== 03:42:19 (1713426139) [ 1230.527809] Lustre: DEBUG MARKER: == sanityn test 41e: pdirops: create and rename (tgt) ======================================================================== 03:42:27 (1713426147) [ 1238.432945] Lustre: DEBUG MARKER: == sanityn test 41f: pdirops: create and rename (src) ======================================================================== 03:42:35 (1713426155) [ 1246.721020] Lustre: DEBUG MARKER: == sanityn test 41g: pdirops: create vs getattr ======================================================================== 03:42:44 (1713426164) [ 1254.975225] Lustre: DEBUG MARKER: == sanityn test 41h: pdirops: create vs readdir ======================================================================== 03:42:52 (1713426172) [ 1257.728976] LustreError: 29840:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout interrupted [ 1257.733307] LustreError: 29840:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) Skipped 4 previous similar messages [ 1262.723375] Lustre: DEBUG MARKER: == sanityn test 41i: reint_open: create vs create ======== 03:43:00 (1713426180) [ 1263.259110] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) cfs_race id 169 sleeping [ 1263.465948] LustreError: 10475:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 waking [ 1263.470673] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 awake: rc=4794 [ 1264.423622] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) cfs_race id 169 sleeping [ 1264.630166] LustreError: 10475:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 waking [ 1264.634087] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 awake: rc=4793 [ 1265.619108] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) cfs_race id 169 sleeping [ 1265.821888] LustreError: 
10475:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 waking [ 1265.825905] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 awake: rc=4798 [ 1267.932353] LustreError: 6929:0:(mdt_open.c:1495:mdt_reint_open()) cfs_race id 169 sleeping [ 1267.937674] LustreError: 6929:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 1 previous similar message [ 1268.136345] LustreError: 11251:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 waking [ 1268.140687] LustreError: 11251:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 1 previous similar message [ 1268.144796] LustreError: 6929:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 awake: rc=4798 [ 1268.149796] LustreError: 6929:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 1 previous similar message [ 1272.342216] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) cfs_race id 169 sleeping [ 1272.344816] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 3 previous similar messages [ 1272.547743] LustreError: 29846:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 waking [ 1272.551628] LustreError: 29846:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 3 previous similar messages [ 1272.556195] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 awake: rc=4791 [ 1272.561833] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 3 previous similar messages [ 1280.876102] LustreError: 29846:0:(mdt_open.c:1495:mdt_reint_open()) cfs_race id 169 sleeping [ 1280.880301] LustreError: 29846:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 6 previous similar messages [ 1281.082800] LustreError: 6931:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 waking [ 1281.087643] LustreError: 6931:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 6 previous similar messages [ 1281.092613] LustreError: 29846:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 awake: rc=4792 [ 1281.097829] LustreError: 
29846:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 6 previous similar messages [ 1297.953565] LustreError: 29846:0:(mdt_open.c:1495:mdt_reint_open()) cfs_race id 169 sleeping [ 1297.958682] LustreError: 29846:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 14 previous similar messages [ 1298.159480] LustreError: 29840:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 waking [ 1298.163090] LustreError: 29840:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 14 previous similar messages [ 1298.168359] LustreError: 29846:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 awake: rc=4797 [ 1298.173357] LustreError: 29846:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 14 previous similar messages [ 1331.175124] LustreError: 11251:0:(mdt_open.c:1495:mdt_reint_open()) cfs_race id 169 sleeping [ 1331.179125] LustreError: 11251:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 28 previous similar messages [ 1331.383452] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 waking [ 1331.388797] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 28 previous similar messages [ 1331.394173] LustreError: 11251:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 awake: rc=4789 [ 1331.398779] LustreError: 11251:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 28 previous similar messages [ 1395.461155] LustreError: 29846:0:(mdt_open.c:1516:mdt_reint_open()) cfs_fail_race id 16a awake: rc=0 [ 1395.466902] LustreError: 29846:0:(mdt_open.c:1516:mdt_reint_open()) Skipped 42 previous similar messages [ 1395.476615] LustreError: 8075:0:(mdt_open.c:1516:mdt_reint_open()) cfs_fail_race id 16a waking [ 1395.480560] LustreError: 8075:0:(mdt_open.c:1516:mdt_reint_open()) Skipped 42 previous similar messages [ 1396.506974] LustreError: 11250:0:(mdt_open.c:1516:mdt_reint_open()) cfs_race id 16a sleeping [ 1396.510679] LustreError: 11250:0:(mdt_open.c:1516:mdt_reint_open()) Skipped 43 previous similar messages [ 1523.538066] LustreError: 
28563:0:(mdt_open.c:1516:mdt_reint_open()) cfs_fail_race id 16a awake: rc=0 [ 1523.540503] LustreError: 28563:0:(mdt_open.c:1516:mdt_reint_open()) Skipped 21 previous similar messages [ 1523.548922] LustreError: 29836:0:(mdt_open.c:1516:mdt_reint_open()) cfs_fail_race id 16a waking [ 1523.552855] LustreError: 29836:0:(mdt_open.c:1516:mdt_reint_open()) Skipped 21 previous similar messages [ 1529.904631] LustreError: 29836:0:(mdt_open.c:1516:mdt_reint_open()) cfs_race id 16a sleeping [ 1529.907063] LustreError: 29836:0:(mdt_open.c:1516:mdt_reint_open()) Skipped 22 previous similar messages [ 1784.939060] LustreError: 6931:0:(mdt_open.c:1516:mdt_reint_open()) cfs_fail_race id 16a awake: rc=0 [ 1784.942245] LustreError: 6931:0:(mdt_open.c:1516:mdt_reint_open()) Skipped 45 previous similar messages [ 1784.947431] LustreError: 6931:0:(mdt_open.c:1516:mdt_reint_open()) cfs_fail_race id 16a waking [ 1784.949594] LustreError: 6931:0:(mdt_open.c:1516:mdt_reint_open()) Skipped 45 previous similar messages [ 1791.347887] LustreError: 6931:0:(mdt_open.c:1516:mdt_reint_open()) cfs_race id 16a sleeping [ 1791.350832] LustreError: 6931:0:(mdt_open.c:1516:mdt_reint_open()) Skipped 45 previous similar messages [ 1952.794632] Lustre: DEBUG MARKER: == sanityn test 42a: pdirops: mkdir vs mkdir ======================================================================== 03:54:30 (1713426870) [ 1953.785011] LustreError: 29846:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout id 145 sleeping for 15000ms [ 1953.787675] LustreError: 29846:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) Skipped 12 previous similar messages [ 1955.089009] LustreError: 29846:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout interrupted [ 1958.677277] Lustre: DEBUG MARKER: == sanityn test 42b: pdirops: mkdir vs create ======================================================================== 03:54:36 (1713426876) [ 1964.729544] Lustre: DEBUG MARKER: == sanityn test 42c: pdirops: mkdir vs link 
======================================================================== 03:54:42 (1713426882) [ 1967.163981] LustreError: 29846:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout interrupted [ 1967.166633] LustreError: 29846:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) Skipped 1 previous similar message [ 1970.955274] Lustre: DEBUG MARKER: == sanityn test 42d: pdirops: mkdir vs unlink ======================================================================== 03:54:48 (1713426888) [ 1972.000835] LustreError: 11250:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout id 145 sleeping for 15000ms [ 1972.005136] LustreError: 11250:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) Skipped 2 previous similar messages [ 1976.969314] Lustre: DEBUG MARKER: == sanityn test 42e: pdirops: mkdir and rename (tgt) ======================================================================== 03:54:54 (1713426894) [ 1982.929095] Lustre: DEBUG MARKER: == sanityn test 42f: pdirops: mkdir and rename (src) ======================================================================== 03:55:00 (1713426900) [ 1985.347044] LustreError: 6929:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout interrupted [ 1985.349113] LustreError: 6929:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) Skipped 2 previous similar messages [ 1988.969127] Lustre: DEBUG MARKER: == sanityn test 42g: pdirops: mkdir vs getattr ======================================================================== 03:55:06 (1713426906) [ 1995.081328] Lustre: DEBUG MARKER: == sanityn test 42h: pdirops: mkdir vs readdir ======================================================================== 03:55:12 (1713426912) [ 2001.218370] Lustre: DEBUG MARKER: == sanityn test 43a: rmdir,mkdir doesn't return -EEXIST ======================================================================== 03:55:18 (1713426918) [ 2024.763922] Lustre: DEBUG MARKER: == sanityn test 43b: pdirops: unlink vs create 
======================================================================== 03:55:42 (1713426942) [ 2025.782249] LustreError: 29836:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout id 145 sleeping for 15000ms [ 2025.785130] LustreError: 29836:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) Skipped 4 previous similar messages [ 2027.187000] LustreError: 29836:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout interrupted [ 2027.189717] LustreError: 29836:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) Skipped 2 previous similar messages [ 2030.834836] Lustre: DEBUG MARKER: == sanityn test 43c: pdirops: unlink vs link ======================================================================== 03:55:48 (1713426948) [ 2036.953060] Lustre: DEBUG MARKER: == sanityn test 43d: pdirops: unlink vs unlink ======================================================================== 03:55:54 (1713426954) [ 2042.972182] Lustre: DEBUG MARKER: == sanityn test 43e: pdirops: unlink and rename (tgt) ======================================================================== 03:56:00 (1713426960) [ 2048.921070] Lustre: DEBUG MARKER: == sanityn test 43f: pdirops: unlink and rename (src) ======================================================================== 03:56:06 (1713426966) [ 2055.275252] Lustre: DEBUG MARKER: == sanityn test 43g: pdirops: unlink vs getattr ======================================================================== 03:56:12 (1713426972) [ 2061.693899] Lustre: DEBUG MARKER: == sanityn test 43h: pdirops: unlink vs readdir ======================================================================== 03:56:19 (1713426979) [ 2068.029794] Lustre: DEBUG MARKER: == sanityn test 43i: pdirops: unlink vs remote mkdir ===== 03:56:25 (1713426985) [ 2074.353285] Lustre: DEBUG MARKER: == sanityn test 43j: racy mkdir return EEXIST ======================================================================== 03:56:31 (1713426991) [ 2074.699612] LustreError: 
29836:0:(mdt_reint.c:628:mdt_create()) cfs_race id 167 sleeping [ 2074.699613] LustreError: 29840:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 waking [ 2074.704001] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 awake: rc=5000 [ 2075.381840] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) cfs_race id 167 sleeping [ 2075.382045] LustreError: 29840:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 waking [ 2075.382047] LustreError: 29840:0:(mdt_reint.c:628:mdt_create()) Skipped 1 previous similar message [ 2075.388509] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) Skipped 1 previous similar message [ 2075.390431] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 awake: rc=5000 [ 2075.392605] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) Skipped 1 previous similar message [ 2076.386666] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) cfs_race id 167 sleeping [ 2076.386810] LustreError: 29840:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 waking [ 2076.386812] LustreError: 29840:0:(mdt_reint.c:628:mdt_create()) Skipped 2 previous similar messages [ 2076.394302] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) Skipped 2 previous similar messages [ 2076.396009] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 awake: rc=5000 [ 2076.398524] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) Skipped 2 previous similar messages [ 2078.492207] LustreError: 29846:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 waking [ 2078.492212] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) cfs_race id 167 sleeping [ 2078.492214] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) Skipped 5 previous similar messages [ 2078.503767] LustreError: 29846:0:(mdt_reint.c:628:mdt_create()) Skipped 5 previous similar messages [ 2078.507621] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 awake: rc=4985 [ 2078.510223] LustreError: 
29836:0:(mdt_reint.c:628:mdt_create()) Skipped 5 previous similar messages [ 2082.747534] LustreError: 29838:0:(mdt_reint.c:628:mdt_create()) cfs_race id 167 sleeping [ 2082.747537] LustreError: 29841:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 waking [ 2082.747542] LustreError: 29841:0:(mdt_reint.c:628:mdt_create()) Skipped 11 previous similar messages [ 2082.756068] LustreError: 29838:0:(mdt_reint.c:628:mdt_create()) Skipped 11 previous similar messages [ 2082.759511] LustreError: 29838:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 awake: rc=5000 [ 2082.762539] LustreError: 29838:0:(mdt_reint.c:628:mdt_create()) Skipped 11 previous similar messages [ 2090.926160] LustreError: 29842:0:(mdt_reint.c:628:mdt_create()) cfs_race id 167 sleeping [ 2090.926162] LustreError: 11250:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 waking [ 2090.926168] LustreError: 11250:0:(mdt_reint.c:628:mdt_create()) Skipped 23 previous similar messages [ 2090.934545] LustreError: 29842:0:(mdt_reint.c:628:mdt_create()) Skipped 23 previous similar messages [ 2090.937855] LustreError: 29842:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 awake: rc=5000 [ 2090.941403] LustreError: 29842:0:(mdt_reint.c:628:mdt_create()) Skipped 23 previous similar messages [ 2107.049012] LustreError: 8075:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 waking [ 2107.049018] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) cfs_race id 167 sleeping [ 2107.049021] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) Skipped 49 previous similar messages [ 2107.055959] LustreError: 8075:0:(mdt_reint.c:628:mdt_create()) Skipped 49 previous similar messages [ 2107.059080] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) cfs_fail_race id 167 awake: rc=4990 [ 2107.062001] LustreError: 29836:0:(mdt_reint.c:628:mdt_create()) Skipped 49 previous similar messages [ 2110.784319] Lustre: DEBUG MARKER: == sanityn test 43k: unlink vs create ==================== 03:57:08 (1713427028) [ 
2140.119458] LustreError: 8075:0:(mdt_reint.c:1154:mdt_reint_unlink()) cfs_fail_race id 169 waking [ 2140.122060] LustreError: 8075:0:(mdt_reint.c:1154:mdt_reint_unlink()) Skipped 17 previous similar messages [ 2204.438274] LustreError: 6929:0:(mdt_reint.c:1154:mdt_reint_unlink()) cfs_fail_race id 169 waking [ 2204.440232] LustreError: 6929:0:(mdt_reint.c:1154:mdt_reint_unlink()) Skipped 34 previous similar messages [ 2298.401704] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) cfs_fail_race id 169 awake: rc=4494 [ 2298.404471] LustreError: 29836:0:(mdt_open.c:1495:mdt_reint_open()) Skipped 128 previous similar messages [ 2303.763129] LustreError: 29836:0:(mdt_open.c:1516:mdt_reint_open()) cfs_race id 16a sleeping [ 2303.765789] LustreError: 29836:0:(mdt_open.c:1516:mdt_reint_open()) Skipped 129 previous similar messages [ 2333.272361] LustreError: 6931:0:(mdt_reint.c:1155:mdt_reint_unlink()) cfs_fail_race id 16a waking [ 2333.275377] LustreError: 6931:0:(mdt_reint.c:1155:mdt_reint_unlink()) Skipped 66 previous similar messages [ 2495.670770] Lustre: DEBUG MARKER: == sanityn test 44a: pdirops: rename tgt vs mkdir ======================================================================== 04:03:33 (1713427413) [ 2496.702620] LustreError: 6944:0:(mdt_reint.c:2573:mdt_lock_two_dirs()) cfs_fail_timeout id 146 sleeping for 10000ms [ 2496.705136] LustreError: 6944:0:(mdt_reint.c:2573:mdt_lock_two_dirs()) Skipped 7 previous similar messages [ 2498.107062] LustreError: 6944:0:(mdt_reint.c:2573:mdt_lock_two_dirs()) cfs_fail_timeout interrupted [ 2498.109616] LustreError: 6944:0:(mdt_reint.c:2573:mdt_lock_two_dirs()) Skipped 7 previous similar messages [ 2501.722982] Lustre: DEBUG MARKER: == sanityn test 44b: pdirops: rename tgt vs create ======================================================================== 04:03:39 (1713427419) [ 2507.780526] Lustre: DEBUG MARKER: == sanityn test 44c: pdirops: rename tgt vs link 
======================================================================== 04:03:45 (1713427425) [ 2513.911555] Lustre: DEBUG MARKER: == sanityn test 44d: pdirops: rename tgt vs unlink ======================================================================== 04:03:51 (1713427431) [ 2519.882170] Lustre: DEBUG MARKER: == sanityn test 44e: pdirops: rename tgt and rename (tgt) ======================================================================== 04:03:57 (1713427437) [ 2526.025126] Lustre: DEBUG MARKER: == sanityn test 44f: pdirops: rename tgt and rename (src) ======================================================================== 04:04:03 (1713427443) [ 2532.180963] Lustre: DEBUG MARKER: == sanityn test 44g: pdirops: rename tgt vs getattr ======================================================================== 04:04:09 (1713427449) [ 2538.252236] Lustre: DEBUG MARKER: == sanityn test 44h: pdirops: rename tgt vs readdir ======================================================================== 04:04:15 (1713427455) [ 2544.354645] Lustre: DEBUG MARKER: == sanityn test 44i: pdirops: rename tgt vs remote mkdir ========================================================== 04:04:21 (1713427461) [ 2550.412053] Lustre: DEBUG MARKER: == sanityn test 45a: rename,mkdir doesn't return -EEXIST ======================================================================== 04:04:27 (1713427467) [ 2582.013197] Lustre: DEBUG MARKER: == sanityn test 45b: pdirops: rename src vs create ======================================================================== 04:04:59 (1713427499) [ 2588.028005] Lustre: DEBUG MARKER: == sanityn test 45c: pdirops: rename src vs link ======================================================================== 04:05:05 (1713427505) [ 2594.011079] Lustre: DEBUG MARKER: == sanityn test 45d: pdirops: rename src vs unlink ======================================================================== 04:05:11 (1713427511) [ 2600.082553] Lustre: DEBUG MARKER: == sanityn test 45e: 
pdirops: rename src and rename (tgt) ======================================================================== 04:05:17 (1713427517) [ 2606.095053] Lustre: DEBUG MARKER: == sanityn test 45f: pdirops: rename src and rename (src) ======================================================================== 04:05:23 (1713427523) [ 2612.256910] Lustre: DEBUG MARKER: == sanityn test 45g: pdirops: rename src vs getattr ======================================================================== 04:05:29 (1713427529) [ 2618.406687] Lustre: DEBUG MARKER: == sanityn test 45h: pdirops: unlink vs readdir ======================================================================== 04:05:35 (1713427535) [ 2623.864591] Lustre: DEBUG MARKER: == sanityn test 45i: pdirops: rename src vs remote mkdir ========================================================== 04:05:41 (1713427541) [ 2624.894546] LustreError: 6944:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout id 145 sleeping for 15000ms [ 2624.898122] LustreError: 6944:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) Skipped 15 previous similar messages [ 2626.300052] LustreError: 6944:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout interrupted [ 2626.302465] LustreError: 6944:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) Skipped 15 previous similar messages [ 2630.050647] Lustre: DEBUG MARKER: == sanityn test 45j: read vs rename ====================== 04:05:47 (1713427547) [ 2631.648172] LustreError: 6944:0:(mdt_reint.c:2712:mdt_reint_rename()) cfs_fail_race id 169 waking [ 2631.650547] LustreError: 6944:0:(mdt_reint.c:2712:mdt_reint_rename()) Skipped 82 previous similar messages [ 2900.150593] LustreError: 11251:0:(mdt_open.c:1516:mdt_reint_open()) cfs_fail_race id 16a awake: rc=4494 [ 2900.155041] LustreError: 11251:0:(mdt_open.c:1516:mdt_reint_open()) Skipped 240 previous similar messages [ 2905.314587] LustreError: 29836:0:(mdt_open.c:1516:mdt_reint_open()) cfs_race id 16a sleeping [ 2905.316527] LustreError: 
29836:0:(mdt_open.c:1516:mdt_reint_open()) Skipped 240 previous similar messages [ 3015.432834] Lustre: DEBUG MARKER: == sanityn test 46a: pdirops: link vs mkdir ======================================================================== 04:12:12 (1713427932) [ 3016.386110] LustreError: 6929:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout id 145 sleeping for 15000ms [ 3017.789037] LustreError: 6929:0:(mdt_handler.c:3946:mdt_object_pdo_lock()) cfs_fail_timeout interrupted [ 3021.227392] Lustre: DEBUG MARKER: == sanityn test 46b: pdirops: link vs create ======================================================================== 04:12:18 (1713427938) [ 3027.121875] Lustre: DEBUG MARKER: == sanityn test 46c: pdirops: link vs link ======================================================================== 04:12:24 (1713427944) [ 3032.958155] Lustre: DEBUG MARKER: == sanityn test 46d: pdirops: link vs unlink ======================================================================== 04:12:30 (1713427950) [ 3038.828718] Lustre: DEBUG MARKER: == sanityn test 46e: pdirops: link and rename (tgt) ======================================================================== 04:12:36 (1713427956) [ 3044.705505] Lustre: DEBUG MARKER: == sanityn test 46f: pdirops: link and rename (src) ======================================================================== 04:12:42 (1713427962) [ 3050.657744] Lustre: DEBUG MARKER: == sanityn test 46g: pdirops: link vs getattr ======================================================================== 04:12:48 (1713427968) [ 3057.010523] Lustre: DEBUG MARKER: == sanityn test 46h: pdirops: link vs readdir ======================================================================== 04:12:54 (1713427974) [ 3062.721379] Lustre: DEBUG MARKER: == sanityn test 46i: pdirops: link vs remote mkdir ======= 04:13:00 (1713427980) [ 3068.459884] Lustre: DEBUG MARKER: == sanityn test 47a: pdirops: remote mkdir vs mkdir ====== 04:13:06 (1713427986) [ 3074.495272] Lustre: 
DEBUG MARKER: == sanityn test 47b: pdirops: remote mkdir vs create ===== 04:13:11 (1713427991) [ 3081.584083] Lustre: DEBUG MARKER: == sanityn test 47c: pdirops: remote mkdir vs link ======= 04:13:19 (1713427999) [ 3087.544613] Lustre: DEBUG MARKER: == sanityn test 47d: pdirops: remote mkdir vs unlink ===== 04:13:25 (1713428005) [ 3093.398301] Lustre: DEBUG MARKER: == sanityn test 47e: pdirops: remote mkdir and rename (tgt) ========================================================== 04:13:30 (1713428010) [ 3099.387994] Lustre: DEBUG MARKER: == sanityn test 47f: pdirops: remote mkdir and rename (src) ========================================================== 04:13:36 (1713428016) [ 3105.296499] Lustre: DEBUG MARKER: == sanityn test 47g: pdirops: remote mkdir vs getattr ==== 04:13:42 (1713428022) [ 3111.762193] Lustre: DEBUG MARKER: == sanityn test 50: osc lvb attrs: enqueue vs. CP AST ======================================================================== 04:13:49 (1713428029) [ 3119.840821] Lustre: DEBUG MARKER: == sanityn test 51a: layout lock: refresh layout should work ========================================================== 04:13:57 (1713428037) [ 3124.900867] Lustre: DEBUG MARKER: == sanityn test 51b: layout lock: glimpse should be able to restart if layout changed ========================================================== 04:14:02 (1713428042) [ 3140.022044] Lustre: DEBUG MARKER: == sanityn test 51c: layout lock: IT_LAYOUT blocked and correct layout can be returned ========================================================== 04:14:17 (1713428057) [ 3142.390990] LustreError: 29838:0:(mdt_open.c:948:mdt_object_open_lock()) cfs_fail_timeout id 172 awake [ 3142.394752] LustreError: 29838:0:(mdt_open.c:948:mdt_object_open_lock()) Skipped 12 previous similar messages [ 3147.566153] Lustre: DEBUG MARKER: == sanityn test 51d: layout lock: losing layout lock should clean up memory map region ========================================================== 04:14:25 
(1713428065) [ 3151.735917] Lustre: DEBUG MARKER: == sanityn test 51e: lfs getstripe does not break leases, part 2 ========================================================== 04:14:29 (1713428069) [ 3156.797846] Lustre: DEBUG MARKER: == sanityn test 54: rename locking ======================= 04:14:34 (1713428074) [ 3162.088123] LustreError: 6944:0:(mdt_reint.c:2562:mdt_lock_two_dirs()) cfs_fail_timeout id 153 awake [ 3162.090202] LustreError: 6944:0:(mdt_reint.c:2562:mdt_lock_two_dirs()) Skipped 1 previous similar message [ 3178.131091] LustreError: 6944:0:(mdt_reint.c:2742:mdt_reint_rename()) cfs_fail_timeout id 154 awake [ 3178.133819] LustreError: 6944:0:(mdt_reint.c:2742:mdt_reint_rename()) Skipped 2 previous similar messages [ 3181.204590] Lustre: DEBUG MARKER: == sanityn test 55a: rename vs unlink target dir ========= 04:14:58 (1713428098) [ 3189.607052] Lustre: DEBUG MARKER: == sanityn test 55b: rename vs unlink source dir ========= 04:15:07 (1713428107) [ 3198.101957] Lustre: DEBUG MARKER: == sanityn test 55c: rename vs unlink orphan target dir == 04:15:15 (1713428115) [ 3211.639182] Lustre: DEBUG MARKER: == sanityn test 55d: rename file vs link ================= 04:15:29 (1713428129) [ 3217.004083] LustreError: 6944:0:(mdt_reint.c:2638:mdt_reint_rename()) cfs_fail_timeout id 155 awake [ 3217.007638] LustreError: 6944:0:(mdt_reint.c:2638:mdt_reint_rename()) Skipped 4 previous similar messages [ 3222.152458] Lustre: DEBUG MARKER: == sanityn test 56a: test llverdev with single large stripe ========================================================== 04:15:39 (1713428139) [ 3229.653834] Lustre: DEBUG MARKER: == sanityn test 56b: test llverdev and partial verify of wide stripe file ========================================================== 04:15:47 (1713428147) [ 3255.431452] Lustre: DEBUG MARKER: == sanityn test 60: Verify data_version behaviour ======== 04:16:12 (1713428172) [ 3259.135754] Lustre: DEBUG MARKER: == sanityn test 70a: cd directory [ 3262.774629] 
Lustre: DEBUG MARKER: == sanityn test 70b: remove files after calling rm_entry ========================================================== 04:16:20 (1713428180) [ 3265.977399] Lustre: DEBUG MARKER: == sanityn test 71a: correct file map just after write operation is finished ========================================================== 04:16:23 (1713428183) [ 3269.293338] Lustre: DEBUG MARKER: == sanityn test 71b: check fiemap support for stripecount > 1 ========================================================== 04:16:26 (1713428186) [ 3272.493865] Lustre: DEBUG MARKER: == sanityn test 71c: check FIEMAP_EXTENT_LAST flag with different extents number ========================================================== 04:16:30 (1713428190) [ 3280.200780] Lustre: DEBUG MARKER: == sanityn test 71d: fiemap corruption test with fm_extent_count=0 ========================================================== 04:16:37 (1713428197) [ 3288.945977] Lustre: DEBUG MARKER: == sanityn test 72: getxattr/setxattr cache should be consistent between nodes ========================================================== 04:16:46 (1713428206) [ 3292.274226] Lustre: DEBUG MARKER: == sanityn test 73: getxattr should not cause xattr lock cancellation ========================================================== 04:16:49 (1713428209) [ 3295.488895] Lustre: DEBUG MARKER: == sanityn test 74: flock deadlock: different mounts ======================================================================== 04:16:53 (1713428213) [ 3303.133881] Lustre: DEBUG MARKER: == sanityn test 75: osc: upcall after unuse lock============================================================================= 04:17:00 (1713428220) [ 3312.271520] Lustre: DEBUG MARKER: == sanityn test 76: Verify MDT open_files listing ======== 04:17:09 (1713428229) [ 3352.868988] Lustre: DEBUG MARKER: == sanityn test 77a: check FIFO NRS policy =============== 04:17:50 (1713428270) [ 3357.924030] Lustre: DEBUG MARKER: == sanityn test 77b: check CRR-N NRS policy 
============== 04:17:55 (1713428275) [ 3362.685163] Lustre: DEBUG MARKER: == sanityn test 77c: check ORR NRS policy ================ 04:18:00 (1713428280) [ 3368.724765] Lustre: DEBUG MARKER: == sanityn test 77d: check TRR nrs policy ================ 04:18:06 (1713428286) [ 3374.245247] Lustre: DEBUG MARKER: == sanityn test 77e: check TBF NID nrs policy ============ 04:18:11 (1713428291) [ 3375.379333] ------------[ cut here ]------------ [ 3375.381022] WARNING: CPU: 1 PID: 9201 at lib/refcount.c:241 refcount_dec+0x3c/0x50 [ 3375.382955] refcount_t: decrement hit 0; leaking memory. [ 3375.384792] Modules linked in: zfs(PO) zunicode(PO) zlua(PO) zcommon(PO) znvpair(PO) zavl(PO) icp(PO) spl(O) lustre(OE) osp(OE) ofd(OE) lod(OE) ost(OE) mdt(OE) mdd(OE) mgs(OE) osd_ldiskfs(OE) ldiskfs(OE) lquota(OE) lfsck(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) dm_flakey dm_mod crc_t10dif crct10dif_generic rpcsec_gss_krb5 sb_edac edac_core iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel squashfs aesni_intel lrw gf128mul glue_helper ablk_helper cryptd i2c_piix4 i2c_core pcspkr binfmt_misc ip_tables ext4 mbcache jbd2 ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw ata_piix libata [ 3375.400213] CPU: 1 PID: 9201 Comm: ll_ost_io00_002 Kdump: loaded Tainted: P OE ------------ 3.10.0-7.9-debug #1 [ 3375.403593] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-1.fc39 04/01/2014 [ 3375.405977] Call Trace: [ 3375.406625] [] dump_stack+0x19/0x1b [ 3375.407864] [] __warn+0xd8/0x100 [ 3375.409427] [] warn_slowpath_fmt+0x5f/0x80 [ 3375.410576] [] refcount_dec+0x3c/0x50 [ 3375.412604] [] nrs_tbf_res_get+0x294/0x510 [ptlrpc] [ 3375.414439] [] nrs_resource_get+0x7c/0x100 [ptlrpc] [ 3375.416416] [] nrs_resource_get_safe+0x89/0x110 [ptlrpc] [ 3375.418218] [] ptlrpc_nrs_req_initialize+0x83/0x100 [ptlrpc] [ 3375.420634] [] 
ptlrpc_server_request_add+0x10b/0xb00 [ptlrpc] [ 3375.422567] [] ? _raw_spin_unlock+0xe/0x20 [ 3375.423870] ------------[ cut here ]------------ [ 3375.423878] WARNING: CPU: 3 PID: 10513 at lib/refcount.c:241 refcount_dec+0x3c/0x50 [ 3375.423878] refcount_t: decrement hit 0; leaking memory. [ 3375.423920] Modules linked in: zfs(PO) zunicode(PO) zlua(PO) zcommon(PO) znvpair(PO) zavl(PO) icp(PO) spl(O) lustre(OE) osp(OE) ofd(OE) lod(OE) ost(OE) mdt(OE) mdd(OE) mgs(OE) osd_ldiskfs(OE) ldiskfs(OE) lquota(OE) lfsck(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) dm_flakey dm_mod crc_t10dif crct10dif_generic rpcsec_gss_krb5 sb_edac edac_core iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel squashfs aesni_intel lrw gf128mul glue_helper ablk_helper cryptd i2c_piix4 i2c_core pcspkr binfmt_misc ip_tables ext4 mbcache jbd2 ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw ata_piix libata [ 3375.423923] CPU: 3 PID: 10513 Comm: ll_ost_io00_013 Kdump: loaded Tainted: P OE ------------ 3.10.0-7.9-debug #1 [ 3375.423924] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-1.fc39 04/01/2014 [ 3375.423925] Call Trace: [ 3375.423934] [] dump_stack+0x19/0x1b [ 3375.423942] [] __warn+0xd8/0x100 [ 3375.423959] [] warn_slowpath_fmt+0x5f/0x80 [ 3375.423961] [] refcount_dec+0x3c/0x50 [ 3375.424054] [] nrs_tbf_res_get+0x294/0x510 [ptlrpc] [ 3375.424097] [] nrs_resource_get+0x7c/0x100 [ptlrpc] [ 3375.424133] [] nrs_resource_get_safe+0x89/0x110 [ptlrpc] [ 3375.424170] [] ptlrpc_nrs_req_initialize+0x83/0x100 [ptlrpc] [ 3375.424205] [] ptlrpc_server_request_add+0x10b/0xb00 [ptlrpc] [ 3375.424212] [] ? _raw_spin_unlock+0xe/0x20 [ 3375.424247] [] ? ptlrpc_at_add_timed+0x177/0x260 [ptlrpc] [ 3375.424298] [] ptlrpc_server_handle_req_in+0x7af/0xa90 [ptlrpc] [ 3375.424335] [] ptlrpc_main+0xbee/0x1690 [ptlrpc] [ 3375.424339] [] ? 
do_raw_spin_unlock+0x49/0x90 [ 3375.424371] [] ? ptlrpc_wait_event+0x610/0x610 [ptlrpc] [ 3375.424374] [] kthread+0xe4/0xf0 [ 3375.424377] [] ? kthread_create_on_node+0x140/0x140 [ 3375.424379] [] ret_from_fork_nospec_begin+0x7/0x21 [ 3375.424381] [] ? kthread_create_on_node+0x140/0x140 [ 3375.424383] ---[ end trace c06fd4252916fa23 ]--- [ 3375.496639] [] ? ptlrpc_at_add_timed+0x177/0x260 [ptlrpc] [ 3375.499749] [] ptlrpc_server_handle_req_in+0x7af/0xa90 [ptlrpc] [ 3375.502908] [] ptlrpc_main+0xbee/0x1690 [ptlrpc] [ 3375.504521] [] ? put_prev_entity+0x31/0x400 [ 3375.506322] [] ? do_raw_spin_unlock+0x49/0x90 [ 3375.508227] [] ? ptlrpc_wait_event+0x610/0x610 [ptlrpc] [ 3375.510589] [] kthread+0xe4/0xf0 [ 3375.512695] [] ? kthread_create_on_node+0x140/0x140 [ 3375.514345] [] ret_from_fork_nospec_begin+0x7/0x21 [ 3375.516654] [] ? kthread_create_on_node+0x140/0x140 [ 3375.519321] ---[ end trace c06fd4252916fa24 ]--- [ 3381.737744] Lustre: DEBUG MARKER: == sanityn test 77f: check TBF JobID nrs policy ========== 04:18:19 (1713428299) [ 3383.039136] ------------[ cut here ]------------ [ 3383.041717] WARNING: CPU: 2 PID: 10509 at lib/refcount.c:161 refcount_inc+0x30/0x40 [ 3383.045266] refcount_t: increment on 0; use-after-free. 
[ 3383.046635] Modules linked in: zfs(PO) zunicode(PO) zlua(PO) zcommon(PO) znvpair(PO) zavl(PO) icp(PO) spl(O) lustre(OE) osp(OE) ofd(OE) lod(OE) ost(OE) mdt(OE) mdd(OE) mgs(OE) osd_ldiskfs(OE) ldiskfs(OE) lquota(OE) lfsck(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) dm_flakey dm_mod crc_t10dif crct10dif_generic rpcsec_gss_krb5 sb_edac edac_core iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel squashfs aesni_intel lrw gf128mul glue_helper ablk_helper cryptd i2c_piix4 i2c_core pcspkr binfmt_misc ip_tables ext4 mbcache jbd2 ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw ata_piix libata [ 3383.062668] CPU: 2 PID: 10509 Comm: ll_ost_io00_010 Kdump: loaded Tainted: P W OE ------------ 3.10.0-7.9-debug #1 [ 3383.066363] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-1.fc39 04/01/2014 [ 3383.069625] Call Trace: [ 3383.070517] [] dump_stack+0x19/0x1b [ 3383.071624] [] __warn+0xd8/0x100 [ 3383.072705] [] warn_slowpath_fmt+0x5f/0x80 [ 3383.073937] [] ? libcfs_debug_msg+0x6d4/0xc20 [libcfs] [ 3383.075417] [] refcount_inc+0x30/0x40 [ 3383.076481] [] nrs_tbf_jobid_hop_get+0x15/0x20 [ptlrpc] [ 3383.077903] [] cfs_hash_bd_lookup_intent+0xe1/0x160 [libcfs] [ 3383.079479] [] cfs_hash_bd_lookup_locked+0x16/0x20 [libcfs] [ 3383.081081] [] nrs_tbf_jobid_hash_lookup+0x13/0x60 [ptlrpc] [ 3383.082616] [] nrs_tbf_jobid_cli_find+0x72/0xb0 [ptlrpc] [ 3383.084554] [] nrs_tbf_res_get+0x47/0x510 [ptlrpc] [ 3383.086236] [] nrs_resource_get+0x7c/0x100 [ptlrpc] [ 3383.087727] [] nrs_resource_get_safe+0x89/0x110 [ptlrpc] [ 3383.089228] [] ptlrpc_nrs_req_initialize+0x83/0x100 [ptlrpc] [ 3383.090793] [] ptlrpc_server_request_add+0x10b/0xb00 [ptlrpc] [ 3383.092354] [] ? _raw_spin_unlock+0xe/0x20 [ 3383.093672] [] ? 
ptlrpc_at_add_timed+0x177/0x260 [ptlrpc] [ 3383.095273] [] ptlrpc_server_handle_req_in+0x7af/0xa90 [ptlrpc] [ 3383.097024] [] ptlrpc_main+0xbee/0x1690 [ptlrpc] [ 3383.098449] [] ? put_prev_entity+0x31/0x400 [ 3383.099718] [] ? do_raw_spin_unlock+0x49/0x90 [ 3383.101131] [] ? ptlrpc_wait_event+0x610/0x610 [ptlrpc] [ 3383.102656] [] kthread+0xe4/0xf0 [ 3383.104014] [] ? kthread_create_on_node+0x140/0x140 [ 3383.107172] [] ret_from_fork_nospec_begin+0x7/0x21 [ 3383.109921] [] ? kthread_create_on_node+0x140/0x140 [ 3383.111942] ---[ end trace c06fd4252916fa25 ]--- [ 3383.131784] ------------[ cut here ]------------ [ 3383.133738] WARNING: CPU: 2 PID: 10509 at lib/refcount.c:292 refcount_dec_not_one+0x7d/0x90 [ 3383.136897] refcount_t: underflow; use-after-free. [ 3383.138751] Modules linked in: zfs(PO) zunicode(PO) zlua(PO) zcommon(PO) znvpair(PO) zavl(PO) icp(PO) spl(O) lustre(OE) osp(OE) ofd(OE) lod(OE) ost(OE) mdt(OE) mdd(OE) mgs(OE) osd_ldiskfs(OE) ldiskfs(OE) lquota(OE) lfsck(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE)[ 3383.146443] ------------[ cut here ]------------ [ 3383.146450] WARNING: CPU: 0 PID: 9201 at lib/refcount.c:292 refcount_dec_not_one+0x7d/0x90 [ 3383.146451] refcount_t: underflow; use-after-free. 
[ 3383.146452] Modules linked in: zfs(PO) zunicode(PO) zlua(PO) zcommon(PO) znvpair(PO) zavl(PO) icp(PO) spl(O) lustre(OE) osp(OE) ofd(OE) lod(OE) ost(OE) mdt(OE) mdd(OE) mgs(OE) osd_ldiskfs(OE) ldiskfs(OE) lquota(OE) lfsck(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) dm_flakey dm_mod crc_t10dif crct10dif_generic rpcsec_gss_krb5 sb_edac edac_core iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel squashfs aesni_intel lrw gf128mul glue_helper ablk_helper cryptd i2c_piix4 i2c_core pcspkr binfmt_misc ip_tables ext4 mbcache jbd2 ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw ata_piix libata [ 3383.146504] CPU: 0 PID: 9201 Comm: ll_ost_io00_002 Kdump: loaded Tainted: P W OE ------------ 3.10.0-7.9-debug #1 [ 3383.146505] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-1.fc39 04/01/2014 [ 3383.146507] Call Trace: [ 3383.146517] [] dump_stack+0x19/0x1b [ 3383.146522] [] __warn+0xd8/0x100 [ 3383.146524] [] warn_slowpath_fmt+0x5f/0x80 [ 3383.146612] [] ? nrs_tbf_jobid_hop_hash+0x19/0x70 [ptlrpc] [ 3383.146615] [] refcount_dec_not_one+0x7d/0x90 [ 3383.146617] [] refcount_dec_and_lock+0x16/0x60 [ 3383.146658] [] nrs_tbf_jobid_cli_put+0x90/0x1e0 [ptlrpc] [ 3383.146698] [] nrs_tbf_res_put+0x1b/0x20 [ptlrpc] [ 3383.146740] [] nrs_resource_put+0x48/0x60 [ptlrpc] [ 3383.146788] [] nrs_resource_put_safe+0x41/0x70 [ptlrpc] [ 3383.146854] [] ptlrpc_nrs_req_finalize+0x22/0x30 [ptlrpc] [ 3383.146888] [] ptlrpc_server_finish_active_request+0x53/0x140 [ptlrpc] [ 3383.146925] [] ptlrpc_server_handle_request+0x424/0xcb0 [ptlrpc] [ 3383.146961] [] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 3383.146982] [] ? put_prev_entity+0x31/0x400 [ 3383.146986] [] ? do_raw_spin_unlock+0x49/0x90 [ 3383.147017] [] ? ptlrpc_wait_event+0x610/0x610 [ptlrpc] [ 3383.147022] [] kthread+0xe4/0xf0 [ 3383.147024] [] ? 
kthread_create_on_node+0x140/0x140 [ 3383.147027] [] ret_from_fork_nospec_begin+0x7/0x21 [ 3383.147029] [] ? kthread_create_on_node+0x140/0x140 [ 3383.147031] ---[ end trace c06fd4252916fa26 ]--- [ 3383.207512] osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) dm_flakey dm_mod crc_t10dif crct10dif_generic rpcsec_gss_krb5 sb_edac edac_core iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel squashfs aesni_intel lrw gf128mul glue_helper ablk_helper cryptd i2c_piix4 i2c_core pcspkr binfmt_misc ip_tables ext4 mbcache jbd2 ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw ata_piix libata [ 3383.224930] CPU: 2 PID: 10509 Comm: ll_ost_io00_010 Kdump: loaded Tainted: P W OE ------------ 3.10.0-7.9-debug #1 [ 3383.228372] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-1.fc39 04/01/2014 [ 3383.230065] Call Trace: [ 3383.230664] [] dump_stack+0x19/0x1b [ 3383.231652] [] __warn+0xd8/0x100 [ 3383.232674] [] warn_slowpath_fmt+0x5f/0x80 [ 3383.234400] [] ? nrs_tbf_jobid_hop_hash+0x19/0x70 [ptlrpc] [ 3383.235933] [] refcount_dec_not_one+0x7d/0x90 [ 3383.237357] [] refcount_dec_and_lock+0x16/0x60 [ 3383.239051] [] nrs_tbf_jobid_cli_put+0x90/0x1e0 [ptlrpc] [ 3383.240911] [] nrs_tbf_res_put+0x1b/0x20 [ptlrpc] [ 3383.242725] [] nrs_resource_put+0x48/0x60 [ptlrpc] [ 3383.244830] [] nrs_resource_put_safe+0x41/0x70 [ptlrpc] [ 3383.246731] [] ptlrpc_nrs_req_finalize+0x22/0x30 [ptlrpc] [ 3383.248659] [] ptlrpc_server_finish_active_request+0x53/0x140 [ptlrpc] [ 3383.250493] [] ptlrpc_server_handle_request+0x424/0xcb0 [ptlrpc] [ 3383.252823] [] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 3383.254616] [] ? put_prev_entity+0x31/0x400 [ 3383.257042] [] ? do_raw_spin_unlock+0x49/0x90 [ 3383.258316] [] ? ptlrpc_wait_event+0x610/0x610 [ptlrpc] [ 3383.260165] [] kthread+0xe4/0xf0 [ 3383.262706] [] ? 
kthread_create_on_node+0x140/0x140 [ 3383.264938] [] ret_from_fork_nospec_begin+0x7/0x21 [ 3383.267556] [] ? kthread_create_on_node+0x140/0x140 [ 3383.269560] ---[ end trace c06fd4252916fa27 ]--- [ 3389.683548] Lustre: DEBUG MARKER: == sanityn test 77g: Change TBF type directly ============ 04:18:27 (1713428307) [ 3394.386982] Lustre: DEBUG MARKER: == sanityn test 77h: Wrong policy name should report error, not LBUG ========================================================== 04:18:31 (1713428311) [ 3399.602217] Lustre: DEBUG MARKER: == sanityn test 77i: Change rank of TBF rule ============= 04:18:37 (1713428317) [ 3407.248766] Lustre: DEBUG MARKER: == sanityn test 77j: check TBF-OPCode NRS policy ========= 04:18:44 (1713428324) [ 3448.103575] Lustre: DEBUG MARKER: == sanityn test 77ja: check TBF-UID/GID NRS policy ======= 04:19:25 (1713428365) [ 3561.948496] Lustre: DEBUG MARKER: == sanityn test 77jb: check TBF-UID/GID NRS policy on files that don't belong to us ========================================================== 04:21:19 (1713428479) [ 3672.782240] Lustre: DEBUG MARKER: == sanityn test 77k: check TBF policy with NID/JobID/OPCode expression ========================================================== 04:23:10 (1713428590) [ 3938.484852] Lustre: DEBUG MARKER: == sanityn test 77l: check the output of NRS policies for generic TBF ========================================================== 04:27:36 (1713428856) [ 3942.480297] Lustre: DEBUG MARKER: == sanityn test 77m: check NRS Delay slows write RPC processing ========================================================== 04:27:40 (1713428860) [ 3993.936736] Lustre: DEBUG MARKER: == sanityn test 77n: check wildcard support for TBF JobID NRS policy ========================================================== 04:28:31 (1713428911) [ 4038.794136] Lustre: DEBUG MARKER: == sanityn test 77o: Changing rank should not panic ====== 04:29:16 (1713428956) [ 4043.656549] Lustre: DEBUG MARKER: == sanityn test 77q: Parallel TBF rule 
definitions should not panic ========================================================== 04:29:21 (1713428961) [ 4071.551504] Lustre: DEBUG MARKER: == sanityn test 77p: Check validity of rule names for TBF policies ========================================================== 04:29:49 (1713428989) [ 4083.087981] Lustre: DEBUG MARKER: == sanityn test 77r: Change type of tbf policy at run time ========================================================== 04:30:00 (1713429000) [ 4122.804090] Lustre: DEBUG MARKER: == sanityn test 78: Enable policy and specify tunings right away ========================================================== 04:30:40 (1713429040) [ 4126.717664] Lustre: DEBUG MARKER: == sanityn test 79: xattr: intent error ================== 04:30:44 (1713429044) [ 4127.199790] LustreError: 6930:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 sleeping for 10000ms [ 4127.202881] LustreError: 6930:0:(mdt_handler.c:5180:mdt_intent_opc()) Skipped 27 previous similar messages [ 4137.206028] LustreError: 6930:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 awake [ 4137.209786] LustreError: 6930:0:(mdt_handler.c:5180:mdt_intent_opc()) Skipped 1 previous similar message [ 4137.213262] Lustre: *** cfs_fail_loc=131, val=0*** [ 4140.612698] Lustre: DEBUG MARKER: == sanityn test 80a: migrate directory when some children is being opened ========================================================== 04:30:58 (1713429058) [ 4141.987555] LustreError: 6944:0:(mdt_reint.c:2451:mdt_reint_migrate()) lustre-MDT0000: migrate [0x240000402:0x602:0x0]/f80a.sanityn failed: rc = -16 [ 4147.071456] Lustre: DEBUG MARKER: == sanityn test 80b: Accessing directory during migration ========================================================== 04:31:04 (1713429064) [ 4148.037334] LustreError: 27744:0:(mdt_reint.c:2451:mdt_reint_migrate()) lustre-MDT0001: migrate [0x240000402:0x629:0x0]/file3 failed: rc = -16 [ 4148.041427] LustreError: 
27744:0:(mdt_reint.c:2451:mdt_reint_migrate()) Skipped 13 previous similar messages [ 4148.348293] LustreError: 27744:0:(mdd_dir.c:4470:mdd_migrate_cmd_check()) lustre-MDD0000: 'migrate_dir' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush migrate_dir' to finish migration: rc = -1 [ 4149.292507] LustreError: 6944:0:(mdt_reint.c:2451:mdt_reint_migrate()) lustre-MDT0001: migrate [0x240000402:0x61d:0x0]/migrate_dir failed: rc = -16 [ 4149.296831] LustreError: 6944:0:(mdt_reint.c:2451:mdt_reint_migrate()) Skipped 11 previous similar messages [ 4149.825087] LustreError: 6944:0:(mdd_dir.c:4470:mdd_migrate_cmd_check()) lustre-MDD0000: 'migrate_dir' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush migrate_dir' to finish migration: rc = -1 [ 4152.425546] LustreError: 6944:0:(mdt_reint.c:2451:mdt_reint_migrate()) lustre-MDT0000: migrate [0x240000402:0x61d:0x0]/migrate_dir failed: rc = -114 [ 4152.436172] LustreError: 6944:0:(mdt_reint.c:2451:mdt_reint_migrate()) Skipped 17 previous similar messages [ 4152.823881] LustreError: 27744:0:(mdd_dir.c:4470:mdd_migrate_cmd_check()) lustre-MDD0000: 'migrate_dir' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush migrate_dir' to finish migration: rc = -1 [ 4152.835327] LustreError: 27744:0:(mdd_dir.c:4470:mdd_migrate_cmd_check()) Skipped 3 previous similar messages [ 4155.801311] LustreError: 6942:0:(mdd_dir.c:4470:mdd_migrate_cmd_check()) lustre-MDD0000: 'migrate_dir' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush migrate_dir' to finish migration: rc = -1 [ 4155.809911] LustreError: 6942:0:(mdd_dir.c:4470:mdd_migrate_cmd_check()) Skipped 2 previous similar messages [ 4158.232395] LustreError: 20078:0:(mdt_reint.c:2451:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000402:0xd6d:0x0]/file1 failed: rc = -16 [ 4158.241128] LustreError: 20078:0:(mdt_reint.c:2451:mdt_reint_migrate()) Skipped 14 previous similar messages [ 4166.357574] LustreError: 
6942:0:(mdt_reint.c:2451:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000402:0xddb:0x0]/file3 failed: rc = -16 [ 4166.364149] LustreError: 6942:0:(mdt_reint.c:2451:mdt_reint_migrate()) Skipped 10 previous similar messages [ 4169.311693] LustreError: 6942:0:(mdd_dir.c:4470:mdd_migrate_cmd_check()) lustre-MDD0001: 'migrate_dir' migration was interrupted, run 'lfs migrate -m 0 -c 1 -H crush migrate_dir' to finish migration: rc = -1 [ 4177.745512] LustreError: 27744:0:(mdd_dir.c:4470:mdd_migrate_cmd_check()) lustre-MDD0000: 'migrate_dir' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush migrate_dir' to finish migration: rc = -1 [ 4177.751379] LustreError: 27744:0:(mdd_dir.c:4470:mdd_migrate_cmd_check()) Skipped 15 previous similar messages [ 4182.484826] LustreError: 20078:0:(mdt_reint.c:2451:mdt_reint_migrate()) lustre-MDT0000: migrate [0x240000402:0x7fd:0x0]/file3 failed: rc = -16 [ 4182.488997] LustreError: 20078:0:(mdt_reint.c:2451:mdt_reint_migrate()) Skipped 112 previous similar messages [ 4227.476188] Lustre: mdt_io00_004: service thread pid 20078 was inactive for 40.072 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [ 4227.485488] Pid: 20078, comm: mdt_io00_004 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 4227.492735] Call Trace: [ 4227.493632] [<0>] ldlm_completion_ast+0x963/0xd00 [ptlrpc] [ 4227.496685] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [ 4227.498642] [<0>] mdt_object_pdo_lock+0x4d9/0x7e0 [mdt] [ 4227.502814] [<0>] mdt_parent_lock+0x76/0x2a0 [mdt] [ 4227.505656] [<0>] mdt_reint_migrate+0xd68/0x2420 [mdt] [ 4227.509049] [<0>] mdt_reint_rec+0x87/0x240 [mdt] [ 4227.511381] [<0>] mdt_reint_internal+0x74c/0xbc0 [mdt] [ 4227.513249] [<0>] mdt_reint+0x67/0x150 [mdt] [ 4227.516749] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 4227.518876] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 4227.521106] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 4227.524711] [<0>] kthread+0xe4/0xf0 [ 4227.526552] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 4227.528387] [<0>] 0xfffffffffffffffe [ 4259.476056] Lustre: mdt00_005: service thread pid 28563 was inactive for 72.073 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [ 4259.483355] Pid: 28563, comm: mdt00_005 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 4259.487835] Call Trace: [ 4259.490974] [<0>] ldlm_completion_ast+0x963/0xd00 [ptlrpc] [ 4259.495704] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [ 4259.501237] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [ 4259.506524] [<0>] mdt_object_lock_try+0xa0/0x250 [mdt] [ 4259.512942] [<0>] mdt_object_open_lock+0x669/0xb50 [mdt] [ 4259.515049] [<0>] mdt_reint_open+0x24cb/0x2e40 [mdt] [ 4259.518625] [<0>] mdt_reint_rec+0x87/0x240 [mdt] [ 4259.521913] [<0>] mdt_reint_internal+0x74c/0xbc0 [mdt] [ 4259.526039] [<0>] mdt_intent_open+0x93/0x480 [mdt] [ 4259.528729] [<0>] mdt_intent_opc+0x1c9/0xc70 [mdt] [ 4259.530377] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [ 4259.533587] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [ 4259.537823] [<0>] ldlm_handle_enqueue+0x35b/0x1820 [ptlrpc] [ 4259.540324] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [ 4259.544540] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 4259.548199] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 4259.552125] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 4259.553314] [<0>] kthread+0xe4/0xf0 [ 4259.554556] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 4259.558204] [<0>] 0xfffffffffffffffe [ 4287.636101] LustreError: 6921:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 192.168.201.36@tcp ns: mdt-lustre-MDT0000_UUID lock: ffff880097342ac0/0x505e36402bec66ea lrc: 3/0,0 mode: PW/PW res: [0x200000402:0xf7a:0x0].0x0 bits 0x4/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.201.36@tcp remote: 0xbe437fb360d6a5cf expref: 505 pid: 29846 timeout: 4286 lvb_type: 0 [ 4287.694091] LustreError: 20078:0:(mdt_reint.c:2451:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000402:0xf7d:0x0]/file4 failed: rc = -16 [ 4287.702206] LustreError: 20078:0:(mdt_reint.c:2451:mdt_reint_migrate()) Skipped 61 previous 
similar messages [ 4294.932359] Lustre: DEBUG MARKER: == sanityn test 81a: rename and stat under striped directory ========================================================== 04:33:32 (1713429212) [ 4302.092470] Lustre: DEBUG MARKER: == sanityn test 81b: rename under striped directory doesn't deadlock ========================================================== 04:33:39 (1713429219) [ 4307.706773] Lustre: lustre-MDT0000: Client 92018d71-5236-41f8-a99f-7b8aea033388 (at 192.168.201.36@tcp) reconnecting [ 4307.712032] Lustre: Skipped 1 previous similar message [ 4468.712087] Lustre: 29836:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88008faeed80 x1796656315450048/t8589973749(0) o101->92018d71-5236-41f8-a99f-7b8aea033388@192.168.201.36@tcp:83/0 lens 4632/51392 e 0 to 0 dl 1713429548 ref 1 fl Interpret:H/202/0 rc 0/0 job:'createmany.0' uid:0 gid:0 [ 4548.131448] Lustre: 27975:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1713429466/real 1713429466] req@ffff88008a91ed80 x1796656292665408/t0(0) o104->lustre-MDT0000@192.168.201.36@tcp:15/16 lens 328/224 e 0 to 1 dl 1713429482 ref 1 fl Rpc:ReXQU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 4579.363427] Lustre: DEBUG MARKER: == sanityn test 81c: rename revoke LOOKUP lock for remote object ========================================================== 04:38:16 (1713429496) [ 4579.988163] Lustre: DEBUG MARKER: SKIP: sanityn test_81c needs >= 4 MDTs [ 4582.876200] Lustre: DEBUG MARKER: == sanityn test 81d: parallel rename file cross-dir on same MDT ========================================================== 04:38:20 (1713429500) [ 4658.600168] Lustre: DEBUG MARKER: == sanityn test 82: fsetxattr and fgetxattr on orphan files ========================================================== 04:39:35 (1713429575) [ 4663.368172] Lustre: DEBUG MARKER: == sanityn test 83: access striped directory while it is being created/unlinked 
========================================================== 04:39:40 (1713429580) [ 4787.196368] Lustre: DEBUG MARKER: == sanityn test 84: 0-nlink race in lu_object_find() ===== 04:41:44 (1713429704) [ 4787.617268] LustreError: 29836:0:(lu_object.c:863:lu_object_find_at()) cfs_race id 60b sleeping [ 4792.508836] LustreError: 6929:0:(mdt_reint.c:1337:mdt_reint_unlink()) cfs_fail_race id 60b waking [ 4792.512601] LustreError: 29836:0:(lu_object.c:863:lu_object_find_at()) cfs_fail_race id 60b awake: rc=0 [ 4796.459699] Lustre: DEBUG MARKER: == sanityn test 85: Lustre API root cache race =========== 04:41:53 (1713429713) [ 4801.242631] Lustre: DEBUG MARKER: == sanityn test 90: open/create and unlink striped directory ========================================================== 04:41:58 (1713429718) [ 4985.475719] Lustre: DEBUG MARKER: == sanityn test 91: chmod and unlink striped directory === 04:45:02 (1713429902) [ 5168.614658] Lustre: DEBUG MARKER: == sanityn test 92: create remote directory under orphan directory ========================================================== 04:48:06 (1713430086) [ 5171.861250] Lustre: DEBUG MARKER: == sanityn test 93: alloc_rr should not allocate on same ost ========================================================== 04:48:09 (1713430089) [ 5172.622456] LustreError: 6929:0:(lod_qos.c:674:lod_check_and_reserve_ost()) cfs_fail_timeout id 163 sleeping for 2000ms [ 5174.627045] LustreError: 6929:0:(lod_qos.c:674:lod_check_and_reserve_ost()) cfs_fail_timeout id 163 awake [ 5175.738067] LustreError: 8075:0:(lod_qos.c:674:lod_check_and_reserve_ost()) cfs_fail_timeout id 163 awake [ 5177.840092] LustreError: 8075:0:(lod_qos.c:674:lod_check_and_reserve_ost()) cfs_fail_timeout id 163 awake [ 5177.843425] LustreError: 8075:0:(lod_qos.c:674:lod_check_and_reserve_ost()) Skipped 1 previous similar message [ 5181.707463] Lustre: DEBUG MARKER: == sanityn test 94: signal vs CP callback race =========== 04:48:19 (1713430099) [ 5193.079162] Lustre: DEBUG 
MARKER: == sanityn test 95a: Check readpage() on a page that was removed from page cache ========================================================== 04:48:30 (1713430110) [ 5213.391963] Lustre: DEBUG MARKER: == sanityn test 95b: Check readpage() on a page that is no longer uptodate ========================================================== 04:48:50 (1713430130) [ 5222.551872] Lustre: DEBUG MARKER: == sanityn test 100a: DoM: glimpse RPCs for stat without IO lock (DoM only file) ========================================================== 04:49:00 (1713430140) [ 5222.895603] Lustre: DEBUG MARKER: SKIP: sanityn test_100a Reserved for glimpse-ahead [ 5224.839871] Lustre: DEBUG MARKER: == sanityn test 100b: DoM: no glimpse RPC for stat with IO lock (DoM only file) ========================================================== 04:49:02 (1713430142) [ 5228.107616] Lustre: DEBUG MARKER: == sanityn test 100c: DoM: write vs stat without IO lock (combined file) ========================================================== 04:49:05 (1713430145) [ 5231.315229] Lustre: DEBUG MARKER: == sanityn test 100d: DoM: write+truncate vs stat without IO lock (combined file) ========================================================== 04:49:08 (1713430148) [ 5234.350750] Lustre: DEBUG MARKER: == sanityn test 100e: DoM: read on open and file size ==== 04:49:11 (1713430151) [ 5237.501735] Lustre: DEBUG MARKER: == sanityn test 101a: Discard DoM data on unlink ========= 04:49:15 (1713430155) [ 5240.682333] Lustre: DEBUG MARKER: == sanityn test 101b: Discard DoM data on rename ========= 04:49:18 (1713430158) [ 5243.885867] Lustre: DEBUG MARKER: == sanityn test 101c: Discard DoM data on close-unlink === 04:49:21 (1713430161) [ 5248.033480] Lustre: DEBUG MARKER: == sanityn test 102: Test open by handle of unlinked file ========================================================== 04:49:25 (1713430165) [ 5251.580807] Lustre: DEBUG MARKER: == sanityn test 103: Test size correctness with lockahead 
========================================================== 04:49:29 (1713430169) [ 5259.803399] Lustre: DEBUG MARKER: == sanityn test 104: Verify that MDS stores atime/mtime/ctime during close ========================================================== 04:49:37 (1713430177) [ 5279.092395] Lustre: DEBUG MARKER: == sanityn test 105: Glimpse and lock cancel race ======== 04:49:56 (1713430196) [ 5307.693321] Lustre: DEBUG MARKER: == sanityn test 106a: Verify the btime via statx() ======= 04:50:25 (1713430225) [ 5308.102091] Lustre: DEBUG MARKER: SKIP: sanityn test_106a Test only for ldiskfs and statx() supported [ 5310.118631] Lustre: DEBUG MARKER: == sanityn test 106b: Glimpse RPCs test for statx ======== 04:50:27 (1713430227) [ 5310.484788] Lustre: DEBUG MARKER: SKIP: sanityn test_106b statx() only test [ 5312.486273] Lustre: DEBUG MARKER: == sanityn test 106c: Verify statx attributes mask ======= 04:50:30 (1713430230) [ 5312.856010] Lustre: DEBUG MARKER: SKIP: sanityn test_106c statx() only test [ 5314.872398] Lustre: DEBUG MARKER: == sanityn test 107a: Basic grouplock conflict =========== 04:50:32 (1713430232) [ 5319.975835] Lustre: DEBUG MARKER: == sanityn test 107b: Grouplock is added to the head of waiting list ========================================================== 04:50:37 (1713430237) [ 5329.375890] Lustre: DEBUG MARKER: == sanityn test 108a: lseek: parallel updates ============ 04:50:46 (1713430246) [ 5337.043344] Lustre: DEBUG MARKER: == sanityn test 109: Race with several mount instances on 1 node ========================================================== 04:50:54 (1713430254) [ 5338.855346] Lustre: DEBUG MARKER: Iteration 1 [ 5345.464927] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5346.369538] Lustre: DEBUG MARKER: Iteration 2 [ 5352.144645] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5352.999634] Lustre: DEBUG MARKER: Iteration 3 [ 5359.712605] Lustre: DEBUG MARKER: 
oleg136-server.virtnet: executing load_modules_local [ 5360.544512] Lustre: DEBUG MARKER: Iteration 4 [ 5367.396535] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5368.349942] Lustre: DEBUG MARKER: Iteration 5 [ 5375.115413] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5375.971393] Lustre: DEBUG MARKER: Iteration 6 [ 5382.813758] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5383.786779] Lustre: DEBUG MARKER: Iteration 7 [ 5390.750623] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5391.715557] Lustre: DEBUG MARKER: Iteration 8 [ 5398.707611] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5399.692969] Lustre: DEBUG MARKER: Iteration 9 [ 5406.818531] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5407.838303] Lustre: DEBUG MARKER: Iteration 10 [ 5415.115992] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5416.115713] Lustre: DEBUG MARKER: Iteration 11 [ 5423.013839] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5423.895883] Lustre: DEBUG MARKER: Iteration 12 [ 5430.756173] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5431.673912] Lustre: DEBUG MARKER: Iteration 13 [ 5438.524261] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5439.435146] Lustre: DEBUG MARKER: Iteration 14 [ 5446.305829] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5447.224956] Lustre: DEBUG MARKER: Iteration 15 [ 5453.992203] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5454.870445] Lustre: DEBUG MARKER: Iteration 16 [ 5461.566545] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local [ 5462.455581] Lustre: DEBUG MARKER: Iteration 17 [ 5469.236102] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing 
load_modules_local
[ 5470.111619] Lustre: DEBUG MARKER: Iteration 18
[ 5476.935976] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5477.850875] Lustre: DEBUG MARKER: Iteration 19
[ 5484.818200] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5485.764028] Lustre: DEBUG MARKER: Iteration 20
[ 5492.662736] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5493.595658] Lustre: DEBUG MARKER: Iteration 21
[ 5500.523243] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5501.465794] Lustre: DEBUG MARKER: Iteration 22
[ 5508.231187] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5509.092692] Lustre: DEBUG MARKER: Iteration 23
[ 5516.053020] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5516.946421] Lustre: DEBUG MARKER: Iteration 24
[ 5523.760281] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5524.643223] Lustre: DEBUG MARKER: Iteration 25
[ 5531.316241] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5532.165528] Lustre: DEBUG MARKER: Iteration 26
[ 5539.272853] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5540.281398] Lustre: DEBUG MARKER: Iteration 27
[ 5547.493060] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5548.532544] Lustre: DEBUG MARKER: Iteration 28
[ 5555.874250] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5556.903011] Lustre: DEBUG MARKER: Iteration 29
[ 5564.222008] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5565.244906] Lustre: DEBUG MARKER: Iteration 30
[ 5572.351070] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5573.418342] Lustre: DEBUG MARKER: Iteration 31
[ 5580.562807] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5581.609674] Lustre: DEBUG MARKER: Iteration 32
[ 5588.820676] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5589.869090] Lustre: DEBUG MARKER: Iteration 33
[ 5596.931417] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5597.917438] Lustre: DEBUG MARKER: Iteration 34
[ 5604.937433] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5605.914413] Lustre: DEBUG MARKER: Iteration 35
[ 5612.841666] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5613.775108] Lustre: DEBUG MARKER: Iteration 36
[ 5620.781448] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5621.834802] Lustre: DEBUG MARKER: Iteration 37
[ 5628.950325] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5630.011871] Lustre: DEBUG MARKER: Iteration 38
[ 5637.275003] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5638.313720] Lustre: DEBUG MARKER: Iteration 39
[ 5645.464450] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5646.442989] Lustre: DEBUG MARKER: Iteration 40
[ 5653.528547] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5654.587434] Lustre: DEBUG MARKER: Iteration 41
[ 5660.851723] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5661.877339] Lustre: DEBUG MARKER: Iteration 42
[ 5669.068581] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5670.158248] Lustre: DEBUG MARKER: Iteration 43
[ 5677.199004] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5678.196982] Lustre: DEBUG MARKER: Iteration 44
[ 5685.189362] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5686.174710] Lustre: DEBUG MARKER: Iteration 45
[ 5693.378564] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5694.458183] Lustre: DEBUG MARKER: Iteration 46
[ 5701.622346] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5702.731768] Lustre: DEBUG MARKER: Iteration 47
[ 5709.736707] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5710.724007] Lustre: DEBUG MARKER: Iteration 48
[ 5717.959632] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5719.056025] Lustre: DEBUG MARKER: Iteration 49
[ 5726.125514] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5727.118711] Lustre: DEBUG MARKER: Iteration 50
[ 5734.426637] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing load_modules_local
[ 5739.843987] Lustre: DEBUG MARKER: == sanityn test 110: do not grant another lock on resend ========================================================== 04:57:37 (1713430657)
[ 5755.463069] LustreError: 11251:0:(mdt_handler.c:2463:mdt_getattr_name_lock()) cfs_fail_timeout id 534 awake
[ 5755.466899] Lustre: 11251:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (11/3s); client may timeout req@ffff88008b3b0700 x1796662256211520/t0(0) o101->0079ea62-f8b0-440b-9ffc-87400a2d7b92@192.168.201.36@tcp:451/0 lens 584/640 e 0 to 0 dl 1713430671 ref 1 fl Complete:/200/0 rc 0/0 job:'stat.0' uid:0 gid:0
[ 5756.147356] Lustre: lustre-MDT0000: Client 0079ea62-f8b0-440b-9ffc-87400a2d7b92 (at 192.168.201.36@tcp) reconnecting
[ 5772.170724] Lustre: lustre-MDT0000: Client 0079ea62-f8b0-440b-9ffc-87400a2d7b92 (at 192.168.201.36@tcp) reconnecting
[ 5778.275036] LustreError: 28563:0:(service.c:2223:ptlrpc_server_handle_req_in()) cfs_fail_timeout id 534 awake
[ 5778.277810] LustreError: 28563:0:(service.c:2223:ptlrpc_server_handle_req_in()) Skipped 1 previous similar message
[ 5788.190553] Lustre: lustre-MDT0000: Client 0079ea62-f8b0-440b-9ffc-87400a2d7b92 (at 192.168.201.36@tcp) reconnecting
[ 5804.211538] Lustre: lustre-MDT0000: Client 0079ea62-f8b0-440b-9ffc-87400a2d7b92 (at 192.168.201.36@tcp) reconnecting
[ 5806.253003] Lustre: DEBUG MARKER: == sanityn test 111: A racy rename/link an open file should not cause fs corruption ========================================================== 04:58:43 (1713430723)
[ 5806.592681] LustreError: 29846:0:(mdt_reint.c:1408:mdt_reint_link()) cfs_race id 18a sleeping
[ 5806.596083] LustreError: 29846:0:(mdt_reint.c:1408:mdt_reint_link()) Skipped 2 previous similar messages
[ 5808.608818] LustreError: 27744:0:(mdt_reint.c:2969:mdt_reint_rename()) cfs_fail_race id 18a waking
[ 5808.611785] LustreError: 27744:0:(mdt_reint.c:2969:mdt_reint_rename()) Skipped 199 previous similar messages
[ 5808.614499] LustreError: 29846:0:(mdt_reint.c:1408:mdt_reint_link()) cfs_fail_race id 18a awake: rc=2985
[ 5808.617767] LustreError: 29846:0:(mdt_reint.c:1408:mdt_reint_link()) Skipped 2 previous similar messages
[ 5812.774632] Lustre: DEBUG MARKER: == sanityn test 112: update max-inherit in default LMV === 04:58:50 (1713430730)
[ 5816.954783] Lustre: DEBUG MARKER: == sanityn test 113: check servers of specified fs ======= 04:58:54 (1713430734)
[ 5820.124282] Lustre: DEBUG MARKER: == sanityn test 114: implicit default LMV inherit ======== 04:58:57 (1713430737)
[ 5827.200488] Lustre: DEBUG MARKER: == sanityn test 115: ldiskfs doesn't check direntry for uniqueness ========================================================== 04:59:04 (1713430744)
[ 5831.372090] LustreError: 29836:0:(mdt_reint.c:622:mdt_create()) cfs_fail_timeout id 2401 sleeping for 5000ms
[ 5831.374415] LustreError: 29836:0:(mdt_reint.c:622:mdt_create()) Skipped 7 previous similar messages
[ 5836.376069] LustreError: 29836:0:(mdt_reint.c:622:mdt_create()) cfs_fail_timeout id 2401 awake
[ 5836.377705] LustreError: 29836:0:(mdt_reint.c:622:mdt_create()) Skipped 1 previous similar message
[ 5838.466824] Lustre: DEBUG MARKER: cleanup: ======================================================
[ 5838.934984] Lustre: DEBUG MARKER: == sanityn test complete, duration 5747 sec ============== 04:59:16 (1713430756)
[ 5888.628586] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 5888.629245] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 5888.638556] Lustre: Skipped 3 previous similar messages
[ 5889.234959] Lustre: server umount lustre-MDT0000 complete
[ 5891.421462] LustreError: 8070:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713430810 with bad export cookie 5791125820085909439
[ 5891.422331] LustreError: 166-1: MGC192.168.201.136@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 5891.426347] LustreError: 8070:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 4 previous similar messages
[ 5891.612800] Lustre: server umount lustre-MDT0001 complete
[ 5903.816271] Lustre: server umount lustre-OST0000 complete
[ 5916.098423] Lustre: server umount lustre-OST0001 complete
[ 5917.727415] device-mapper: core: cleaned up
[ 5920.077977] Lustre: DEBUG MARKER: oleg136-server.virtnet: executing unload_modules_local
[ 5920.533058] Key type lgssc unregistered
[ 5920.599345] LNet: 12457:0:(lib-ptl.c:966:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[ 5920.601859] LNet: Removed LNI 192.168.201.136@tcp