[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] Linux version 3.10.0-7.9-debug (green@centos7-base) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) ) #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 0.000000] Command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffcdfff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000bffce000-0x00000000bfffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000013edfffff] usable
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 3.0.0 present.
[ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-1.fc39 04/01/2014
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] e820: last_pfn = 0x13ee00 max_arch_pfn = 0x400000000
[ 0.000000] PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC
[ 0.000000] e820: last_pfn = 0xbffce max_arch_pfn = 0x400000000
[ 0.000000] found SMP MP-table at [mem 0x000f53f0-0x000f53ff] mapped at [ffffffffff2003f0]
[ 0.000000] Using GB pages for direct mapping
[ 0.000000] RAMDISK: [mem 0xbc2e2000-0xbffbffff]
[ 0.000000] Early table checksum verification disabled
[ 0.000000] ACPI: RSDP 00000000000f5200 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 00000000bffe1d87 00034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 00000000bffe1c23 00074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 00000000bffe0040 01BE3 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACS 00000000bffe0000 00040
[ 0.000000] ACPI: APIC 00000000bffe1c97 00090 (v03 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: HPET 00000000bffe1d27 00038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: WAET 00000000bffe1d5f 00028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000013edfffff]
[ 0.000000] NODE_DATA(0) allocated [mem 0x13e5e3000-0x13e609fff]
[ 0.000000] Reserving 128MB of memory at 768MB for crashkernel (System RAM: 4077MB)
[ 0.000000] kvm-clock: cpu 0, msr 1:3e592001, primary cpu clock
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000] kvm-clock: using sched offset of 510398022 cycles
[ 0.000000] Zone ranges:
[ 0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[ 0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[ 0.000000]   Normal   [mem 0x100000000-0x13edfffff]
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000]   node 0: [mem 0x00001000-0x0009efff]
[ 0.000000]   node 0: [mem 0x00100000-0xbffcdfff]
[ 0.000000]   node 0: [mem 0x100000000-0x13edfffff]
[ 0.000000] Initmem setup node 0 [mem 0x00001000-0x13edfffff]
[ 0.000000] ACPI: PM-Timer IO Port: 0x608
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[ 0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xbffce000-0xbfffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[ 0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
[ 0.000000] Booting paravirtualized kernel on KVM
[ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
[ 0.000000] percpu: Embedded 38 pages/cpu s115176 r8192 d32280 u524288
[ 0.000000] KVM setup async PF for cpu 0
[ 0.000000] kvm-stealtime: cpu 0, msr 13e2135c0
[ 0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 1027487
[ 0.000000] Policy zone: Normal
[ 0.000000] Kernel command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] audit: disabled (until reboot)
[ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
[ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form
[ 0.000000] Memory: 3820268k/5224448k available (8172k kernel code, 1049168k absent, 355012k reserved, 5773k data, 2532k init)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] 	RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=4.
[ 0.000000] 	Offload RCU callbacks from all CPUs
[ 0.000000] 	Offload RCU callbacks from CPUs: 0-3.
[ 0.000000] NR_IRQS:327936 nr_irqs:456 0
[ 0.000000] Console: colour *CGA 80x25
[ 0.000000] console [ttyS1] enabled
[ 0.000000] allocated 25165824 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[ 0.000000] kmemleak: Kernel memory leak detector disabled
[ 0.000000] tsc: Detected 2399.998 MHz processor
[ 0.546377] Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
[ 0.549475] pid_max: default: 32768 minimum: 301
[ 0.551396] Security Framework initialized
[ 0.553051] SELinux:  Initializing.
[ 0.556550] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
[ 0.562103] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
[ 0.566934] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.575145] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.580062] Initializing cgroup subsys memory
[ 0.581625] Initializing cgroup subsys devices
[ 0.584169] Initializing cgroup subsys freezer
[ 0.586290] Initializing cgroup subsys net_cls
[ 0.588650] Initializing cgroup subsys blkio
[ 0.590054] Initializing cgroup subsys perf_event
[ 0.591858] Initializing cgroup subsys hugetlb
[ 0.593745] Initializing cgroup subsys pids
[ 0.595176] Initializing cgroup subsys net_prio
[ 0.596980] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[ 0.600655] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.602152] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.603889] tlb_flushall_shift: 6
[ 0.604943] FEATURE SPEC_CTRL Present
[ 0.606573] FEATURE IBPB_SUPPORT Present
[ 0.608049] Spectre V2 : Enabling Indirect Branch Prediction Barrier
[ 0.611800] Spectre V2 : Vulnerable
[ 0.613183] Speculative Store Bypass: Vulnerable
[ 0.616423] debug: unmapping init [mem 0xffffffff82019000-0xffffffff8201ffff]
[ 0.624680] ACPI: Core revision 20130517
[ 0.627863] ACPI: All ACPI Tables successfully acquired
[ 0.629423] ftrace: allocating 30294 entries in 119 pages
[ 0.683940] Enabling x2apic
[ 0.684912] Enabled x2apic
[ 0.685923] Switched APIC routing to physical x2apic.
[ 0.689087] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.690770] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (fam: 06, model: 3e, stepping: 04)
[ 0.694146] Performance Events: IvyBridge events, full-width counters, Intel PMU driver.
[ 0.696394] ... version:                2
[ 0.697430] ... bit width:              48
[ 0.698319] ... generic registers:      4
[ 0.699297] ... value mask:             0000ffffffffffff
[ 0.700520] ... max period:             00007fffffffffff
[ 0.701816] ... fixed-purpose events:   3
[ 0.702751] ... event mask:             000000070000000f
[ 0.704195] KVM setup paravirtual spinlock
[ 0.707611] smpboot: Booting Node   0, Processors  #1
[ 0.710214] kvm-clock: cpu 1, msr 1:3e592041, secondary cpu clock
[ 0.719010] KVM setup async PF for cpu 1
[ 0.720739] kvm-stealtime: cpu 1, msr 13e2935c0 #2
[ 0.725343] kvm-clock: cpu 2, msr 1:3e592081, secondary cpu clock
[ 0.731934] KVM setup async PF for cpu 2 #3 OK
[ 0.734949] kvm-stealtime: cpu 2, msr 13e3135c0
[ 0.742415] kvm-clock: cpu 3, msr 1:3e5920c1, secondary cpu clock
[ 0.753060] KVM setup async PF for cpu 3
[ 0.754355] kvm-stealtime: cpu 3, msr 13e3935c0
[ 0.755541] Brought up 4 CPUs
[ 0.756214] smpboot: Max logical packages: 1
[ 0.757060] smpboot: Total of 4 processors activated (19199.98 BogoMIPS)
[ 0.761601] devtmpfs: initialized
[ 0.763210] x86/mm: Memory block size: 128MB
[ 0.771773] EVM: security.selinux
[ 0.777149] EVM: security.ima
[ 0.778198] EVM: security.capability
[ 0.784966] atomic64 test passed for x86-64 platform with CX8 and with SSE
[ 0.788426] NET: Registered protocol family 16
[ 0.790168] cpuidle: using governor haltpoll
[ 0.792688] ACPI: bus type PCI registered
[ 0.795142] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.798823] PCI: Using configuration type 1 for base access
[ 0.800576] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on
[ 0.817507] ACPI: Added _OSI(Module Device)
[ 0.818774] ACPI: Added _OSI(Processor Device)
[ 0.820063] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.824564] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.827506] ACPI: Added _OSI(Linux-Dell-Video)
[ 0.836492] ACPI: Interpreter enabled
[ 0.837712] ACPI: (supports S0 S3 S4 S5)
[ 0.838967] ACPI: Using IOAPIC for interrupt routing
[ 0.842122] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 0.846228] ACPI: Enabled 2 GPEs in block 00 to 0F
[ 0.864389] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[ 0.873519] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[ 0.877987] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[ 0.888346] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ 0.893312] acpiphp: Slot [2] registered
[ 0.894129] acpiphp: Slot [5] registered
[ 0.894896] acpiphp: Slot [6] registered
[ 0.896152] acpiphp: Slot [7] registered
[ 0.897557] acpiphp: Slot [8] registered
[ 0.899133] acpiphp: Slot [9] registered
[ 0.900678] acpiphp: Slot [10] registered
[ 0.902651] acpiphp: Slot [3] registered
[ 0.904960] acpiphp: Slot [4] registered
[ 0.906433] acpiphp: Slot [11] registered
[ 0.907770] acpiphp: Slot [12] registered
[ 0.910314] acpiphp: Slot [13] registered
[ 0.912176] acpiphp: Slot [14] registered
[ 0.913540] acpiphp: Slot [15] registered
[ 0.914612] acpiphp: Slot [16] registered
[ 0.915845] acpiphp: Slot [17] registered
[ 0.917035] acpiphp: Slot [18] registered
[ 0.918615] acpiphp: Slot [19] registered
[ 0.920154] acpiphp: Slot [20] registered
[ 0.921829] acpiphp: Slot [21] registered
[ 0.923169] acpiphp: Slot [22] registered
[ 0.924627] acpiphp: Slot [23] registered
[ 0.926050] acpiphp: Slot [24] registered
[ 0.927588] acpiphp: Slot [25] registered
[ 0.929104] acpiphp: Slot [26] registered
[ 0.931375] acpiphp: Slot [27] registered
[ 0.932707] acpiphp: Slot [28] registered
[ 0.934093] acpiphp: Slot [29] registered
[ 0.935551] acpiphp: Slot [30] registered
[ 0.936752] acpiphp: Slot [31] registered
[ 0.938410] PCI host bridge to bus 0000:00
[ 0.940215] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
[ 0.942146] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
[ 0.943542] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[ 0.946873] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
[ 0.949257] pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38007fffffff window]
[ 0.951802] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 0.965139] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
[ 0.970151] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
[ 0.972046] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
[ 0.974407] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
[ 0.978605] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
[ 0.981286] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
[ 1.246884] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 1.257063] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[ 1.262709] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 1.270300] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 1.276574] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[ 1.280595] vgaarb: loaded
[ 1.284323] SCSI subsystem initialized
[ 1.285653] ACPI: bus type USB registered
[ 1.289125] usbcore: registered new interface driver usbfs
[ 1.295368] usbcore: registered new interface driver hub
[ 1.303167] usbcore: registered new device driver usb
[ 1.311885] PCI: Using ACPI for IRQ routing
[ 1.320765] NetLabel: Initializing
[ 1.321987] NetLabel:  domain hash size = 128
[ 1.323877] NetLabel:  protocols = UNLABELED CIPSOv4
[ 1.326292] NetLabel:  unlabeled traffic allowed by default
[ 1.329497] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[ 1.331175] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[ 1.335747] amd_nb: Cannot enumerate AMD northbridges
[ 1.338589] Switched to clocksource kvm-clock
[ 1.367751] pnp: PnP ACPI init
[ 1.369207] ACPI: bus type PNP registered
[ 1.373810] pnp: PnP ACPI: found 6 devices
[ 1.374971] ACPI: bus type PNP unregistered
[ 1.394139] NET: Registered protocol family 2
[ 1.396879] TCP established hash table entries: 32768 (order: 6, 262144 bytes)
[ 1.400911] TCP bind hash table entries: 32768 (order: 8, 1048576 bytes)
[ 1.404972] TCP: Hash tables configured (established 32768 bind 32768)
[ 1.408416] TCP: reno registered
[ 1.409673] UDP hash table entries: 2048 (order: 5, 196608 bytes)
[ 1.412392] UDP-Lite hash table entries: 2048 (order: 5, 196608 bytes)
[ 1.417232] NET: Registered protocol family 1
[ 1.419904] RPC: Registered named UNIX socket transport module.
[ 1.421880] RPC: Registered udp transport module.
[ 1.424000] RPC: Registered tcp transport module.
[ 1.425243] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 1.431567] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 1.433215] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 1.434743] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 1.438567] Unpacking initramfs...
[ 3.320336] debug: unmapping init [mem 0xffff8800bc2e2000-0xffff8800bffbffff]
[ 3.323756] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[ 3.326107] software IO TLB [mem 0xb82e2000-0xbc2e2000] (64MB) mapped at [ffff8800b82e2000-ffff8800bc2e1fff]
[ 3.329118] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer
[ 3.332153] RAPL PMU: hw unit of domain pp0-core 2^-0 Joules
[ 3.334289] RAPL PMU: hw unit of domain package 2^-0 Joules
[ 3.336555] RAPL PMU: hw unit of domain dram 2^-0 Joules
[ 3.342096] cryptomgr_test (52) used greatest stack depth: 14480 bytes left
[ 3.343768] futex hash table entries: 1024 (order: 4, 65536 bytes)
[ 3.343814] Initialise system trusted keyring
[ 3.382331] HugeTLB registered 1 GB page size, pre-allocated 0 pages
[ 3.385147] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 3.392971] zpool: loaded
[ 3.394065] zbud: loaded
[ 3.395147] VFS: Disk quotas dquot_6.6.0
[ 3.396371] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 3.400048] NFS: Registering the id_resolver key type
[ 3.402429] Key type id_resolver registered
[ 3.403932] Key type id_legacy registered
[ 3.405391] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[ 3.408324] Key type big_key registered
[ 3.414040] cryptomgr_test (58) used greatest stack depth: 14048 bytes left
[ 3.417466] cryptomgr_test (63) used greatest stack depth: 13984 bytes left
[ 3.419832] NET: Registered protocol family 38
[ 3.419842] Key type asymmetric registered
[ 3.419845] Asymmetric key parser 'x509' registered
[ 3.419972] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
[ 3.425933] io scheduler noop registered
[ 3.425938] io scheduler deadline registered (default)
[ 3.426053] io scheduler cfq registered
[ 3.426058] io scheduler mq-deadline registered
[ 3.426062] io scheduler kyber registered
[ 3.429826] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 3.429835] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 3.454456] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[ 3.457419] ACPI: Power Button [PWRF]
[ 3.459707] GHES: HEST is not enabled!
[ 3.560044] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[ 3.640461] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 11
[ 3.795453] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ 3.883580] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
[ 4.070475] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 4.106553] 00:03: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[ 4.140176] 00:04: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 4.144754] Non-volatile memory driver v1.3
[ 4.146809] Linux agpgart interface v0.103
[ 4.148685] crash memory driver: version 1.1
[ 4.150794] nbd: registered device at major 43
[ 4.169147] virtio_blk virtio1: [vda] 67344 512-byte logical blocks (34.4 MB/32.8 MiB)
[ 4.198473] virtio_blk virtio2: [vdb] 2097152 512-byte logical blocks (1.07 GB/1.00 GiB)
[ 4.229323] virtio_blk virtio3: [vdc] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[ 4.252670] virtio_blk virtio4: [vdd] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[ 4.271843] virtio_blk virtio5: [vde] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[ 4.299143] virtio_blk virtio6: [vdf] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[ 4.313252] rdac: device handler registered
[ 4.315240] hp_sw: device handler registered
[ 4.316582] emc: device handler registered
[ 4.318445] libphy: Fixed MDIO Bus: probed
[ 4.327694] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 4.331139] ehci-pci: EHCI PCI platform driver
[ 4.335440] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 4.339568] tsc: Refined TSC clocksource calibration: 2399.964 MHz
[ 4.340359] ohci-pci: OHCI PCI platform driver
[ 4.340429] uhci_hcd: USB Universal Host Controller Interface driver
[ 4.340925] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[ 4.345751] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 4.345768] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 4.347721] mousedev: PS/2 mouse device common for all mice
[ 4.349748] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[ 4.354924] rtc_cmos 00:05: RTC can wake from S4
[ 4.355349] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0
[ 4.358110] rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
[ 4.370309] hidraw: raw HID events driver (C) Jiri Kosina
[ 4.370610] usbcore: registered new interface driver usbhid
[ 4.370611] usbhid: USB HID core driver
[ 4.370688] drop_monitor: Initializing network drop monitor service
[ 4.370786] Netfilter messages via NETLINK v0.30.
[ 4.370863] TCP: cubic registered
[ 4.370869] Initializing XFRM netlink socket
[ 4.371202] NET: Registered protocol family 10
[ 4.374442] NET: Registered protocol family 17
[ 4.374528] Key type dns_resolver registered
[ 4.377007] mce: Using 10 MCE banks
[ 4.377571] Loading compiled-in X.509 certificates
[ 4.379873] Loaded X.509 cert 'Magrathea: Glacier signing key: e34d0e1b7fcf5b414cce75d36d8482945c781ed6'
[ 4.379927] registered taskstats version 1
[ 4.405212] modprobe (72) used greatest stack depth: 13456 bytes left
[ 4.411767] Key type trusted registered
[ 4.417227] Key type encrypted registered
[ 4.417322] IMA: No TPM chip found, activating TPM-bypass! (rc=-19)
[ 4.419450] BERT: Boot Error Record Table support is disabled. Enable it by using bert_enable as kernel parameter.
[ 4.420631] rtc_cmos 00:05: setting system clock to 2024-04-17 08:54:53 UTC (1713344093)
[ 4.447151] debug: unmapping init [mem 0xffffffff81da0000-0xffffffff82018fff]
[ 4.451294] Write protecting the kernel read-only data: 12288k
[ 4.454189] debug: unmapping init [mem 0xffff8800017fe000-0xffff8800017fffff]
[ 4.457559] debug: unmapping init [mem 0xffff880001b9b000-0xffff880001bfffff]
[ 4.475128] random: systemd: uninitialized urandom read (16 bytes read)
[ 4.479246] random: systemd: uninitialized urandom read (16 bytes read)
[ 4.483940] random: systemd: uninitialized urandom read (16 bytes read)
[ 4.488630] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[ 4.497212] systemd[1]: Detected virtualization kvm.
[ 4.498694] systemd[1]: Detected architecture x86-64.
[ 4.501022] systemd[1]: Running in initial RAM disk.

Welcome to CentOS Linux 7 (Core) dracut-033-572.el7 (Initramfs)!

[ 4.511119] systemd[1]: No hostname configured.
[ 4.513427] systemd[1]: Set hostname to .
[ 4.517356] random: systemd: uninitialized urandom read (16 bytes read)
[ 4.521595] systemd[1]: Initializing machine ID from random generator.
[ 4.604269] dracut-rootfs-g (86) used greatest stack depth: 13024 bytes left
[ 4.607928] random: systemd: uninitialized urandom read (16 bytes read)
[ 4.610590] random: systemd: uninitialized urandom read (16 bytes read)
[ 4.613234] random: systemd: uninitialized urandom read (16 bytes read)
[ 4.615216] random: systemd: uninitialized urandom read (16 bytes read)
[ 4.619978] random: systemd: uninitialized urandom read (16 bytes read)
[ 4.623017] random: systemd: uninitialized urandom read (16 bytes read)
[ 4.636502] systemd[1]: Reached target Swap.
[ OK ] Reached target Swap.
[ 4.641075] systemd[1]: Reached target Local File Systems.
[ OK ] Reached target Local File Systems.
[ 4.646416] systemd[1]: Created slice Root Slice.
[ OK ] Created slice Root Slice.
[ 4.652177] systemd[1]: Created slice System Slice.
[ OK ] Created slice System Slice.
[ 4.656218] systemd[1]: Listening on udev Kernel Socket.
[ OK ] Listening on udev Kernel Socket.
[ 4.663522] systemd[1]: Listening on Journal Socket.
[ OK ] Listening on Journal Socket.
[ 4.672220] systemd[1]: Starting Create list of required static device nodes for the current kernel...
Starting Create list of required st... nodes for the current kernel...
[ 4.682773] systemd[1]: Starting Load Kernel Modules...
Starting Load Kernel Modules...
[ 4.694115] systemd[1]: Starting Setup Virtual Console...
Starting Setup Virtual Console...
[ 4.702345] systemd[1]: Starting Journal Service...
Starting Journal Service...
[ 4.709163] systemd[1]: Reached target Timers.
[ OK ] Reached target Timers.
[ 4.717691] systemd[1]: Listening on udev Control Socket.
[ OK ] Listening on udev Control Socket.
[ 4.723028] systemd[1]: Reached target Sockets.
[ OK ] Reached target Sockets.
[ 4.738786] systemd[1]: Starting dracut cmdline hook...
Starting dracut cmdline hook...
[ 4.749838] systemd[1]: Reached target Slices.
[ OK ] Reached target Slices.
[ 4.756027] systemd[1]: Started Create list of required static device nodes for the current kernel.
[ OK ] Started Create list of required sta...ce nodes for the current kernel.
[ 4.778823] systemd[1]: Started Load Kernel Modules.
[ OK ] Started Load Kernel Modules.
[ 4.784646] systemd[1]: Started Setup Virtual Console.
[ OK ] Started Setup Virtual Console.
[ 4.790988] systemd[1]: Started Journal Service.
[ OK ] Started Journal Service.
Starting Apply Kernel Variables...
Starting Create Static Device Nodes in /dev...
[ OK ] Started Apply Kernel Variables.
[ OK ] Started Create Static Device Nodes in /dev.
[ OK ] Started dracut cmdline hook.
Starting dracut pre-udev hook...
[ 5.210852] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
[ OK ] Started dracut pre-udev hook.
Starting udev Kernel Device Manager...
[ OK ] Started udev Kernel Device Manager.
Starting dracut pre-trigger hook...
[ OK ] Started dracut pre-trigger hook.
Starting udev Coldplug all Devices...
[ OK ] Started udev Coldplug all Devices.
Mounting Configuration File System...
Starting Show Plymouth Boot Screen...
Starting dracut initqueue hook...
[ OK ] Mounted Configuration File System.
[ OK ] Reached target System Initialization.
[ OK ] Started Show Plymouth Boot Screen.
[ OK ] Reached target Paths.
[ OK ] Started Forward Password Requests to Plymouth Directory Watch.
[ OK ] Reached target Basic System.
[ 5.797156] random: fast init done
[ 5.862790] scsi host0: ata_piix
[ 5.880079] scsi host1: ata_piix
[ 5.883003] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc320 irq 14
[ 5.887103] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc328 irq 15
[ 6.175051] ip (344) used greatest stack depth: 12592 bytes left
[ 6.184876] ip (345) used greatest stack depth: 11392 bytes left
[ 7.719397] dracut-initqueue[272]: RTNETLINK answers: File exists
[ 8.589780] dracut-initqueue[272]: bs=4096, sz=32212254720 bytes
[ OK ] Started dracut initqueue hook.
[ OK ] Reached target Initrd Root File System.
Starting Reload Configuration from the Real Root...
Mounting /sysroot...
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
[ 9.674933] EXT4-fs (nbd0): mounted filesystem with ordered data mode. Opts: (null)
[ OK ] Mounted /sysroot.
[ OK ] Started Reload Configuration from the Real Root.
[ OK ] Reached target Initrd File Systems.
[ OK ] Reached target Initrd Default Target.
Starting dracut pre-pivot and cleanup hook...
[ OK ] Started dracut pre-pivot and cleanup hook.
Starting Cleaning Up and Shutting Down Daemons...
[ OK ] Stopped dracut pre-pivot and cleanup hook.
Starting Plymouth switch root service...
[ OK ] Stopped target Timers.
[ OK ] Stopped target Initrd Default Target.
[ OK ] Stopped target Basic System.
[ OK ] Stopped target Sockets.
[ OK ] Stopped target System Initialization.
[ OK ] Stopped target Local File Systems.
[ OK ] Stopped Apply Kernel Variables.
[ OK ] Stopped Load Kernel Modules.
[ OK ] Stopped target Slices.
[ OK ] Stopped target Swap.
[ OK ] Stopped target Remote File Systems.
[ OK ] Stopped target Remote File Systems (Pre).
[ OK ] Stopped target Paths.
[ OK ] Stopped dracut initqueue hook.
[ OK ] Stopped udev Coldplug all Devices.
[ OK ] Stopped dracut pre-trigger hook.
Stopping udev Kernel Device Manager...
[ OK ] Started Cleaning Up and Shutting Down Daemons.
[ OK ] Stopped udev Kernel Device Manager.
[ OK ] Stopped Create Static Device Nodes in /dev.
[ OK ] Stopped Create list of required sta...ce nodes for the current kernel.
[ OK ] Stopped dracut pre-udev hook.
[ OK ] Stopped dracut cmdline hook.
[ OK ] Closed udev Kernel Socket.
[ OK ] Closed udev Control Socket.
Starting Cleanup udevd DB...
[ OK ] Started Plymouth switch root service.
[ OK ] Started Cleanup udevd DB.
[ OK ] Reached target Switch Root.
Starting Switch Root...
[ 10.409139] systemd-journald[105]: Received SIGTERM from PID 1 (systemd).
[ 10.837565] SELinux: Disabled at runtime.
[ 10.960978] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 10.966936] systemd[1]: Inserted module 'ip_tables'

Welcome to CentOS Linux 7 (Core)!

[ OK ] Stopped Switch Root.
[ OK ] Stopped Journal Service.
Starting Journal Service...
Mounting POSIX Message Queue File System...
[ OK ] Created slice User and Session Slice.
Starting Create list of required st... nodes for the current kernel...
[ OK ] Started Forward Password Requests to Wall Directory Watch.
[ OK ] Listening on udev Control Socket.
[ OK ] Reached target Slices.
Mounting Debug File System...
[ OK ] Listening on udev Kernel Socket.
[ OK ] Reached target Local Encrypted Volumes.
Mounting Huge Pages File System...
Starting Remount Root and Kernel File Systems...
Starting Load Kernel Modules...
[ OK ] Stopped target Switch Root.
[ OK ] Stopped target Initrd Root File System.
[ OK ] Created slice system-selinux\x2dpol...grate\x2dlocal\x2dchanges.slice.
Starting Read and set NIS domainname from /etc/sysconfig/network...
[ OK ] Stopped target Initrd File Systems.
[ OK ] Listening on /dev/initctl Compatibility Named Pipe.
[ OK ] Set up automount Arbitrary Executab...ats File System Automount Point.
Starting Set Up Additional Binary Formats...
Starting udev Coldplug all Devices...
[ OK ] Reached target rpc_pipefs.target.
[ OK ] Created slice system-serial\x2dgetty.slice.
[ OK ] Listening on Delayed Shutdown Socket.
[ OK ] Created slice system-getty.slice.
[ OK ] Started Create list of required sta...ce nodes for the current kernel.
Starting Create Static Device Nodes in /dev...
[ OK ] Started Load Kernel Modules.
Starting Apply Kernel Variables...
Mounting Arbitrary Executable File Formats File System...
[ OK ] Mounted Huge Pages File System.
[ OK ] Mounted Debug File System.
[ OK ] Mounted POSIX Message Queue File System.
[ OK ] Started Journal Service.
[ OK ] Started Read and set NIS domainname from /etc/sysconfig/network.
[ OK ] Started Apply Kernel Variables.
[ OK ] Mounted Arbitrary Executable File Formats File System.
[ OK ] Started udev Coldplug all Devices.
[ OK ] Started Create Static Device Nodes in /dev.
Starting udev Kernel Device Manager...
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
[ OK ] Reached target Local File Systems (Pre).
Mounting /mnt...
Starting Configure read-only root support...
Starting Flush Journal to Persistent Storage...
[ OK ] Mounted /mnt.
[ OK ] Started Set Up Additional Binary Formats.
[ 12.155688] systemd-journald[571]: Received request to flush runtime journal from PID 1
[ OK ] Started Flush Journal to Persistent Storage.
[ OK ] Started udev Kernel Device Manager.
[ 12.721410] input: PC Speaker as /devices/platform/pcspkr/input/input3
[ OK ] Found device /dev/ttyS1.
[ OK ] Found device /dev/ttyS0.
[ 12.865925] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
[ 12.932798] cryptd: max_cpu_qlen set to 1000
[ OK ] Found device /dev/disk/by-label/SWAP.
[ OK ] Found device /dev/vda.
Mounting /home/green/git/lustre-release...
Activating swap /dev/disk/by-label/SWAP...
[ 13.300622] AVX version of gcm_enc/dec engaged.
[ 13.317736] AES CTR mode by8 optimization enabled
[ 13.347224] Adding 1048572k swap on /dev/vdb.  Priority:-2 extents:1 across:1048572k FS
[ OK ] Activated swap /dev/disk/by-label/SWAP.
[ OK ] Reached target Swap.
[ 13.405514] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[ 13.425469] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[ 13.432646] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni)
[ OK ] Mounted /home/green/git/lustre-release.
[ 13.742677] EDAC MC: Ver: 3.0.0
[ 13.778428] EDAC sbridge:  Ver: 1.1.2
[*     ] A start job is running for Configur...nly root support (6s / no limit)
[ 18.626189] mount.nfs (779) used greatest stack depth: 10712 bytes left
[ OK ] Started Configure read-only root support.
[ OK ] Reached target Local File Systems.
Starting Tell Plymouth To Write Out Runtime Data...
Starting Mark the need to relabel after reboot...
Starting Preprocess NFS configuration...
Starting Create Volatile Files and Directories...
Starting Rebuild Journal Catalog...
Starting Load/Save Random Seed...
[ OK ] Started Tell Plymouth To Write Out Runtime Data.
[ OK ] Started Mark the need to relabel after reboot.
[ OK ] Started Preprocess NFS configuration.
[FAILED] Failed to start Create Volatile Files and Directories.
See 'systemctl status systemd-tmpfiles-setup.service' for details.
[ OK ] Started Load/Save Random Seed.
[FAILED] Failed to start Rebuild Journal Catalog.
See 'systemctl status systemd-journal-catalog-update.service' for details.
Starting Update is Completed...
Starting Update UTMP about System Boot/Shutdown...
[ OK ] Started Update is Completed.
[ OK ] Started Update UTMP about System Boot/Shutdown.
[ OK ] Reached target System Initialization.
[ OK ] Listening on RPCbind Server Activation Socket.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Reached target Sockets.
[ OK ] Started Flexible branding.
[ OK ] Reached target Paths.
[ OK ] Reached target Basic System.
Starting Dump dmesg to /var/log/dmesg...
[ OK ] Started D-Bus System Message Bus.
Starting GSSAPI Proxy Daemon...
Starting Network Manager...
Starting Login Service...
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Timers. [ OK ] Started Dump dmesg to /var/log/dmesg. [ OK ] Started Login Service. [ OK ] Started GSSAPI Proxy Daemon. [ OK ] Reached target NFS client services. [ OK ] Reached target Remote File Systems (Pre). [ OK ] Reached target Remote File Systems. Starting Permit User Sessions... [ OK ] Started Permit User Sessions. [ OK ] Started Network Manager. Starting Network Manager Wait Online... [ OK ] Reached target Network. Starting OpenSSH server daemon... Starting /etc/rc.d/rc.local Compatibility... Starting Hostname Service... [ OK ] Started /etc/rc.d/rc.local Compatibility. [ OK ] Started Hostname Service. Starting Terminate Plymouth Boot Screen... Starting Wait for Plymouth Boot Screen to Quit... Starting Network Manager Script Dispatcher Service... [ OK ] Started OpenSSH server daemon. CentOS Linux 7 (Core) Kernel 3.10.0-7.9-debug on an x86_64 oleg113-server login: [ 33.378713] libcfs: loading out-of-tree module taints kernel. [ 33.381162] libcfs: module verification failed: signature and/or required key missing - tainting kernel [ 33.425138] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_hostid [ 38.624731] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing load_modules_local [ 38.847107] alg: No test for adler32 (adler32-zlib) [ 39.598638] libcfs: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 1 [ 39.763141] Lustre: Lustre: Build Version: 2.15.62_23_ga8a649a [ 39.973790] LNet: Added LNI 192.168.201.113@tcp [8/256/0/180] [ 39.976082] LNet: Accept secure, port 988 [ 41.541797] Key type lgssc registered [ 41.927792] Lustre: Echo OBD driver; http://www.lustre.org/ [ 42.452498] icp: module license 'CDDL' taints kernel. 
[ 42.454170] Disabling lock debugging due to kernel taint [ 45.092450] ZFS: Loaded module v0.8.6-1, ZFS pool version 5000, ZFS filesystem version 5 [ 47.683413] vdc: vdc1 vdc9 [ 50.741806] vde: vde1 vde9 [ 54.324000] vdf: vdf1 vdf9 [ 60.061821] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing load_modules_local [ 62.741605] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 63.882231] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 63.982368] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space. [ 64.022686] Lustre: lustre-MDT0000: new disk, initializing [ 64.169763] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 64.195329] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 64.287025] mount.lustre (6575) used greatest stack depth: 9792 bytes left [ 65.260796] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 68.954317] Lustre: lustre-OST0000: new disk, initializing [ 68.956321] Lustre: srv-lustre-OST0000: No data found on store. Initialize space. 
[ 68.958044] Lustre: Skipped 1 previous similar message [ 68.970467] random: crng init done [ 68.977546] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 69.758874] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:0:ost [ 69.763457] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:0:ost] [ 69.834001] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x240000400 [ 70.910942] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 74.975630] Lustre: lustre-OST0001: new disk, initializing [ 74.979165] Lustre: srv-lustre-OST0001: No data found on store. Initialize space. [ 75.042141] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180 [ 76.737352] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:1:ost [ 76.741249] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:1:ost] [ 76.827795] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x280000400 [ 77.524636] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 83.313836] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 87.631667] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 93.381709] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing check_logdir /tmp/testlogs/ [ 94.567781] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing yml_node [ 96.382255] Lustre: DEBUG MARKER: Client: 2.15.62.23 [ 97.497191] Lustre: DEBUG MARKER: MDS: 2.15.62.23 [ 99.617372] Lustre: DEBUG MARKER: OSS: 2.15.62.23 [ 101.289651] Lustre: DEBUG MARKER: -----============= acceptance-small: sanity-hsm ============----- Wed Apr 17 04:56:30 EDT 2024 [ 
105.453659] Lustre: DEBUG MARKER: excepting tests: [ 106.593836] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing check_config_client /mnt/lustre [ 111.249085] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 112.196507] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 112.905193] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 136.363114] Lustre: DEBUG MARKER: == sanity-hsm test 1A: lfs hsm flags root/non-root access ========================================================== 04:57:05 (1713344225) [ 141.227698] Lustre: DEBUG MARKER: == sanity-hsm test 1a: mmap [ 147.511118] Lustre: DEBUG MARKER: == sanity-hsm test 1b: Archive, Release and Restore composite file ========================================================== 04:57:16 (1713344236) [ 155.404969] Lustre: DEBUG MARKER: == sanity-hsm test 1c: Check setting archive-id in lfs hsm_set ========================================================== 04:57:24 (1713344244) [ 160.142661] Lustre: DEBUG MARKER: == sanity-hsm test 1d: Archive, Release and Restore DoM file ========================================================== 04:57:28 (1713344248) [ 168.687198] Lustre: DEBUG MARKER: == sanity-hsm test 1e: Archive, Release and Restore SEL file ========================================================== 04:57:37 (1713344257) [ 170.722143] Lustre: 6664:0:(lod_lov.c:1433:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0x11:0x0] with magic=0xbd60bd0 [ 171.835007] Lustre: 6663:0:(lod_lov.c:1433:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=50 set on component[2]=1 of non-SEL file [0x200000402:0x6:0x0] with magic=0xbd60bd0 [ 171.843241] Lustre: 6663:0:(lod_lov.c:1433:lod_parse_striping()) Skipped 5 previous similar messages [ 177.971786] Lustre: DEBUG MARKER: == sanity-hsm test 1f: DoM file release after restore ==== 04:57:46 
(1713344266) [ 187.323292] Lustre: DEBUG MARKER: == sanity-hsm test 2: Check file dirtiness when doing setattr ========================================================== 04:57:56 (1713344276) [ 192.650852] Lustre: DEBUG MARKER: == sanity-hsm test 3: Check file dirtiness when opening for write ========================================================== 04:58:01 (1713344281) [ 198.013207] Lustre: DEBUG MARKER: == sanity-hsm test 4: Useless cancel must not be registered ========================================================== 04:58:06 (1713344286) [ 202.892749] Lustre: DEBUG MARKER: == sanity-hsm test 8: Test default archive number ======== 04:58:11 (1713344291) [ 207.429433] Lustre: DEBUG MARKER: == sanity-hsm test 9A: Use of explicit archive number, with dedicated copytool ========================================================== 04:58:16 (1713344296) [ 210.339848] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing pgrep --pidfile=/var/run/lhsmtool_posix.pid --list-full hsmtool [ 211.091560] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing get_client_uuid /mnt/lustre2 [ 217.156533] Lustre: DEBUG MARKER: == sanity-hsm test 9a: Multiple remote agents ============ 04:58:26 (1713344306) [ 217.666825] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_9a Need 3 or more clients, have 1 [ 220.480642] Lustre: DEBUG MARKER: == sanity-hsm test 10a: Archive a file =================== 04:58:29 (1713344309) [ 226.188262] Lustre: DEBUG MARKER: == sanity-hsm test 10b: Archive of non dirty file must work without doing request ========================================================== 04:58:35 (1713344315) [ 230.861315] Lustre: DEBUG MARKER: == sanity-hsm test 10c: Check forbidden archive ========== 04:58:39 (1713344319) [ 234.539047] Lustre: DEBUG MARKER: == sanity-hsm test 10d: Archive a file on the default archive id ========================================================== 04:58:43 (1713344323) [ 240.468854] Lustre: DEBUG MARKER: == sanity-hsm test 11a: Import a file 
==================== 04:58:49 (1713344329) [ 244.219852] Lustre: DEBUG MARKER: == sanity-hsm test 11b: Import a deleted file using its FID ========================================================== 04:58:53 (1713344333) [ 249.331679] Lustre: DEBUG MARKER: == sanity-hsm test 11c: Import a file to a directory with a pool ========================================================== 04:58:58 (1713344338) [ 265.124436] Lustre: DEBUG MARKER: == sanity-hsm test 12a: Restore an imported file explicitly ========================================================== 04:59:13 (1713344353) [ 274.569146] Lustre: DEBUG MARKER: == sanity-hsm test 12b: Restore an imported file implicitly ========================================================== 04:59:22 (1713344362) [ 284.469943] Lustre: DEBUG MARKER: == sanity-hsm test 12c: Restore a file with stripe of 2 == 04:59:32 (1713344372) [ 291.043214] Lustre: DEBUG MARKER: == sanity-hsm test 12d: Restore of a non archived, non released file must work ========================================================== 04:59:39 (1713344379) [ 299.116599] Lustre: DEBUG MARKER: == sanity-hsm test 12e: Check forbidden restore ========== 04:59:47 (1713344387) [ 304.470788] Lustre: DEBUG MARKER: == sanity-hsm test 12f: Restore a released file explicitly ========================================================== 04:59:53 (1713344393) [ 312.959630] Lustre: DEBUG MARKER: == sanity-hsm test 12g: Restore a released file implicitly ========================================================== 05:00:01 (1713344401) [ 322.069689] Lustre: DEBUG MARKER: == sanity-hsm test 12h: Restore a released file implicitly from a second node ========================================================== 05:00:10 (1713344410) [ 322.625393] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_12h Need 2 or more clients, have 1 [ 326.347803] Lustre: DEBUG MARKER: == sanity-hsm test 12m: Archive/release/implicit restore ========================================================== 05:00:15 (1713344415) 
[ 334.673019] Lustre: DEBUG MARKER: == sanity-hsm test 12n: Import/implicit restore/release == 05:00:23 (1713344423) [ 340.049121] Lustre: DEBUG MARKER: == sanity-hsm test 12o: Layout-swap failure during Restore leaves file released ========================================================== 05:00:28 (1713344428) [ 341.615568] Lustre: *** cfs_fail_loc=152, val=0*** [ 359.572193] Lustre: DEBUG MARKER: == sanity-hsm test 12p: implicit restore of a file on copytool mount point ========================================================== 05:00:48 (1713344448) [ 365.907528] Lustre: DEBUG MARKER: == sanity-hsm test 12q: file attributes are refreshed after restore ========================================================== 05:00:54 (1713344454) [ 378.799908] Lustre: DEBUG MARKER: == sanity-hsm test 12r: lseek restores released file ===== 05:01:07 (1713344467) [ 385.154459] Lustre: DEBUG MARKER: == sanity-hsm test 12s: race between restore requests ==== 05:01:14 (1713344474) [ 387.393371] LustreError: 6663:0:(mdt_hsm_cdt_client.c:369:mdt_hsm_register_hal()) cfs_race id 18b sleeping [ 392.396692] LustreError: 6663:0:(mdt_hsm_cdt_client.c:369:mdt_hsm_register_hal()) cfs_fail_race id 18b awake: rc=0 [ 398.998656] Lustre: DEBUG MARKER: == sanity-hsm test 12t: Multiple parallel reads for a HSM imported file ========================================================== 05:01:27 (1713344487) [ 406.801259] Lustre: DEBUG MARKER: == sanity-hsm test 12u: Multiple reads on multiple HSM imported files in parallel ========================================================== 05:01:35 (1713344495) [ 418.729404] Lustre: DEBUG MARKER: == sanity-hsm test 13: Recursively import and restore a directory ========================================================== 05:01:47 (1713344507) [ 435.759629] Lustre: DEBUG MARKER: == sanity-hsm test 14: Rebind archived file to a new fid ========================================================== 05:02:04 (1713344524) [ 442.898863] Lustre: DEBUG MARKER: == sanity-hsm 
test 15: Rebind a list of files ============ 05:02:11 (1713344531) [ 454.449681] Lustre: DEBUG MARKER: == sanity-hsm test 16: Test CT bandwidth control option === 05:02:23 (1713344543) [ 479.386183] Lustre: DEBUG MARKER: == sanity-hsm test 20: Release is not permitted ========== 05:02:48 (1713344568) [ 484.070899] Lustre: DEBUG MARKER: == sanity-hsm test 21: Simple release tests ============== 05:02:53 (1713344573) [ 494.624788] Lustre: DEBUG MARKER: == sanity-hsm test 22: Could not swap a released file ===== 05:03:03 (1713344583) [ 499.980909] Lustre: DEBUG MARKER: == sanity-hsm test 23: Release does not change a/mtime (utime) ========================================================== 05:03:08 (1713344588) [ 504.784889] Lustre: DEBUG MARKER: == sanity-hsm test 24a: Archive, release, and restore does not change a/mtime (i/o) ========================================================== 05:03:13 (1713344593) [ 514.471693] Lustre: DEBUG MARKER: == sanity-hsm test 24b: root can archive, release, and restore user files ========================================================== 05:03:23 (1713344603) [ 523.391733] Lustre: DEBUG MARKER: == sanity-hsm test 24c: check that user,group,other request masks work ========================================================== 05:03:32 (1713344612) [ 531.372400] Lustre: DEBUG MARKER: == sanity-hsm test 24d: check that read-only mounts are respected ========================================================== 05:03:40 (1713344620) [ 540.515930] Lustre: DEBUG MARKER: == sanity-hsm test 24e: tar succeeds on HSM released files ========================================================== 05:03:49 (1713344629) [ 546.401204] Lustre: DEBUG MARKER: == sanity-hsm test 24f: root can archive, release, and restore tar files ========================================================== 05:03:55 (1713344635) [ 551.115656] Lustre: DEBUG MARKER: == sanity-hsm test 24g: write by non-owner still sets dirty ========================================================== 
05:04:00 (1713344640) [ 556.061793] Lustre: DEBUG MARKER: == sanity-hsm test 25a: Restore lost file (HS_LOST flag) from import ========================================================== 05:04:05 (1713344645) [ 559.827232] Lustre: DEBUG MARKER: == sanity-hsm test 25b: Restore lost file (HS_LOST flag) after release ========================================================== 05:04:08 (1713344648) [ 565.168678] Lustre: DEBUG MARKER: == sanity-hsm test 26A: Remove the archive of a valid file ========================================================== 05:04:14 (1713344654) [ 572.314116] Lustre: DEBUG MARKER: == sanity-hsm test 26a: Remove Archive On Last Unlink (RAoLU) policy ========================================================== 05:04:21 (1713344661) [ 592.057757] Lustre: DEBUG MARKER: == sanity-hsm test 26b: RAoLU policy when CDT off ======== 05:04:40 (1713344680) [ 601.629117] Lustre: DEBUG MARKER: == sanity-hsm test 26c: RAoLU effective when file closed ========================================================== 05:04:50 (1713344690) [ 621.437516] Lustre: DEBUG MARKER: == sanity-hsm test 26d: RAoLU when Client eviction ======= 05:05:10 (1713344710) [ 624.633776] Lustre: 13687:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting af6edf50-d9a5-41c4-84b2-c213521a6f01 at administrative request [ 632.859208] Lustre: DEBUG MARKER: == sanity-hsm test 27a: Remove the archive of an imported file (Operation not permitted) ========================================================== 05:05:21 (1713344721) [ 636.766880] Lustre: DEBUG MARKER: == sanity-hsm test 27b: Remove the archive of a released file (Operation not permitted) ========================================================== 05:05:25 (1713344725) [ 641.894087] Lustre: DEBUG MARKER: == sanity-hsm test 28: Concurrent archive/file remove ==== 05:05:30 (1713344730) [ 649.355245] Lustre: DEBUG MARKER: == sanity-hsm test 29a: Tests --mntpath and --archive options 
========================================================== 05:05:38 (1713344738) [ 652.835628] Lustre: DEBUG MARKER: == sanity-hsm test 29b: Archive/delete/remove by FID from the archive. ========================================================== 05:05:41 (1713344741) [ 658.595163] Lustre: DEBUG MARKER: == sanity-hsm test 29c: Archive/delete/remove by FID, using a file list. ========================================================== 05:05:47 (1713344747) [ 667.485323] Lustre: DEBUG MARKER: == sanity-hsm test 29d: hsm_remove by FID with archive_id 0 for unlinked file cause ========================================================== 05:05:56 (1713344756) [ 667.998004] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_29d Need 3 or more clients, have 1 [ 671.086646] Lustre: DEBUG MARKER: == sanity-hsm test 30a: Restore at exec (import case) ==== 05:05:59 (1713344759) [ 671.583547] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_30a Need 2 or more clients, have 1 [ 674.283610] Lustre: DEBUG MARKER: == sanity-hsm test 30b: Restore at exec (release case) === 05:06:03 (1713344763) [ 674.718571] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_30b Need 2 or more clients, have 1 [ 677.566283] Lustre: DEBUG MARKER: == sanity-hsm test 30c: Update during exec of released file must fail ========================================================== 05:06:06 (1713344766) [ 678.076695] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_30c Need 2 or more clients, have 1 [ 680.549653] Lustre: DEBUG MARKER: == sanity-hsm test 31a: Import a large file and check size during restore ========================================================== 05:06:09 (1713344769) [ 695.670979] Lustre: DEBUG MARKER: == sanity-hsm test 31b: Restore a large unaligned file and check size during restore ========================================================== 05:06:24 (1713344784) [ 712.447762] Lustre: DEBUG MARKER: == sanity-hsm test 31c: Restore a large aligned file and check size during restore 
========================================================== 05:06:41 (1713344801) [ 728.709685] Lustre: DEBUG MARKER: == sanity-hsm test 33: Kill a restore waiting process ==== 05:06:57 (1713344817) [ 734.528107] Lustre: DEBUG MARKER: == sanity-hsm test 34: Remove file during restore ======== 05:07:03 (1713344823) [ 740.938832] Lustre: DEBUG MARKER: == sanity-hsm test 35: Overwrite file during restore ===== 05:07:09 (1713344829) [ 749.136181] Lustre: DEBUG MARKER: == sanity-hsm test 36: Move file during restore ========== 05:07:18 (1713344838) [ 756.952367] Lustre: DEBUG MARKER: == sanity-hsm test 37: re-archive a dirty file =========== 05:07:25 (1713344845) [ 781.029787] Lustre: DEBUG MARKER: == sanity-hsm test 40: Parallel archive requests ========= 05:07:49 (1713344869) [ 818.238220] Lustre: DEBUG MARKER: == sanity-hsm test 50: Archive with large number of pending HSM actions ========================================================== 05:08:27 (1713344907) [ 1067.232466] Lustre: DEBUG MARKER: == sanity-hsm test 52: Opened for write file on an evicted client should be set dirty ========================================================== 05:12:36 (1713345156) [ 1068.423935] Lustre: 11993:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting af6edf50-d9a5-41c4-84b2-c213521a6f01 at administrative request [ 1074.108398] Lustre: DEBUG MARKER: == sanity-hsm test 53: Opened for read file on an evicted client should not be set dirty ========================================================== 05:12:43 (1713345163) [ 1077.157258] Lustre: 12922:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting af6edf50-d9a5-41c4-84b2-c213521a6f01 at administrative request [ 1082.960863] Lustre: DEBUG MARKER: == sanity-hsm test 54: Write during an archive cancels it ========================================================== 05:12:51 (1713345171) [ 1125.845756] Lustre: DEBUG MARKER: == sanity-hsm test 55: Truncate during an archive cancels it 
========================================================== 05:13:34 (1713345214) [ 1132.257215] Lustre: DEBUG MARKER: == sanity-hsm test 56: Setattr during an archive is ok === 05:13:41 (1713345221) [ 1173.930220] Lustre: DEBUG MARKER: == sanity-hsm test 57: Archive a file with dirty cache on another node ========================================================== 05:14:22 (1713345262) [ 1174.294008] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_57 Need 2 or more clients, have 1 [ 1176.258263] Lustre: DEBUG MARKER: == sanity-hsm test 58: Truncate a released file will trigger restore ========================================================== 05:14:25 (1713345265) [ 1186.447962] Lustre: DEBUG MARKER: == sanity-hsm test 59: Release stripeless file with non-zero size ========================================================== 05:14:35 (1713345275) [ 1191.431417] Lustre: DEBUG MARKER: == sanity-hsm test 60: Changing progress update interval from default ========================================================== 05:14:40 (1713345280) [ 1205.531346] Lustre: DEBUG MARKER: == sanity-hsm test 61: Waiting archive of a removed file should fail ========================================================== 05:14:54 (1713345294) [ 1211.449332] Lustre: DEBUG MARKER: == sanity-hsm test 70: Copytool logs JSON register/unregister events to FIFO ========================================================== 05:15:00 (1713345300) [ 1216.613669] Lustre: DEBUG MARKER: == sanity-hsm test 71: Copytool logs JSON archive events to FIFO ========================================================== 05:15:05 (1713345305) [ 1223.150420] Lustre: DEBUG MARKER: == sanity-hsm test 72: Copytool logs JSON restore events to FIFO ========================================================== 05:15:12 (1713345312) [ 1230.161286] Lustre: DEBUG MARKER: == sanity-hsm test 90: Archive/restore a file list ======= 05:15:19 (1713345319) [ 1249.957044] Lustre: DEBUG MARKER: == sanity-hsm test 100: Set coordinator /proc tunables 
=== 05:15:38 (1713345338) [ 1259.768890] Lustre: DEBUG MARKER: == sanity-hsm test 102: Verify coordinator control ======= 05:15:48 (1713345348) [ 1265.041023] Lustre: DEBUG MARKER: == sanity-hsm test 103: Purge all requests =============== 05:15:54 (1713345354) [ 1270.911575] Lustre: DEBUG MARKER: == sanity-hsm test 103a: Purge pending restore requests == 05:15:59 (1713345359) [ 1278.797213] Lustre: DEBUG MARKER: == sanity-hsm test 104: Copy tool data field ============= 05:16:07 (1713345367) [ 1283.020930] Lustre: DEBUG MARKER: == sanity-hsm test 105: Restart of coordinator =========== 05:16:12 (1713345372) [ 1290.549296] Lustre: DEBUG MARKER: == sanity-hsm test 106: Copytool register/unregister ===== 05:16:19 (1713345379) [ 1291.153395] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing pgrep --pidfile=/var/run/lhsmtool_posix.pid --list-full hsmtool [ 1291.725525] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing get_client_uuid /mnt/lustre2 [ 1292.911261] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing pgrep --pidfile=/var/run/lhsmtool_posix.pid --list-full hsmtool [ 1293.487076] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing get_client_uuid /mnt/lustre2 [ 1297.028000] Lustre: DEBUG MARKER: == sanity-hsm test 107: Copytool re-register after MDS restart ========================================================== 05:16:26 (1713345386) [ 1299.048921] Lustre: Failing over lustre-MDT0000 [ 1299.200401] Lustre: server umount lustre-MDT0000 complete [ 1311.080803] LustreError: 166-1: MGC192.168.201.113@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1311.177147] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1311.183735] Lustre: Skipped 1 previous similar message [ 1311.208766] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1311.226562] Lustre: 
lustre-MDT0000: in recovery but waiting for the first client to connect [ 1311.228786] mount.lustre (9209) used greatest stack depth: 9656 bytes left [ 1311.951069] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1313.667025] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1315.902933] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. [ 1315.918045] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:409 to 0x240000400:449) [ 1315.921629] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:406 to 0x280000400:449) [ 1316.200664] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo) [ 1316.425995] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1316.769598] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1319.209829] Lustre: 3033:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713345392/real 1713345392] req@ffff88008ce9c700 x1796571533901504/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713345408 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 1321.559439] Lustre: DEBUG MARKER: == sanity-hsm test 109: Policy display/change ============ 05:16:50 (1713345410) [ 1324.207608] Lustre: 3032:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713345397/real 1713345397] req@ffff88008c04b800 x1796571533901696/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713345413 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 1324.217629] Lustre: 
3032:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 2 previous similar messages [ 1324.588700] Lustre: 11735:0:(mdt_coordinator.c:2157:mdt_hsm_policy_seq_write()) lustre-MDT0000: 'wrong' is unknown, supported policies are: [ 1328.362179] Lustre: DEBUG MARKER: == sanity-hsm test 110a: Non blocking restore policy (import case) ========================================================== 05:16:57 (1713345417) [ 1333.747466] Lustre: DEBUG MARKER: == sanity-hsm test 110b: Non blocking restore policy (release case) ========================================================== 05:17:02 (1713345422) [ 1340.395583] Lustre: DEBUG MARKER: == sanity-hsm test 111a: No retry policy (import case), restore will error ========================================================== 05:17:09 (1713345429) [ 1345.561870] Lustre: DEBUG MARKER: == sanity-hsm test 111b: No retry policy (release case), restore will error ========================================================== 05:17:14 (1713345434) [ 1352.092050] Lustre: DEBUG MARKER: == sanity-hsm test 112: State of recorded request ======== 05:17:21 (1713345441) [ 1357.591851] Lustre: DEBUG MARKER: == sanity-hsm test 113: wrong stat after restore ========= 05:17:26 (1713345446) [ 1363.656955] Lustre: DEBUG MARKER: == sanity-hsm test 114: Incompatible request does not set other requests as STARTED ========================================================== 05:17:32 (1713345452) [ 1376.863264] Lustre: DEBUG MARKER: == sanity-hsm test 200: Register/Cancel archive ========== 05:17:45 (1713345465) [ 1381.965359] Lustre: DEBUG MARKER: == sanity-hsm test 201: Register/Cancel restore ========== 05:17:50 (1713345470) [ 1387.562308] Lustre: DEBUG MARKER: == sanity-hsm test 202: Register/Cancel remove =========== 05:17:56 (1713345476) [ 1393.971883] Lustre: DEBUG MARKER: == sanity-hsm test 220A: Changelog for archive =========== 05:18:03 (1713345483) [ 1394.716889] Lustre: lustre-MDD0000: changelog on [ 1397.742827] Lustre: lustre-MDD0000: changelog 
off [ 1399.842789] Lustre: DEBUG MARKER: == sanity-hsm test 220a: Changelog for failed archive ==== 05:18:08 (1713345488) [ 1400.563720] Lustre: lustre-MDD0000: changelog on [ 1403.835077] Lustre: lustre-MDD0000: changelog off [ 1405.925995] Lustre: DEBUG MARKER: == sanity-hsm test 221: Changelog for archive canceled === 05:18:14 (1713345494) [ 1406.670586] Lustre: lustre-MDD0000: changelog on [ 1411.508693] Lustre: lustre-MDD0000: changelog off [ 1413.706749] Lustre: DEBUG MARKER: == sanity-hsm test 222a: Changelog for explicit restore == 05:18:22 (1713345502) [ 1414.629312] Lustre: lustre-MDD0000: changelog on [ 1417.860385] Lustre: lustre-MDD0000: changelog off [ 1420.228410] Lustre: DEBUG MARKER: == sanity-hsm test 222b: Changelog for implicit restore == 05:18:29 (1713345509) [ 1421.036537] Lustre: lustre-MDD0000: changelog on [ 1424.693430] Lustre: lustre-MDD0000: changelog off [ 1426.989027] Lustre: DEBUG MARKER: == sanity-hsm test 222c: Changelog for failed explicit restore ========================================================== 05:18:36 (1713345516) [ 1433.910336] Lustre: DEBUG MARKER: == sanity-hsm test 222d: Changelog for failed implicit restore ========================================================== 05:18:42 (1713345522) [ 1434.721907] Lustre: lustre-MDD0000: changelog on [ 1434.723220] Lustre: Skipped 1 previous similar message [ 1438.539655] Lustre: lustre-MDD0000: changelog off [ 1438.541046] Lustre: Skipped 1 previous similar message [ 1440.910898] Lustre: DEBUG MARKER: == sanity-hsm test 223a: Changelog for restore canceled (import case) ========================================================== 05:18:49 (1713345529) [ 1474.753499] Lustre: lustre-MDD0000: changelog off [ 1477.152597] Lustre: DEBUG MARKER: == sanity-hsm test 223b: Changelog for restore canceled (release case) ========================================================== 05:19:26 (1713345566) [ 1478.021482] Lustre: lustre-MDD0000: changelog on [ 1478.023699] Lustre: Skipped 1 
previous similar message [ 1485.783584] Lustre: DEBUG MARKER: == sanity-hsm test 224A: Changelog for remove ============ 05:19:34 (1713345574) [ 1493.691350] Lustre: DEBUG MARKER: == sanity-hsm test 224a: Changelog for failed remove ===== 05:19:42 (1713345582) [ 1502.125299] Lustre: DEBUG MARKER: == sanity-hsm test 225: Changelog for remove canceled ==== 05:19:51 (1713345591) [ 1505.271391] Lustre: DEBUG MARKER: == sanity-hsm test 226: changelog for last rm/mv with existing archive ========================================================== 05:19:54 (1713345594) [ 1511.050339] Lustre: lustre-MDD0000: changelog off [ 1511.051837] Lustre: Skipped 3 previous similar messages [ 1513.416295] Lustre: DEBUG MARKER: == sanity-hsm test 227: changelog when explicit setting of HSM flags ========================================================== 05:20:02 (1713345602) [ 1514.192394] Lustre: lustre-MDD0000: changelog on [ 1514.194072] Lustre: Skipped 3 previous similar messages [ 1519.351538] Lustre: DEBUG MARKER: == sanity-hsm test 228: On released file, return extent to FIEMAP. 
For [cp,tar] --sparse ========================================================== 05:20:08 (1713345608)
[ 1526.118053] Lustre: DEBUG MARKER: == sanity-hsm test 250: Coordinator max request ========== 05:20:15 (1713345615)
[ 1541.599007] Lustre: DEBUG MARKER: == sanity-hsm test 251: Coordinator request timeout ====== 05:20:30 (1713345630)
[ 1548.395780] LustreError: 9217:0:(mdt_coordinator.c:1714:mdt_hsm_update_request_state()) lustre-MDT0000: Cannot find running request for cookie 0x661f93a7 on fid=[0x200000402:0x354:0x0]
[ 1550.515537] Lustre: DEBUG MARKER: == sanity-hsm test 252: Timeout'ed running archive of a removed file should be canceled ========================================================== 05:20:39 (1713345639)
[ 1554.759833] LustreError: 9805:0:(mdt_coordinator.c:1714:mdt_hsm_update_request_state()) lustre-MDT0000: Cannot find running request for cookie 0x661f93a8 on fid=[0x200000402:0x356:0x0]
[ 1554.766339] LustreError: 9805:0:(mdt_coordinator.c:1714:mdt_hsm_update_request_state()) Skipped 1 previous similar message
[ 1558.527726] Lustre: DEBUG MARKER: == sanity-hsm test 253: Check for wrong file size after release ========================================================== 05:20:47 (1713345647)
[ 1563.566528] Lustre: DEBUG MARKER: == sanity-hsm test 254a: Request counters are initialized to zero ========================================================== 05:20:52 (1713345652)
[ 1567.446707] Lustre: DEBUG MARKER: == sanity-hsm test 254b: Request counters are correctly incremented and decremented ========================================================== 05:20:56 (1713345656)
[ 1580.826568] Lustre: DEBUG MARKER: == sanity-hsm test 255: Copytool registration wakes the coordinator up ========================================================== 05:21:09 (1713345669)
[ 1592.865263] Lustre: DEBUG MARKER: == sanity-hsm test 260a: Restore request have priority over other requests ========================================================== 05:21:21 (1713345681)
[ 1605.812131] Lustre: DEBUG MARKER: == sanity-hsm test 260b: Restore request have priority over other requests ========================================================== 05:21:34 (1713345694)
[ 1619.006825] Lustre: DEBUG MARKER: == sanity-hsm test 260c: Requests are not reordered on the 'hot' path of the coordinator ========================================================== 05:21:48 (1713345708)
[ 1633.441382] Lustre: DEBUG MARKER: == sanity-hsm test 261: Report 0 bytes size after HSM release ========================================================== 05:22:02 (1713345722)
[ 1640.234247] Lustre: DEBUG MARKER: == sanity-hsm test 262: The client should return 1 block for HSM released files ========================================================== 05:22:09 (1713345729)
[ 1645.219622] Lustre: DEBUG MARKER: == sanity-hsm test 300: On disk coordinator state kept between MDT umount/mount ========================================================== 05:22:14 (1713345734)
[ 1645.942650] Lustre: Disabling parameter lustre.mdt.lustre-MDT0000.hsm_control in log params
[ 1645.945255] Lustre: Skipped 1 previous similar message
[ 1646.723963] Lustre: Failing over lustre-MDT0000
[ 1646.727679] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 1648.176275] Lustre: lustre-MDT0000: Not available for connect from 192.168.201.13@tcp (stopping)
[ 1651.735896] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 1651.738308] Lustre: Skipped 2 previous similar messages
[ 1651.870804] Lustre: server umount lustre-MDT0000 complete
[ 1663.588926] LustreError: 166-1: MGC192.168.201.113@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 1663.708190] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 1663.726458] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 1664.455006] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1668.183815] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 1668.229353] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted.
[ 1668.247681] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:544 to 0x280000400:577)
[ 1668.247688] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:543 to 0x240000400:577)
[ 1668.815625] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1669.227503] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1670.934810] Lustre: Setting parameter lustre.mdt.lustre-MDT0000.hsm_control in log params
[ 1676.696958] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to (at 0@lo)
[ 1676.699515] Lustre: Skipped 1 previous similar message
[ 1691.717981] Lustre: Failing over lustre-MDT0000
[ 1691.719734] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 1691.719736] Lustre: Skipped 1 previous similar message
[ 1691.856415] Lustre: server umount lustre-MDT0000 complete
[ 1703.657276] LustreError: 166-1: MGC192.168.201.113@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 1703.779535] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 1703.802128] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 1704.608173] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1706.387624] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 1708.274394] Lustre: lustre-MDT0000: Recovery over after 0:02, of 2 clients 2 recovered and 0 were evicted.
[ 1708.290684] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:544 to 0x280000400:609)
[ 1708.290776] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:543 to 0x240000400:609)
[ 1708.895336] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1709.269169] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1711.776976] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo)
[ 1711.780375] Lustre: Skipped 1 previous similar message
[ 1713.935788] Lustre: DEBUG MARKER: == sanity-hsm test 301: HSM tunnable are persistent ====== 05:23:22 (1713345802)
[ 1714.453456] Lustre: Setting parameter lustre.mdt.lustre-MDT0000.hsm.default_archive_id in log params
[ 1714.997312] Lustre: Failing over lustre-MDT0000
[ 1715.125380] Lustre: server umount lustre-MDT0000 complete
[ 1726.778169] LustreError: 166-1: MGC192.168.201.113@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 1726.783621] LustreError: 6660:0:(mgc_request.c:627:do_requeue()) failed processing log: -5
[ 1726.807853] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 1726.808068] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 1726.816332] Lustre: Skipped 2 previous similar messages
[ 1726.895265] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 1726.915863] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 1727.632731] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1729.390758] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 1731.334808] Lustre: lustre-MDT0000: Recovery over after 0:02, of 2 clients 2 recovered and 0 were evicted.
[ 1731.351314] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:544 to 0x280000400:641)
[ 1731.355433] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:543 to 0x240000400:641)
[ 1731.896987] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.201.113@tcp (at 0@lo)
[ 1731.900676] Lustre: Skipped 1 previous similar message
[ 1731.913000] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1732.276370] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1732.879614] Lustre: 3033:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713345805/real 1713345805] req@ffff880134893480 x1796571534027456/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713345821 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 1733.959252] Lustre: Disabling parameter lustre.mdt.lustre-MDT0000.hsm.default_archive_id in log params
[ 1737.011648] Lustre: DEBUG MARKER: == sanity-hsm test 302: HSM tunnable are persistent when CDT is off ========================================================== 05:23:46 (1713345826)
[ 1737.903542] Lustre: 3032:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713345810/real 1713345810] req@ffff880134893b80 x1796571534027648/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713345826 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 1737.911799] Lustre: 3032:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
[ 1738.427191] Lustre: Failing over lustre-MDT0000
[ 1738.542350] Lustre: server umount lustre-MDT0000 complete
[ 1750.254200] LustreError: 166-1: MGC192.168.201.113@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 1750.258215] LustreError: 6660:0:(mgc_request.c:627:do_requeue()) failed processing log: -5
[ 1750.344928] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 1750.350282] Lustre: Skipped 1 previous similar message
[ 1750.371713] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 1750.388052] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 1751.145453] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1752.864534] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 1754.363591] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted.
[ 1754.378977] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:544 to 0x280000400:673)
[ 1754.378979] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:543 to 0x240000400:673)
[ 1754.905783] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1755.254858] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1755.368836] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to (at 0@lo)
[ 1755.371302] Lustre: Skipped 1 previous similar message
[ 1757.155145] Lustre: Disabling parameter lustre.mdt.lustre-MDT0000.hsm.default_archive_id in log params
[ 1757.157234] Lustre: Skipped 1 previous similar message
[ 1757.367586] Lustre: 3031:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713345830/real 1713345830] req@ffff880135800380 x1796571534034880/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713345846 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 1760.105040] Lustre: DEBUG MARKER: == sanity-hsm test 400: Single request is sent to the right MDT ========================================================== 05:24:09 (1713345849)
[ 1760.455176] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_400 needs >= 2 MDTs
[ 1762.356532] Lustre: DEBUG MARKER: == sanity-hsm test 401: Compound requests split and sent to their respective MDTs ========================================================== 05:24:11 (1713345851)
[ 1762.708101] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_401 needs >= 2 MDTs
[ 1764.638665] Lustre: DEBUG MARKER: == sanity-hsm test 402a: Copytool start fails if all MDTs are inactive ========================================================== 05:24:13 (1713345853)
[ 1765.117968] Lustre: lustre-MDT0000: Client af6edf50-d9a5-41c4-84b2-c213521a6f01 (at 192.168.201.13@tcp) reconnecting
[ 1768.249174] Lustre: DEBUG MARKER: == sanity-hsm test 402b: CDT must retry request upon slow start of CT ========================================================== 05:24:17 (1713345857)
[ 1769.361542] LustreError: 12642:0:(mdt_hsm_cdt_agent.c:583:mdt_hsm_agent_send()) lustre-MDT0000: cannot send request to agent '9718f9a3-1534-4c32-af08-4f517cd00578': rc = -11
[ 1772.372342] LustreError: 12642:0:(mdt_hsm_cdt_agent.c:583:mdt_hsm_agent_send()) lustre-MDT0000: cannot send request to agent '9718f9a3-1534-4c32-af08-4f517cd00578': rc = -11
[ 1786.159722] Lustre: DEBUG MARKER: == sanity-hsm test 403: Copytool starts with inactive MDT and register on reconnect ========================================================== 05:24:35 (1713345875)
[ 1786.535521] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_403 needs >= 2 MDTs
[ 1788.663330] Lustre: DEBUG MARKER: == sanity-hsm test 404: Inactive MDT does not block requests for active MDTs ========================================================== 05:24:37 (1713345877)
[ 1789.060973] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_404 needs >= 2 MDTs
[ 1791.126613] Lustre: DEBUG MARKER: == sanity-hsm test 405: archive and release under striped directory ========================================================== 05:24:40 (1713345880)
[ 1791.536259] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_405 needs >= 2 MDTs
[ 1793.645591] Lustre: DEBUG MARKER: == sanity-hsm test 406: attempting to migrate HSM archived files is safe ========================================================== 05:24:42 (1713345882)
[ 1794.015364] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_406 needs >= 2 MDTs
[ 1796.080255] Lustre: DEBUG MARKER: == sanity-hsm test 407: Check for double RESTORE records in llog ========================================================== 05:24:45 (1713345885)
[ 1796.796754] LustreError: 12586:0:(mdt_hsm_cdt_client.c:386:mdt_hsm_register_hal()) cfs_fail_timeout id 164 sleeping for 5000ms
[ 1799.552192] Lustre: Failing over lustre-MDT0000
[ 1800.439911] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 1800.440208] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 1800.447756] Lustre: Skipped 1 previous similar message
[ 1801.799600] LustreError: 12586:0:(mdt_hsm_cdt_client.c:386:mdt_hsm_register_hal()) cfs_fail_timeout id 164 awake
[ 1801.910687] Lustre: server umount lustre-MDT0000 complete
[ 1813.685272] LustreError: 166-1: MGC192.168.201.113@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 1813.807780] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 1813.829714] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 1814.623757] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1818.423580] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 1818.439457] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted.
[ 1818.458605] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:675 to 0x280000400:705)
[ 1818.462824] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:675 to 0x240000400:705)
[ 1819.018180] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1819.407305] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1820.808901] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.201.113@tcp (at 0@lo)
[ 1820.812286] Lustre: Skipped 1 previous similar message
[ 1831.422227] Lustre: DEBUG MARKER: == sanity-hsm test 408: Verify fiemap on release file ==== 05:25:20 (1713345920)
[ 1831.958810] Lustre: DEBUG MARKER: SKIP: sanity-hsm test_408 ORI-366/LU-1941: FIEMAP unimplemented on ZFS
[ 1834.125230] Lustre: DEBUG MARKER: == sanity-hsm test 409a: Coordinator should not stop when in use ========================================================== 05:25:23 (1713345923)
[ 1836.012557] LustreError: 21119:0:(mdt_hsm_cdt_client.c:386:mdt_hsm_register_hal()) cfs_fail_timeout id 164 sleeping for 5000ms
[ 1841.015586] LustreError: 21119:0:(mdt_hsm_cdt_client.c:386:mdt_hsm_register_hal()) cfs_fail_timeout id 164 awake
[ 1847.756966] Lustre: DEBUG MARKER: == sanity-hsm test 409b: getattr released file with CDT stopped after remount ========================================================== 05:25:36 (1713345936)
[ 1848.512348] Lustre: Modifying parameter lustre.mdt.lustre-MDT0000.hsm_control in log params
[ 1869.313384] Lustre: Failing over lustre-MDT0000
[ 1869.423180] Lustre: server umount lustre-MDT0000 complete
[ 1881.137179] LustreError: 166-1: MGC192.168.201.113@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 1881.258623] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 1881.278271] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 1882.052055] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1883.536555] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 1883.564810] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted.
[ 1883.581835] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:708 to 0x280000400:737)
[ 1883.586741] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:709 to 0x240000400:737)
[ 1884.443268] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1884.824268] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1886.232564] Lustre: 3031:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713345959/real 1713345959] req@ffff8800807c5500 x1796571534059904/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713345975 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 1886.241354] Lustre: 3031:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages
[ 1886.244270] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 1886.250537] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to (at 0@lo)
[ 1886.252712] Lustre: Skipped 1 previous similar message
[ 1887.522463] Lustre: Modifying parameter lustre.mdt.lustre-MDT0000.hsm_control in log params
[ 1909.795871] Lustre: DEBUG MARKER: == sanity-hsm test 500: various LLAPI HSM tests ========== 05:26:38 (1713345998)
[ 1917.010231] Lustre: HSM agent af6edf50-d9a5-41c4-84b2-c213521a6f01 already registered
[ 1926.508398] LustreError: 25665:0:(mdt_coordinator.c:1714:mdt_hsm_update_request_state()) lustre-MDT0000: Cannot find running request for cookie 0x661f95c1 on fid=[0x200002b11:0x15:0x0]
[ 1936.556423] Lustre: DEBUG MARKER: == sanity-hsm test 600: Changelog fields 'u=' and 'nid=' ========================================================== 05:27:05 (1713346025)
[ 1937.275236] Lustre: lustre-MDD0000: changelog on
[ 1939.246962] Lustre: lustre-MDD0000: changelog off
[ 1939.248280] Lustre: Skipped 1 previous similar message
[ 1941.502130] Lustre: DEBUG MARKER: == sanity-hsm test 601: OPEN Changelog entry ============= 05:27:10 (1713346030)
[ 1946.495259] Lustre: DEBUG MARKER: == sanity-hsm test 602: Changelog record CLOSE only if open+write or OPEN recorded ========================================================== 05:27:15 (1713346035)
[ 1952.187185] Lustre: DEBUG MARKER: == sanity-hsm test 603: GETXATTR Changelog entry ========= 05:27:21 (1713346041)
[ 1957.244755] Lustre: DEBUG MARKER: == sanity-hsm test 604: NOPEN Changelog entry ============ 05:27:26 (1713346046)
[ 1983.154881] Lustre: DEBUG MARKER: == sanity-hsm test 605: Test OPEN and CLOSE rate limit in Changelogs ========================================================== 05:27:52 (1713346072)
[ 1988.695444] Lustre: DEBUG MARKER: == sanity-hsm test 606: llog_reader groks changelog fields ========================================================== 05:27:57 (1713346077)
[ 1990.741213] Lustre: Failing over lustre-MDT0000
[ 1990.866859] Lustre: server umount lustre-MDT0000 complete
[ 1997.282656] LustreError: 166-1: MGC192.168.201.113@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 1997.376003] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 1997.383607] Lustre: Skipped 2 previous similar messages
[ 1997.406572] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 1997.429409] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 1998.169117] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1998.720164] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 1998.757012] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted.
[ 1998.773707] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:750 to 0x280000400:769)
[ 1998.773870] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:750 to 0x240000400:769)
[ 2001.418774] Lustre: DEBUG MARKER: == sanity-hsm test 607a: release a file that was migrated after being archived ========================================================== 05:28:10 (1713346090)
[ 2002.408543] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo)
[ 2002.410659] Lustre: Skipped 1 previous similar message
[ 2007.381574] Lustre: 3032:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713346080/real 1713346080] req@ffff88012fc51c00 x1796571534080320/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713346096 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 2007.389984] Lustre: 3032:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 5 previous similar messages
[ 2007.713647] Lustre: DEBUG MARKER: == sanity-hsm test 607b: Migrate should not change the HSM attribute of dirty files ========================================================== 05:28:16 (1713346096)
[ 2012.657558] Lustre: DEBUG MARKER: == sanity-hsm test 607c: 'lfs swap_layouts' should set dirty flag on HSM file ========================================================== 05:28:21 (1713346101)
[ 2016.913185] Lustre: DEBUG MARKER: == sanity-hsm test complete, duration 1915 sec =========== 05:28:25 (1713346105)
[ 2018.254726] Lustre: 6018:0:(lod_lov.c:1433:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0x10:0x0] with magic=0xbd60bd0
[ 2018.258947] Lustre: 6018:0:(lod_lov.c:1433:lod_parse_striping()) Skipped 5 previous similar messages
[ 2021.767858] Lustre: Failing over lustre-MDT0000
[ 2021.900714] Lustre: server umount lustre-MDT0000 complete
[ 2033.811169] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:773 to 0x240000400:801)
[ 2033.811194] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:775 to 0x280000400:801)
[ 2034.383783] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 2036.647242] Lustre: DEBUG MARKER: oleg113-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 2037.014887] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 2038.988820] Lustre: server umount lustre-MDT0000 complete
[ 2039.864534] Lustre: 3031:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713346111/real 1713346111] req@ffff88013166b100 x1796571534095040/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713346127 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 2039.870895] Lustre: 3031:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 4 previous similar messages
[ 2039.979875] LustreError: 8164:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713346129 with bad export cookie 5446135486617366580
[ 2049.989851] Lustre: server umount lustre-OST0000 complete
[ 2061.037410] Lustre: server umount lustre-OST0001 complete
[ 2063.858655] Lustre: DEBUG MARKER: oleg113-server.virtnet: executing unload_modules_local
[ 2064.269506] Key type lgssc unregistered
[ 2064.327098] LNet: 13766:0:(lib-ptl.c:966:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[ 2064.329793] LNet: Removed LNI 192.168.201.113@tcp