************************ crashinfo *************************
/exports/testreports/42115/testresults/racer-ldiskfs-DNE-centos7_x86_64-centos7_x86_64/oleg432-client-timeout-core (3.10.0-7.9-debug)

         +==========================+
         | *** Crashinfo v1.3.7 *** |
         +==========================+

+++WARNING+++ PARTIAL DUMP with size(vmcore) < 25% size(RAM)

      KERNEL: /tmp/crash-anaysis.tcDV8/vmlinux  [TAINTED]
    DUMPFILE: /exports/testreports/42115/testresults/racer-ldiskfs-DNE-centos7_x86_64-centos7_x86_64/oleg432-client-timeout-core  [PARTIAL DUMP]
        CPUS: 4
        DATE: Thu Apr 18 20:36:12 EDT 2024
      UPTIME: 00:21:21
LOAD AVERAGE: 0.00, 14.81, 45.74
       TASKS: 168
    NODENAME: oleg432-client.virtnet
     RELEASE: 3.10.0-7.9-debug
     VERSION: #1 SMP Sat Mar 26 23:28:42 EDT 2022
     MACHINE: x86_64  (2399 Mhz)
      MEMORY: 4 GB
       PANIC: ""

                         +--------------------------+
>------------------------| Per-cpu Stacks ('bt -a') |------------------------<
                         +--------------------------+

      -- CPU#0 --
PID=0  CPU=0  CMD=swapper/0
  #-1  native_safe_halt+0xb, 449 bytes of data
  #0   default_idle+0x1e
  #1   default_enter_idle+0x45
  #2   cpuidle_enter_state+0x40
  #3   cpuidle_idle_call+0xd8
  #4   arch_cpu_idle+0xe
  #5   cpu_startup_entry+0x14a
  #6   rest_init+0x8e
  #7   start_kernel+0x456
  #8   x86_64_start_reservations+0x2a
  #9   x86_64_start_kernel+0x152
  #10  start_cpu+0x5

      -- CPU#1 --
PID=0  CPU=1  CMD=swapper/1
  #-1  native_safe_halt+0xb, 449 bytes of data
  #0   default_idle+0x1e
  #1   default_enter_idle+0x45
  #2   cpuidle_enter_state+0x40
  #3   cpuidle_idle_call+0xd8
  #4   arch_cpu_idle+0xe
  #5   cpu_startup_entry+0x14a
  #6   start_secondary+0x1eb
  #7   start_cpu+0x5

      -- CPU#2 --
PID=0  CPU=2  CMD=swapper/2
  #-1  native_safe_halt+0xb, 449 bytes of data
  #0   default_idle+0x1e
  #1   default_enter_idle+0x45
  #2   cpuidle_enter_state+0x40
  #3   cpuidle_idle_call+0xd8
  #4   arch_cpu_idle+0xe
  #5   cpu_startup_entry+0x14a
  #6   start_secondary+0x1eb
  #7   start_cpu+0x5

      -- CPU#3 --
PID=0  CPU=3  CMD=swapper/3
  #-1  native_safe_halt+0xb, 449 bytes of data
  #0   default_idle+0x1e
  #1   default_enter_idle+0x45
  #2   cpuidle_enter_state+0x40
  #3   cpuidle_idle_call+0xd8
  #4   arch_cpu_idle+0xe
  #5   cpu_startup_entry+0x14a
  #6   start_secondary+0x1eb
  #7   start_cpu+0x5

                      +--------------------------------+
>---------------------| How This Dump Has Been Created |---------------------<
                      +--------------------------------+
Cannot identify the specific condition that triggered vmcore

                               +---------------+
>------------------------------| Tasks Summary |------------------------------<
                               +---------------+
Number of Threads That Ran Recently
-----------------------------------
   last second      16
   last     5s      32
   last    60s      43

----- Total Numbers of Threads per State ------
  TASK_INTERRUPTIBLE              164
  TASK_RUNNING                      1

+++WARNING+++ There are 3 threads running in their own namespaces
   Use 'taskinfo --ns' to get more details

                           +-----------------------+
>--------------------------| 5 Most Recent Threads |--------------------------<
                           +-----------------------+
   PID   CMD               Age    ARGS
 -----   --------------   ------  ----------------------------
    34   rcuos/3            0 ms  (no user stack)
    17   kworker/1:0        0 ms  (no user stack)
     9   rcu_sched          0 ms  (no user stack)
     1   systemd            8 ms  /usr/lib/systemd/systemd --switched-root --system --deserialize 22
    49   kworker/0:1        8 ms  (no user stack)

                          +------------------------+
>-------------------------| Memory Usage (kmem -i) |-------------------------<
                          +------------------------+
                 PAGES        TOTAL      PERCENTAGE
    TOTAL MEM   955079       3.6 GB         ----
         FREE   719197       2.7 GB   75% of TOTAL MEM
         USED   235882     921.4 MB   24% of TOTAL MEM
       SHARED    16043      62.7 MB    1% of TOTAL MEM
      BUFFERS     5144      20.1 MB    0% of TOTAL MEM
       CACHED    52453     204.9 MB    5% of TOTAL MEM
         SLAB    29985     117.1 MB    3% of TOTAL MEM

   TOTAL HUGE        0            0         ----
    HUGE FREE        0            0    0% of TOTAL HUGE

   TOTAL SWAP   262143      1024 MB         ----
    SWAP USED        0            0    0% of TOTAL SWAP
    SWAP FREE   262143      1024 MB  100% of TOTAL SWAP

 COMMIT LIMIT   739682       2.8 GB         ----
    COMMITTED    64146     250.6 MB    8% of TOTAL LIMIT

                       +-------------------------------+
>----------------------| Scheduler Runqueues (per CPU) |----------------------<
                       +-------------------------------+
  ---+ CPU=0 ----
     | CURRENT TASK , CMD=swapper/0
  ---+ CPU=1 ----
     | CURRENT TASK , CMD=swapper/1
  ---+ CPU=2 ----
     | CURRENT TASK , CMD=swapper/2
  ---+ CPU=3 ----
     | CURRENT TASK , CMD=swapper/3

                          +------------------------+
>-------------------------| Network Status Summary |-------------------------<
                          +------------------------+
TCP Connection Info
-------------------
     ESTABLISHED      6
          LISTEN      3
  NAGLE disabled (TCP_NODELAY):  5
  user_data set (NFS etc.):      4

UDP Connection Info
-------------------
  2 UDP sockets, 0 in ESTABLISHED

Unix Connection Info
------------------------
     ESTABLISHED     26
           CLOSE     18
          LISTEN      8

Raw sockets info
--------------------
     ESTABLISHED      1

Interfaces Info
---------------
  How long ago (in seconds) interfaces transmitted/received?
          Name        RX          TX
          ----    ----------   ---------
            lo         n/a      1279.4
          eth0         n/a         0.0

RSS_TOTAL=84712 pages, %mem= 1.4

                                +------------+
>-------------------------------| Mounted FS |-------------------------------<
                                +------------+
     MOUNT            SUPERBLK       TYPE         DEVNAME      DIRNAME
ffff880138cca000  ffff880139940800  rootfs       rootfs       /
ffff880137668540  ffff8800b6cf2000  sysfs        sysfs        /sys
ffff880137668700  ffff880139944000  proc         proc         /proc
ffff8801376688c0  ffff880137678000  devtmpfs     devtmpfs     /dev
ffff880137668a80  ffff8800b6cf1800  securityfs   securityfs   /sys/kernel/security
ffff880137668c40  ffff8800b6cf2800  tmpfs        tmpfs        /dev/shm
ffff880137668e00  ffff88013771f800  devpts       devpts       /dev/pts
ffff880137668fc0  ffff8800b6cf3000  tmpfs        tmpfs        /run
ffff880137669180  ffff8800b6cf3800  tmpfs        tmpfs        /sys/fs/cgroup
ffff880137669340  ffff8800b6cf4000  cgroup       cgroup       /sys/fs/cgroup/systemd
ffff880137669500  ffff8800b6cf4800  pstore       pstore       /sys/fs/pstore
ffff8801376696c0  ffff8800b6cf6800  cgroup       cgroup       /sys/fs/cgroup/hugetlb
ffff880137669880  ffff8800b6cf6000  cgroup       cgroup       /sys/fs/cgroup/net_cls,net_prio
ffff880137669a40  ffff8800b6cf5800  cgroup       cgroup       /sys/fs/cgroup/blkio
ffff880137669c00  ffff8800b6cf5000  cgroup       cgroup       /sys/fs/cgroup/cpu,cpuacct
ffff880137669dc0  ffff8800b6cf7000  cgroup       cgroup       /sys/fs/cgroup/freezer
ffff88012aad4000  ffff8800b6cf7800  cgroup       cgroup       /sys/fs/cgroup/perf_event
ffff88012aad41c0  ffff88012aad8000  cgroup       cgroup       /sys/fs/cgroup/cpuset
ffff88012aad4380  ffff88012aad8800  cgroup       cgroup       /sys/fs/cgroup/pids
ffff88012aad4540  ffff88012aad9000  cgroup       cgroup       /sys/fs/cgroup/devices
ffff88012aad4700  ffff88012aad9800  cgroup       cgroup       /sys/fs/cgroup/memory
ffff880138ccb6c0  ffff8800b6ee4800  configfs     configfs     /sys/kernel/config
ffff880138ccb880  ffff8800b6ee6000  ext4         /dev/nbd0    /
ffff8800b58861c0  ffff88012aadc000  rpc_pipefs   rpc_pipefs   /var/lib/nfs/rpc_pipefs
ffff880137013880  ffff8800b6343800  autofs       systemd-1    /proc/sys/fs/binfmt_misc
ffff880137013a40  ffff88012b279000  mqueue       mqueue       /dev/mqueue
ffff880138ccba40  ffff880139947800  debugfs      debugfs      /sys/kernel/debug
ffff88012aad4e00  ffff8800b63d1000  hugetlbfs    hugetlbfs    /dev/hugepages
ffff880137013c00  ffff8800b60e8000  binfmt_misc  binfmt_misc  /proc/sys/fs/binfmt_misc/
ffff880137013dc0  ffff8800b6346000  ramfs        none         /mnt
ffff8801370136c0  ffff88012a5bd000  tmpfs        none         /var/lib/stateless/writable
ffff8800b5886380  ffff8800b60ed800  squashfs     /dev/vda     /home/green/git/lustre-release
ffff88012aad4fc0  ffff88012a5bd000  tmpfs        none         /var/cache/man
ffff88012aad5180  ffff88012a5bd000  tmpfs        none         /var/log
ffff8800b5886540  ffff88012a5bd000  tmpfs        none         /var/lib/dbus
ffff88012aad5340  ffff88012a5bd000  tmpfs        none         /tmp
ffff8800b5886700  ffff88012a5bd000  tmpfs        none         /var/lib/dhclient
ffff8800b58868c0  ffff88012a5bd000  tmpfs        none         /var/tmp
ffff880137013340  ffff88012a5bd000  tmpfs        none         /var/lib/NetworkManager
ffff880138ccbdc0  ffff88012a5bd000  tmpfs        none         /var/lib/systemd/random-seed
ffff8800b41e6000  ffff88012a5bd000  tmpfs        none         /var/spool
ffff880138ccb340  ffff88012a5bd000  tmpfs        none         /var/lib/nfs
ffff88012aad5500  ffff88012a5bd000  tmpfs        none         /var/lib/gssproxy
ffff8800b5886a80  ffff88012a5bd000  tmpfs        none         /var/lib/logrotate
ffff8800b41e61c0  ffff88012a5bd000  tmpfs        none         /etc
ffff88012aad56c0  ffff88012a5bd000  tmpfs        none         /var/lib/rsyslog
ffff8800b5886c40  ffff88012a5bd000  tmpfs        none         /var/lib/dhclient/var/lib/dhclient
ffff8800b41e6380  ffff88012a5ba000  nfs4         192.168.200.253:/exports/state/oleg432-client.virtnet  /var/lib/stateless/state
ffff8800b5886e00  ffff88012a5ba000  nfs4         192.168.200.253:/exports/state/oleg432-client.virtnet  /boot
ffff8800b5886fc0  ffff88012a5ba000  nfs4         192.168.200.253:/exports/state/oleg432-client.virtnet  /etc/etc/kdump.conf
ffff8800b5ae01c0  ffff88012aadc000  rpc_pipefs   sunrpc       /var/lib/nfs/var/lib/nfs/rpc_pipefs
ffff8800b42ac380  ffff88012aadf800  nfs4         192.168.200.253://exports/testreports/42115/testresults/racer-ldiskfs-DNE-centos7_x86_64-centos7_x86_64  /tmp/tmp/testlogs
ffff8800b39d9880  ffff88012a5a3000  tmpfs        tmpfs        /run/user/0
ffff8800b41e6c40  ffff8800b60ed800  squashfs     /dev/vda     /usr/sbin/mount.lustre
ffff8800b426e1c0  ffff88012a5bf800  lustre       192.168.204.132@tcp:/lustre  /mnt/lustre
ffff8800b5887880  ffff8800a9d86000  lustre       192.168.204.132@tcp:/lustre  /mnt/lustre2

                       +-------------------------------+
>----------------------| Last 40 lines of dmesg buffer |----------------------<
                       +-------------------------------+
[  286.364080] LustreError: 11241:0:(lustre_lmv.h:185:lmv_stripe_object_dump()) Skipped 4 previous similar messages
[  286.366891] LustreError: 11241:0:(lustre_lmv.h:178:lmv_stripe_object_dump()) dump LMV: refs 215092432 magic=0x1 count=1 index=1 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool=
[  286.459860] LustreError: 9813:0:(lustre_lmv.h:178:lmv_stripe_object_dump()) dump LMV: refs 215092432 magic=0x1 count=1 index=1 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool=
[  286.465733] LustreError: 9813:0:(lustre_lmv.h:178:lmv_stripe_object_dump()) dump LMV: refs 215092432 magic=0x1 count=1 index=1 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool=
[  288.742616] LustreError: 13330:0:(llite_nfs.c:446:ll_dir_get_parent_fid()) lustre: failure inode [0x240000402:0x3def:0x0] get parent: rc = -2
[  295.337408] Lustre: 20239:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1713485986/real 1713485986] req@ffff8800a3e39500 x1796720073068352/t0(0) o101->lustre-MDT0001-mdc-ffff8800a9d86000@192.168.204.132@tcp:12/10 lens 992/66664 e 0 to 1 dl 1713486041 ref 2 fl Rpc:ReXPQU/200/ffffffff rc 0/-1 job:'ls.0' uid:0 gid:0
[  295.350114] Lustre: lustre-MDT0001-mdc-ffff8800a9d86000: Connection to lustre-MDT0001 (at 192.168.204.132@tcp) was lost; in progress operations using this service will wait for recovery to complete
[  295.366960] Lustre: lustre-MDT0001-mdc-ffff8800a9d86000: Connection restored to (at 192.168.204.132@tcp)
[  347.843752] LustreError: 22245:0:(llite_lib.c:1868:ll_update_lsm_md()) lustre: [0x200000402:0x4988:0x0] dir layout mismatch:
[  347.846830] LustreError: 22245:0:(llite_lib.c:1868:ll_update_lsm_md()) Skipped 1 previous similar message
[  347.849619] LustreError: 22245:0:(lustre_lmv.h:178:lmv_stripe_object_dump()) dump LMV: refs 215092432 magic=0x4 count=2 index=0 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool=
[  347.857117] LustreError: 22245:0:(lustre_lmv.h:185:lmv_stripe_object_dump()) stripe[0] [0x200000400:0x16e:0x0]
[  347.860763] LustreError: 22245:0:(lustre_lmv.h:185:lmv_stripe_object_dump()) Skipped 3 previous similar messages
[  347.863911] LustreError: 22245:0:(lustre_lmv.h:178:lmv_stripe_object_dump()) dump LMV: refs 215092432 magic=0x1 count=4 index=0 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=2 migrate_hash=crush:2000003 pool=
[  348.706745] 17[3054]: segfault at 8 ip 00007f0fad4aa7e8 sp 00007ffeb15981e0 error 4 in ld-2.17.so[7f0fad49f000+22000]
[  352.106450] LustreError: 17:0:(statahead.c:792:ll_statahead_interpret_work()) lustre: getattr callback for sleep [0x200000402:0x4946:0x0]: rc = -5
[  352.110164] LustreError: 17:0:(statahead.c:792:ll_statahead_interpret_work()) Skipped 8 previous similar messages
[  352.606200] 3[5933]: segfault at 8 ip 00007f7de52857e8 sp 00007ffe713dc150 error 4 in ld-2.17.so[7f7de527a000+22000]
[  353.036870] LustreError: 6006:0:(vvp_io.c:1923:vvp_io_init()) lustre: refresh file layout [0x240000402:0x47ca:0x0] error -5.
[  353.040503] LustreError: 6006:0:(vvp_io.c:1923:vvp_io_init()) Skipped 4 previous similar messages
[  370.347952] 4[21981]: segfault at 8 ip 00007fe33c3407e8 sp 00007ffdf1de4a30 error 4 in ld-2.17.so[7fe33c335000+22000]
[  374.881919] LustreError: 25930:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff88012a5bf800: inode [0x200000402:0x4d2d:0x0] mdc close failed: rc = -2
[  374.890887] LustreError: 25930:0:(file.c:264:ll_close_inode_openhandle()) Skipped 142 previous similar messages
[  380.296926] LustreError: 26883:0:(llite_lib.c:3691:ll_prep_inode()) new_inode -fatal: rc -2
[  380.312316] LustreError: 26883:0:(llite_lib.c:3691:ll_prep_inode()) Skipped 1563 previous similar messages
[  380.854046] LustreError: 29009:0:(lcommon_cl.c:196:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000402:0x4f19:0x0]: rc = -5
[  380.862400] LustreError: 29009:0:(lcommon_cl.c:196:cl_file_inode_init()) Skipped 293 previous similar messages
[  383.254322] hrtimer: interrupt took 6925682 ns
[  389.118679] Lustre: 1270:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1713486079/real 1713486079] req@ffff88012f2aa680 x1796720095367488/t0(0) o101->lustre-MDT0001-mdc-ffff8800a9d86000@192.168.204.132@tcp:12/10 lens 992/66664 e 0 to 1 dl 1713486134 ref 2 fl Rpc:ReXPQU/200/ffffffff rc 0/-1 job:'touch.0' uid:0 gid:0
[  389.142107] Lustre: lustre-MDT0001-mdc-ffff8800a9d86000: Connection to lustre-MDT0001 (at 192.168.204.132@tcp) was lost; in progress operations using this service will wait for recovery to complete
[  389.177423] Lustre: lustre-MDT0001-mdc-ffff8800a9d86000: Connection restored to (at 192.168.204.132@tcp)
[  391.112576] Lustre: dir [0x200000402:0x56bb:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
[  391.116144] Lustre: Skipped 197 previous similar messages
[  394.609876] Lustre: 32519:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1713486085/real 1713486085] req@ffff880086249500 x1796720096225472/t0(0) o101->lustre-MDT0001-mdc-ffff88012a5bf800@192.168.204.132@tcp:12/10 lens 992/66664 e 0 to 1 dl 1713486140 ref 2 fl Rpc:ReXPQU/200/ffffffff rc 0/-1 job:'ls.0' uid:0 gid:0
[  394.628975] Lustre: lustre-MDT0001-mdc-ffff88012a5bf800: Connection to lustre-MDT0001 (at 192.168.204.132@tcp) was lost; in progress operations using this service will wait for recovery to complete
[  394.658051] Lustre: lustre-MDT0001-mdc-ffff88012a5bf800: Connection restored to (at 192.168.204.132@tcp)
[  408.019341] LustreError: 13225:0:(lov_object.c:1360:lov_layout_change()) lustre-clilov-ffff8800a9d86000: cannot apply new layout on [0x200000402:0x5967:0x0] : rc = -5
[  408.026730] LustreError: 13225:0:(lov_object.c:1360:lov_layout_change()) Skipped 50 previous similar messages
[  412.213031] traps: 5[15672] general protection ip:4053b4 sp:7fff9106e148 error:0 in 5[400000+6000]
[  414.143625] LustreError: 17021:0:(file.c:5550:ll_inode_revalidate_fini()) lustre: revalidate FID [0x240000402:0x1:0x0] error: rc = -4

******************************************************************************
************************ A Summary Of Problems Found *************************
******************************************************************************
-------------------- A list of all +++WARNING+++ messages --------------------
   PARTIAL DUMP with size(vmcore) < 25% size(RAM)
   There are 3 threads running in their own namespaces
      Use 'taskinfo --ns' to get more details
------------------------------------------------------------------------------

** Execution took  10.85s (real)  5.47s (CPU), Child processes:  5.33s
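The sections above mirror interactive crash-utility commands (`bt -a`, `kmem -i`, `mount`, `log`), so the same data can be re-examined by hand against this vmcore. A minimal sketch, assuming the crash(8) utility is installed and using the KERNEL/DUMPFILE paths from the report header (the `crash.cmds` file name is an arbitrary choice):

```shell
# Paths copied from the KERNEL and DUMPFILE fields of the report header.
VMLINUX=/tmp/crash-anaysis.tcDV8/vmlinux
VMCORE=/exports/testreports/42115/testresults/racer-ldiskfs-DNE-centos7_x86_64-centos7_x86_64/oleg432-client-timeout-core

# Write the command list to a file so the session is reproducible;
# these are the commands whose output appears in the sections above.
cat > crash.cmds <<'EOF'
bt -a
kmem -i
mount
log
quit
EOF

# crash reads commands from a file via -i (non-interactive replay).
echo "crash $VMLINUX $VMCORE -i crash.cmds"
```

Note that with a partial dump (as flagged in the header warning), some commands may fail on pages that were filtered out of the vmcore.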