-----============= acceptance-small: conf-sanity ============----- Tue Apr 16 15:40:45 EDT 2024
excepting tests: 32b 32c 32newtarball 110 41c
Stopping clients: oleg306-client.virtnet /mnt/lustre (opts:-f)
Stopping client oleg306-client.virtnet /mnt/lustre opts:-f
Stopping clients: oleg306-client.virtnet /mnt/lustre2 (opts:-f)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg306-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg306-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg306-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg306-server
oleg306-server: oleg306-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg306-server'
oleg306-server: oleg306-server.virtnet: executing load_modules_local
oleg306-server: Loading modules from /home/green/git/lustre-release/lustre
oleg306-server: detected 4 online CPUs by sysfs
oleg306-server: Force libcfs to create 2 CPU partitions
oleg306-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
start mds service on oleg306-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on oleg306-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
oleg306-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg306-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg306-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
oleg306-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg306-server: oleg306-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg306-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
oleg306-server: oleg306-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50
oleg306-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
stop ost1 service on oleg306-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg306-server
stop mds service on oleg306-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg306-server
stop mds service on oleg306-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg306-server
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 45: long unlink handling in ptlrpcd == 15:42:08 (1713296528)
start mds service on oleg306-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg306-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg306-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg306-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg306-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Started lustre-OST0000
oleg306-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg306-client.virtnet: -o user_xattr,flock oleg306-server@tcp:/lustre /mnt/lustre
setup single mount lustre success
stop mds service on oleg306-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg306-server
stop mds service on oleg306-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg306-server
sleep 60 sec
fail_loc=0x8000050f
sleep 10 sec
manual umount lustre on /mnt/lustre....
df: '/mnt/lustre': Cannot send after transport endpoint shutdown
fail_loc=0x0
start mds service on oleg306-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg306-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg306-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg306-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg306-client.virtnet: -o user_xattr,flock oleg306-server@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client oleg306-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg306-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg306-server
stop mds service on oleg306-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg306-server
stop mds service on oleg306-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg306-server
unloading modules on: 'oleg306-server'
oleg306-server: oleg306-server.virtnet: executing unload_modules_local
modules unloaded.
pdsh@oleg306-client: oleg306-client: ssh exited with exit code 2
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 2
pdsh@oleg306-client: oleg306-client: ssh exited with exit code 2
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 2
PASS 45 (146s)
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
oleg306-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg306-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
error: get_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
oleg306-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 2
oleg306-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg306-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
== conf-sanity test 69: replace an OST with the same index ========================================================== 15:44:35 (1713296675)
start mds service on oleg306-server
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg306-server'
oleg306-server: oleg306-server.virtnet: executing load_modules_local
oleg306-server: Loading modules from /home/green/git/lustre-release/lustre
oleg306-server: detected 4 online CPUs by sysfs
oleg306-server: Force libcfs to create 2 CPU partitions
oleg306-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg306-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg306-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg306-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg306-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg306-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Started lustre-OST0000
oleg306-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg306-client.virtnet: -o user_xattr,flock oleg306-server@tcp:/lustre /mnt/lustre
seq.cli-lustre-OST0000-super.width=0x1ffffff
On OST0, 49504 inodes available. Want 99872.
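The "On OST0, 49504 inodes available. Want 99872." check above is the kind of precondition a test script can derive from `lfs df -i`, which reports per-target inode counts (the `UUID Inodes IUsed IFree IUse% Mounted on` table that appears later in this log). A minimal sketch; the helper name and the column parsing are assumptions for illustration, not the conf-sanity implementation:

```shell
#!/bin/bash
# Hypothetical helper: report free inodes on one OST of a Lustre mount.
# Assumes the `lfs df -i` column layout seen in this log:
#   UUID  Inodes  IUsed  IFree  IUse%  Mounted on
ost_ifree() {
    local mnt="$1" idx="$2"
    # IFree is the 4th column of the matching OST row
    lfs df -i "$mnt" | grep "\[OST:$idx\]" | awk '{ print $4 }'
}

# Example precondition before creating ~100k files:
# (( $(ost_ifree /mnt/lustre 0) >= 99872 )) || echo "not enough inodes"
```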
rc=0
 - open/close 6402 (time 1713296713.75 total 10.00 last 640.19)
total: 10000 open/close in 15.98 seconds: 625.78 ops/second
 - unlinked 0 (time 1713296720 ; total 0 ; last 0)
total: 10000 unlinks in 9 seconds: 1111.111084 unlinks/second
 - open/close 6832 (time 1713296740.76 total 10.00 last 683.09)
total: 10000 open/close in 14.77 seconds: 676.90 ops/second
 - unlinked 0 (time 1713296746 ; total 0 ; last 0)
total: 10000 unlinks in 9 seconds: 1111.111084 unlinks/second
 - open/close 6155 (time 1713296766.04 total 10.00 last 615.48)
total: 10000 open/close in 16.13 seconds: 619.78 ops/second
 - unlinked 0 (time 1713296773 ; total 0 ; last 0)
total: 10000 unlinks in 9 seconds: 1111.111084 unlinks/second
 - open/close 6067 (time 1713296792.89 total 10.00 last 606.67)
total: 10000 open/close in 16.35 seconds: 611.55 ops/second
 - unlinked 0 (time 1713296800 ; total 0 ; last 0)
total: 10000 unlinks in 9 seconds: 1111.111084 unlinks/second
 - open/close 6607 (time 1713296819.98 total 10.00 last 660.68)
total: 10000 open/close in 15.16 seconds: 659.79 ops/second
 - unlinked 0 (time 1713296825 ; total 0 ; last 0)
total: 10000 unlinks in 9 seconds: 1111.111084 unlinks/second
 - open/close 5910 (time 1713296845.40 total 10.00 last 590.94)
total: 10000 open/close in 16.43 seconds: 608.61 ops/second
 - unlinked 0 (time 1713296852 ; total 0 ; last 0)
total: 10000 unlinks in 9 seconds: 1111.111084 unlinks/second
 - open/close 6388 (time 1713296872.23 total 10.00 last 638.80)
total: 10000 open/close in 15.43 seconds: 647.98 ops/second
 - unlinked 0 (time 1713296878 ; total 0 ; last 0)
total: 10000 unlinks in 9 seconds: 1111.111084 unlinks/second
 - open/close 5598 (time 1713296898.16 total 10.00 last 559.75)
total: 10000 open/close in 17.96 seconds: 556.88 ops/second
 - unlinked 0 (time 1713296906 ; total 0 ; last 0)
total: 10000 unlinks in 11 seconds: 909.090881 unlinks/second
 - open/close 5592 (time 1713296928.07 total 10.00 last 559.12)
total: 10000 open/close in 17.70 seconds: 564.89 ops/second
 - unlinked 0 (time 1713296936 ; total 0 ; last 0)
total: 10000 unlinks in 11 seconds: 909.090881 unlinks/second
 - open/close 5708 (time 1713296957.91 total 10.00 last 570.75)
total: 10000 open/close in 17.57 seconds: 569.03 ops/second
 - unlinked 0 (time 1713296966 ; total 0 ; last 0)
total: 10000 unlinks in 10 seconds: 1000.000000 unlinks/second
rm: missing operand
Try 'rm --help' for more information.
umount lustre on /mnt/lustre.....
Stopping client oleg306-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg306-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg306-server

Permanent disk data:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x42 (OST update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.203.106@tcp sys.timeout=20

device size = 4096MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
    target name   lustre-OST0000
    kilobytes     200000
    options       -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre-OST0000 -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata
start ost1 service on oleg306-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Started lustre-OST0000
oleg306-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg306-server: oleg306-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg306-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 10 sec
oleg306-server: oleg306-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50
oleg306-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
mount lustre on /mnt/lustre.....
Starting client: oleg306-client.virtnet: -o user_xattr,flock oleg306-server@tcp:/lustre /mnt/lustre
On OST0, 20429 used inodes
rc=0
umount lustre on /mnt/lustre.....
Stopping client oleg306-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg306-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg306-server
stop mds service on oleg306-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg306-server
stop mds service on oleg306-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg306-server
unloading modules on: 'oleg306-server'
oleg306-server: oleg306-server.virtnet: executing unload_modules_local
modules unloaded.
pdsh@oleg306-client: oleg306-client: ssh exited with exit code 2
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 2
pdsh@oleg306-client: oleg306-client: ssh exited with exit code 2
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 2
PASS 69 (342s)
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
oleg306-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg306-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
error: get_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
oleg306-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 2
oleg306-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg306-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
== conf-sanity test 111: Adding large_dir with over 2GB directory ========================================================== 15:50:19 (1713297019)
oleg306-server: debugfs 1.46.2.wc5 (26-Mar-2022)
Supported features: dir_prealloc imagic_inodes has_journal ext_attr resize_inode dir_index sparse_super2 fast_commit stable_inodes filetype needs_recovery journal_dev meta_bg extent 64bit mmp flex_bg ea_inode dirdata metadata_csum_seed large_dir inline_data encrypt casefold sparse_super large_file huge_file uninit_bg dir_nlink extra_isize quota bigalloc metadata_csum read-only project shared_blocks verity
oleg306-server: debugfs 1.46.2.wc5 (26-Mar-2022)
Supported features: dir_prealloc imagic_inodes has_journal ext_attr resize_inode dir_index sparse_super2 fast_commit stable_inodes filetype needs_recovery journal_dev meta_bg extent 64bit mmp flex_bg ea_inode dirdata metadata_csum_seed large_dir inline_data encrypt casefold sparse_super large_file huge_file uninit_bg dir_nlink extra_isize quota bigalloc metadata_csum read-only project shared_blocks verity
umount lustre on /mnt/lustre.....
stop ost1 service on oleg306-server
stop mds service on oleg306-server
stop mds service on oleg306-server
LNET unconfigure error 22: (null)
unloading modules on: 'oleg306-server'
oleg306-server: oleg306-server.virtnet: executing unload_modules_local
oleg306-server: LNET unconfigure error 22: (null)
modules unloaded.
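The `debug_raw_pointers` errors repeated above are a harness artifact: the framework sets a tunable that this particular build does not expose, so `lctl` reports "No such file or directory" on both client and server. One defensive pattern is to probe the parameter before setting it; a sketch, where the wrapper name is an assumption and not how conf-sanity itself handles this:

```shell
#!/bin/bash
# Hypothetical wrapper: set a Lustre tunable only if this build exposes it,
# so absent parameters (like debug_raw_pointers here) are skipped quietly
# instead of producing error noise.
set_param_if_present() {
    local name="$1" value="$2"
    if lctl get_param -n "$name" > /dev/null 2>&1; then
        lctl set_param "$name=$value"
    else
        echo "skipping $name: not available on this build" >&2
    fi
}

# set_param_if_present debug_raw_pointers Y
```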
MDT params: --mgs --fsname=lustre --mdt --index=0 --param=sys.timeout=20 --param=mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity --backfstype=ldiskfs --device-size=2400000 --mkfsoptions=\"-O large_dir -i 1048576 -b 4096 -E lazy_itable_init\" --reformat /dev/mapper/mds1_flakey
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg306-server'
oleg306-server: oleg306-server.virtnet: executing load_modules_local
oleg306-server: Loading modules from /home/green/git/lustre-release/lustre
oleg306-server: detected 4 online CPUs by sysfs
oleg306-server: Force libcfs to create 2 CPU partitions
oleg306-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg306-server: quota/lquota options: 'hash_lqs_cur_bits=3'

Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x65 (MDT MGS first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

device size = 2500MB
formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey
    target name   lustre:MDT0000
    kilobytes     2400000
    options       -i 1048576 -b 4096 -J size=93 -I 1024 -q -O large_dir,uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -i 1048576 -b 4096 -J size=93 -I 1024 -q -O large_dir,uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F /dev/mapper/mds1_flakey 2400000k
Writing CONFIGS/mountdata
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
OST params: --mgsnode=oleg306-server@tcp --fsname=lustre --ost --index=0 --param=sys.timeout=20 --backfstype=ldiskfs --device-size=200000 --mkfsoptions=\"-O large_dir -b 4096 -E lazy_itable_init\" --reformat /dev/mapper/ost1_flakey

Permanent disk data:
Target:     lustre:OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x62 (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.203.106@tcp sys.timeout=20

device size = 4096MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
    target name   lustre:OST0000
    kilobytes     200000
    options       -b 4096 -I 512 -q -O large_dir,uninit_bg,extents,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0000 -b 4096 -I 512 -q -O large_dir,uninit_bg,extents,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=131072
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
mount lustre on /mnt/lustre.....
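Test 111 formats both targets with `-O large_dir` and checks support via debugfs (the "Supported features" listings above). An equivalent spot-check against a formatted device can read the superblock feature list with `dumpe2fs -h`; a sketch, with the helper name being an assumption for illustration:

```shell
#!/bin/bash
# Hypothetical helper: check whether an ldiskfs/ext4 device was formatted
# with a given feature (e.g. large_dir), using the dumpe2fs superblock
# header, whose feature line starts with "Filesystem features:".
has_fs_feature() {
    local dev="$1" feature="$2"
    dumpe2fs -h "$dev" 2>/dev/null |
        grep '^Filesystem features:' | grep -qw "$feature"
}

# has_fs_feature /dev/mapper/mds1_flakey large_dir && echo "large_dir enabled"
```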
Starting client: oleg306-client.virtnet: -o user_xattr,flock oleg306-server@tcp:/lustre /mnt/lustre
Starting client oleg306-client.virtnet: -o user_xattr,flock oleg306-server@tcp:/lustre /mnt/lustre
Started clients oleg306-client.virtnet: 192.168.203.106@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
mount lustre on /mnt/lustre2.....
Starting client: oleg306-client.virtnet: -o user_xattr,flock oleg306-server@tcp:/lustre /mnt/lustre2
Starting client oleg306-client.virtnet: -o user_xattr,flock oleg306-server@tcp:/lustre /mnt/lustre2
Started clients oleg306-client.virtnet: 192.168.203.106@tcp:/lustre on /mnt/lustre2 type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
UUID                 1K-blocks     Used  Available Use% Mounted on
lustre-MDT0000_UUID    2280828     1696    2159132   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID     142216     1388     126828   2% /mnt/lustre[OST:0]
filesystem_summary:     142216     1388     126828   2% /mnt/lustre
UUID                    Inodes    IUsed      IFree IUse% Mounted on
lustre-MDT0000_UUID       2280      272       2008   12% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      50000      302      49698    1% /mnt/lustre[OST:0]
filesystem_summary:       2280      272       2008   12% /mnt/lustre
creating 60000 hardlinks to oleg306-client.virtnet-0
creating 60000 hardlinks to oleg306-client.virtnet-1
waiting for PIDs 26689 26703 to complete
 - link 2078 (time 1713297062.11 total 10.00 last 207.70)
 - link 4043 (time 1713297072.11 total 20.01 last 196.47)
 - link 5905 (time 1713297082.12 total 30.01 last 186.14)
 - link 7569 (time 1713297092.13 total 40.02 last 166.27)
 - link 9639 (time 1713297102.13 total 50.02 last 206.92)
 - link 11340 (time 1713297112.13 total 60.02 last 170.06)
 - link 13190 (time 1713297122.13 total 70.03 last 184.97)
 - link 15133 (time 1713297132.14 total 80.03 last 194.20)
 - link 17080 (time 1713297142.14 total 90.03 last 194.64)
 - link 19084 (time 1713297152.14 total 100.04 last 200.36)
 - link 20000 (time 1713297157.12 total 105.01 last 184.01)
 - link 21567 (time 1713297167.13 total 115.02 last 156.64)
 - link 23571 (time 1713297177.13 total 125.02 last 200.33)
 - link 25764 (time 1713297187.13 total 135.02 last 219.23)
 - link 27702 (time 1713297197.13 total 145.02 last 193.78)
 - link 29358 (time 1713297207.13 total 155.03 last 165.58)
 - link 30000 (time 1713297211.19 total 159.09 last 158.13)
 - link 32039 (time 1713297221.20 total 169.09 last 203.84)
 - link 34031 (time 1713297231.21 total 179.10 last 199.04)
 - link 36180 (time 1713297241.21 total 189.10 last 214.88)
 - link 37982 (time 1713297251.21 total 199.10 last 180.15)
 - link 39863 (time 1713297261.21 total 209.10 last 188.05)
 - link 41856 (time 1713297271.21 total 219.11 last 199.25)
 - link 43362 (time 1713297281.22 total 229.11 last 150.56)
 - link 44906 (time 1713297291.22 total 239.11 last 154.35)
 - link 46793 (time 1713297301.22 total 249.12 last 188.62)
 - link 48957 (time 1713297311.23 total 259.12 last 216.36)
 - link 50000 (time 1713297316.62 total 264.51 last 193.37)
 - link 51853 (time 1713297326.62 total 274.52 last 185.22)
 - link 53748 (time 1713297336.62 total 284.52 last 189.50)
 - link 55954 (time 1713297346.62 total 294.52 last 220.59)
 - link 58210 (time 1713297356.63 total 304.52 last 225.57)
total: 60000 link in 312.96 seconds: 191.72 ops/second
 - link 2073 (time 1713297062.19 total 10.00 last 207.20)
 - link 4020 (time 1713297072.19 total 20.01 last 194.69)
 - link 5874 (time 1713297082.19 total 30.01 last 185.33)
 - link 7526 (time 1713297092.20 total 40.01 last 165.17)
 - link 9593 (time 1713297102.20 total 50.01 last 206.68)
 - link 10000 (time 1713297104.24 total 52.05 last 199.46)
 - link 11538 (time 1713297114.24 total 62.05 last 153.78)
 - link 13577 (time 1713297124.24 total 72.06 last 203.84)
 - link 15395 (time 1713297134.24 total 82.06 last 181.77)
 - link 17483 (time 1713297144.24 total 92.06 last 208.79)
 - link 19388 (time 1713297154.25 total 102.06 last 190.44)
 - link 20000 (time 1713297157.24 total 105.05 last 204.77)
 - link 21562 (time 1713297167.24 total 115.05 last 156.18)
 - link 23578 (time 1713297177.24 total 125.05 last 201.55)
 - link 25765 (time 1713297187.24 total 135.06 last 218.67)
 - link 27699 (time 1713297197.25 total 145.06 last 193.32)
 - link 29352 (time 1713297207.25 total 155.06 last 165.26)
 - link 30000 (time 1713297211.44 total 159.25 last 154.66)
 - link 32057 (time 1713297221.44 total 169.26 last 205.63)
 - link 34053 (time 1713297231.44 total 179.26 last 199.54)
 - link 36208 (time 1713297241.45 total 189.26 last 215.41)
 - link 38017 (time 1713297251.45 total 199.27 last 180.80)
 - link 39899 (time 1713297261.46 total 209.27 last 188.13)
 - link 41909 (time 1713297271.46 total 219.27 last 200.99)
 - link 43390 (time 1713297281.46 total 229.28 last 148.06)
 - link 44962 (time 1713297291.46 total 239.28 last 157.14)
 - link 46850 (time 1713297301.47 total 249.28 last 188.72)
 - link 49001 (time 1713297311.47 total 259.28 last 215.05)
 - link 50000 (time 1713297316.68 total 264.49 last 191.91)
 - link 51856 (time 1713297326.68 total 274.49 last 185.59)
 - link 53764 (time 1713297336.68 total 284.50 last 190.71)
 - link 55974 (time 1713297346.69 total 294.50 last 220.90)
 - link 58219 (time 1713297356.69 total 304.50 last 224.45)
total: 60000 link in 312.89 seconds: 191.76 ops/second
estimate 13284s left after 120000 files / 314s
umount lustre on /mnt/lustre2.....
Stopping client oleg306-client.virtnet /mnt/lustre2 (opts:-f)
umount lustre on /mnt/lustre.....
Stopping client oleg306-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg306-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg306-server
stop mds service on oleg306-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg306-server
unloading modules on: 'oleg306-server'
oleg306-server: oleg306-server.virtnet: executing unload_modules_local
modules unloaded.
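The message "estimate 13284s left after 120000 files / 314s" is a linear extrapolation: the time spent so far, scaled by the fraction of work remaining. A sketch of that arithmetic; the function and its name are illustrative, not the test's own code:

```shell
#!/bin/bash
# Linear ETA: if `ndone` items took `elapsed` seconds, then `remaining`
# items should take roughly elapsed * remaining / ndone seconds more.
eta_seconds_left() {
    local ndone="$1" elapsed="$2" remaining="$3"
    awk -v d="$ndone" -v e="$elapsed" -v r="$remaining" \
        'BEGIN { printf "%d\n", e * r / d }'
}

# e.g. 120000 links in 314s, with several million links still needed for a
# >2GB directory, yields an estimate on the order of the 13284s above.
```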
ETA 13284s after 120000 files / 314s is too long e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8 oleg306-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg306-server: Use max possible thread num: 1 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 19) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 
to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 
[Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 159 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 160 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 161 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 162 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 163 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 164 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 165 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 166 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 167 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 168 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 
169 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 170 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 171 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 172 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 173 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 174 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 175 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 176 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 177 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 178 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 179 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 180 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 181 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 182 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 183 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 184 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 185 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 189 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 190 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] group 17 finished [Thread 0] group 18 finished [Thread 0] group 19 finished [Thread 0] Pass 1: Memory used: 380k/904k 
(124k/257k), time: 0.02/ 0.01/ 0.00 [Thread 0] Pass 1: I/O read: 2MB, write: 0MB, rate: 98.65MB/s [Thread 0] Scanned group range [0, 19), inodes 279 Pass 2: Checking directory structure Pass 2: Memory used: 380k/0k (85k/296k), time: 0.31/ 0.27/ 0.04 Pass 2: I/O read: 48MB, write: 0MB, rate: 155.26MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 380k/0k (85k/296k), time: 0.34/ 0.28/ 0.05 oleg306-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (51232768, 266) != expected (51240960, 266) oleg306-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (51232768, 266) != expected (51240960, 266) oleg306-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (51232768, 266) != expected (51240960, 266) pdsh@oleg306-client: oleg306-server: ssh exited with exit code 4 Pass 3: Memory used: 380k/0k (83k/298k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 4: Checking reference counts Pass 4: Memory used: 380k/0k (69k/312k), time: 0.00/ 0.00/ 0.00 Pass 4: I/O read: 1MB, write: 0MB, rate: 8196.72MB/s Pass 5: Checking group summary information Pass 5: Memory used: 380k/0k (68k/313k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 733.14MB/s Update quota info for quota type 0? no Update quota info for quota type 1? no Update quota info for quota type 2? 
no lustre-MDT0000: ********** WARNING: Filesystem still has errors ********** 276 inodes used (12.11%, out of 2280) 3 non-contiguous files (1.1%) 1 non-contiguous directory (0.4%) # of inodes with ind/dind/tind blocks: 1/1/0 37846 blocks used (6.31%, out of 600000) 0 bad blocks 1 large file 148 regular files 118 directories 0 character device files 0 block device files 0 fifos 120000 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 120264 files Memory used: 380k/0k (67k/314k), time: 0.34/ 0.28/ 0.05 I/O read: 49MB, write: 0MB, rate: 142.51MB/s pdsh@oleg306-client: oleg306-client: ssh exited with exit code 2 pdsh@oleg306-client: oleg306-server: ssh exited with exit code 2 pdsh@oleg306-client: oleg306-client: ssh exited with exit code 2 pdsh@oleg306-client: oleg306-server: ssh exited with exit code 2 Stopping clients: oleg306-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg306-client.virtnet /mnt/lustre2 (opts:-f) pdsh@oleg306-client: oleg306-server: ssh exited with exit code 2 oleg306-server: oleg306-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg306-server' oleg306-server: oleg306-server.virtnet: executing load_modules_local oleg306-server: Loading modules from /home/green/git/lustre-release/lustre oleg306-server: detected 4 online CPUs by sysfs oleg306-server: Force libcfs to create 2 CPU partitions oleg306-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory oleg306-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg306-server: quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey 
Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey PASS 111 (395s) debug_raw_pointers=0 debug_raw_pointers=0 Stopping clients: oleg306-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg306-client.virtnet /mnt/lustre2 (opts:-f) oleg306-server: oleg306-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg306-server' oleg306-server: oleg306-server.virtnet: executing load_modules_local oleg306-server: Loading modules from /home/green/git/lustre-release/lustre oleg306-server: detected 4 online CPUs by sysfs oleg306-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey == conf-sanity test complete, duration 984 sec =========== 15:57:10 (1713297430) === conf-sanity: start cleanup 15:57:11 (1713297431) === === conf-sanity: finish cleanup 15:57:11 (1713297431) ===