== conf-sanity test 72: test fast symlink with extents flag enabled ========================================================== 12:01:13 (1713456073)
umount lustre on /mnt/lustre.....
stop ost1 service on oleg460-server
stop mds service on oleg460-server
stop mds service on oleg460-server
LNET unconfigure error 22: (null)
unloading modules on: 'oleg460-server'
oleg460-server: oleg460-server.virtnet: executing unload_modules_local
oleg460-server: LNET unconfigure error 22: (null)
modules unloaded.
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg460-server'
oleg460-server: oleg460-server.virtnet: executing load_modules_local
oleg460-server: Loading modules from /home/green/git/lustre-release/lustre
oleg460-server: detected 4 online CPUs by sysfs
oleg460-server: Force libcfs to create 2 CPU partitions
oleg460-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg460-server: quota/lquota options: 'hash_lqs_cur_bits=3'

   Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x65
            (MDT MGS first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

device size = 2500MB
formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey
        target name   lustre:MDT0000
        kilobytes     200000
        options       -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F /dev/mapper/mds1_flakey 200000k
Writing CONFIGS/mountdata
tune2fs 1.46.2.wc5 (26-Mar-2022)

   Permanent disk data:
Target:     lustre:MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x61
            (MDT first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.204.160@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

device size = 2500MB
formatting backing filesystem ldiskfs on /dev/mapper/mds2_flakey
        target name   lustre:MDT0001
        kilobytes     200000
        options       -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0001 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F /dev/mapper/mds2_flakey 200000k
Writing CONFIGS/mountdata
tune2fs 1.46.2.wc5 (26-Mar-2022)

   Permanent disk data:
Target:     lustre:OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x62
            (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.204.160@tcp sys.timeout=20

device size = 4096MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
        target name   lustre:OST0000
        kilobytes     200000
        options       -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0000 -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata
start mds service on oleg460-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg460-server: oleg460-server.virtnet: executing set_default_debug -1 all
pdsh@oleg460-client: oleg460-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on oleg460-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg460-server: oleg460-server.virtnet: executing set_default_debug -1 all
pdsh@oleg460-client: oleg460-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
oleg460-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg460-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg460-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg460-server: oleg460-server.virtnet: executing set_default_debug -1 all
pdsh@oleg460-client: oleg460-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
oleg460-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg460-client.virtnet: -o user_xattr,flock oleg460-server@tcp:/lustre /mnt/lustre
total: 3 open/close in 0.02 seconds: 144.58 ops/second
create 3 short symlinks
total 8
drwxr-xr-x 4 root root 4096 Apr 18 12:01 .
drwxr-xr-x 4 root root    0 Apr 18 11:06 ..
drwxr-xr-x 2 root root 4096 Apr 18 12:01 d72.conf-sanity
lrwxrwxrwx 1 root root   45 Apr 18 12:01 f72.conf-sanity-1 -> /mnt/lustre/d72.conf-sanity/f72.conf-sanity-1
lrwxrwxrwx 1 root root   45 Apr 18 12:01 f72.conf-sanity-2 -> /mnt/lustre/d72.conf-sanity/f72.conf-sanity-2
lrwxrwxrwx 1 root root   45 Apr 18 12:01 f72.conf-sanity-3 -> /mnt/lustre/d72.conf-sanity/f72.conf-sanity-3
umount lustre on /mnt/lustre.....
Stopping client oleg460-client.virtnet /mnt/lustre (opts:)
stop mds service on oleg460-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg460-server
stop mds service on oleg460-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg460-server
stop ost1 service on oleg460-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg460-server
e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8
oleg460-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg460-server: Use max possible thread num: 1 instead
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 3)
[Thread 0] jumping to group 0
[Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 159 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 160 badness 0 to 2 for 10084
[Thread 0] group 1 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 26693 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 26721 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 26722 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 26723 badness 0 to 2 for 10084
[Thread 0] group 2 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 53375 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 53376 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 53378 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 53379 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 53380 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 53381 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 53382 badness 0 to 2 for 10084
[Thread 0] group 3 finished
[Thread 0] Pass 1: Memory used: 268k/0k (140k/129k), time: 0.00/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 326.26MB/s
[Thread 0] Scanned group range [0, 3), inodes 281
Pass 2: Checking directory structure
Pass 2: Memory used: 268k/0k (97k/172k), time: 0.00/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 348.07MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 268k/0k (97k/172k), time: 0.01/ 0.01/ 0.00
Pass 3: Memory used: 268k/0k (96k/173k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 268k/0k (67k/202k), time: 0.00/ 0.00/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 268k/0k (67k/202k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 346.14MB/s

         280 inodes used (0.35%, out of 79992)
           5 non-contiguous files (1.8%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
             Extent depth histogram: 263
       24546 blocks used (49.09%, out of 50000)
           0 bad blocks
           1 large file

         149 regular files
         118 directories
           0 character device files
           0 block device files
           0 fifos
           0 links
           3 symbolic links (3 fast symbolic links)
           0 sockets
------------
         270 files
Memory used: 268k/0k (66k/203k), time: 0.02/ 0.01/ 0.00
I/O read: 1MB, write: 0MB, rate: 55.29MB/s
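The `-O` lists in the mkfs commands above control ldiskfs features: a plain name enables a feature and a leading `^` disables it, which is why the MDT is formatted with `^extents` while the OST keeps `extents` on. A minimal shell sketch (plain POSIX tools, nothing Lustre-specific) of how such a list splits into enabled and disabled names:

```shell
#!/bin/sh
# Sketch: split an mke2fs -O feature list into enabled/disabled names.
# The feature string is copied from the MDT mkfs_cmd in the log above;
# a leading '^' requests that the named feature be turned OFF.
features="uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg"

# Disabled features ('^' stripped): extents and fast_commit on the MDT.
echo "$features" | tr ',' '\n' | sed -n 's/^\^//p'

# Enabled features (no '^' prefix).
echo "$features" | tr ',' '\n' | grep -v '^\^'
```

Disabling `extents` on the MDT is what makes this run exercise the test's subject: symlink inodes on a non-extent-mapped metadata target.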
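The final e2fsck summary confirms the point of the test: all three symlinks are "fast", meaning that on ldiskfs/ext4 a target shorter than 60 bytes is stored inline in the inode rather than in a separate data block. A small sketch (coreutils only; the path merely mirrors the log, and whether a link is actually stored fast depends on the backing filesystem) of creating and measuring such a short link:

```shell
#!/bin/sh
# A symlink target under 60 bytes fits inside an ext4/ldiskfs inode
# ("fast" symlink); longer targets need a data block. This only shows
# creation and readback -- the fast/slow distinction is on-disk layout.
tmpdir=$(mktemp -d)
# Same 45-byte target the test created (45 matches the ls -l size column).
target="/mnt/lustre/d72.conf-sanity/f72.conf-sanity-1"
ln -s "$target" "$tmpdir/f72-link"
readlink "$tmpdir/f72-link"            # prints the target back
printf '%s' "$target" | wc -c          # 45 bytes, under the 60-byte limit
rm -r "$tmpdir"
```

The 45 reported by `ls -l` for each link above is exactly this target length, which is why e2fsck counts all three as fast symbolic links.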