-----============= acceptance-small: sanity-sec ============----- Tue Apr 16 15:40:48 EDT 2024
client=34553367 MDS=34553367 OSS=34553367
excepting tests: 27
skipping tests SLOW=no: 26
was USER0=sanityusr:x:500:500::/home/sanityusr:/bin/bash
was USER1=sanityusr1:x:501:501::/home/sanityusr1:/bin/bash
now USER0=sanityusr=500:500, USER1=sanityusr1=501:501
=== sanity-sec: start setup 15:40:52 (1713296452) ===
oleg108-client.virtnet: executing check_config_client /mnt/lustre
oleg108-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg108-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b3ec6800.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b3ec6800.idle_timeout=debug
disable quota as required
oleg108-server: oleg108-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
osd-ldiskfs.track_declares_assert=1
=== sanity-sec: finish setup 15:40:59 (1713296459) ===
without GSS support
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 0: uid permission ======================================================================================= 15:41:01 (1713296461)
running as uid/gid/euid/egid 500/500/500/500, groups: [ls] [-ali] [/mnt/lustre]
total 8
144115188193296385 drwxr-xr-x 4 root root 4096 Apr 16 15:41 .
14181 drwxr-xr-x 3 root root 0 Apr 16 15:40 ..
144115205289279489 drwxr-xr-x 2 sanityusr root 4096 Apr 16 15:41 d0.sanity-sec
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/f0]
touch: cannot touch '/mnt/lustre/f0': Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0.sanity-sec/f1]
running as uid/gid/euid/egid 501/501/501/501, groups: [touch] [/mnt/lustre/d0.sanity-sec/f2]
touch: cannot touch '/mnt/lustre/d0.sanity-sec/f2': Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0.sanity-sec/f4]
running as uid/gid/euid/egid 501/501/501/501, groups: [touch] [/mnt/lustre/d0.sanity-sec/f5]
touch: cannot touch '/mnt/lustre/d0.sanity-sec/f5': Permission denied
PASS 0 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
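The checks above are driven by the runas helper from lustre/tests, whose "running as uid/gid/euid/egid ..." banner appears throughout this log. A minimal sketch of the same uid permission check, assuming the mount point and the two test users from the setup above (paths and filenames illustrative):

    MNT=/mnt/lustre
    mkdir $MNT/d0 && chown 500 $MNT/d0      # directory owned by uid 500
    runas -u 500 -g 500 touch $MNT/f0       # expected: Permission denied (root-owned dir)
    runas -u 500 -g 500 touch $MNT/d0/f1    # expected: succeeds (owner)
    runas -u 501 -g 501 touch $MNT/d0/f2    # expected: Permission denied (other uid)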
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 1: setuid/gid ======================================================================================= 15:41:05 (1713296465)
SKIP: sanity-sec test_1 without GSS support.
SKIP 1 (0s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 4: set supplementary group ========================================================================= 15:41:07 (1713296467)
/home/green/git/lustre-release/lustre/tests/sanity-sec.sh: illegal option -- p
running as uid/gid/euid/egid 500/500/500/500, groups: [ls] [/mnt/lustre/d4.sanity-sec]
running as uid/gid/euid/egid 501/501/501/501, groups: 1 2 500 [ls] [/mnt/lustre/d4.sanity-sec]
running as uid/gid/euid/egid 501/501/501/501, groups: 1 2 [ls] [/mnt/lustre/d4.sanity-sec]
ls: cannot open directory /mnt/lustre/d4.sanity-sec: Permission denied
PASS 4 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
On MGS 192.168.201.108, default.squash_uid = nodemap.default.squash_uid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, default.squash_gid = nodemap.default.squash_gid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, default.squash_projid = nodemap.default.squash_projid=65534
waiting 10 secs for sync
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 7: nodemap create and delete ========== 15:41:44 (1713296504)
On MGS 192.168.201.108, default.squash_uid = nodemap.default.squash_uid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, default.squash_gid = nodemap.default.squash_gid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id = nodemap.35043_0.id=1
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id = nodemap.35043_1.id=2
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id = nodemap.35043_2.id=3
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id =
waiting 10 secs for sync
PASS 7 (91s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 8: nodemap reject duplicates ========== 15:43:16 (1713296596)
On MGS 192.168.201.108, default.squash_uid = nodemap.default.squash_uid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, default.squash_gid = nodemap.default.squash_gid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id = nodemap.35043_0.id=4
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id = nodemap.35043_1.id=5
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id = nodemap.35043_2.id=6
waiting 10 secs for sync
On MGS 192.168.201.108, default.squash_uid = nodemap.default.squash_uid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, default.squash_gid = nodemap.default.squash_gid=65534
waiting 10 secs for sync
oleg108-server: error: 35043_0 existing nodemap name
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 1
nodemap_add 35043_0 failed with 1
On MGS 192.168.201.108, 35043_0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id =
waiting 10 secs for sync
PASS 8 (113s)
debug_raw_pointers=0
debug_raw_pointers=0
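Tests 7 and 8 exercise nodemap creation, deletion, and duplicate rejection via lctl on the MGS; a hedged sketch of the same sequence, using an illustrative name in place of the generated 35043_* names:

    lctl nodemap_add c0              # create; the MGS assigns nodemap.c0.id
    lctl nodemap_add c0              # second add fails: "existing nodemap name"
    lctl get_param nodemap.c0.id     # readable once the config has synced
    lctl nodemap_del c0              # delete; the id parameter disappears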
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 9: nodemap range add ================== 15:45:11 (1713296711)
On MGS 192.168.201.108, default.squash_uid = nodemap.default.squash_uid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, default.squash_gid = nodemap.default.squash_gid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id = nodemap.35043_0.id=7
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id = nodemap.35043_1.id=8
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id = nodemap.35043_2.id=9
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id =
waiting 10 secs for sync
PASS 9 (94s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 10a: nodemap reject duplicate ranges == 15:46:46 (1713296806)
On MGS 192.168.201.108, default.squash_uid = nodemap.default.squash_uid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, default.squash_gid = nodemap.default.squash_gid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id = nodemap.35043_0.id=10
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id = nodemap.35043_1.id=11
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id = nodemap.35043_2.id=12
waiting 10 secs for sync
oleg108-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg108-server: error: nodemap_add_range: cannot add range '44.0.0.[1-253]@tcp' to nodemap '35043_0': File exists
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 17
oleg108-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg108-server: error: nodemap_add_range: cannot add range '44.0.1.[1-253]@tcp' to nodemap '35043_0': File exists
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 17
oleg108-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg108-server: error: nodemap_add_range: cannot add range '44.1.0.[1-253]@tcp' to nodemap '35043_1': File exists
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 17
oleg108-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg108-server: error: nodemap_add_range: cannot add range '44.1.1.[1-253]@tcp' to nodemap '35043_1': File exists
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 17
oleg108-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg108-server: error: nodemap_add_range: cannot add range '44.2.0.[1-253]@tcp' to nodemap '35043_2': File exists
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 17
oleg108-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg108-server: error: nodemap_add_range: cannot add range '44.2.1.[1-253]@tcp' to nodemap '35043_2': File exists
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 17
On MGS 192.168.201.108, 35043_0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id =
waiting 10 secs for sync
PASS 10a (95s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 10b: delete range from the correct nodemap ========================================================== 15:48:23 (1713296903)
oleg108-server: error: invalid ioctl: 000ce043 errno: 22: Invalid argument
oleg108-server: error: nodemap_del_range: cannot delete range '192.168.19.[0-255]@o2ib20' to nodemap 'nodemap2': rc = -22
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 22
PASS 10b (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 10c: verify contiguous range support ========================================================== 15:48:28 (1713296908)
PASS 10c (3s)
debug_raw_pointers=0
debug_raw_pointers=0
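The range errors above (errno 17, EEXIST) come from re-adding NID ranges that a nodemap already owns; test 10b shows the EINVAL case of deleting a range through the wrong nodemap. A minimal sketch with illustrative names and NIDs:

    lctl nodemap_add_range --name c0 --range '192.168.1.[1-253]@tcp'   # ok
    lctl nodemap_add_range --name c0 --range '192.168.1.[1-253]@tcp'   # File exists
    lctl nodemap_del_range --name c1 --range '192.168.1.[1-253]@tcp'   # rc = -22, wrong nodemap
    lctl nodemap_del_range --name c0 --range '192.168.1.[1-253]@tcp'   # ok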
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 10d: verify nodemap range format '*@' support ========================================================== 15:48:33 (1713296913)
PASS 10d (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 11: nodemap modify ==================== 15:48:37 (1713296917)
On MGS 192.168.201.108, default.squash_uid = nodemap.default.squash_uid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, default.squash_gid = nodemap.default.squash_gid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id = nodemap.35043_0.id=17
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id = nodemap.35043_1.id=18
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id = nodemap.35043_2.id=19
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id =
waiting 10 secs for sync
PASS 11 (93s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 12: nodemap set squash ids ============ 15:50:12 (1713297012)
On MGS 192.168.201.108, default.squash_uid = nodemap.default.squash_uid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, default.squash_gid = nodemap.default.squash_gid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id = nodemap.35043_0.id=20
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id = nodemap.35043_1.id=21
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id = nodemap.35043_2.id=22
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id =
waiting 10 secs for sync
PASS 12 (93s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 13: test nids ========================= 15:51:47 (1713297107)
On MGS 192.168.201.108, default.squash_uid = nodemap.default.squash_uid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, default.squash_gid = nodemap.default.squash_gid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id = nodemap.35043_0.id=23
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id = nodemap.35043_1.id=24
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id = nodemap.35043_2.id=25
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id =
waiting 10 secs for sync
PASS 13 (94s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 14: test default nodemap nid lookup === 15:53:23 (1713297203)
On MGS 192.168.201.108, default.squash_uid = nodemap.default.squash_uid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, default.squash_gid = nodemap.default.squash_gid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id = nodemap.35043_0.id=26
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id = nodemap.35043_1.id=27
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id = nodemap.35043_2.id=28
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id =
waiting 10 secs for sync
PASS 14 (94s)
debug_raw_pointers=0
debug_raw_pointers=0
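Tests 11 and 12 go through lctl nodemap_modify; the squash_* values polled all over this log (65534 = nobody) are set the same way. A sketch, nodemap name illustrative:

    lctl nodemap_modify --name c0 --property squash_uid --value 65534
    lctl nodemap_modify --name c0 --property squash_gid --value 65534
    lctl nodemap_modify --name c0 --property squash_projid --value 65534
    lctl get_param nodemap.c0.squash_uid    # confirm after the ~10s sync window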
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 15: test id mapping =================== 15:54:58 (1713297298)
On MGS 192.168.201.108, default.squash_uid = nodemap.default.squash_uid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, default.squash_gid = nodemap.default.squash_gid=65534
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_0.id = nodemap.35043_0.id=29
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id = nodemap.35043_1.id=30
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id = nodemap.35043_2.id=31
waiting 10 secs for sync
Start to add idmaps ...
Start to test idmaps ...
Start to add root idmaps ...
Start to delete root idmaps ...
Start to add root idmaps ...
Start to delete root idmaps ...
Start to update idmaps ...
Start to delete idmaps ...
On MGS 192.168.201.108, 35043_0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_1.id =
waiting 10 secs for sync
On MGS 192.168.201.108, 35043_2.id =
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 15 (144s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 16: test nodemap all_off fileops ====== 15:57:24 (1713297444)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
mkdir -p /mnt/lustre/d16.sanity-sec
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
mkdir -p /mnt/lustre/d16.sanity-sec
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 16 (105s)
debug_raw_pointers=0
debug_raw_pointers=0
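Test 15's "Start to add/test/update/delete idmaps" phases map client IDs to filesystem IDs with lctl nodemap_add_idmap / nodemap_del_idmap; a sketch with illustrative IDs matching the 60003:60000 style pairs seen later in test 24's dump:

    lctl nodemap_add_idmap --name c0 --idtype uid --idmap 60003:60000
    lctl nodemap_add_idmap --name c0 --idtype gid --idmap 60003:60000
    lctl get_param nodemap.c0.idmap    # shows { idtype, client_id, fs_id } tuples
    lctl nodemap_del_idmap --name c0 --idtype uid --idmap 60003:60000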
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 17: test nodemap trusted_noadmin fileops ========================================================== 15:59:10 (1713297550)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
mkdir -p /mnt/lustre/d17.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d17.sanity-sec': Permission denied
mkdir -p /mnt/lustre/d17.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d17.sanity-sec': Permission denied
On MGS 192.168.201.108, c0.map_mode = nodemap.c0.map_mode=projid
waiting 10 secs for sync
mkdir -p /mnt/lustre/d17.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d17.sanity-sec': Permission denied
mkdir -p /mnt/lustre/d17.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d17.sanity-sec': Permission denied
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 17 (898s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 18: test nodemap mapped_noadmin fileops ========================================================== 16:14:10 (1713298450)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0
waiting 10 secs for sync
mkdir -p /mnt/lustre/d18.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d18.sanity-sec': Permission denied
mkdir -p /mnt/lustre/d18.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d18.sanity-sec': Permission denied
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 18 (485s)
debug_raw_pointers=0
debug_raw_pointers=0
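Tests 17 through 22 run one fileops suite per combination of the admin and trusted nodemap flags, which is why the log toggles c0.admin_nodemap between 1 and 0 so often: admin decides whether client root stays root, trusted decides whether client IDs pass through unmapped. The toggles reduce to (nodemap name illustrative):

    lctl nodemap_modify --name c0 --property admin --value 0
    lctl nodemap_modify --name c0 --property trusted --value 1
    # each change is polled on all servers, hence "waiting 10 secs for sync"
    lctl get_param nodemap.c0.admin_nodemap nodemap.c0.trusted_nodemap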
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 19: test nodemap trusted_admin fileops ========================================================== 16:22:16 (1713298936)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
mkdir -p /mnt/lustre/d19.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
mkdir -p /mnt/lustre/d19.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 19 (224s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 20: test nodemap mapped_admin fileops ========================================================== 16:26:01 (1713299161)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0
waiting 10 secs for sync
mkdir -p /mnt/lustre/d20.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
mkdir -p /mnt/lustre/d20.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 20 (225s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 21: test nodemap mapped_trusted_noadmin fileops ========================================================== 16:29:48 (1713299388)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
mkdir -p /mnt/lustre/d21.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d21.sanity-sec': Permission denied
mkdir -p /mnt/lustre/d21.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d21.sanity-sec': Permission denied
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 21 (490s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 22: test nodemap mapped_trusted_admin fileops ========================================================== 16:38:00 (1713299880)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
mkdir -p /mnt/lustre/d22.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
mkdir -p /mnt/lustre/d22.sanity-sec
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3ec6800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b3ec6800.lru_size=clear
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 22 (222s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 23a: test mapped regular ACLs ========= 16:41:45 (1713300105)
SKIP: sanity-sec test_23a Need 2 clients at least
SKIP 23a (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 23b: test mapped default ACLs ========= 16:41:48 (1713300108)
SKIP: sanity-sec test_23b Need 2 clients at least
SKIP 23b (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 24: check nodemap proc files for LBUGs and Oopses ========================================================== 16:41:51 (1713300111)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
nodemap.active=1
nodemap.c0.admin_nodemap=0
nodemap.c0.audit_mode=1
nodemap.c0.deny_unknown=0
nodemap.c0.exports= [ { nid: 192.168.201.8@tcp, uuid: 31df6b58-6241-4415-ae67-874a576e7456 }, { nid: 192.168.201.8@tcp, uuid: 31df6b58-6241-4415-ae67-874a576e7456 }, ]
nodemap.c0.fileset=
nodemap.c0.forbid_encryption=0
nodemap.c0.id=39
nodemap.c0.idmap= [ { idtype: uid, client_id: 60003, fs_id: 60000 }, { idtype: uid, client_id: 60004, fs_id: 60002 }, { idtype: gid, client_id: 60003, fs_id: 60000 }, { idtype: gid, client_id: 60004, fs_id: 60002 } ]
nodemap.c0.map_mode=all
nodemap.c0.ranges= [ { id: 41, start_nid: 192.168.201.8@tcp, end_nid: 192.168.201.8@tcp } ]
nodemap.c0.rbac=file_perms,dne_ops,quota_ops,byfid_ops,chlg_ops,fscrypt_admin
nodemap.c0.readonly_mount=0
nodemap.c0.sepol=
nodemap.c0.squash_gid=65534
nodemap.c0.squash_projid=65534
nodemap.c0.squash_uid=65534
nodemap.c0.trusted_nodemap=0
nodemap.default.admin_nodemap=1
nodemap.default.audit_mode=1
nodemap.default.deny_unknown=0
nodemap.default.exports= [ { nid: 0@lo, uuid: lustre-MDT0000-mdtlov_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-mdtlov_UUID }, { nid: 0@lo, uuid: lustre-MDT0001-mdtlov_UUID }, { nid: 0@lo, uuid: lustre-MDT0001-mdtlov_UUID }, { nid: 0@lo, uuid: lustre-MDT0001-lwp-OST0001_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-lwp-OST0001_UUID }, { nid: 0@lo, uuid: lustre-MDT0001-lwp-OST0000_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-lwp-OST0000_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-mdtlov_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-lwp-MDT0001_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-lwp-MDT0000_UUID }, { nid: 0@lo, uuid: lustre-MDT0001-mdtlov_UUID }, ]
nodemap.default.fileset=
nodemap.default.forbid_encryption=0
nodemap.default.id=0
nodemap.default.map_mode=all
nodemap.default.readonly_mount=0
nodemap.default.squash_gid=65534
nodemap.default.squash_projid=65534
nodemap.default.squash_uid=65534
nodemap.default.trusted_nodemap=1
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 24 (74s)
debug_raw_pointers=0
debug_raw_pointers=0
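The dump in test 24 is an ordinary recursive parameter read; the same state can be inspected at any time (nodemap name illustrative):

    lctl get_param nodemap.c0.*      # idmap, ranges, squash ids, rbac, flags
    lctl get_param -n nodemap.active # 1 while nodemaps are being enforced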
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 25: test save and reload nodemap config ========================================================== 16:43:08 (1713300188)
Stopping clients: oleg108-client.virtnet /mnt/lustre (opts:)
Stopping client oleg108-client.virtnet /mnt/lustre opts:
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, test25.id = nodemap.test25.id=41
waiting 10 secs for sync
=== sanity-sec: start setup 16:44:09 (1713300249) ===
Checking servers environments
Checking clients oleg108-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg108-server'
oleg108-server: oleg108-server.virtnet: executing load_modules_local
oleg108-server: Loading modules from /home/green/git/lustre-release/lustre
oleg108-server: detected 4 online CPUs by sysfs
oleg108-server: Force libcfs to create 2 CPU partitions
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg108-server: mount.lustre: according to /etc/mtab /dev/mapper/mds1_flakey is already mounted on /mnt/lustre-mds1
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 17
Start of /dev/mapper/mds1_flakey on mds1 failed 17
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg108-server: mount.lustre: according to /etc/mtab /dev/mapper/mds2_flakey is already mounted on /mnt/lustre-mds2
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 17
Start of /dev/mapper/mds2_flakey on mds2 failed 17
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg108-server: mount.lustre: according to /etc/mtab /dev/mapper/ost1_flakey is already mounted on /mnt/lustre-ost1
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 17
seq.cli-lustre-OST0000-super.width=65536
Start of /dev/mapper/ost1_flakey on ost1 failed 17
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
oleg108-server: mount.lustre: according to /etc/mtab /dev/mapper/ost2_flakey is already mounted on /mnt/lustre-ost2
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 17
seq.cli-lustre-OST0001-super.width=65536
Start of /dev/mapper/ost2_flakey on ost2 failed 17
Starting client: oleg108-client.virtnet: -o user_xattr,flock oleg108-server@tcp:/lustre /mnt/lustre
Starting client oleg108-client.virtnet: -o user_xattr,flock oleg108-server@tcp:/lustre /mnt/lustre
Started clients oleg108-client.virtnet:
192.168.201.108@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012dbb2800.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012dbb2800.idle_timeout=debug
disable quota as required
osd-ldiskfs.track_declares_assert=1
=== sanity-sec: finish setup 16:44:31 (1713300271) ===
Stopping clients: oleg108-client.virtnet /mnt/lustre (opts:)
Stopping client oleg108-client.virtnet /mnt/lustre opts:
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
Starting client oleg108-client.virtnet: -o user_xattr,flock oleg108-server@tcp:/lustre /mnt/lustre
Started clients oleg108-client.virtnet:
192.168.201.108@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
PASS 25 (120s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity-sec test_26 skipping SLOW test 26
SKIP: sanity-sec test_27a skipping excluded test 27a (base 27)
SKIP: sanity-sec test_27b skipping excluded test 27b (base 27)
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 28: check shared key rotation method == 16:45:12 (1713300312)
SKIP: sanity-sec test_28 need shared key feature for this test
SKIP 28 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 29: check for missing shared key ====== 16:45:15 (1713300315)
SKIP: sanity-sec test_29 need shared key feature for this test
SKIP 29 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 30: check for invalid shared key ====== 16:45:18 (1713300318)
SKIP: sanity-sec test_30 need shared key feature for this test
SKIP 30 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity-sec test_26 skipping SLOW test 26
SKIP: sanity-sec test_27a skipping excluded test 27a (base 27)
SKIP: sanity-sec test_27b skipping excluded test 27b (base 27)
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 28: check shared key rotation method == 16:45:12 (1713300312)
SKIP: sanity-sec test_28 need shared key feature for this test
SKIP 28 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 29: check for missing shared key ====== 16:45:15 (1713300315)
SKIP: sanity-sec test_29 need shared key feature for this test
SKIP 29 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 30: check for invalid shared key ====== 16:45:18 (1713300318)
SKIP: sanity-sec test_30 need shared key feature for this test
SKIP 30 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 30b: basic test of all different SSK flavors ========================================================== 16:45:21 (1713300321)
SKIP: sanity-sec test_30b need shared key feature for this test
SKIP 30b (0s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 31: client mount option '-o network' == 16:45:23 (1713300323)
SKIP: sanity-sec test_31 without lnetctl support.
SKIP 31 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 32: check for mgssec ================== 16:45:26 (1713300326)
SKIP: sanity-sec test_32 need shared key feature for this test
SKIP 32 (0s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 33: correct srpc flags for MGS connection ========================================================== 16:45:28 (1713300328)
SKIP: sanity-sec test_33 need shared key feature for this test
SKIP 33 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 34: deny_unknown on default nodemap === 16:45:30 (1713300330)
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.deny_unknown = nodemap.default.deny_unknown=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.deny_unknown = nodemap.default.deny_unknown=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 34 (48s)
debug_raw_pointers=0
debug_raw_pointers=0
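Test 34 toggles deny_unknown, which makes servers reject requests from NIDs that fall into no nodemap. A minimal sketch of the toggles behind the log lines above (run on the MGS; as the log shows, each change takes a sync delay to reach all targets):

    # turn the nodemap feature on, then refuse unmapped clients
    lctl nodemap_activate 1
    lctl nodemap_modify --name default --property deny_unknown --value 1
    # revert
    lctl nodemap_modify --name default --property deny_unknown --value 0
    lctl nodemap_activate 0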
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 35: Check permissions when accessing changelogs ========================================================== 16:46:21 (1713300381)
mdd.lustre-MDT0000.changelog_mask=+hsm
mdd.lustre-MDT0001.changelog_mask=+hsm
Registered 2 changelog users: 'cl1 cl1'
mdd.lustre-MDT0000.changelog_mask=ALL
mdd.lustre-MDT0001.changelog_mask=ALL
lustre-MDT0000.1 02MKDIR 20:46:24.686966208 2024.04.16 0x0 t=[0x200000403:0x1:0x0] j=mkdir.0 ef=0xf u=0:0 nid=192.168.201.8@tcp p=[0x200000007:0x1:0x0] d35.sanity-sec
lustre-MDT0000.2 01CREAT 20:46:24.696917264 2024.04.16 0x0 t=[0x200000403:0x2:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.201.8@tcp p=[0x200000403:0x1:0x0] f35.sanity-sec
lustre-MDT0000.3 10OPEN 20:46:24.697054787 2024.04.16 0x4a t=[0x200000403:0x2:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.201.8@tcp m=-w- p=[0x200000403:0x1:0x0]
lustre-MDT0000.4 11CLOSE 20:46:24.706846521 2024.04.16 0x42 t=[0x200000403:0x2:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.201.8@tcp
lustre-MDT0000: clear the changelog for cl1 of all records
lustre-MDT0001: clear the changelog for cl1 of all records
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
lfs changelog: cannot access changelog: Permission denied
lfs changelog: cannot access changelog: Permission denied
lustre-MDT0000: clear the changelog for cl1 of all records
lfs changelog_clear: cannot purge records for 'cl1': Permission denied (13)
changelog_clear error: Permission denied
lustre-MDT0001: clear the changelog for cl1 of all records
lfs changelog_clear: cannot purge records for 'cl1': Permission denied (13)
changelog_clear error: Permission denied
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
lustre-MDT0001: clear the changelog for cl1 of all records
lustre-MDT0001: Deregistered changelog user #1
lustre-MDT0000: clear the changelog for cl1 of all records
lustre-MDT0000: Deregistered changelog user #1
PASS 35 (88s)
debug_raw_pointers=0
debug_raw_pointers=0
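The Permission denied results above come from reading and purging changelogs through a nodemap that strips admin rights (c0.admin_nodemap=0). A minimal sketch of the underlying commands, assuming the changelog user cl1 and MDT names from this log:

    # register a changelog consumer on an MDT (run on the MDS)
    lctl --device lustre-MDT0000 changelog_register
    # read and then purge records as that consumer (run on a client)
    lfs changelog lustre-MDT0000
    lfs changelog_clear lustre-MDT0000 cl1 0
    # drop the consumer when done (run on the MDS)
    lctl --device lustre-MDT0000 changelog_deregister cl1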
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 36: control if clients can use encryption ========================================================== 16:47:51 (1713300471)
SKIP: sanity-sec test_36 client encryption not supported
SKIP 36 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 37: simple encrypted file ============= 16:47:54 (1713300474)
SKIP: sanity-sec test_37 client encryption not supported
SKIP 37 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 38: encrypted file with hole ========== 16:47:58 (1713300478)
SKIP: sanity-sec test_38 client encryption not supported
SKIP 38 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 39: rewrite data in already encrypted page ========================================================== 16:48:01 (1713300481)
SKIP: sanity-sec test_39 client encryption not supported
SKIP 39 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 40: exercise size of encrypted file === 16:48:04 (1713300484)
SKIP: sanity-sec test_40 client encryption not supported
SKIP 40 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 41: test race on encrypted file size (1) ========================================================== 16:48:07 (1713300487)
SKIP: sanity-sec test_41 client encryption not supported
SKIP 41 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 42: test race on encrypted file size (2) ========================================================== 16:48:10 (1713300490)
SKIP: sanity-sec test_42 client encryption not supported
SKIP 42 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 43: test race on encrypted file size (3) ========================================================== 16:48:13 (1713300493)
SKIP: sanity-sec test_43 client encryption not supported
SKIP 43 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 44: encrypted file access semantics: direct IO ========================================================== 16:48:17 (1713300497)
SKIP: sanity-sec test_44 client encryption not supported
SKIP 44 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 45: encrypted file access semantics: MMAP ========================================================== 16:48:20 (1713300500)
SKIP: sanity-sec test_45 client encryption not supported
SKIP 45 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 46: encrypted file access semantics without key ========================================================== 16:48:23 (1713300503)
SKIP: sanity-sec test_46 client encryption not supported
SKIP 46 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 47: encrypted file access semantics: rename/link ========================================================== 16:48:26 (1713300506)
SKIP: sanity-sec test_47 client encryption not supported
SKIP 47 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 48a: encrypted file access semantics: truncate ========================================================== 16:48:30 (1713300510)
SKIP: sanity-sec test_48a client encryption not supported
SKIP 48a (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 48b: encrypted file: concurrent truncate ========================================================== 16:48:33 (1713300513)
SKIP: sanity-sec test_48b client encryption not supported
SKIP 48b (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 49: Avoid getxattr for encryption context ========================================================== 16:48:36 (1713300516)
SKIP: sanity-sec test_49 client encryption not supported
SKIP 49 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 50: DoM encrypted file ================ 16:48:40 (1713300520)
SKIP: sanity-sec test_50 client encryption not supported
SKIP 50 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 51: FS capabilities =================== 16:48:43 (1713300523)
mdt.lustre-MDT0000.enable_cap_mask=0xf
mdt.lustre-MDT0001.enable_cap_mask=0xf
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/chown] [500] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
/mnt/lustre/d51.sanity-sec/chown: changing ownership of '/mnt/lustre/d51.sanity-sec/f51.sanity-sec': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/chown] [500] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/touch] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
/mnt/lustre/d51.sanity-sec/touch: cannot touch '/mnt/lustre/d51.sanity-sec/f51.sanity-sec': Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/touch] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
/mnt/lustre/d51.sanity-sec/cat: /mnt/lustre/d51.sanity-sec/f51.sanity-sec: Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
/mnt/lustre/d51.sanity-sec/cat: /mnt/lustre/d51.sanity-sec/f51.sanity-sec: Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
mdt.lustre-MDT0000.enable_cap_mask=0x0
mdt.lustre-MDT0001.enable_cap_mask=0x0
PASS 51 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
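Test 51 flips mdt.*.enable_cap_mask between 0xf and 0x0; the mask appears to select which file-related Linux capabilities (CAP_CHOWN and friends) the MDT honors from otherwise unprivileged client processes, which is why each denied chown/touch/cat in the trace is paired with an attempt that succeeds. A minimal sketch of the toggle itself, hedged in that the exact capability bits behind 0xf are not spelled out in this log:

    # honor the capability bits on both MDTs (run on the MDS)
    lctl set_param mdt.lustre-MDT0000.enable_cap_mask=0xf mdt.lustre-MDT0001.enable_cap_mask=0xf
    # restore the default
    lctl set_param mdt.lustre-MDT0000.enable_cap_mask=0x0 mdt.lustre-MDT0001.enable_cap_mask=0x0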
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 52: Mirrored encrypted file =========== 16:48:49 (1713300529)
SKIP: sanity-sec test_52 client encryption not supported
SKIP 52 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 53: Mixed PAGE_SIZE clients =========== 16:48:53 (1713300533)
SKIP: sanity-sec test_53 client encryption not supported
SKIP 53 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 54: Encryption policies with fscrypt == 16:48:56 (1713300536)
SKIP: sanity-sec test_54 client encryption not supported
SKIP 54 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 55: access with seteuid =============== 16:49:00 (1713300540)
192.168.201.108@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg108-client.virtnet /mnt/lustre (opts:)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
oleg108-server: error: c0 not existing nodemap name
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 1
On MGS 192.168.201.108, c0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
Starting client oleg108-client.virtnet: -o user_xattr,flock oleg108-server@tcp:/lustre /mnt/lustre
Started clients oleg108-client.virtnet: 192.168.201.108@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Initially root ruid:rgid 0:0, euid:egid 0:0
Groups 0 - root,
To switch to effective sanityusr uid:gid 500:500
Groups 500 - sanityusr,
Now root ruid:rgid 0:0, euid:egid 500:500
Groups 500 - sanityusr,
File /mnt/lustre/d55.sanity-sec/sanityusr/testdir_groups/file successfully written
192.168.201.108@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg108-client.virtnet /mnt/lustre (opts:)
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
Starting client: oleg108-client.virtnet: -o user_xattr,flock oleg108-server@tcp:/lustre /mnt/lustre
PASS 55 (107s)
debug_raw_pointers=0
debug_raw_pointers=0
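Test 55 runs with c0.admin_nodemap=0 and c0.trusted_nodemap=1: root on the client is squashed while non-root IDs pass through unmapped, so the test can still write after seteuid'ing to sanityusr. A minimal sketch of that nodemap setup (run on the MGS; the name c0 and the client NID are taken from this log, and the single-NID range syntax is an assumption):

    lctl nodemap_add c0
    lctl nodemap_add_range --name c0 --range 192.168.201.8@tcp
    lctl nodemap_modify --name c0 --property admin --value 0
    lctl nodemap_modify --name c0 --property trusted --value 1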
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 56: FIEMAP on encrypted file ========== 16:50:50 (1713300650)
SKIP: sanity-sec test_56 client encryption not supported
SKIP 56 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 57: security.c/encryption.c xattr protection ========================================================== 16:50:53 (1713300653)
SKIP: sanity-sec test_57 client encryption not supported
SKIP 57 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 58: access to enc file's xattrs ======= 16:50:56 (1713300656)
SKIP: sanity-sec test_58 client encryption not supported
SKIP 58 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 59a: mirror resync of encrypted files without key ========================================================== 16:51:00 (1713300660)
SKIP: sanity-sec test_59a client encryption not supported
SKIP 59a (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 59b: migrate/extend/split of encrypted files without key ========================================================== 16:51:03 (1713300663)
SKIP: sanity-sec test_59b client encryption not supported
SKIP 59b (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 59c: MDT migrate of encrypted files without key ========================================================== 16:51:07 (1713300667)
SKIP: sanity-sec test_59c client encryption not supported
SKIP 59c (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 60: Subdirmount of encrypted dir ====== 16:51:09 (1713300669)
SKIP: sanity-sec test_60 client encryption not supported
SKIP 60 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 61: Nodemap enforces read-only mount == 16:51:12 (1713300672)
affected facets: mds1
oleg108-server: oleg108-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
oleg108-server: *.lustre-MDT0000.recovery_status status: INACTIVE
affected facets: mds2
oleg108-server: oleg108-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475
oleg108-server: *.lustre-MDT0001.recovery_status status: INACTIVE
192.168.201.108@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg108-client.virtnet /mnt/lustre (opts:)
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
oleg108-server: error: c0 not existing nodemap name
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 1
On MGS 192.168.201.108, c0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
Starting client oleg108-client.virtnet: -o user_xattr,flock,rw oleg108-server@tcp:/lustre /mnt/lustre
Started clients oleg108-client.virtnet: 192.168.201.108@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
192.168.201.108@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg108-client.virtnet /mnt/lustre (opts:)
On MGS 192.168.201.108, c0.readonly_mount = nodemap.c0.readonly_mount=1
waiting 10 secs for sync
Starting client oleg108-client.virtnet: -o user_xattr,flock oleg108-server@tcp:/lustre /mnt/lustre
Started clients oleg108-client.virtnet: 192.168.201.108@tcp:/lustre on /mnt/lustre type lustre (ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
a
/home/green/git/lustre-release/lustre/tests/sanity-sec.sh: line 5506: /mnt/lustre/d61.sanity-sec/f61.sanity-sec: Read-only file system
192.168.201.108@tcp:/lustre /mnt/lustre lustre ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg108-client.virtnet /mnt/lustre (opts:)
Starting client oleg108-client.virtnet: -o user_xattr,flock,rw oleg108-server@tcp:/lustre /mnt/lustre
Started clients oleg108-client.virtnet: 192.168.201.108@tcp:/lustre on /mnt/lustre type lustre (ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
a
/home/green/git/lustre-release/lustre/tests/sanity-sec.sh: line 5515: /mnt/lustre/d61.sanity-sec/f61.sanity-sec: Read-only file system
192.168.201.108@tcp:/lustre /mnt/lustre lustre ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg108-client.virtnet /mnt/lustre (opts:)
Starting client oleg108-client.virtnet: -o user_xattr,flock,ro oleg108-server@tcp:/lustre /mnt/lustre
Started clients oleg108-client.virtnet: 192.168.201.108@tcp:/lustre on /mnt/lustre type lustre (ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
a
/home/green/git/lustre-release/lustre/tests/sanity-sec.sh: line 5523: /mnt/lustre/d61.sanity-sec/f61.sanity-sec: Read-only file system
192.168.201.108@tcp:/lustre /mnt/lustre lustre ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg108-client.virtnet /mnt/lustre (opts:)
Starting client oleg108-client.virtnet: -o user_xattr,flock oleg108-server@tcp:/lustre /mnt/lustre
Started clients oleg108-client.virtnet: 192.168.201.108@tcp:/lustre on /mnt/lustre type lustre (ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
192.168.201.108@tcp:/lustre /mnt/lustre lustre ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
a
/home/green/git/lustre-release/lustre/tests/sanity-sec.sh: line 5533: /mnt/lustre/d61.sanity-sec/f61.sanity-sec: Read-only file system
192.168.201.108@tcp:/lustre /mnt/lustre lustre ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg108-client.virtnet /mnt/lustre (opts:)
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
Starting client: oleg108-client.virtnet: -o user_xattr,flock oleg108-server@tcp:/lustre /mnt/lustre
PASS 61 (122s)
debug_raw_pointers=0
debug_raw_pointers=0
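The (ro,...) mount flags above show the server demoting the client to read-only no matter whether it asked for rw, ro, or nothing: once readonly_mount is set on its nodemap, every mount comes up ro and every write attempt fails. A minimal sketch of the toggle (run on the MGS; c0 from this log):

    # force clients in nodemap c0 to mount read-only
    lctl nodemap_modify --name c0 --property readonly_mount --value 1
    # allow read-write mounts again
    lctl nodemap_modify --name c0 --property readonly_mount --value 0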
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 62: e2fsck with encrypted files ======= 16:53:16 (1713300796)
SKIP: sanity-sec test_62 client encryption not supported
SKIP 62 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 63: fid2path with encrypted files ===== 16:53:19 (1713300799)
SKIP: sanity-sec test_63 client encryption not supported
SKIP 63 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 64a: Nodemap enforces file_perms RBAC roles ========================================================== 16:53:23 (1713300803)
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
oleg108-server: error: c0 not existing nodemap name
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 1
On MGS 192.168.201.108, c0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.rbac = nodemap.c0.rbac=file_perms
waiting 10 secs for sync
+ chmod 777 /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec
+ chown quota_usr:quota_usr /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec
+ chgrp quota_usr /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec
+ /home/green/git/lustre-release/lustre/utils/lfs project -p 1000 /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec
+ set +vx
On MGS 192.168.201.108, c0.rbac = nodemap.c0.rbac=none
waiting 10 secs for sync
+ chmod 777 /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec
chmod: changing permissions of '/mnt/lustre/d64a.sanity-sec/f64a.sanity-sec': Operation not permitted
+ chown quota_usr:quota_usr /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec
chown: changing ownership of '/mnt/lustre/d64a.sanity-sec/f64a.sanity-sec': Operation not permitted
+ chgrp quota_usr /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec
chgrp: changing group of '/mnt/lustre/d64a.sanity-sec/f64a.sanity-sec': Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs project -p 1000 /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec
lfs: failed to set xattr for '/mnt/lustre/d64a.sanity-sec/f64a.sanity-sec': Operation not permitted
+ set +vx
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 64a (128s)
debug_raw_pointers=0
debug_raw_pointers=0
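The "+"-prefixed lines are the test's own shell trace (note the closing "set +vx"): the same chmod/chown/chgrp/lfs project sequence succeeds with rbac=file_perms and fails with rbac=none. A minimal sketch of switching the role set (run on the MGS):

    # grant only ownership/permission changes to nodemap c0
    lctl nodemap_modify --name c0 --property rbac --value file_perms
    # strip all roles
    lctl nodemap_modify --name c0 --property rbac --value none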
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 64b: Nodemap enforces dne_ops RBAC roles ========================================================== 16:55:33 (1713300933)
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
oleg108-server: error: c0 not existing nodemap name
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 1
On MGS 192.168.201.108, c0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
mdt.lustre-MDT0000.enable_dir_restripe=1
mdt.lustre-MDT0001.enable_dir_restripe=1
On MGS 192.168.201.108, c0.rbac = nodemap.c0.rbac=dne_ops
waiting 10 secs for sync
+ /home/green/git/lustre-release/lustre/utils/lfs mkdir -i 1 /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d
+ rmdir /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d
+ /home/green/git/lustre-release/lustre/utils/lfs mkdir -c 2 /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d
+ rmdir /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d
+ mkdir /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d
+ /home/green/git/lustre-release/lustre/utils/lfs setdirstripe -c 2 /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d
+ rmdir /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d
+ /home/green/git/lustre-release/lustre/utils/lfs migrate -m 1 /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d_for_migr
+ touch /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d_mdt0/fileA
+ mv /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d_mdt0/fileA /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d_mdt1/
+ set +vx
On MGS 192.168.201.108, c0.rbac = nodemap.c0.rbac=none
waiting 10 secs for sync
+ /home/green/git/lustre-release/lustre/utils/lfs mkdir -i 1 /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d
lfs mkdir: dirstripe error on '/mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d': Operation not permitted
lfs setdirstripe: cannot create dir '/mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d': Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs mkdir -c 2 /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d
lfs mkdir: dirstripe error on '/mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d': Operation not permitted
lfs setdirstripe: cannot create dir '/mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d': Operation not permitted
+ mkdir /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d
+ /home/green/git/lustre-release/lustre/utils/lfs setdirstripe -c 2 /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d
lfs setdirstripe: dirstripe error on '/mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d': Operation not permitted
lfs setdirstripe: cannot create dir '/mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d': Operation not permitted
+ rmdir /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d
+ /home/green/git/lustre-release/lustre/utils/lfs migrate -m 1 /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d_for_migr
lfs migrate: /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d_for_migr migrate failed: Operation not permitted (1)
+ touch /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d_mdt0/fileA
+ mv /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d_mdt0/fileA /mnt/lustre/d64b.sanity-sec/f64b.sanity-sec.d_mdt1/
+ set +vx
mdt.lustre-MDT0000.enable_dir_restripe=0
mdt.lustre-MDT0001.enable_dir_restripe=0
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 64b (129s)
debug_raw_pointers=0
debug_raw_pointers=0
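With rbac=dne_ops the trace above can place and move directories across MDTs; with rbac=none the same DNE operations return Operation not permitted. The operations are plain lfs calls; a sketch with placeholder paths:

    # create a directory on MDT index 1, then a directory striped over 2 MDTs
    lfs mkdir -i 1 /mnt/lustre/somedir
    lfs mkdir -c 2 /mnt/lustre/otherdir
    # move an existing directory tree to MDT index 1
    lfs migrate -m 1 /mnt/lustre/somedir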
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 64c: Nodemap enforces quota_ops RBAC roles ========================================================== 16:57:44 (1713301064)
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
oleg108-server: error: c0 not existing nodemap name
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 1
On MGS 192.168.201.108, c0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.rbac = nodemap.c0.rbac=quota_ops
waiting 10 secs for sync
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr --delete /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr --delete /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 --delete /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -U -b 10G -B 11G -i 100K -I 105K /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -U -b 0 -B 0 -i 0 -I 0 /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -G -b 10G -B 11G -i 100K -I 105K /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -G -b 0 -B 0 -i 0 -I 0 /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -P -b 10G -B 11G -i 100K -I 105K /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -P -b 0 -B 0 -i 0 -I 0 /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr -D /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr --delete /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr -D /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr --delete /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 -D /mnt/lustre
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 --delete /mnt/lustre
+ set +vx
On MGS 192.168.201.108, c0.rbac = nodemap.c0.rbac=none
waiting 10 secs for sync
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr --delete /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr --delete /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 --delete /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -U -b 10G -B 11G -i 100K -I 105K /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -G -b 10G -B 11G -i 100K -I 105K /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -P -b 10G -B 11G -i 100K -I 105K /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr -D /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr --delete /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr -D /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr --delete /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 -D /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 --delete /mnt/lustre
lfs setquota: quotactl failed: Operation not permitted
setquota failed: Operation not permitted
+ set +vx
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 64c (128s)
debug_raw_pointers=0
debug_raw_pointers=0
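For reading the setquota trace above: -b/-B are soft/hard block limits (kilobytes when given as bare numbers) and -i/-I are soft/hard inode limits; -U/-G/-P set filesystem-wide default limits per user/group/project, -D points an ID back at those defaults, and --delete drops its quota record entirely (the role of -D and --delete here is read off the trace, not independently verified). For example:

    # 300 MiB soft / ~302 MiB hard, 10000/11000 inodes for one user
    lfs setquota -u sanityusr -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre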
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 64d: Nodemap enforces byfid_ops RBAC roles ========================================================== 16:59:54 (1713301194)
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
oleg108-server: error: c0 not existing nodemap name
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 1
On MGS 192.168.201.108, c0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.rbac = nodemap.c0.rbac=byfid_ops
waiting 10 secs for sync
+ /home/green/git/lustre-release/lustre/utils/lfs fid2path /mnt/lustre '[0x240000404:0xe:0x0]'
/mnt/lustre/d64d.sanity-sec/f64d.sanity-sec
+ cat '/mnt/lustre/.lustre/fid/[0x240000404:0xe:0x0]'
+ lfs rmfid /mnt/lustre '[0x240000404:0xe:0x0]'
+ set +vx
On MGS 192.168.201.108, c0.rbac = nodemap.c0.rbac=none
waiting 10 secs for sync
+ /home/green/git/lustre-release/lustre/utils/lfs fid2path /mnt/lustre '[0x240000404:0xf:0x0]'
/mnt/lustre/d64d.sanity-sec/f64d.sanity-sec
+ cat '/mnt/lustre/.lustre/fid/[0x240000404:0xf:0x0]'
cat: /mnt/lustre/.lustre/fid/[0x240000404:0xf:0x0]: Operation not permitted
+ lfs rmfid /mnt/lustre '[0x240000404:0xf:0x0]'
lfs rmfid: cannot remove [0x240000404:0xf:0x0]: Operation not permitted
+ set +vx
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 64d (130s)
debug_raw_pointers=0
debug_raw_pointers=0
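byfid_ops covers the by-FID access paths exercised above: resolving a FID back to a pathname, opening a file through the .lustre/fid namespace, and unlinking by FID. A minimal round trip (the file path is a placeholder):

    fid=$(lfs path2fid /mnt/lustre/somefile)   # e.g. [0x240000404:0xe:0x0]
    lfs fid2path /mnt/lustre "$fid"            # FID back to a pathname
    cat "/mnt/lustre/.lustre/fid/$fid"         # open by FID
    lfs rmfid /mnt/lustre "$fid"               # unlink by FID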
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 64e: Nodemap enforces chlg_ops RBAC roles ========================================================== 17:02:06 (1713301326)
On MGS 192.168.201.108, active = nodemap.active=1
waiting 10 secs for sync
oleg108-server: error: c0 not existing nodemap name
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 1
On MGS 192.168.201.108, c0.id =
waiting 10 secs for sync
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.admin_nodemap = nodemap.c0.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.108, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
mdd.lustre-MDT0000.changelog_mask=+hsm
mdd.lustre-MDT0001.changelog_mask=+hsm
Registered 2 changelog users: 'cl2 cl2'
mdd.lustre-MDT0000.changelog_mask=ALL
mdd.lustre-MDT0001.changelog_mask=ALL
On MGS 192.168.201.108, c0.rbac = nodemap.c0.rbac=chlg_ops
waiting 10 secs for sync
changelogs dump
lustre-MDT0000.5 01CREAT 21:03:18.714190312 2024.04.16 0x0 t=[0x200000406:0x11:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.201.8@tcp p=[0x200000406:0x10:0x0] f64e.sanity-sec
lustre-MDT0000.6 10OPEN 21:03:18.714381272 2024.04.16 0x4a t=[0x200000406:0x11:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.201.8@tcp m=-w- p=[0x200000406:0x10:0x0]
lustre-MDT0000.7 11CLOSE 21:03:18.729579612 2024.04.16 0x42 t=[0x200000406:0x11:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.201.8@tcp
lustre-MDT0001.1 02MKDIR 21:03:18.703215951 2024.04.16 0x0 t=[0x240000404:0x10:0x0] j=mkdir.0 ef=0xf u=0:0 nid=192.168.201.8@tcp p=[0x200000406:0x10:0x0] f64e.sanity-sec.d
changelogs clear
lustre-MDT0000: clear the changelog for cl2 of all records
lustre-MDT0001: clear the changelog for cl2 of all records
On MGS 192.168.201.108, c0.rbac = nodemap.c0.rbac=none
waiting 10 secs for sync
changelogs dump
lfs changelog: cannot access changelog: Permission denied
lfs changelog: cannot access changelog: Permission denied
changelogs clear
lustre-MDT0000: clear the changelog for cl2 of all records
lfs changelog_clear: cannot purge records for 'cl2': Permission denied (13)
changelog_clear error: Permission denied
lustre-MDT0001: clear the changelog for cl2 of all records
lfs changelog_clear: cannot purge records for 'cl2': Permission denied (13)
changelog_clear error: Permission denied
On MGS 192.168.201.108, c0.rbac = nodemap.c0.rbac=file_perms,dne_ops,quota_ops,byfid_ops,chlg_ops,fscrypt_admin
waiting 10 secs for sync
lustre-MDT0001: clear the changelog for cl2 of all records
lustre-MDT0001: Deregistered changelog user #2
lustre-MDT0000: clear the changelog for cl2 of all records
lustre-MDT0000: Deregistered changelog user #2
On MGS 192.168.201.108, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.108, active = nodemap.active=0
waiting 10 secs for sync
PASS 64e (144s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 64f: Nodemap enforces fscrypt_admin RBAC roles ========================================================== 17:04:32 (1713301472)
SKIP: sanity-sec test_64f Need enc support, skip fscrypt_admin role
SKIP 64f (1s)
debug_raw_pointers=0
debug_raw_pointers=0
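At teardown test 64e restores the full role list, showing that rbac takes a comma-separated set, so roles can be granted in any combination:

    # re-grant every role (the value string is taken verbatim from this log)
    lctl nodemap_modify --name c0 --property rbac \
        --value file_perms,dne_ops,quota_ops,byfid_ops,chlg_ops,fscrypt_admin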
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 65: lfs find -printf %La and --attrs support ========================================================== 17:04:35 (1713301475)
SKIP: sanity-sec test_65 client encryption not supported
SKIP 65 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 68: all config logs are processed ===== 17:04:39 (1713301479)
192.168.201.108@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg108-client.virtnet /mnt/lustre (opts:)
fail_loc=0x8000051d
fail_val=20
Starting client oleg108-client.virtnet: -o user_xattr,flock oleg108-server@tcp:/lustre /mnt/lustre
Started clients oleg108-client.virtnet: 192.168.201.108@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
192.168.201.108@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg108-client.virtnet /mnt/lustre (opts:)
fail_loc=0
fail_val=0
Starting client: oleg108-client.virtnet: -o user_xattr,flock oleg108-server@tcp:/lustre /mnt/lustre
PASS 68 (26s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 69: check upcall incorrect values ===== 17:05:06 (1713301506)
mdt.lustre-MDT0000.identity_upcall=/path/to/prog
oleg108-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0000/identity_upcall=prog: Invalid argument
oleg108-server: error: set_param: setting 'mdt/lustre-MDT0000/identity_upcall'='prog': Invalid argument
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 22
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0000.identity_upcall=none
mdt.lustre-MDT0000.identity_upcall=NONE
PASS 69 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 70: targets have local copy of sptlrpc llog ========================================================== 17:05:12 (1713301512)
SKIP: sanity-sec test_70 need shared key feature for this test
SKIP 70 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
cleanup: ======================================================
running as uid/gid/euid/egid 500/500/500/500, groups: [ls] [/mnt/lustre]
d17.sanity-sec d18.sanity-sec d21.sanity-sec d35.sanity-sec d51.sanity-sec d55.sanity-sec d61.sanity-sec d64a.sanity-sec d64b.sanity-sec d64c.sanity-sec d64d.sanity-sec d64e.sanity-sec
running as uid/gid/euid/egid 501/501/501/501, groups: [ls] [/mnt/lustre]
d17.sanity-sec d18.sanity-sec d21.sanity-sec d35.sanity-sec d51.sanity-sec d55.sanity-sec d61.sanity-sec d64a.sanity-sec d64b.sanity-sec d64c.sanity-sec d64d.sanity-sec d64e.sanity-sec
== sanity-sec test complete, duration 5066 sec =========== 17:05:15 (1713301515)
=== sanity-sec: start cleanup 17:05:15 (1713301515) ===
=== sanity-sec: finish cleanup 17:05:16 (1713301516) ===
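One detail worth keeping from test 69 above: identity_upcall accepts an absolute path or NONE/none, while a bare program name is rejected with Invalid argument (EINVAL, matching the ssh exit code 22). For reference:

    # accepted: absolute path, or NONE in either case
    lctl set_param mdt.lustre-MDT0000.identity_upcall=/path/to/prog
    lctl set_param mdt.lustre-MDT0000.identity_upcall=NONE
    # rejected with Invalid argument: a relative name
    lctl set_param mdt.lustre-MDT0000.identity_upcall=prog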