-----============= acceptance-small: sanity-sec ============----- Fri Apr 19 05:51:09 EDT 2024
client=34553369 MDS=34553369 OSS=34553369
excepting tests: 27
skipping tests SLOW=no: 26
was USER0=sanityusr:x:500:500::/home/sanityusr:/bin/bash
was USER1=sanityusr1:x:501:501::/home/sanityusr1:/bin/bash
now USER0=sanityusr=500:500, USER1=sanityusr1=501:501
=== sanity-sec: start setup 05:51:17 (1713520277) ===
oleg341-client.virtnet: executing check_config_client /mnt/lustre
oleg341-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg341-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b6cba000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b6cba000.idle_timeout=debug
disable quota as required
oleg341-server: oleg341-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
=== sanity-sec: finish setup 05:51:29 (1713520289) ===
without GSS support
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 0: uid permission ======================================================================================= 05:51:33 (1713520293)
running as uid/gid/euid/egid 500/500/500/500, groups: [ls] [-ali] [/mnt/lustre]
total 22
144115188193296385 drwxr-xr-x 4 root root 10752 Apr 19 05:51 .
13872 drwxr-xr-x 3 root root 0 Apr 19 05:50 ..
144115205272502273 drwxr-xr-x 2 sanityusr root 11776 Apr 19 05:51 d0.sanity-sec
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/f0]
touch: cannot touch '/mnt/lustre/f0': Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0.sanity-sec/f1]
running as uid/gid/euid/egid 501/501/501/501, groups: [touch] [/mnt/lustre/d0.sanity-sec/f2]
touch: cannot touch '/mnt/lustre/d0.sanity-sec/f2': Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0.sanity-sec/f4]
running as uid/gid/euid/egid 501/501/501/501, groups: [touch] [/mnt/lustre/d0.sanity-sec/f5]
touch: cannot touch '/mnt/lustre/d0.sanity-sec/f5': Permission denied
PASS 0 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 1: setuid/gid ======================================================================================= 05:51:39 (1713520299)
SKIP: sanity-sec test_1 without GSS support.
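For reference, test 0 above boils down to plain POSIX ownership checks through the client mount: uid 500 may create files only under its own directory d0.sanity-sec (owned by sanityusr, mode drwxr-xr-x), while the filesystem root /mnt/lustre is root-owned. A minimal manual reproduction, assuming the runas helper shipped in lustre/tests and the same mount point:

  # expected to fail: /mnt/lustre itself is owned by root, mode 0755
  runas -u 500 -g 500 touch /mnt/lustre/f0
  # expected to succeed: d0.sanity-sec is owned by uid 500 (sanityusr)
  runas -u 500 -g 500 touch /mnt/lustre/d0.sanity-sec/f1
  # expected to fail: uid 501 has no write permission in d0.sanity-sec
  runas -u 501 -g 501 touch /mnt/lustre/d0.sanity-sec/f2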
SKIP 1 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 4: set supplementary group ========================================================================= 05:51:43 (1713520303) /home/green/git/lustre-release/lustre/tests/sanity-sec.sh: illegal option -- p running as uid/gid/euid/egid 500/500/500/500, groups: [ls] [/mnt/lustre/d4.sanity-sec] running as uid/gid/euid/egid 501/501/501/501, groups: 1 2 500 [ls] [/mnt/lustre/d4.sanity-sec] running as uid/gid/euid/egid 501/501/501/501, groups: 1 2 [ls] [/mnt/lustre/d4.sanity-sec] ls: cannot open directory /mnt/lustre/d4.sanity-sec: Permission denied PASS 4 (4s) debug_raw_pointers=0 debug_raw_pointers=0 On MGS 192.168.203.141, default.squash_uid = nodemap.default.squash_uid=65534 waiting 10 secs for sync On MGS 192.168.203.141, default.squash_gid = nodemap.default.squash_gid=65534 waiting 10 secs for sync On MGS 192.168.203.141, default.squash_projid = nodemap.default.squash_projid=65534 waiting 10 secs for sync debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 7: nodemap create and delete ========== 05:52:24 (1713520344) On MGS 192.168.203.141, default.squash_uid = nodemap.default.squash_uid=65534 waiting 10 secs for sync On MGS 192.168.203.141, default.squash_gid = nodemap.default.squash_gid=65534 waiting 10 secs for sync On MGS 192.168.203.141, 02271_0.id = nodemap.02271_0.id=1 waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = nodemap.02271_1.id=2 waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = nodemap.02271_2.id=3 waiting 10 secs for sync On MGS 192.168.203.141, 02271_0.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = waiting 10 secs for sync PASS 7 (97s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 8: nodemap reject duplicates ========== 05:54:04 (1713520444) On MGS 192.168.203.141, default.squash_uid = nodemap.default.squash_uid=65534 waiting 10 secs for sync On MGS 192.168.203.141, default.squash_gid = nodemap.default.squash_gid=65534 waiting 10 secs for sync On MGS 192.168.203.141, 02271_0.id = nodemap.02271_0.id=4 waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = nodemap.02271_1.id=5 waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = nodemap.02271_2.id=6 waiting 10 secs for sync On MGS 192.168.203.141, default.squash_uid = nodemap.default.squash_uid=65534 waiting 10 secs for sync On MGS 192.168.203.141, default.squash_gid = nodemap.default.squash_gid=65534 waiting 10 secs for sync oleg341-server: error: 02271_0 existing nodemap name pdsh@oleg341-client: oleg341-server: ssh exited with exit code 1 nodemap_add 02271_0 failed with 1 On MGS 192.168.203.141, 02271_0.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = waiting 10 secs for sync PASS 8 (121s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 9: nodemap range add ================== 05:56:08 (1713520568) On MGS 192.168.203.141, default.squash_uid = nodemap.default.squash_uid=65534 waiting 10 secs for sync On MGS 192.168.203.141, default.squash_gid = nodemap.default.squash_gid=65534 waiting 10 secs for sync On MGS 192.168.203.141, 02271_0.id = nodemap.02271_0.id=7 waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = nodemap.02271_1.id=8 waiting 10 secs for sync On MGS 192.168.203.141, 
02271_2.id = nodemap.02271_2.id=9
waiting 10 secs for sync
On MGS 192.168.203.141, 02271_0.id =
waiting 10 secs for sync
On MGS 192.168.203.141, 02271_1.id =
waiting 10 secs for sync
On MGS 192.168.203.141, 02271_2.id =
waiting 10 secs for sync
PASS 9 (102s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 10a: nodemap reject duplicate ranges == 05:57:53 (1713520673)
On MGS 192.168.203.141, default.squash_uid = nodemap.default.squash_uid=65534
waiting 10 secs for sync
On MGS 192.168.203.141, default.squash_gid = nodemap.default.squash_gid=65534
waiting 10 secs for sync
On MGS 192.168.203.141, 02271_0.id = nodemap.02271_0.id=10
waiting 10 secs for sync
On MGS 192.168.203.141, 02271_1.id = nodemap.02271_1.id=11
waiting 10 secs for sync
On MGS 192.168.203.141, 02271_2.id = nodemap.02271_2.id=12
waiting 10 secs for sync
oleg341-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg341-server: error: nodemap_add_range: cannot add range '22.0.0.[1-253]@tcp' to nodemap '02271_0': File exists
pdsh@oleg341-client: oleg341-server: ssh exited with exit code 17
oleg341-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg341-server: error: nodemap_add_range: cannot add range '22.0.1.[1-253]@tcp' to nodemap '02271_0': File exists
pdsh@oleg341-client: oleg341-server: ssh exited with exit code 17
oleg341-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg341-server: error: nodemap_add_range: cannot add range '22.1.0.[1-253]@tcp' to nodemap '02271_1': File exists
pdsh@oleg341-client: oleg341-server: ssh exited with exit code 17
oleg341-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg341-server: error: nodemap_add_range: cannot add range '22.1.1.[1-253]@tcp' to nodemap '02271_1': File exists
pdsh@oleg341-client: oleg341-server: ssh exited with exit code 17
oleg341-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg341-server: error: nodemap_add_range: cannot add range '22.2.0.[1-253]@tcp' to nodemap '02271_2': File exists
pdsh@oleg341-client: oleg341-server: ssh exited with exit code 17
oleg341-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg341-server: error: nodemap_add_range: cannot add range '22.2.1.[1-253]@tcp' to nodemap '02271_2': File exists
pdsh@oleg341-client: oleg341-server: ssh exited with exit code 17
On MGS 192.168.203.141, 02271_0.id =
waiting 10 secs for sync
On MGS 192.168.203.141, 02271_1.id =
waiting 10 secs for sync
On MGS 192.168.203.141, 02271_2.id =
waiting 10 secs for sync
PASS 10a (104s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 10b: delete range from the correct nodemap ========================================================== 05:59:39 (1713520779)
oleg341-server: error: invalid ioctl: 000ce043 errno: 22: Invalid argument
oleg341-server: error: nodemap_del_range: cannot delete range '192.168.19.[0-255]@o2ib20' to nodemap 'nodemap2': rc = -22
pdsh@oleg341-client: oleg341-server: ssh exited with exit code 22
PASS 10b (7s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 10c: verify contiguous range support ========================================================== 05:59:48 (1713520788)
PASS 10c (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-sec test 10d: verify nodemap range format '*@' support ========================================================== 05:59:55 (1713520795)
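The nodemap create/delete and NID range tests above (7 through 10d) exercise the MGS-side nodemap administration interface; the 'File exists' and 'Invalid argument' errors are the expected negative cases. A rough sketch of the corresponding lctl commands, run on the MGS, with nodemap names and NID ranges taken from the log (exact invocations live in lustre/tests/sanity-sec.sh):

  # create and delete a nodemap (test 7); re-adding the same name fails with EEXIST (test 8)
  lctl nodemap_add 02271_0
  lctl nodemap_del 02271_0
  # attach a NID range to a nodemap (test 9); adding a duplicate range fails with EEXIST (test 10a)
  lctl nodemap_add_range --name 02271_0 --range 22.0.0.[1-253]@tcp
  # deleting a range from a nodemap that does not own it fails with EINVAL (test 10b)
  lctl nodemap_del_range --name nodemap2 --range 192.168.19.[0-255]@o2ib20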
PASS 10d (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 11: nodemap modify ==================== 06:00:04 (1713520804) On MGS 192.168.203.141, default.squash_uid = nodemap.default.squash_uid=65534 waiting 10 secs for sync On MGS 192.168.203.141, default.squash_gid = nodemap.default.squash_gid=65534 waiting 10 secs for sync On MGS 192.168.203.141, 02271_0.id = nodemap.02271_0.id=17 waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = nodemap.02271_1.id=18 waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = nodemap.02271_2.id=19 waiting 10 secs for sync On MGS 192.168.203.141, 02271_0.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = waiting 10 secs for sync PASS 11 (102s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 12: nodemap set squash ids ============ 06:01:48 (1713520908) On MGS 192.168.203.141, default.squash_uid = nodemap.default.squash_uid=65534 waiting 10 secs for sync On MGS 192.168.203.141, default.squash_gid = nodemap.default.squash_gid=65534 waiting 10 secs for sync On MGS 192.168.203.141, 02271_0.id = nodemap.02271_0.id=20 waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = nodemap.02271_1.id=21 waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = nodemap.02271_2.id=22 waiting 10 secs for sync On MGS 192.168.203.141, 02271_0.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = waiting 10 secs for sync PASS 12 (96s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 13: test nids ========================= 06:03:26 (1713521006) On MGS 192.168.203.141, default.squash_uid = nodemap.default.squash_uid=65534 waiting 10 secs for sync On MGS 192.168.203.141, default.squash_gid = nodemap.default.squash_gid=65534 waiting 10 secs for sync On MGS 192.168.203.141, 02271_0.id = nodemap.02271_0.id=23 waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = nodemap.02271_1.id=24 waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = nodemap.02271_2.id=25 waiting 10 secs for sync On MGS 192.168.203.141, 02271_0.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = waiting 10 secs for sync PASS 13 (98s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 14: test default nodemap nid lookup === 06:05:05 (1713521105) On MGS 192.168.203.141, default.squash_uid = nodemap.default.squash_uid=65534 waiting 10 secs for sync On MGS 192.168.203.141, default.squash_gid = nodemap.default.squash_gid=65534 waiting 10 secs for sync On MGS 192.168.203.141, 02271_0.id = nodemap.02271_0.id=26 waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = nodemap.02271_1.id=27 waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = nodemap.02271_2.id=28 waiting 10 secs for sync On MGS 192.168.203.141, 02271_0.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = waiting 10 secs for sync PASS 14 (95s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 15: test id mapping =================== 06:06:42 (1713521202) On MGS 192.168.203.141, default.squash_uid = nodemap.default.squash_uid=65534 
waiting 10 secs for sync On MGS 192.168.203.141, default.squash_gid = nodemap.default.squash_gid=65534 waiting 10 secs for sync On MGS 192.168.203.141, 02271_0.id = nodemap.02271_0.id=29 waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = nodemap.02271_1.id=30 waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = nodemap.02271_2.id=31 waiting 10 secs for sync Start to add idmaps ... Start to test idmaps ... Start to add root idmaps ... Start to delete root idmaps ... Start to add root idmaps ... Start to delete root idmaps ... Start to update idmaps ... Start to delete idmaps ... On MGS 192.168.203.141, 02271_0.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_1.id = waiting 10 secs for sync On MGS 192.168.203.141, 02271_2.id = waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync PASS 15 (141s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 16: test nodemap all_off fileops ====== 06:09:05 (1713521345) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync mkdir -p /mnt/lustre/d16.sanity-sec ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] mkdir -p /mnt/lustre/d16.sanity-sec ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync PASS 16 (134s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 17: test nodemap trusted_noadmin fileops ========================================================== 06:11:21 (1713521481) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = 
nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync mkdir -p /mnt/lustre/d17.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d17.sanity-sec': Permission denied mkdir -p /mnt/lustre/d17.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 
waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d17.sanity-sec': Permission denied On MGS 192.168.203.141, c0.map_mode = nodemap.c0.map_mode=projid waiting 10 secs for sync mkdir -p /mnt/lustre/d17.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = 
nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d17.sanity-sec': Permission denied mkdir -p /mnt/lustre/d17.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d17.sanity-sec': Permission denied On MGS 192.168.203.141, default.admin_nodemap = 
nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync PASS 17 (961s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 18: test nodemap mapped_noadmin fileops ========================================================== 06:27:23 (1713522443) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync mkdir -p /mnt/lustre/d18.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d18.sanity-sec': Permission denied mkdir -p 
/mnt/lustre/d18.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d18.sanity-sec': Permission denied On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync PASS 18 (519s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 19: test nodemap trusted_admin fileops ========================================================== 06:36:04 (1713522964) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 
waiting 10 secs for sync mkdir -p /mnt/lustre/d19.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] mkdir -p /mnt/lustre/d19.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync PASS 19 (256s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 20: test nodemap mapped_admin fileops ========================================================== 06:40:21 (1713523221) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 
192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync mkdir -p /mnt/lustre/d20.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] mkdir -p /mnt/lustre/d20.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync PASS 20 (256s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 21: test nodemap mapped_trusted_noadmin fileops ========================================================== 06:44:39 (1713523479) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = 
nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync mkdir -p /mnt/lustre/d21.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d21.sanity-sec': Permission denied mkdir -p /mnt/lustre/d21.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 
waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d21.sanity-sec': Permission denied On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync PASS 21 (516s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 22: test nodemap mapped_trusted_admin fileops ========================================================== 06:53:17 (1713523997) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync mkdir -p /mnt/lustre/d22.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as 
uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] mkdir -p /mnt/lustre/d22.sanity-sec On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6cba000.lru_size=clear On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync PASS 22 (246s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 23a: test mapped regular ACLs ========= 06:57:25 (1713524245) SKIP: sanity-sec test_23a Need 2 clients at least SKIP 23a (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 23b: test mapped default ACLs ========= 06:57:27 (1713524247) SKIP: sanity-sec test_23b Need 2 clients at least SKIP 23b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 24: check nodemap proc files for LBUGs and Oopses ========================================================== 06:57:29 (1713524249) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync nodemap.active=1 nodemap.c0.admin_nodemap=0 nodemap.c0.audit_mode=1 nodemap.c0.deny_unknown=0 nodemap.c0.exports= [ { nid: 192.168.203.41@tcp, uuid: c67e921c-9840-40c8-b6ea-e1726345d189 }, ] nodemap.c0.fileset= nodemap.c0.forbid_encryption=0 nodemap.c0.id=39 nodemap.c0.idmap= [ { idtype: uid, client_id: 60003, fs_id: 60000 }, { idtype: uid, client_id: 60004, fs_id: 
60002 }, { idtype: gid, client_id: 60003, fs_id: 60000 }, { idtype: gid, client_id: 60004, fs_id: 60002 } ] nodemap.c0.map_mode=all nodemap.c0.ranges= [ { id: 41, start_nid: 192.168.203.41@tcp, end_nid: 192.168.203.41@tcp } ] nodemap.c0.rbac=file_perms,dne_ops,quota_ops,byfid_ops,chlg_ops,fscrypt_admin nodemap.c0.readonly_mount=0 nodemap.c0.sepol= nodemap.c0.squash_gid=65534 nodemap.c0.squash_projid=65534 nodemap.c0.squash_uid=65534 nodemap.c0.trusted_nodemap=0 nodemap.default.admin_nodemap=1 nodemap.default.audit_mode=1 nodemap.default.deny_unknown=0 nodemap.default.exports= [ { nid: 0@lo, uuid: lustre-MDT0000-mdtlov_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-lwp-OST0001_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-mdtlov_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-lwp-OST0000_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-lwp-MDT0000_UUID }, ] nodemap.default.fileset= nodemap.default.forbid_encryption=0 nodemap.default.id=0 nodemap.default.map_mode=all nodemap.default.readonly_mount=0 nodemap.default.squash_gid=65534 nodemap.default.squash_projid=65534 nodemap.default.squash_uid=65534 nodemap.default.trusted_nodemap=1 On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync PASS 24 (70s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 25: test save and reload nodemap config ========================================================== 06:58:40 (1713524320) Stopping clients: oleg341-client.virtnet /mnt/lustre (opts:) Stopping client oleg341-client.virtnet /mnt/lustre opts: mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, test25.id = nodemap.test25.id=41 waiting 10 secs for sync === sanity-sec: start setup 06:59:39 (1713524379) === Checking servers environments Checking clients oleg341-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory loading modules on: 'oleg341-server' oleg341-server: oleg341-server.virtnet: executing load_modules_local oleg341-server: Loading modules from /home/green/git/lustre-release/lustre oleg341-server: detected 4 online CPUs by sysfs oleg341-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg341-server: mount.lustre: according to /etc/mtab lustre-mdt1/mdt1 is already mounted on /mnt/lustre-mds1 pdsh@oleg341-client: oleg341-server: ssh exited with exit code 17 Start of lustre-mdt1/mdt1 on mds1 failed 17 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 oleg341-server: mount.lustre: according to /etc/mtab lustre-ost1/ost1 is already mounted on /mnt/lustre-ost1 pdsh@oleg341-client: oleg341-server: ssh exited with exit code 17 seq.cli-lustre-OST0000-super.width=65536 Start of lustre-ost1/ost1 
on ost1 failed 17 Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2 oleg341-server: mount.lustre: according to /etc/mtab lustre-ost2/ost2 is already mounted on /mnt/lustre-ost2 pdsh@oleg341-client: oleg341-server: ssh exited with exit code 17 seq.cli-lustre-OST0001-super.width=65536 Start of lustre-ost2/ost2 on ost2 failed 17 Starting client: oleg341-client.virtnet: -o user_xattr,flock oleg341-server@tcp:/lustre /mnt/lustre Starting client oleg341-client.virtnet: -o user_xattr,flock oleg341-server@tcp:/lustre /mnt/lustre Started clients oleg341-client.virtnet: 192.168.203.141@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff88012e7d5800.idle_timeout=debug osc.lustre-OST0001-osc-ffff88012e7d5800.idle_timeout=debug disable quota as required === sanity-sec: finish setup 06:59:53 (1713524393) === Stopping clients: oleg341-client.virtnet /mnt/lustre (opts:) Stopping client oleg341-client.virtnet /mnt/lustre opts: On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync Starting client oleg341-client.virtnet: -o user_xattr,flock oleg341-server@tcp:/lustre /mnt/lustre Started clients oleg341-client.virtnet: 192.168.203.141@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) PASS 25 (109s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-sec test_26 skipping SLOW test 26 SKIP: sanity-sec test_27a skipping excluded test 27a (base 27) SKIP: sanity-sec test_27b skipping excluded test 27b (base 27) debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 28: check shared key rotation method == 07:00:31 (1713524431) SKIP: sanity-sec test_28 need shared key feature for this test SKIP 28 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 29: check for missing shared key ====== 07:00:33 (1713524433) SKIP: sanity-sec test_29 need shared key feature for this test SKIP 29 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 30: check for invalid shared key ====== 07:00:36 (1713524436) SKIP: sanity-sec test_30 need shared key feature for this test SKIP 30 (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 30b: basic test of all different SSK flavors ========================================================== 07:00:38 (1713524438) SKIP: sanity-sec test_30b need shared key feature for this test SKIP 30b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 31: client mount option '-o network' == 07:00:40 (1713524440) SKIP: sanity-sec test_31 without lnetctl support. 
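The admin/trusted toggles, squash ids, and id mappings that tests 11 through 22 keep flipping, and the parameters dumped by test 24, are all driven through lctl nodemap_* commands on the MGS and read back with lctl get_param. A hedged sketch using the c0 nodemap and the uid/gid pairs visible in the test 24 output above:

  # enable the nodemap feature, then set per-nodemap properties
  lctl nodemap_activate 1
  lctl nodemap_modify --name c0 --property admin --value 0
  lctl nodemap_modify --name c0 --property trusted --value 0
  lctl nodemap_modify --name c0 --property squash_uid --value 65534
  # map client uid/gid 60003 to filesystem id 60000, as in the test 24 dump
  lctl nodemap_add_idmap --name c0 --idtype uid --idmap 60003:60000
  lctl nodemap_add_idmap --name c0 --idtype gid --idmap 60003:60000
  # confirm the values have propagated (what the log's "waiting 10 secs for sync" checks poll for)
  lctl get_param nodemap.c0.admin_nodemap nodemap.c0.trusted_nodemap nodemap.c0.idmap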
SKIP 31 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 32: check for mgssec ================== 07:00:43 (1713524443) SKIP: sanity-sec test_32 need shared key feature for this test SKIP 32 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 33: correct srpc flags for MGS connection ========================================================== 07:00:45 (1713524445) SKIP: sanity-sec test_33 need shared key feature for this test SKIP 33 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 34: deny_unknown on default nodemap === 07:00:47 (1713524447) On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.141, default.deny_unknown = nodemap.default.deny_unknown=1 waiting 10 secs for sync On MGS 192.168.203.141, default.deny_unknown = nodemap.default.deny_unknown=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync PASS 34 (46s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 35: Check permissions when accessing changelogs ========================================================== 07:01:35 (1713524495) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl1' mdd.lustre-MDT0000.changelog_mask=ALL lustre-MDT0000.1 02MKDIR 11:01:37.245541822 2024.04.19 0x0 t=[0x200000402:0x1:0x0] j=mkdir.0 ef=0xf u=0:0 nid=192.168.203.41@tcp p=[0x200000007:0x1:0x0] d35.sanity-sec lustre-MDT0000.2 01CREAT 11:01:37.252838767 2024.04.19 0x0 t=[0x200000402:0x2:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.203.41@tcp p=[0x200000402:0x1:0x0] f35.sanity-sec lustre-MDT0000.3 10OPEN 11:01:37.253003960 2024.04.19 0x4a t=[0x200000402:0x2:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.203.41@tcp m=-w- p=[0x200000402:0x1:0x0] lustre-MDT0000.4 11CLOSE 11:01:37.271123605 2024.04.19 0x42 t=[0x200000402:0x2:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.203.41@tcp lustre-MDT0000: clear the changelog for cl1 of all records mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync lfs changelog: cannot access changelog: Permission denied lustre-MDT0000: clear the changelog for cl1 of all records lfs changelog_clear: cannot purge records for 'cl1': Permission denied (13) changelog_clear error: Permission denied On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0000: Deregistered changelog user #1 PASS 35 (82s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 36: control if clients can use encryption ========================================================== 07:02:58 (1713524578) SKIP: sanity-sec test_36 client encryption not supported SKIP 36 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y 
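[editor's note] Test 35 above registers a changelog consumer, dumps and clears records as root, then shows that once nodemap admin rights are dropped the same `lfs changelog` and `lfs changelog_clear` calls fail with "Permission denied". A minimal sketch of the commands involved, assuming a single MDT named lustre-MDT0000 and whatever user id (cl1 in this run) registration returns; this is an illustration of the pattern, not the exact sanity-sec test_35 code:

  # On the MDS: widen the changelog mask and register a consumer.
  lctl set_param mdd.lustre-MDT0000.changelog_mask=+hsm
  CL_USER=$(lctl --device lustre-MDT0000 changelog_register -n)   # e.g. cl1

  # On the client: generate a few records, then read and purge them.
  mkdir /mnt/lustre/d35 && touch /mnt/lustre/d35/f35
  lfs changelog lustre-MDT0000                      # dump records
  lfs changelog_clear lustre-MDT0000 "$CL_USER" 0   # endrec 0 = clear all

  # With admin disabled on the client's nodemap, both calls above are the
  # ones that return "Permission denied" in the log.

  # Cleanup on the MDS.
  lctl --device lustre-MDT0000 changelog_deregister "$CL_USER"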
debug_raw_pointers=Y == sanity-sec test 37: simple encrypted file ============= 07:03:01 (1713524581) SKIP: sanity-sec test_37 client encryption not supported SKIP 37 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 38: encrypted file with hole ========== 07:03:03 (1713524583) SKIP: sanity-sec test_38 client encryption not supported SKIP 38 (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 39: rewrite data in already encrypted page ========================================================== 07:03:05 (1713524585) SKIP: sanity-sec test_39 client encryption not supported SKIP 39 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 40: exercise size of encrypted file === 07:03:07 (1713524587) SKIP: sanity-sec test_40 client encryption not supported SKIP 40 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 41: test race on encrypted file size (1) ========================================================== 07:03:09 (1713524589) SKIP: sanity-sec test_41 client encryption not supported SKIP 41 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 42: test race on encrypted file size (2) ========================================================== 07:03:12 (1713524592) SKIP: sanity-sec test_42 client encryption not supported SKIP 42 (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 43: test race on encrypted file size (3) ========================================================== 07:03:14 (1713524594) SKIP: sanity-sec test_43 client encryption not supported SKIP 43 (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 44: encrypted file access semantics: direct IO ========================================================== 07:03:16 (1713524596) SKIP: sanity-sec test_44 client encryption not supported SKIP 44 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 45: encrypted file access semantics: MMAP ========================================================== 07:03:18 (1713524598) SKIP: sanity-sec test_45 client encryption not supported SKIP 45 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 46: encrypted file access semantics without key ========================================================== 07:03:20 (1713524600) SKIP: sanity-sec test_46 client encryption not supported SKIP 46 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 47: encrypted file access semantics: rename/link ========================================================== 07:03:23 (1713524603) SKIP: sanity-sec test_47 client encryption not supported SKIP 47 (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 48a: encrypted file access semantics: truncate ========================================================== 07:03:25 (1713524605) SKIP: sanity-sec test_48a client encryption not supported SKIP 48a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 48b: encrypted file: concurrent truncate ========================================================== 07:03:27 (1713524607) 
SKIP: sanity-sec test_48b client encryption not supported SKIP 48b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 49: Avoid getxattr for encryption context ========================================================== 07:03:29 (1713524609) SKIP: sanity-sec test_49 client encryption not supported SKIP 49 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 50: DoM encrypted file ================ 07:03:31 (1713524611) SKIP: sanity-sec test_50 client encryption not supported SKIP 50 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 51: FS capabilities =================== 07:03:34 (1713524614) mdt.lustre-MDT0000.enable_cap_mask=0xf running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/chown] [500] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] /mnt/lustre/d51.sanity-sec/chown: changing ownership of '/mnt/lustre/d51.sanity-sec/f51.sanity-sec': Operation not permitted running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/chown] [500] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/touch] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] /mnt/lustre/d51.sanity-sec/touch: cannot touch '/mnt/lustre/d51.sanity-sec/f51.sanity-sec': Permission denied running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/touch] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] /mnt/lustre/d51.sanity-sec/cat: /mnt/lustre/d51.sanity-sec/f51.sanity-sec: Permission denied running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] /mnt/lustre/d51.sanity-sec/cat: /mnt/lustre/d51.sanity-sec/f51.sanity-sec: Permission denied running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] mdt.lustre-MDT0000.enable_cap_mask=0x0 PASS 51 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 52: Mirrored encrypted file =========== 07:03:38 (1713524618) SKIP: sanity-sec test_52 client encryption not supported SKIP 52 (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 53: Mixed PAGE_SIZE clients =========== 07:03:40 (1713524620) SKIP: sanity-sec test_53 client encryption not supported SKIP 53 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 54: Encryption policies with fscrypt == 07:03:42 (1713524622) SKIP: sanity-sec test_54 client encryption not supported SKIP 54 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 55: access with seteuid =============== 07:03:44 (1713524624) 192.168.203.141@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg341-client.virtnet /mnt/lustre (opts:) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync 
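[editor's note] Test 51 above toggles mdt.lustre-MDT0000.enable_cap_mask and repeats the same chown/touch/cat as uid 500, with each operation first denied and then allowed. A rough sketch of that toggle, assuming the `runas` helper from the Lustre test framework (the source of the "running as uid/gid/euid/egid" lines) and the file path from the log; the real test additionally installs its own copies of the tools under d51.sanity-sec:

  MDT=lustre-MDT0000
  F=/mnt/lustre/d51.sanity-sec/f51.sanity-sec      # path from the log

  # Capability mask values as toggled in the log (0xf on, 0x0 off).
  lctl set_param mdt.$MDT.enable_cap_mask=0xf

  # Unprivileged retries of the operations that failed with EPERM/EACCES above.
  runas -u 500 -g 500 chown 500 "$F"
  runas -u 500 -g 500 touch "$F"
  runas -u 500 -g 500 cat "$F" > /dev/null

  # Restore the default before the next test.
  lctl set_param mdt.$MDT.enable_cap_mask=0x0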
oleg341-server: error: c0 not existing nodemap name pdsh@oleg341-client: oleg341-server: ssh exited with exit code 1 On MGS 192.168.203.141, c0.id = waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync Starting client oleg341-client.virtnet: -o user_xattr,flock oleg341-server@tcp:/lustre /mnt/lustre Started clients oleg341-client.virtnet: 192.168.203.141@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Initially root ruid:rgid 0:0, euid:egid 0:0 Groups 0 - root, To switch to effective sanityusr uid:gid 500:500 Groups 500 - sanityusr, Now root ruid:rgid 0:0, euid:egid 500:500 Groups 500 - sanityusr, File /mnt/lustre/d55.sanity-sec/sanityusr/testdir_groups/file successfully written 192.168.203.141@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg341-client.virtnet /mnt/lustre (opts:) On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync Starting client: oleg341-client.virtnet: -o user_xattr,flock oleg341-server@tcp:/lustre /mnt/lustre PASS 55 (102s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 56: FIEMAP on encrypted file ========== 07:05:28 (1713524728) SKIP: sanity-sec test_56 skip ZFS backend SKIP 56 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 57: security.c/encryption.c xattr protection ========================================================== 07:05:30 (1713524730) SKIP: sanity-sec test_57 skip ZFS backend SKIP 57 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 58: access to enc file's xattrs ======= 07:05:33 (1713524733) SKIP: sanity-sec test_58 skip ZFS backend SKIP 58 (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 59a: mirror resync of encrypted files without key ========================================================== 07:05:35 (1713524735) SKIP: sanity-sec test_59a client encryption not supported SKIP 59a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 59b: migrate/extend/split of encrypted files without key ========================================================== 07:05:37 (1713524737) SKIP: sanity-sec test_59b client encryption not supported SKIP 59b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 59c: MDT migrate of encrypted files without key ========================================================== 07:05:40 (1713524740) SKIP: sanity-sec test_59c client encryption not supported SKIP 59c (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 60: Subdirmount of encrypted 
dir ====== 07:05:42 (1713524742) SKIP: sanity-sec test_60 client encryption not supported SKIP 60 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 61: Nodemap enforces read-only mount == 07:05:45 (1713524745) affected facets: mds1 oleg341-server: oleg341-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg341-server: *.lustre-MDT0000.recovery_status status: INACTIVE 192.168.203.141@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg341-client.virtnet /mnt/lustre (opts:) On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync oleg341-server: error: c0 not existing nodemap name pdsh@oleg341-client: oleg341-server: ssh exited with exit code 1 On MGS 192.168.203.141, c0.id = waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync Starting client oleg341-client.virtnet: -o user_xattr,flock,rw oleg341-server@tcp:/lustre /mnt/lustre Started clients oleg341-client.virtnet: 192.168.203.141@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) 192.168.203.141@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg341-client.virtnet /mnt/lustre (opts:) On MGS 192.168.203.141, c0.readonly_mount = nodemap.c0.readonly_mount=1 waiting 10 secs for sync Starting client oleg341-client.virtnet: -o user_xattr,flock oleg341-server@tcp:/lustre /mnt/lustre Started clients oleg341-client.virtnet: 192.168.203.141@tcp:/lustre on /mnt/lustre type lustre (ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) a /home/green/git/lustre-release/lustre/tests/sanity-sec.sh: line 5506: /mnt/lustre/d61.sanity-sec/f61.sanity-sec: Read-only file system 192.168.203.141@tcp:/lustre /mnt/lustre lustre ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg341-client.virtnet /mnt/lustre (opts:) Starting client oleg341-client.virtnet: -o user_xattr,flock,rw oleg341-server@tcp:/lustre /mnt/lustre Started clients oleg341-client.virtnet: 192.168.203.141@tcp:/lustre on /mnt/lustre type lustre (ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) a /home/green/git/lustre-release/lustre/tests/sanity-sec.sh: line 5515: /mnt/lustre/d61.sanity-sec/f61.sanity-sec: Read-only file system 192.168.203.141@tcp:/lustre /mnt/lustre lustre ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg341-client.virtnet /mnt/lustre (opts:) Starting client oleg341-client.virtnet: -o user_xattr,flock,ro oleg341-server@tcp:/lustre /mnt/lustre Started clients oleg341-client.virtnet: 192.168.203.141@tcp:/lustre on /mnt/lustre type lustre (ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) a 
/home/green/git/lustre-release/lustre/tests/sanity-sec.sh: line 5523: /mnt/lustre/d61.sanity-sec/f61.sanity-sec: Read-only file system 192.168.203.141@tcp:/lustre /mnt/lustre lustre ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg341-client.virtnet /mnt/lustre (opts:) Starting client oleg341-client.virtnet: -o user_xattr,flock oleg341-server@tcp:/lustre /mnt/lustre Started clients oleg341-client.virtnet: 192.168.203.141@tcp:/lustre on /mnt/lustre type lustre (ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) 192.168.203.141@tcp:/lustre /mnt/lustre lustre ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 a /home/green/git/lustre-release/lustre/tests/sanity-sec.sh: line 5533: /mnt/lustre/d61.sanity-sec/f61.sanity-sec: Read-only file system 192.168.203.141@tcp:/lustre /mnt/lustre lustre ro,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg341-client.virtnet /mnt/lustre (opts:) On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync Starting client: oleg341-client.virtnet: -o user_xattr,flock oleg341-server@tcp:/lustre /mnt/lustre PASS 61 (114s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 62: e2fsck with encrypted files ======= 07:07:41 (1713524861) SKIP: sanity-sec test_62 skip ZFS backend SKIP 62 (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 63: fid2path with encrypted files ===== 07:07:43 (1713524863) SKIP: sanity-sec test_63 client encryption not supported SKIP 63 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 64a: Nodemap enforces file_perms RBAC roles ========================================================== 07:07:45 (1713524865) On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync oleg341-server: error: c0 not existing nodemap name pdsh@oleg341-client: oleg341-server: ssh exited with exit code 1 On MGS 192.168.203.141, c0.id = waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.rbac = nodemap.c0.rbac=file_perms waiting 10 secs for sync + chmod 777 /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec + chown quota_usr:quota_usr /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec + chgrp quota_usr /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec + /home/green/git/lustre-release/lustre/utils/lfs project -p 1000 /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec + set +vx On MGS 192.168.203.141, c0.rbac = nodemap.c0.rbac=none waiting 10 secs for sync + chmod 777 /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec chmod: changing permissions of '/mnt/lustre/d64a.sanity-sec/f64a.sanity-sec': Operation not permitted + chown quota_usr:quota_usr 
/mnt/lustre/d64a.sanity-sec/f64a.sanity-sec chown: changing ownership of '/mnt/lustre/d64a.sanity-sec/f64a.sanity-sec': Operation not permitted + chgrp quota_usr /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec chgrp: changing group of '/mnt/lustre/d64a.sanity-sec/f64a.sanity-sec': Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs project -p 1000 /mnt/lustre/d64a.sanity-sec/f64a.sanity-sec lfs: failed to set xattr for '/mnt/lustre/d64a.sanity-sec/f64a.sanity-sec': Operation not permitted + set +vx On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync PASS 64a (123s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 64b: Nodemap enforces dne_ops RBAC roles ========================================================== 07:09:50 (1713524990) SKIP: sanity-sec test_64b mdt count 1, skipping dne_ops role SKIP 64b (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 64c: Nodemap enforces quota_ops RBAC roles ========================================================== 07:09:52 (1713524992) On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync oleg341-server: error: c0 not existing nodemap name pdsh@oleg341-client: oleg341-server: ssh exited with exit code 1 On MGS 192.168.203.141, c0.id = waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.rbac = nodemap.c0.rbac=quota_ops waiting 10 secs for sync + /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr --delete /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr --delete /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 --delete /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -U -b 10G -B 11G -i 100K -I 105K /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -U -b 0 -B 0 -i 0 -I 0 /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -G -b 10G -B 11G -i 100K -I 105K /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -G -b 0 -B 0 -i 0 -I 0 /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -P -b 10G -B 11G -i 100K -I 105K /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -P -b 0 -B 0 -i 0 -I 0 /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr -D /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr --delete /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs 
setquota -g sanityusr -D /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr --delete /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 -D /mnt/lustre + /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 --delete /mnt/lustre + set +vx On MGS 192.168.203.141, c0.rbac = nodemap.c0.rbac=none waiting 10 secs for sync + /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr --delete /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr --delete /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 --delete /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -U -b 10G -B 11G -i 100K -I 105K /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -G -b 10G -B 11G -i 100K -I 105K /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -P -b 10G -B 11G -i 100K -I 105K /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr -D /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -u sanityusr --delete /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr -D /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -g sanityusr --delete /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 -D /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + /home/green/git/lustre-release/lustre/utils/lfs setquota -p 1000 --delete /mnt/lustre lfs setquota: quotactl failed: Operation not permitted setquota failed: Operation not permitted + set +vx On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, 
active = nodemap.active=0 waiting 10 secs for sync PASS 64c (123s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 64d: Nodemap enforces byfid_ops RBAC roles ========================================================== 07:11:57 (1713525117) On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync oleg341-server: error: c0 not existing nodemap name pdsh@oleg341-client: oleg341-server: ssh exited with exit code 1 On MGS 192.168.203.141, c0.id = waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.rbac = nodemap.c0.rbac=byfid_ops waiting 10 secs for sync + /home/green/git/lustre-release/lustre/utils/lfs fid2path /mnt/lustre '[0x200000405:0x6:0x0]' /mnt/lustre/d64d.sanity-sec/f64d.sanity-sec + cat '/mnt/lustre/.lustre/fid/[0x200000405:0x6:0x0]' + lfs rmfid /mnt/lustre '[0x200000405:0x6:0x0]' + set +vx On MGS 192.168.203.141, c0.rbac = nodemap.c0.rbac=none waiting 10 secs for sync + /home/green/git/lustre-release/lustre/utils/lfs fid2path /mnt/lustre '[0x200000405:0x7:0x0]' /mnt/lustre/d64d.sanity-sec/f64d.sanity-sec + cat '/mnt/lustre/.lustre/fid/[0x200000405:0x7:0x0]' cat: /mnt/lustre/.lustre/fid/[0x200000405:0x7:0x0]: Operation not permitted + lfs rmfid /mnt/lustre '[0x200000405:0x7:0x0]' lfs rmfid: cannot remove [0x200000405:0x7:0x0]: Operation not permitted + set +vx On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync PASS 64d (123s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 64e: Nodemap enforces chlg_ops RBAC roles ========================================================== 07:14:01 (1713525241) On MGS 192.168.203.141, active = nodemap.active=1 waiting 10 secs for sync oleg341-server: error: c0 not existing nodemap name pdsh@oleg341-client: oleg341-server: ssh exited with exit code 1 On MGS 192.168.203.141, c0.id = waiting 10 secs for sync On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.141, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl2' mdd.lustre-MDT0000.changelog_mask=ALL On MGS 192.168.203.141, c0.rbac = nodemap.c0.rbac=chlg_ops waiting 10 secs for sync changelogs dump lustre-MDT0000.5 02MKDIR 11:15:10.447012534 2024.04.19 0x0 t=[0x200000405:0x9:0x0] j=mkdir.0 ef=0xf u=0:0 nid=192.168.203.41@tcp p=[0x200000405:0x8:0x0] f64e.sanity-sec.d lustre-MDT0000.6 01CREAT 11:15:10.451723089 2024.04.19 0x0 t=[0x200000405:0xa:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.203.41@tcp p=[0x200000405:0x8:0x0] f64e.sanity-sec lustre-MDT0000.7 10OPEN 11:15:10.451960696 
2024.04.19 0x4a t=[0x200000405:0xa:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.203.41@tcp m=-w- p=[0x200000405:0x8:0x0] lustre-MDT0000.8 11CLOSE 11:15:10.472131251 2024.04.19 0x42 t=[0x200000405:0xa:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.203.41@tcp changelogs clear lustre-MDT0000: clear the changelog for cl2 of all records On MGS 192.168.203.141, c0.rbac = nodemap.c0.rbac=none waiting 10 secs for sync changelogs dump lfs changelog: cannot access changelog: Permission denied changelogs clear lustre-MDT0000: clear the changelog for cl2 of all records lfs changelog_clear: cannot purge records for 'cl2': Permission denied (13) changelog_clear error: Permission denied On MGS 192.168.203.141, c0.rbac = nodemap.c0.rbac=file_perms,dne_ops,quota_ops,byfid_ops,chlg_ops,fscrypt_admin waiting 10 secs for sync lustre-MDT0000: clear the changelog for cl2 of all records lustre-MDT0000: Deregistered changelog user #2 On MGS 192.168.203.141, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.141, active = nodemap.active=0 waiting 10 secs for sync PASS 64e (137s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 64f: Nodemap enforces fscrypt_admin RBAC roles ========================================================== 07:16:20 (1713525380) SKIP: sanity-sec test_64f Need enc support, skip fscrypt_admin role SKIP 64f (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 65: lfs find -printf %La and --attrs support ========================================================== 07:16:22 (1713525382) SKIP: sanity-sec test_65 client encryption not supported SKIP 65 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 68: all config logs are processed ===== 07:16:24 (1713525384) 192.168.203.141@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg341-client.virtnet /mnt/lustre (opts:) fail_loc=0x8000051d fail_val=20 Starting client oleg341-client.virtnet: -o user_xattr,flock oleg341-server@tcp:/lustre /mnt/lustre Started clients oleg341-client.virtnet: 192.168.203.141@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) 192.168.203.141@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg341-client.virtnet /mnt/lustre (opts:) fail_loc=0 fail_val=0 Starting client: oleg341-client.virtnet: -o user_xattr,flock oleg341-server@tcp:/lustre /mnt/lustre PASS 68 (25s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 69: check upcall incorrect values ===== 07:16:51 (1713525411) mdt.lustre-MDT0000.identity_upcall=/path/to/prog oleg341-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0000/identity_upcall=prog: Invalid argument oleg341-server: error: set_param: setting 'mdt/lustre-MDT0000/identity_upcall'='prog': Invalid argument pdsh@oleg341-client: oleg341-server: ssh exited with exit code 22 mdt.lustre-MDT0000.identity_upcall=NONE mdt.lustre-MDT0000.identity_upcall=none mdt.lustre-MDT0000.identity_upcall=NONE PASS 69 (3s) debug_raw_pointers=0 debug_raw_pointers=0 
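[editor's note] Test 69 above checks that mdt.*.identity_upcall accepts an absolute path or NONE/none but rejects a bare program name with "Invalid argument". The equivalent set_param calls, reproduced from the log (run on the MDS; /path/to/prog is the placeholder the test itself uses):

  MDT=lustre-MDT0000

  lctl set_param mdt.$MDT.identity_upcall=/path/to/prog   # absolute path: accepted
  lctl set_param mdt.$MDT.identity_upcall=prog            # bare name: rejected, EINVAL
  lctl set_param mdt.$MDT.identity_upcall=NONE            # disables the upcall
  lctl set_param mdt.$MDT.identity_upcall=none            # lower case accepted too
  lctl get_param  mdt.$MDT.identity_upcall                # reports NONE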
debug_raw_pointers=Y debug_raw_pointers=Y == sanity-sec test 70: targets have local copy of sptlrpc llog ========================================================== 07:16:56 (1713525416) SKIP: sanity-sec test_70 need shared key feature for this test SKIP 70 (1s) debug_raw_pointers=0 debug_raw_pointers=0 cleanup: ====================================================== running as uid/gid/euid/egid 500/500/500/500, groups: [ls] [/mnt/lustre] d17.sanity-sec d18.sanity-sec d21.sanity-sec d35.sanity-sec d51.sanity-sec d55.sanity-sec d61.sanity-sec d64a.sanity-sec d64c.sanity-sec d64d.sanity-sec d64e.sanity-sec running as uid/gid/euid/egid 501/501/501/501, groups: [ls] [/mnt/lustre] d17.sanity-sec d18.sanity-sec d21.sanity-sec d35.sanity-sec d51.sanity-sec d55.sanity-sec d61.sanity-sec d64a.sanity-sec d64c.sanity-sec d64d.sanity-sec d64e.sanity-sec == sanity-sec test complete, duration 5148 sec =========== 07:16:58 (1713525418) === sanity-sec: start cleanup 07:16:58 (1713525418) === === sanity-sec: finish cleanup 07:16:59 (1713525419) ===
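[editor's note] Most of the nodemap tests in this run (24, 25, 34, 35, 55, 61, 64a-64e) follow the same pattern: activate nodemaps on the MGS, create a c0 nodemap covering the client NID, grant it admin/trusted, restrict one property, wait for the config to sync to the servers (the repeated "waiting 10 secs for sync" lines), and then check client-side behaviour. A condensed sketch of that pattern for the RBAC cases, using the nodemap name, NID and role values from the log; run on the MGS, and note the exact nodemap_add_range syntax may differ between Lustre versions:

  # Activate nodemaps and define c0 for the single client NID seen in the log.
  lctl nodemap_activate 1
  lctl nodemap_add c0
  lctl nodemap_add_range --name c0 --range 192.168.203.41@tcp
  lctl nodemap_modify --name c0 --property admin   --value 1
  lctl nodemap_modify --name c0 --property trusted --value 1
  sleep 10    # stands in for the framework's sync wait

  # Grant a single role, e.g. file_perms (64a) or quota_ops (64c)...
  lctl nodemap_modify --name c0 --property rbac --value file_perms
  sleep 10
  # ...client-side chmod/chown/'lfs project' (or 'lfs setquota') now succeed.

  # Drop all roles and re-check: the same operations fail with EPERM.
  lctl nodemap_modify --name c0 --property rbac --value none
  sleep 10

  # Tear down, as the end of each test does.
  lctl nodemap_del c0
  lctl nodemap_activate 0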