-----============= acceptance-small: sanity-sec ============----- Wed Apr 17 17:01:52 EDT 2024
excepting tests: 27
skipping tests SLOW=no: 26
oleg252-client.virtnet: executing check_config_client /mnt/lustre
oleg252-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg252-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012a647800.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012a647800.idle_timeout=debug
disable quota as required
oleg252-server: oleg252-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
osd-ldiskfs.track_declares_assert=1
without GSS support
== sanity-sec test 0: uid permission ========================================================================================= 17:02:07 (1713387727)
running as uid/gid/euid/egid 500/500/500/500, groups: [ls] [/mnt/lustre]
d0.sanity-sec
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/f0]
touch: cannot touch '/mnt/lustre/f0': Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0.sanity-sec/f1]
running as uid/gid/euid/egid 501/501/501/501, groups: [touch] [/mnt/lustre/d0.sanity-sec/f2]
touch: cannot touch '/mnt/lustre/d0.sanity-sec/f2': Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0.sanity-sec/f4]
running as uid/gid/euid/egid 501/501/501/501, groups: [touch] [/mnt/lustre/d0.sanity-sec/f5]
touch: cannot touch '/mnt/lustre/d0.sanity-sec/f5': Permission denied
PASS 0 (1s)
== sanity-sec test 1: setuid/gid ======================================================================================= 17:02:08 (1713387728)
SKIP: sanity-sec test_1 without GSS support.
SKIP 1 (1s)
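The "running as uid/gid/euid/egid ..." records above come from the test framework's runas helper. A minimal sketch of the same uid-permission probe as test 0, assuming the runas utility from lustre/tests is in PATH (paths and ids copied from the run above, for illustration only):

    # root-owned mount root: touch as uid 500 should fail with EACCES
    runas -u 500 -g 500 touch /mnt/lustre/f0 && echo "unexpected success"
    # directory owned by uid 500: the same touch should succeed
    runas -u 500 -g 500 touch /mnt/lustre/d0.sanity-sec/f1 || echo "unexpected failure"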
== sanity-sec test 4: set supplementary group ========================================================================= 17:02:09 (1713387729)
running as uid/gid/euid/egid 500/500/500/500, groups: [ls] [/mnt/lustre/d4.sanity-sec]
running as uid/gid/euid/egid 501/501/501/501, groups: 1 2 500 [ls] [/mnt/lustre/d4.sanity-sec]
running as uid/gid/euid/egid 501/501/501/501, groups: 1 2 [ls] [/mnt/lustre/d4.sanity-sec]
ls: cannot open directory /mnt/lustre/d4.sanity-sec: Permission denied
PASS 4 (2s)
On MGS 192.168.202.152, default.squash_uid = nodemap.default.squash_uid=99 waiting 10 secs for sync
On MGS 192.168.202.152, default.squash_gid = nodemap.default.squash_gid=99 waiting 10 secs for sync
On MGS 192.168.202.152, default.squash_projid = nodemap.default.squash_projid=99 waiting 10 secs for sync
== sanity-sec test 7: nodemap create and delete ========== 17:02:44 (1713387764)
On MGS 192.168.202.152, default.squash_uid = nodemap.default.squash_uid=99 waiting 10 secs for sync
On MGS 192.168.202.152, default.squash_gid = nodemap.default.squash_gid=99 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = nodemap.18656_0.id=1 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = nodemap.18656_1.id=2 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = nodemap.18656_2.id=3 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = waiting 10 secs for sync
PASS 7 (90s)
== sanity-sec test 8: nodemap reject duplicates ========== 17:04:14 (1713387854)
On MGS 192.168.202.152, default.squash_uid = nodemap.default.squash_uid=99 waiting 10 secs for sync
On MGS 192.168.202.152, default.squash_gid = nodemap.default.squash_gid=99 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = nodemap.18656_0.id=4 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = nodemap.18656_1.id=5 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = nodemap.18656_2.id=6 waiting 10 secs for sync
On MGS 192.168.202.152, default.squash_uid = nodemap.default.squash_uid=99 waiting 10 secs for sync
On MGS 192.168.202.152, default.squash_gid = nodemap.default.squash_gid=99 waiting 10 secs for sync
oleg252-server: error: 18656_0 existing nodemap name
pdsh@oleg252-client: oleg252-server: ssh exited with exit code 1
nodemap_add 18656_0 failed with 1
On MGS 192.168.202.152, 18656_0.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = waiting 10 secs for sync
PASS 8 (115s)
== sanity-sec test 9: nodemap range add ================== 17:06:09 (1713387969)
On MGS 192.168.202.152, default.squash_uid = nodemap.default.squash_uid=99 waiting 10 secs for sync
On MGS 192.168.202.152, default.squash_gid = nodemap.default.squash_gid=99 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = nodemap.18656_0.id=7 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = nodemap.18656_1.id=8 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = nodemap.18656_2.id=9 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = waiting 10 secs for sync
PASS 9 (96s)
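Tests 7-9 above create nodemaps named 18656_<n> on the MGS, wait for each change to propagate ("waiting 10 secs for sync"), and delete them again. A minimal sketch of that lifecycle, with an illustrative nodemap name:

    # on the MGS: create a nodemap, read back its assigned id, remove it
    lctl nodemap_add testmap
    lctl get_param nodemap.testmap.id
    lctl nodemap_del testmap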
== sanity-sec test 10a: nodemap reject duplicate ranges == 17:07:45 (1713388065)
On MGS 192.168.202.152, default.squash_uid = nodemap.default.squash_uid=99 waiting 10 secs for sync
On MGS 192.168.202.152, default.squash_gid = nodemap.default.squash_gid=99 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = nodemap.18656_0.id=10 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = nodemap.18656_1.id=11 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = nodemap.18656_2.id=12 waiting 10 secs for sync
oleg252-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg252-server: error: nodemap_add_range: cannot add range '157.0.0.[1-253]@tcp' to nodemap '18656_0': File exists
pdsh@oleg252-client: oleg252-server: ssh exited with exit code 17
oleg252-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg252-server: error: nodemap_add_range: cannot add range '157.0.1.[1-253]@tcp' to nodemap '18656_0': File exists
pdsh@oleg252-client: oleg252-server: ssh exited with exit code 17
oleg252-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg252-server: error: nodemap_add_range: cannot add range '157.1.0.[1-253]@tcp' to nodemap '18656_1': File exists
pdsh@oleg252-client: oleg252-server: ssh exited with exit code 17
oleg252-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg252-server: error: nodemap_add_range: cannot add range '157.1.1.[1-253]@tcp' to nodemap '18656_1': File exists
pdsh@oleg252-client: oleg252-server: ssh exited with exit code 17
oleg252-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg252-server: error: nodemap_add_range: cannot add range '157.2.0.[1-253]@tcp' to nodemap '18656_2': File exists
pdsh@oleg252-client: oleg252-server: ssh exited with exit code 17
oleg252-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg252-server: error: nodemap_add_range: cannot add range '157.2.1.[1-253]@tcp' to nodemap '18656_2': File exists
pdsh@oleg252-client: oleg252-server: ssh exited with exit code 17
On MGS 192.168.202.152, 18656_0.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = waiting 10 secs for sync
PASS 10a (94s)
== sanity-sec test 10b: delete range from the correct nodemap ========================================================== 17:09:19 (1713388159)
oleg252-server: error: invalid ioctl: 000ce043 errno: 22: Invalid argument
oleg252-server: error: nodemap_del_range: cannot delete range '192.168.19.[0-255]@o2ib20' to nodemap 'nodemap2': rc = -22
pdsh@oleg252-client: oleg252-server: ssh exited with exit code 22
PASS 10b (4s)
== sanity-sec test 10c: verify contiguous range support ========================================================== 17:09:23 (1713388163)
PASS 10c (2s)
== sanity-sec test 10d: verify nodemap range format '*@<net>' support ========================================================== 17:09:26 (1713388166)
PASS 10d (3s)
== sanity-sec test 11: nodemap modify ==================== 17:09:28 (1713388168)
On MGS 192.168.202.152, default.squash_uid = nodemap.default.squash_uid=99 waiting 10 secs for sync
On MGS 192.168.202.152, default.squash_gid = nodemap.default.squash_gid=99 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = nodemap.18656_0.id=17 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = nodemap.18656_1.id=18 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = nodemap.18656_2.id=19 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = waiting 10 secs for sync
PASS 11 (98s)
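Tests 9-10 exercise NID range handling: adding a range twice fails with EEXIST (the "exit code 17" records above), and a range can only be deleted from the nodemap that actually owns it. A minimal sketch, with an illustrative name and range:

    lctl nodemap_add testmap
    lctl nodemap_add_range --name testmap --range '192.168.1.[2-254]@tcp'
    # repeating the same add is expected to fail with "File exists"
    lctl nodemap_add_range --name testmap --range '192.168.1.[2-254]@tcp'
    lctl nodemap_del_range --name testmap --range '192.168.1.[2-254]@tcp'
    lctl nodemap_del testmap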
== sanity-sec test 12: nodemap set squash ids ============ 17:11:06 (1713388266)
On MGS 192.168.202.152, default.squash_uid = nodemap.default.squash_uid=99 waiting 10 secs for sync
On MGS 192.168.202.152, default.squash_gid = nodemap.default.squash_gid=99 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = nodemap.18656_0.id=20 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = nodemap.18656_1.id=21 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = nodemap.18656_2.id=22 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = waiting 10 secs for sync
PASS 12 (98s)
== sanity-sec test 13: test nids ========================= 17:12:45 (1713388365)
On MGS 192.168.202.152, default.squash_uid = nodemap.default.squash_uid=99 waiting 10 secs for sync
On MGS 192.168.202.152, default.squash_gid = nodemap.default.squash_gid=99 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = nodemap.18656_0.id=23 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = nodemap.18656_1.id=24 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = nodemap.18656_2.id=25 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = waiting 10 secs for sync
PASS 13 (100s)
== sanity-sec test 14: test default nodemap nid lookup === 17:14:24 (1713388464)
On MGS 192.168.202.152, default.squash_uid = nodemap.default.squash_uid=99 waiting 10 secs for sync
On MGS 192.168.202.152, default.squash_gid = nodemap.default.squash_gid=99 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = nodemap.18656_0.id=26 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = nodemap.18656_1.id=27 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = nodemap.18656_2.id=28 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = waiting 10 secs for sync
PASS 14 (100s)
== sanity-sec test 15: test id mapping =================== 17:16:04 (1713388564)
On MGS 192.168.202.152, default.squash_uid = nodemap.default.squash_uid=99 waiting 10 secs for sync
On MGS 192.168.202.152, default.squash_gid = nodemap.default.squash_gid=99 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_0.id = nodemap.18656_0.id=29 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = nodemap.18656_1.id=30 waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = nodemap.18656_2.id=31 waiting 10 secs for sync
Start to add idmaps ...
Start to test idmaps ...
Start to update idmaps ...
Start to delete idmaps ...
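The add/test/update/delete phases above drive the idmap interface. A minimal sketch of one uid mapping lifecycle, with illustrative ids:

    # map client uid 500 to filesystem uid 60000 in nodemap testmap
    lctl nodemap_add_idmap --name testmap --idtype uid --idmap 500:60000
    lctl get_param nodemap.testmap.idmap
    lctl nodemap_del_idmap --name testmap --idtype uid --idmap 500:60000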
On MGS 192.168.202.152, 18656_0.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_1.id = waiting 10 secs for sync
On MGS 192.168.202.152, 18656_2.id = waiting 10 secs for sync
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
PASS 15 (143s)
== sanity-sec test 16: test nodemap all_off fileops ====== 17:18:27 (1713388707)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
mkdir -p /mnt/lustre/d16.sanity-sec
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
mkdir -p /mnt/lustre/d16.sanity-sec
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
PASS 16 (114s)
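Tests 16 onward wrap each fileops pass in a toggle of nodemap.active, the global enforcement switch set on the MGS. A minimal sketch:

    # on the MGS: enable nodemap enforcement, verify, then disable again
    lctl nodemap_activate 1
    lctl get_param nodemap.active
    lctl nodemap_activate 0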
== sanity-sec test 17: test nodemap trusted_noadmin fileops ========================================================== 17:20:21 (1713388821)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.202.152, active = nodemap.active=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
mkdir -p /mnt/lustre/d17.sanity-sec
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d17.sanity-sec': Permission denied
mkdir -p /mnt/lustre/d17.sanity-sec
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d17.sanity-sec': Permission denied
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
PASS 17 (502s)
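Tests 17-22 cycle the c0 nodemap through the admin/trusted combinations; each repeated "c0.admin_nodemap=..." record above is the test waiting for one flip to reach all servers. A minimal sketch of the test-17 combination (trusted on, admin off):

    # trusted: client ids pass through unmapped; admin off: root is squashed
    lctl nodemap_modify --name c0 --property trusted --value 1
    lctl nodemap_modify --name c0 --property admin --value 0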
== sanity-sec test 18: test nodemap mapped_noadmin fileops ========================================================== 17:28:43 (1713389323)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.202.152, active = nodemap.active=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync
mkdir -p /mnt/lustre/d18.sanity-sec
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d18.sanity-sec': Permission denied
mkdir -p /mnt/lustre/d18.sanity-sec
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d18.sanity-sec': Permission denied
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
PASS 18 (505s)
== sanity-sec test 19: test nodemap trusted_admin fileops ========================================================== 17:37:08 (1713389828)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.202.152, active = nodemap.active=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
mkdir -p /mnt/lustre/d19.sanity-sec
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
mkdir -p /mnt/lustre/d19.sanity-sec
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
PASS 19 (225s)
== sanity-sec test 20: test nodemap mapped_admin fileops ========================================================== 17:40:53 (1713390053)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.202.152, active = nodemap.active=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync
mkdir -p /mnt/lustre/d20.sanity-sec
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
mkdir -p /mnt/lustre/d20.sanity-sec
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
PASS 20 (226s)
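Tests 21 and 22 repeat the fileops with idmaps installed alongside the trusted/admin flags, so ids without an explicit mapping fall back to the squash values seen throughout the run. A minimal sketch of that combined setup (the 60003:60000 pair matches the idmap dump in test 24 below; the rest is illustrative):

    lctl nodemap_add_idmap --name c0 --idtype uid --idmap 60003:60000
    lctl nodemap_modify --name c0 --property trusted --value 1
    lctl nodemap_modify --name c0 --property admin --value 0
    # unmapped ids get squashed to 99, as configured throughout this run
    lctl nodemap_modify --name c0 --property squash_uid --value 99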
== sanity-sec test 21: test nodemap mapped_trusted_noadmin fileops ========================================================== 17:44:39 (1713390279)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.202.152, active = nodemap.active=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
mkdir -p /mnt/lustre/d21.sanity-sec
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d21.sanity-sec': Permission denied
mkdir -p /mnt/lustre/d21.sanity-sec
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
rm: cannot remove '/mnt/lustre/d21.sanity-sec': Permission denied
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
PASS 21 (482s)
== sanity-sec test 22: test nodemap mapped_trusted_admin fileops ========================================================== 17:52:41 (1713390761)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.202.152, active = nodemap.active=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
mkdir -p /mnt/lustre/d22.sanity-sec
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
mkdir -p /mnt/lustre/d22.sanity-sec
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a647800.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a647800.lru_size=clear
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec]
running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec]
running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre]
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
PASS 22 (225s)
== sanity-sec test 23a: test mapped regular ACLs ========= 17:56:26 (1713390986)
SKIP: sanity-sec test_23a Need 2 clients at least
SKIP 23a (1s)
== sanity-sec test 23b: test mapped default ACLs ========= 17:56:27 (1713390987)
SKIP: sanity-sec test_23b Need 2 clients at least
SKIP 23b (1s)
== sanity-sec test 24: check nodemap proc files for LBUGs and Oopses ========================================================== 17:56:28 (1713390988)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.202.152, active = nodemap.active=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync
nodemap.active=1
nodemap.c0.admin_nodemap=0
nodemap.c0.audit_mode=1
nodemap.c0.deny_unknown=0
nodemap.c0.exports=
[ { nid: 192.168.202.52@tcp, uuid: d56216e5-b9db-4970-83da-bad1b0614dfe },
  { nid: 192.168.202.52@tcp, uuid: d56216e5-b9db-4970-83da-bad1b0614dfe }, ]
nodemap.c0.fileset=
nodemap.c0.forbid_encryption=0
nodemap.c0.id=39
nodemap.c0.idmap=
[ { idtype: uid, client_id: 60003, fs_id: 60000 },
  { idtype: uid, client_id: 60004, fs_id: 60002 },
  { idtype: gid, client_id: 60003, fs_id: 60000 },
  { idtype: gid, client_id: 60004, fs_id: 60002 } ]
nodemap.c0.map_mode=all
nodemap.c0.ranges=
[ { id: 41, start_nid: 192.168.202.52@tcp, end_nid: 192.168.202.52@tcp } ]
nodemap.c0.sepol=
nodemap.c0.squash_gid=99
nodemap.c0.squash_projid=99
nodemap.c0.squash_uid=99
nodemap.c0.trusted_nodemap=0
nodemap.default.admin_nodemap=1
nodemap.default.audit_mode=1
nodemap.default.deny_unknown=0
nodemap.default.exports=
[ { nid: 0@lo, uuid: lustre-MDT0000-mdtlov_UUID },
  { nid: 0@lo, uuid: lustre-MDT0001-mdtlov_UUID },
  { nid: 0@lo, uuid: lustre-MDT0001-lwp-OST0001_UUID },
  { nid: 0@lo, uuid: lustre-MDT0000-lwp-OST0001_UUID },
  { nid: 0@lo, uuid: lustre-MDT0000-mdtlov_UUID },
  { nid: 0@lo, uuid: lustre-MDT0001-mdtlov_UUID },
  { nid: 0@lo, uuid: lustre-MDT0001-lwp-OST0000_UUID },
  { nid: 0@lo, uuid: lustre-MDT0000-lwp-OST0000_UUID },
  { nid: 0@lo, uuid: lustre-MDT0000-mdtlov_UUID },
  { nid: 0@lo, uuid: lustre-MDT0000-lwp-MDT0001_UUID },
  { nid: 0@lo, uuid: lustre-MDT0001-mdtlov_UUID },
  { nid: 0@lo, uuid: lustre-MDT0000-lwp-MDT0000_UUID }, ]
nodemap.default.fileset=
nodemap.default.forbid_encryption=0
nodemap.default.id=0
nodemap.default.map_mode=all
nodemap.default.squash_gid=99
nodemap.default.squash_projid=99
nodemap.default.squash_uid=99
nodemap.default.trusted_nodemap=1
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
PASS 24 (74s)
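Test 24 walks every nodemap proc file, producing the parameter dump above; the same state can be read back ad hoc with lctl. A minimal sketch:

    # inspect one nodemap's mappings, ranges and connected exports
    lctl get_param nodemap.c0.idmap nodemap.c0.ranges nodemap.c0.exports
    lctl get_param nodemap.active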
== sanity-sec test 25: test save and reload nodemap config ========================================================== 17:57:42 (1713391062)
Stopping clients: oleg252-client.virtnet /mnt/lustre (opts:)
Stopping client oleg252-client.virtnet /mnt/lustre opts:
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.202.152, active = nodemap.active=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, test25.id = nodemap.test25.id=41 waiting 10 secs for sync
Checking servers environments
Checking clients oleg252-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg252-server'
oleg252-server: oleg252-server.virtnet: executing load_modules_local
oleg252-server: Loading modules from /home/green/git/lustre-release/lustre
oleg252-server: detected 4 online CPUs by sysfs
oleg252-server: Force libcfs to create 2 CPU partitions
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg252-server: mount.lustre: according to /etc/mtab /dev/mapper/mds1_flakey is already mounted on /mnt/lustre-mds1
pdsh@oleg252-client: oleg252-server: ssh exited with exit code 17
Start of /dev/mapper/mds1_flakey on mds1 failed 17
/home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4968: mdt.lustre-MDT0000.identity_upcall: command not found
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg252-server: mount.lustre: according to /etc/mtab /dev/mapper/mds2_flakey is already mounted on /mnt/lustre-mds2
pdsh@oleg252-client: oleg252-server: ssh exited with exit code 17
Start of /dev/mapper/mds2_flakey on mds2 failed 17
/home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4968: mdt.lustre-MDT0000.identity_upcall: command not found
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg252-server: mount.lustre: according to /etc/mtab /dev/mapper/ost1_flakey is already mounted on /mnt/lustre-ost1
pdsh@oleg252-client: oleg252-server: ssh exited with exit code 17
Start of /dev/mapper/ost1_flakey on ost1 failed 17
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
oleg252-server: mount.lustre: according to /etc/mtab /dev/mapper/ost2_flakey is already mounted on /mnt/lustre-ost2
pdsh@oleg252-client: oleg252-server: ssh exited with exit code 17
Start of /dev/mapper/ost2_flakey on ost2 failed 17
Starting client: oleg252-client.virtnet: -o user_xattr,flock oleg252-server@tcp:/lustre /mnt/lustre
Starting client oleg252-client.virtnet: -o user_xattr,flock oleg252-server@tcp:/lustre /mnt/lustre
Started clients oleg252-client.virtnet: 192.168.202.152@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800a9970800.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800a9970800.idle_timeout=debug
disable quota as required
osd-ldiskfs.track_declares_assert=1
Stopping clients: oleg252-client.virtnet /mnt/lustre (opts:)
Stopping client oleg252-client.virtnet /mnt/lustre opts:
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
Starting client oleg252-client.virtnet: -o user_xattr,flock oleg252-server@tcp:/lustre /mnt/lustre
Started clients oleg252-client.virtnet: 192.168.202.152@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt)
PASS 25 (121s)
SKIP: sanity-sec test_26 skipping SLOW test 26
SKIP: sanity-sec test_27a skipping excluded test 27a (base 27)
SKIP: sanity-sec test_27b skipping excluded test 27b (base 27)
== sanity-sec test 28: check shared key rotation method == 17:59:44 (1713391184)
SKIP: sanity-sec test_28 need shared key feature for this test
SKIP 28 (1s)
== sanity-sec test 29: check for missing shared key ====== 17:59:46 (1713391186)
SKIP: sanity-sec test_29 need shared key feature for this test
SKIP 29 (1s)
== sanity-sec test 30: check for invalid shared key ====== 17:59:47 (1713391187)
SKIP: sanity-sec test_30 need shared key feature for this test
SKIP 30 (1s)
== sanity-sec test 30b: basic test of all different SSK flavors ========================================================== 17:59:48 (1713391188)
SKIP: sanity-sec test_30b need shared key feature for this test
SKIP 30b (1s)
== sanity-sec test 31: client mount option '-o network' == 17:59:49 (1713391189)
SKIP: sanity-sec test_31 without lnetctl support.
SKIP 31 (1s)
== sanity-sec test 32: check for mgssec ================== 17:59:50 (1713391190)
SKIP: sanity-sec test_32 need shared key feature for this test
SKIP 32 (1s)
== sanity-sec test 33: correct srpc flags for MGS connection ========================================================== 17:59:51 (1713391191)
SKIP: sanity-sec test_33 need shared key feature for this test
SKIP 33 (2s)
== sanity-sec test 34: deny_unknown on default nodemap === 17:59:53 (1713391193)
On MGS 192.168.202.152, active = nodemap.active=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.deny_unknown = nodemap.default.deny_unknown=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.deny_unknown = nodemap.default.deny_unknown=0 waiting 10 secs for sync
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
PASS 34 (48s)
== sanity-sec test 35: Check permissions when accessing changelogs ========================================================== 18:00:41 (1713391241)
mdd.lustre-MDT0000.changelog_mask=+hsm
mdd.lustre-MDT0001.changelog_mask=+hsm
Registered 2 changelog users: 'cl1 cl1'
mdd.lustre-MDT0000.changelog_mask=ALL
mdd.lustre-MDT0001.changelog_mask=ALL
lustre-MDT0000.1 02MKDIR 22:00:45.206706214 2024.04.17 0x0 t=[0x200000403:0x1:0x0] j=mkdir.0 ef=0xf u=0:0 nid=192.168.202.52@tcp p=[0x200000007:0x1:0x0] d35.sanity-sec
lustre-MDT0000.2 01CREAT 22:00:45.218649522 2024.04.17 0x0 t=[0x200000403:0x2:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.202.52@tcp p=[0x200000403:0x1:0x0] f35.sanity-sec
lustre-MDT0000.3 10OPEN 22:00:45.218814582 2024.04.17 0x4a t=[0x200000403:0x2:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.202.52@tcp m=-w- p=[0x200000403:0x1:0x0]
lustre-MDT0000.4 11CLOSE 22:00:45.228540593 2024.04.17 0x42 t=[0x200000403:0x2:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.202.52@tcp
lustre-MDT0000: clear the changelog for cl1 of all records
lustre-MDT0001: clear the changelog for cl1 of all records
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.202.152, active = nodemap.active=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
lfs changelog: cannot access changelog: Permission denied
lfs changelog: cannot access changelog: Permission denied
lustre-MDT0000: clear the changelog for cl1 of all records
lfs changelog_clear: cannot purge records for 'cl1': Permission denied (13)
changelog_clear error: Permission denied
lustre-MDT0001: clear the changelog for cl1 of all records
lfs changelog_clear: cannot purge records for 'cl1': Permission denied (13)
changelog_clear error: Permission denied
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
lustre-MDT0001: clear the changelog for cl1 of all records
lustre-MDT0001: Deregistered changelog user #1
lustre-MDT0000: clear the changelog for cl1 of all records
lustre-MDT0000: Deregistered changelog user #1
PASS 35 (89s)
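Test 35 checks that changelog access honours the nodemap admin flag: with c0.admin_nodemap=0 both reading and purging records are denied, as the "Permission denied (13)" records show. A minimal sketch of the operations being exercised (the user id cl1 matches the registration above):

    # register a changelog consumer on the MDT, read records, then purge
    lctl --device lustre-MDT0000 changelog_register
    lfs changelog lustre-MDT0000
    lfs changelog_clear lustre-MDT0000 cl1 0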
== sanity-sec test 36: control if clients can use encryption ========================================================== 18:02:10 (1713391330)
SKIP: sanity-sec test_36 client encryption not supported
SKIP 36 (1s)
== sanity-sec test 37: simple encrypted file ============= 18:02:11 (1713391331)
SKIP: sanity-sec test_37 client encryption not supported
SKIP 37 (1s)
== sanity-sec test 38: encrypted file with hole ========== 18:02:12 (1713391332)
SKIP: sanity-sec test_38 client encryption not supported
SKIP 38 (1s)
== sanity-sec test 39: rewrite data in already encrypted page ========================================================== 18:02:13 (1713391333)
SKIP: sanity-sec test_39 client encryption not supported
SKIP 39 (1s)
== sanity-sec test 40: exercise size of encrypted file === 18:02:14 (1713391334)
SKIP: sanity-sec test_40 client encryption not supported
SKIP 40 (1s)
== sanity-sec test 41: test race on encrypted file size (1) ========================================================== 18:02:15 (1713391335)
SKIP: sanity-sec test_41 client encryption not supported
SKIP 41 (1s)
== sanity-sec test 42: test race on encrypted file size (2) ========================================================== 18:02:17 (1713391337)
SKIP: sanity-sec test_42 client encryption not supported
SKIP 42 (1s)
== sanity-sec test 43: test race on encrypted file size (3) ========================================================== 18:02:18 (1713391338)
SKIP: sanity-sec test_43 client encryption not supported
SKIP 43 (1s)
== sanity-sec test 44: encrypted file access semantics: direct IO ========================================================== 18:02:19 (1713391339)
SKIP: sanity-sec test_44 client encryption not supported
SKIP 44 (1s)
== sanity-sec test 45: encrypted file access semantics: MMAP ========================================================== 18:02:20 (1713391340)
SKIP: sanity-sec test_45 client encryption not supported
SKIP 45 (1s)
== sanity-sec test 46: encrypted file access semantics without key ========================================================== 18:02:21 (1713391341)
SKIP: sanity-sec test_46 client encryption not supported
SKIP 46 (1s)
== sanity-sec test 47: encrypted file access semantics: rename/link ========================================================== 18:02:22 (1713391342)
SKIP: sanity-sec test_47 client encryption not supported
SKIP 47 (1s)
== sanity-sec test 48a: encrypted file access semantics: truncate ========================================================== 18:02:23 (1713391343)
SKIP: sanity-sec test_48a client encryption not supported
SKIP 48a (2s)
== sanity-sec test 48b: encrypted file: concurrent truncate ========================================================== 18:02:25 (1713391345)
SKIP: sanity-sec test_48b client encryption not supported
SKIP 48b (1s)
== sanity-sec test 49: Avoid getxattr for encryption context ========================================================== 18:02:26 (1713391346)
SKIP: sanity-sec test_49 client encryption not supported
SKIP 49 (1s)
== sanity-sec test 50: DoM encrypted file ================ 18:02:27 (1713391347)
SKIP: sanity-sec test_50 client encryption not supported
SKIP 50 (1s)
== sanity-sec test 51: FS capabilities =================== 18:02:28 (1713391348)
mdt.lustre-MDT0000.enable_cap_mask=0xf
mdt.lustre-MDT0001.enable_cap_mask=0xf
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/chown] [500] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
/mnt/lustre/d51.sanity-sec/chown: changing ownership of '/mnt/lustre/d51.sanity-sec/f51.sanity-sec': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/chown] [500] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/touch] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
/mnt/lustre/d51.sanity-sec/touch: cannot touch '/mnt/lustre/d51.sanity-sec/f51.sanity-sec': Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/touch] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
/mnt/lustre/d51.sanity-sec/cat: /mnt/lustre/d51.sanity-sec/f51.sanity-sec: Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
/mnt/lustre/d51.sanity-sec/cat: /mnt/lustre/d51.sanity-sec/f51.sanity-sec: Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec]
mdt.lustre-MDT0000.enable_cap_mask=0x0
mdt.lustre-MDT0001.enable_cap_mask=0x0
PASS 51 (4s)
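Test 51 flips mdt.*.enable_cap_mask and reruns copies of chown/touch/cat, showing each operation failing without the corresponding file capability and succeeding once the mask admits it. A minimal sketch of the toggle (mask values are the ones used above):

    # allow clients to exercise file capabilities on the MDTs, then revoke
    lctl set_param mdt.*.enable_cap_mask=0xf
    runas -u 500 -g 500 /mnt/lustre/d51.sanity-sec/chown 500 /mnt/lustre/d51.sanity-sec/f51.sanity-sec
    lctl set_param mdt.*.enable_cap_mask=0x0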
== sanity-sec test 52: Mirrored encrypted file =========== 18:02:32 (1713391352)
SKIP: sanity-sec test_52 client encryption not supported
SKIP 52 (1s)
== sanity-sec test 53: Mixed PAGE_SIZE clients =========== 18:02:33 (1713391353)
SKIP: sanity-sec test_53 client encryption not supported
SKIP 53 (1s)
== sanity-sec test 54: Encryption policies with fscrypt == 18:02:34 (1713391354)
SKIP: sanity-sec test_54 client encryption not supported
SKIP 54 (1s)
== sanity-sec test 55: access with seteuid =============== 18:02:35 (1713391355)
192.168.202.152@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt 0 0
Stopping client oleg252-client.virtnet /mnt/lustre (opts:)
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.202.152, active = nodemap.active=1 waiting 10 secs for sync
oleg252-server: error: c0 not existing nodemap name
pdsh@oleg252-client: oleg252-server: ssh exited with exit code 1
On MGS 192.168.202.152, c0.id = waiting 10 secs for sync
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.202.152, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
Starting client oleg252-client.virtnet: -o user_xattr,flock oleg252-server@tcp:/lustre /mnt/lustre
Started clients oleg252-client.virtnet: 192.168.202.152@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt)
Initially root ruid:rgid 0:0, euid:egid 0:0
Groups 0 - root,
To switch to effective sanityusr uid:gid 500:500
Groups 500 - sanityusr,
Now root ruid:rgid 0:0, euid:egid 500:500
Groups 500 - sanityusr,
File /mnt/lustre/d55.sanity-sec/sanityusr/testdir_groups/file successfully written
192.168.202.152@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt 0 0
Stopping client oleg252-client.virtnet /mnt/lustre (opts:)
On MGS 192.168.202.152, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.202.152, active = nodemap.active=0 waiting 10 secs for sync
Starting client: oleg252-client.virtnet: -o user_xattr,flock oleg252-server@tcp:/lustre /mnt/lustre
PASS 55 (107s)
== sanity-sec test 56: FIEMAP on encrypted file ========== 18:04:22 (1713391462)
SKIP: sanity-sec test_56 client encryption not supported
SKIP 56 (1s)
== sanity-sec test 57: security.c/encryption.c xattr protection ========================================================== 18:04:23 (1713391463)
SKIP: sanity-sec test_57 client encryption not supported
SKIP 57 (1s)
== sanity-sec test 58: access to enc file's xattrs ======= 18:04:25 (1713391465)
SKIP: sanity-sec test_58 client encryption not supported
SKIP 58 (1s)
== sanity-sec test 59a: mirror resync of encrypted files without key ========================================================== 18:04:26 (1713391466)
SKIP: sanity-sec test_59a client encryption not supported
SKIP 59a (1s)
== sanity-sec test 59b: migrate/extend/split of encrypted files without key ========================================================== 18:04:27 (1713391467)
SKIP: sanity-sec test_59b client encryption not supported
SKIP 59b (1s)
== sanity-sec test 59c: MDT migrate of encrypted files without key ========================================================== 18:04:28 (1713391468)
SKIP: sanity-sec test_59c client encryption not supported
SKIP 59c (1s)
== sanity-sec test 60: Subdirmount of encrypted dir ====== 18:04:29 (1713391469)
SKIP: sanity-sec test_60 client encryption not supported
SKIP 60 (1s)
== sanity-sec test 62: e2fsck with encrypted files ======= 18:04:30 (1713391470)
SKIP: sanity-sec test_62 Need MDS version at least 2.15.51
SKIP 62 (1s)
cleanup: ======================================================
running as uid/gid/euid/egid 500/500/500/500, groups: [ls] [/mnt/lustre]
d17.sanity-sec d18.sanity-sec d21.sanity-sec d35.sanity-sec d51.sanity-sec d55.sanity-sec
running as uid/gid/euid/egid 501/501/501/501, groups: [ls] [/mnt/lustre]
d17.sanity-sec d18.sanity-sec d21.sanity-sec d35.sanity-sec d51.sanity-sec d55.sanity-sec
== sanity-sec test complete, duration 3760 sec =========== 18:04:32 (1713391472)