-----============= acceptance-small: sanity-sec ============----- Wed Apr 17 17:01:25 EDT 2024
excepting tests: 27
skipping tests SLOW=no: 26
oleg316-client.virtnet: executing check_config_client /mnt/lustre
oleg316-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg316-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b65f5000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b65f5000.idle_timeout=debug
disable quota as required
oleg316-server: oleg316-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
without GSS support

== sanity-sec test 0: uid permission ======================================================================================= 17:01:39 (1713387699)
running as uid/gid/euid/egid 500/500/500/500, groups: [ls] [/mnt/lustre]
d0.sanity-sec
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/f0]
touch: cannot touch '/mnt/lustre/f0': Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0.sanity-sec/f1]
running as uid/gid/euid/egid 501/501/501/501, groups: [touch] [/mnt/lustre/d0.sanity-sec/f2]
touch: cannot touch '/mnt/lustre/d0.sanity-sec/f2': Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0.sanity-sec/f4]
running as uid/gid/euid/egid 501/501/501/501, groups: [touch] [/mnt/lustre/d0.sanity-sec/f5]
touch: cannot touch '/mnt/lustre/d0.sanity-sec/f5': Permission denied
PASS 0 (2s)

== sanity-sec test 1: setuid/gid ======================================================================================= 17:01:41 (1713387701)
SKIP: sanity-sec test_1 without GSS support.
SKIP 1 (0s)

== sanity-sec test 4: set supplementary group ========================================================================= 17:01:41 (1713387701)
running as uid/gid/euid/egid 500/500/500/500, groups: [ls] [/mnt/lustre/d4.sanity-sec]
running as uid/gid/euid/egid 501/501/501/501, groups: 1 2 500 [ls] [/mnt/lustre/d4.sanity-sec]
running as uid/gid/euid/egid 501/501/501/501, groups: 1 2 [ls] [/mnt/lustre/d4.sanity-sec]
ls: cannot open directory /mnt/lustre/d4.sanity-sec: Permission denied
PASS 4 (3s)
On MGS 192.168.203.116, default.squash_uid = nodemap.default.squash_uid=99
waiting 10 secs for sync
On MGS 192.168.203.116, default.squash_gid = nodemap.default.squash_gid=99
waiting 10 secs for sync
On MGS 192.168.203.116, default.squash_projid = nodemap.default.squash_projid=99
waiting 10 secs for sync

== sanity-sec test 7: nodemap create and delete ========== 17:02:17 (1713387737)
On MGS 192.168.203.116, default.squash_uid = nodemap.default.squash_uid=99
waiting 10 secs for sync
On MGS 192.168.203.116, default.squash_gid = nodemap.default.squash_gid=99
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_0.id = nodemap.35042_0.id=1
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id = nodemap.35042_1.id=2
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id = nodemap.35042_2.id=3
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_0.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id =
waiting 10 secs for sync
PASS 7 (90s)

== sanity-sec test 8: nodemap reject duplicates ========== 17:03:47 (1713387827)
On MGS 192.168.203.116, default.squash_uid = nodemap.default.squash_uid=99
waiting 10 secs for sync
On MGS 192.168.203.116, default.squash_gid = nodemap.default.squash_gid=99
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_0.id = nodemap.35042_0.id=4
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id = nodemap.35042_1.id=5
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id = nodemap.35042_2.id=6
waiting 10 secs for sync
On MGS 192.168.203.116, default.squash_uid = nodemap.default.squash_uid=99
waiting 10 secs for sync
On MGS 192.168.203.116, default.squash_gid = nodemap.default.squash_gid=99
waiting 10 secs for sync
oleg316-server: error: 35042_0 existing nodemap name
pdsh@oleg316-client: oleg316-server: ssh exited with exit code 1
nodemap_add 35042_0 failed with 1
On MGS 192.168.203.116, 35042_0.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id =
waiting 10 secs for sync
PASS 8 (113s)

== sanity-sec test 9: nodemap range add ================== 17:05:40 (1713387940)
On MGS 192.168.203.116, default.squash_uid = nodemap.default.squash_uid=99
waiting 10 secs for sync
On MGS 192.168.203.116, default.squash_gid = nodemap.default.squash_gid=99
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_0.id = nodemap.35042_0.id=7
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id = nodemap.35042_1.id=8
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id = nodemap.35042_2.id=9
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_0.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id =
waiting 10 secs for sync
PASS 9 (93s)

== sanity-sec test 10a: nodemap reject duplicate ranges == 17:07:13 (1713388033)
On MGS 192.168.203.116, default.squash_uid = nodemap.default.squash_uid=99
waiting 10 secs for sync
On MGS 192.168.203.116, default.squash_gid = nodemap.default.squash_gid=99
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_0.id = nodemap.35042_0.id=10
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id = nodemap.35042_1.id=11
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id = nodemap.35042_2.id=12
waiting 10 secs for sync
oleg316-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg316-server: error: nodemap_add_range: cannot add range '43.0.0.[1-253]@tcp' to nodemap '35042_0': File exists
pdsh@oleg316-client: oleg316-server: ssh exited with exit code 17
oleg316-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg316-server: error: nodemap_add_range: cannot add range '43.0.1.[1-253]@tcp' to nodemap '35042_0': File exists
pdsh@oleg316-client: oleg316-server: ssh exited with exit code 17
oleg316-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg316-server: error: nodemap_add_range: cannot add range '43.1.0.[1-253]@tcp' to nodemap '35042_1': File exists
pdsh@oleg316-client: oleg316-server: ssh exited with exit code 17
oleg316-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg316-server: error: nodemap_add_range: cannot add range '43.1.1.[1-253]@tcp' to nodemap '35042_1': File exists
pdsh@oleg316-client: oleg316-server: ssh exited with exit code 17
oleg316-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg316-server: error: nodemap_add_range: cannot add range '43.2.0.[1-253]@tcp' to nodemap '35042_2': File exists
pdsh@oleg316-client: oleg316-server: ssh exited with exit code 17
oleg316-server: error: invalid ioctl: 000ce042 errno: 17: File exists
oleg316-server: error: nodemap_add_range: cannot add range '43.2.1.[1-253]@tcp' to nodemap '35042_2': File exists
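
The nodemap management tests in this stretch of the log (7 "create and delete", 8 "reject duplicates", 9 "range add", 10a "reject duplicate ranges") drive the lctl nodemap administration commands on the MGS; the repeated "On MGS ... waiting 10 secs for sync" lines are the framework polling a nodemap parameter until the change has reached all servers. As a rough sketch only, not taken from this run, the sequence being exercised is approximately the following; the nodemap name "demo0" is a placeholder, while the NID range is copied from the test 10a errors above:

  # on the MGS node; "demo0" is an illustrative placeholder name
  lctl nodemap_add demo0                                             # test 7: create a nodemap
  lctl get_param nodemap.demo0.id                                    # the kind of parameter the sync wait polls
  lctl nodemap_add demo0 || echo "duplicate nodemap name rejected"   # test 8: a second add must fail
  lctl nodemap_add_range --name demo0 --range '43.0.0.[1-253]@tcp'   # test 9: attach a client NID range
  lctl nodemap_add_range --name demo0 --range '43.0.0.[1-253]@tcp' \
      || echo "duplicate range rejected: File exists"                # test 10a: re-adding the same range must fail
  lctl nodemap_del_range --name demo0 --range '43.0.0.[1-253]@tcp'   # ranges are removed from their owning nodemap
  lctl nodemap_del demo0                                             # cleanup, as at the end of each test

Nodemap configuration is entered on the MGS and then distributed to the other servers, so each change above is followed by a sync wait before the next check, which helps explain why these tests take 90 seconds or more despite doing little work.
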
pdsh@oleg316-client: oleg316-server: ssh exited with exit code 17
On MGS 192.168.203.116, 35042_0.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id =
waiting 10 secs for sync
PASS 10a (95s)

== sanity-sec test 10b: delete range from the correct nodemap ========================================================== 17:08:48 (1713388128)
oleg316-server: error: invalid ioctl: 000ce043 errno: 22: Invalid argument
oleg316-server: error: nodemap_del_range: cannot delete range '192.168.19.[0-255]@o2ib20' to nodemap 'nodemap2': rc = -22
pdsh@oleg316-client: oleg316-server: ssh exited with exit code 22
PASS 10b (3s)

== sanity-sec test 10c: verify contiguous range support ========================================================== 17:08:52 (1713388132)
PASS 10c (2s)

== sanity-sec test 10d: verify nodemap range format '*@' support ========================================================== 17:08:54 (1713388134)
PASS 10d (3s)

== sanity-sec test 11: nodemap modify ==================== 17:08:57 (1713388137)
On MGS 192.168.203.116, default.squash_uid = nodemap.default.squash_uid=99
waiting 10 secs for sync
On MGS 192.168.203.116, default.squash_gid = nodemap.default.squash_gid=99
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_0.id = nodemap.35042_0.id=17
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id = nodemap.35042_1.id=18
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id = nodemap.35042_2.id=19
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_0.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id =
waiting 10 secs for sync
PASS 11 (93s)

== sanity-sec test 12: nodemap set squash ids ============ 17:10:30 (1713388230)
On MGS 192.168.203.116, default.squash_uid = nodemap.default.squash_uid=99
waiting 10 secs for sync
On MGS 192.168.203.116, default.squash_gid = nodemap.default.squash_gid=99
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_0.id = nodemap.35042_0.id=20
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id = nodemap.35042_1.id=21
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id = nodemap.35042_2.id=22
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_0.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id =
waiting 10 secs for sync
PASS 12 (94s)

== sanity-sec test 13: test nids ========================= 17:12:04 (1713388324)
On MGS 192.168.203.116, default.squash_uid = nodemap.default.squash_uid=99
waiting 10 secs for sync
On MGS 192.168.203.116, default.squash_gid = nodemap.default.squash_gid=99
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_0.id = nodemap.35042_0.id=23
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id = nodemap.35042_1.id=24
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id = nodemap.35042_2.id=25
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_0.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_1.id =
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_2.id =
waiting 10 secs for sync
PASS 13 (96s)

== sanity-sec test 14: test default nodemap nid lookup === 17:13:40 (1713388420)
On MGS 192.168.203.116, default.squash_uid = nodemap.default.squash_uid=99
waiting 10 secs for sync
On MGS 192.168.203.116, default.squash_gid = nodemap.default.squash_gid=99
waiting 10 secs for sync
On MGS 192.168.203.116, 35042_0.id = nodemap.35042_0.id=26 waiting 10 secs
for sync On MGS 192.168.203.116, 35042_1.id = nodemap.35042_1.id=27 waiting 10 secs for sync On MGS 192.168.203.116, 35042_2.id = nodemap.35042_2.id=28 waiting 10 secs for sync On MGS 192.168.203.116, 35042_0.id = waiting 10 secs for sync On MGS 192.168.203.116, 35042_1.id = waiting 10 secs for sync On MGS 192.168.203.116, 35042_2.id = waiting 10 secs for sync PASS 14 (96s) == sanity-sec test 15: test id mapping =================== 17:15:16 (1713388516) On MGS 192.168.203.116, default.squash_uid = nodemap.default.squash_uid=99 waiting 10 secs for sync On MGS 192.168.203.116, default.squash_gid = nodemap.default.squash_gid=99 waiting 10 secs for sync On MGS 192.168.203.116, 35042_0.id = nodemap.35042_0.id=29 waiting 10 secs for sync On MGS 192.168.203.116, 35042_1.id = nodemap.35042_1.id=30 waiting 10 secs for sync On MGS 192.168.203.116, 35042_2.id = nodemap.35042_2.id=31 waiting 10 secs for sync Start to add idmaps ... Start to test idmaps ... Start to update idmaps ... Start to delete idmaps ... On MGS 192.168.203.116, 35042_0.id = waiting 10 secs for sync On MGS 192.168.203.116, 35042_1.id = waiting 10 secs for sync On MGS 192.168.203.116, 35042_2.id = waiting 10 secs for sync On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync PASS 15 (140s) == sanity-sec test 16: test nodemap all_off fileops ====== 17:17:36 (1713388656) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync mkdir -p /mnt/lustre/d16.sanity-sec ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] mkdir -p /mnt/lustre/d16.sanity-sec ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d16.sanity-sec/f16.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync PASS 16 (142s) == sanity-sec test 17: test nodemap trusted_noadmin fileops 
========================================================== 17:19:58 (1713388798) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.116, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync mkdir -p /mnt/lustre/d17.sanity-sec On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d17.sanity-sec': Permission denied mkdir -p /mnt/lustre/d17.sanity-sec On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 
192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d17.sanity-sec/f17.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d17.sanity-sec': Permission denied On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync PASS 17 (535s) == sanity-sec test 18: test nodemap mapped_noadmin fileops ========================================================== 17:28:53 (1713389333) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.116, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync mkdir -p /mnt/lustre/d18.sanity-sec On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = 
nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d18.sanity-sec': Permission denied mkdir -p /mnt/lustre/d18.sanity-sec On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] On MGS 
192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d18.sanity-sec/f18.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d18.sanity-sec': Permission denied On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync PASS 18 (531s) == sanity-sec test 19: test nodemap trusted_admin fileops ========================================================== 17:37:44 (1713389864) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.116, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync mkdir -p /mnt/lustre/d19.sanity-sec On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] mkdir -p /mnt/lustre/d19.sanity-sec On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, 
c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d19.sanity-sec/f19.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync PASS 19 (265s) == sanity-sec test 20: test nodemap mapped_admin fileops ========================================================== 17:42:09 (1713390129) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.116, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync mkdir -p /mnt/lustre/d20.sanity-sec On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] mkdir -p /mnt/lustre/d20.sanity-sec On MGS 192.168.203.116, c0.admin_nodemap = 
nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d20.sanity-sec/f20.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync PASS 20 (265s) == sanity-sec test 21: test nodemap mapped_trusted_noadmin fileops ========================================================== 17:46:35 (1713390395) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.116, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync mkdir -p /mnt/lustre/d21.sanity-sec On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] 
[-q] [/mnt/lustre] On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d21.sanity-sec': Permission denied mkdir -p /mnt/lustre/d21.sanity-sec On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d21.sanity-sec/f21.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 
[lfs] [quota] [-q] [/mnt/lustre] rm: cannot remove '/mnt/lustre/d21.sanity-sec': Permission denied On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync PASS 21 (533s) == sanity-sec test 22: test nodemap mapped_trusted_admin fileops ========================================================== 17:55:27 (1713390927) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.116, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync mkdir -p /mnt/lustre/d22.sanity-sec On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] mkdir -p /mnt/lustre/d22.sanity-sec On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b65f5000.lru_size=clear On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 0/0/0/0, groups: 0 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as uid/gid/euid/egid 0/0/0/0, groups: 0 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] running as uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [rm] [/mnt/lustre/d22.sanity-sec/f22.sanity-sec] sleep 5 for ZFS zfs sleep 5 for ZFS zfs running as 
uid/gid/euid/egid 60003/60003/60003/60003, groups: 60003 [lfs] [quota] [-q] [/mnt/lustre] On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync PASS 22 (254s) == sanity-sec test 23a: test mapped regular ACLs ========= 17:59:41 (1713391181) SKIP: sanity-sec test_23a Need 2 clients at least SKIP 23a (1s) == sanity-sec test 23b: test mapped default ACLs ========= 17:59:42 (1713391182) SKIP: sanity-sec test_23b Need 2 clients at least SKIP 23b (1s) == sanity-sec test 24: check nodemap proc files for LBUGs and Oopses ========================================================== 17:59:43 (1713391183) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.116, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync nodemap.active=1 nodemap.c0.admin_nodemap=0 nodemap.c0.audit_mode=1 nodemap.c0.deny_unknown=0 nodemap.c0.exports= [ { nid: 192.168.203.16@tcp, uuid: a4bc9a35-fcc4-42a3-8a53-cd62e2d91f9b }, ] nodemap.c0.fileset= nodemap.c0.forbid_encryption=0 nodemap.c0.id=39 nodemap.c0.idmap= [ { idtype: uid, client_id: 60003, fs_id: 60000 }, { idtype: uid, client_id: 60004, fs_id: 60002 }, { idtype: gid, client_id: 60003, fs_id: 60000 }, { idtype: gid, client_id: 60004, fs_id: 60002 } ] nodemap.c0.map_mode=all nodemap.c0.ranges= [ { id: 41, start_nid: 192.168.203.16@tcp, end_nid: 192.168.203.16@tcp } ] nodemap.c0.sepol= nodemap.c0.squash_gid=99 nodemap.c0.squash_projid=99 nodemap.c0.squash_uid=99 nodemap.c0.trusted_nodemap=0 nodemap.default.admin_nodemap=1 nodemap.default.audit_mode=1 nodemap.default.deny_unknown=0 nodemap.default.exports= [ { nid: 0@lo, uuid: lustre-MDT0000-mdtlov_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-mdtlov_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-lwp-OST0001_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-lwp-MDT0000_UUID }, { nid: 0@lo, uuid: lustre-MDT0000-lwp-OST0000_UUID }, ] nodemap.default.fileset= nodemap.default.forbid_encryption=0 nodemap.default.id=0 nodemap.default.map_mode=all nodemap.default.squash_gid=99 nodemap.default.squash_projid=99 nodemap.default.squash_uid=99 nodemap.default.trusted_nodemap=1 On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync PASS 24 (74s) == sanity-sec test 25: test save and reload nodemap config ========================================================== 18:00:57 (1713391257) Stopping clients: oleg316-client.virtnet /mnt/lustre (opts:) Stopping client oleg316-client.virtnet /mnt/lustre opts: mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.116, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, test25.id = nodemap.test25.id=41 waiting 
10 secs for sync Checking servers environments Checking clients oleg316-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg316-server' oleg316-server: oleg316-server.virtnet: executing load_modules_local oleg316-server: Loading modules from /home/green/git/lustre-release/lustre oleg316-server: detected 4 online CPUs by sysfs oleg316-server: Force libcfs to create 2 CPU partitions oleg316-server: libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory Setup mgs, mdt, osts Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg316-server: mount.lustre: according to /etc/mtab lustre-mdt1/mdt1 is already mounted on /mnt/lustre-mds1 pdsh@oleg316-client: oleg316-server: ssh exited with exit code 17 Start of lustre-mdt1/mdt1 on mds1 failed 17 /home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4968: mdt.lustre-MDT0000.identity_upcall: command not found Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 oleg316-server: mount.lustre: according to /etc/mtab lustre-ost1/ost1 is already mounted on /mnt/lustre-ost1 pdsh@oleg316-client: oleg316-server: ssh exited with exit code 17 Start of lustre-ost1/ost1 on ost1 failed 17 Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2 oleg316-server: mount.lustre: according to /etc/mtab lustre-ost2/ost2 is already mounted on /mnt/lustre-ost2 pdsh@oleg316-client: oleg316-server: ssh exited with exit code 17 Start of lustre-ost2/ost2 on ost2 failed 17 Starting client: oleg316-client.virtnet: -o user_xattr,flock oleg316-server@tcp:/lustre /mnt/lustre Starting client oleg316-client.virtnet: -o user_xattr,flock oleg316-server@tcp:/lustre /mnt/lustre Started clients oleg316-client.virtnet: 192.168.203.116@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff880136b23800.idle_timeout=debug osc.lustre-OST0001-osc-ffff880136b23800.idle_timeout=debug disable quota as required Stopping clients: oleg316-client.virtnet /mnt/lustre (opts:) Stopping client oleg316-client.virtnet /mnt/lustre opts: On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync Starting client oleg316-client.virtnet: -o user_xattr,flock oleg316-server@tcp:/lustre /mnt/lustre Started clients oleg316-client.virtnet: 192.168.203.116@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt) PASS 25 (117s) SKIP: sanity-sec test_26 skipping SLOW test 26 SKIP: sanity-sec test_27a skipping excluded test 27a (base 27) SKIP: sanity-sec test_27b skipping excluded test 27b (base 27) == sanity-sec test 28: check shared key rotation method == 18:02:55 (1713391375) SKIP: sanity-sec test_28 need shared key feature for this test SKIP 28 (1s) == sanity-sec test 29: check for missing shared key ====== 18:02:56 (1713391376) SKIP: sanity-sec test_29 need shared key feature for this test SKIP 29 (1s) == sanity-sec test 30: check for invalid shared key ====== 18:02:57 (1713391377) SKIP: sanity-sec test_30 need shared key feature for this test SKIP 30 (1s) == sanity-sec test 30b: basic 
test of all different SSK flavors ========================================================== 18:02:58 (1713391378) SKIP: sanity-sec test_30b need shared key feature for this test SKIP 30b (1s) == sanity-sec test 31: client mount option '-o network' == 18:03:00 (1713391380) SKIP: sanity-sec test_31 without lnetctl support. SKIP 31 (2s) == sanity-sec test 32: check for mgssec ================== 18:03:01 (1713391381) SKIP: sanity-sec test_32 need shared key feature for this test SKIP 32 (1s) == sanity-sec test 33: correct srpc flags for MGS connection ========================================================== 18:03:02 (1713391382) SKIP: sanity-sec test_33 need shared key feature for this test SKIP 33 (1s) == sanity-sec test 34: deny_unknown on default nodemap === 18:03:03 (1713391383) On MGS 192.168.203.116, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.116, default.deny_unknown = nodemap.default.deny_unknown=1 waiting 10 secs for sync On MGS 192.168.203.116, default.deny_unknown = nodemap.default.deny_unknown=0 waiting 10 secs for sync On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync PASS 34 (48s) == sanity-sec test 35: Check permissions when accessing changelogs ========================================================== 18:03:51 (1713391431) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl1' mdd.lustre-MDT0000.changelog_mask=ALL lustre-MDT0000.1 02MKDIR 22:03:54.890108905 2024.04.17 0x0 t=[0x200000402:0x1:0x0] j=mkdir.0 ef=0xf u=0:0 nid=192.168.203.16@tcp p=[0x200000007:0x1:0x0] d35.sanity-sec lustre-MDT0000.2 01CREAT 22:03:54.901548927 2024.04.17 0x0 t=[0x200000402:0x2:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.203.16@tcp p=[0x200000402:0x1:0x0] f35.sanity-sec lustre-MDT0000.3 10OPEN 22:03:54.901784600 2024.04.17 0x4a t=[0x200000402:0x2:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.203.16@tcp m=-w- p=[0x200000402:0x1:0x0] lustre-MDT0000.4 11CLOSE 22:03:54.928456791 2024.04.17 0x42 t=[0x200000402:0x2:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.203.16@tcp lustre-MDT0000: clear the changelog for cl1 of all records mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.116, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync lfs changelog: cannot access changelog: Permission denied lustre-MDT0000: clear the changelog for cl1 of all records lfs changelog_clear: cannot purge records for 'cl1': Permission denied (13) changelog_clear error: Permission denied On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0000: Deregistered changelog user #1 PASS 35 (87s) == sanity-sec test 36: control if clients can use encryption ========================================================== 18:05:18 (1713391518) SKIP: sanity-sec test_36 client encryption not supported SKIP 36 (1s) == sanity-sec test 37: simple encrypted file ============= 18:05:19 (1713391519) SKIP: sanity-sec test_37 client encryption not supported SKIP 37 (1s) == sanity-sec 
test 38: encrypted file with hole ========== 18:05:20 (1713391520) SKIP: sanity-sec test_38 client encryption not supported SKIP 38 (1s) == sanity-sec test 39: rewrite data in already encrypted page ========================================================== 18:05:21 (1713391521) SKIP: sanity-sec test_39 client encryption not supported SKIP 39 (1s) == sanity-sec test 40: exercise size of encrypted file === 18:05:22 (1713391522) SKIP: sanity-sec test_40 client encryption not supported SKIP 40 (1s) == sanity-sec test 41: test race on encrypted file size (1) ========================================================== 18:05:23 (1713391523) SKIP: sanity-sec test_41 client encryption not supported SKIP 41 (1s) == sanity-sec test 42: test race on encrypted file size (2) ========================================================== 18:05:24 (1713391524) SKIP: sanity-sec test_42 client encryption not supported SKIP 42 (1s) == sanity-sec test 43: test race on encrypted file size (3) ========================================================== 18:05:25 (1713391525) SKIP: sanity-sec test_43 client encryption not supported SKIP 43 (1s) == sanity-sec test 44: encrypted file access semantics: direct IO ========================================================== 18:05:26 (1713391526) SKIP: sanity-sec test_44 client encryption not supported SKIP 44 (2s) == sanity-sec test 45: encrypted file access semantics: MMAP ========================================================== 18:05:28 (1713391528) SKIP: sanity-sec test_45 client encryption not supported SKIP 45 (1s) == sanity-sec test 46: encrypted file access semantics without key ========================================================== 18:05:29 (1713391529) SKIP: sanity-sec test_46 client encryption not supported SKIP 46 (1s) == sanity-sec test 47: encrypted file access semantics: rename/link ========================================================== 18:05:30 (1713391530) SKIP: sanity-sec test_47 client encryption not supported SKIP 47 (1s) == sanity-sec test 48a: encrypted file access semantics: truncate ========================================================== 18:05:31 (1713391531) SKIP: sanity-sec test_48a client encryption not supported SKIP 48a (1s) == sanity-sec test 48b: encrypted file: concurrent truncate ========================================================== 18:05:32 (1713391532) SKIP: sanity-sec test_48b client encryption not supported SKIP 48b (1s) == sanity-sec test 49: Avoid getxattr for encryption context ========================================================== 18:05:33 (1713391533) SKIP: sanity-sec test_49 client encryption not supported SKIP 49 (1s) == sanity-sec test 50: DoM encrypted file ================ 18:05:34 (1713391534) SKIP: sanity-sec test_50 client encryption not supported SKIP 50 (0s) == sanity-sec test 51: FS capabilities =================== 18:05:34 (1713391534) mdt.lustre-MDT0000.enable_cap_mask=0xf running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/chown] [500] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] /mnt/lustre/d51.sanity-sec/chown: changing ownership of '/mnt/lustre/d51.sanity-sec/f51.sanity-sec': Operation not permitted running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/chown] [500] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/touch] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] /mnt/lustre/d51.sanity-sec/touch: cannot touch '/mnt/lustre/d51.sanity-sec/f51.sanity-sec': Permission 
denied running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/touch] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] /mnt/lustre/d51.sanity-sec/cat: /mnt/lustre/d51.sanity-sec/f51.sanity-sec: Permission denied running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] /mnt/lustre/d51.sanity-sec/cat: /mnt/lustre/d51.sanity-sec/f51.sanity-sec: Permission denied running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/d51.sanity-sec/cat] [/mnt/lustre/d51.sanity-sec/f51.sanity-sec] mdt.lustre-MDT0000.enable_cap_mask=0x0 PASS 51 (4s) == sanity-sec test 52: Mirrored encrypted file =========== 18:05:38 (1713391538) SKIP: sanity-sec test_52 client encryption not supported SKIP 52 (1s) == sanity-sec test 53: Mixed PAGE_SIZE clients =========== 18:05:39 (1713391539) SKIP: sanity-sec test_53 client encryption not supported SKIP 53 (1s) == sanity-sec test 54: Encryption policies with fscrypt == 18:05:40 (1713391540) SKIP: sanity-sec test_54 client encryption not supported SKIP 54 (1s) == sanity-sec test 55: access with seteuid =============== 18:05:41 (1713391541) 192.168.203.116@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt 0 0 Stopping client oleg316-client.virtnet /mnt/lustre (opts:) mdt.lustre-MDT0000.identity_upcall=NONE On MGS 192.168.203.116, active = nodemap.active=1 waiting 10 secs for sync oleg316-server: error: c0 not existing nodemap name pdsh@oleg316-client: oleg316-server: ssh exited with exit code 1 On MGS 192.168.203.116, c0.id = waiting 10 secs for sync On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.203.116, c0.admin_nodemap = nodemap.c0.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync Starting client oleg316-client.virtnet: -o user_xattr,flock oleg316-server@tcp:/lustre /mnt/lustre Started clients oleg316-client.virtnet: 192.168.203.116@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt) Initially root ruid:rgid 0:0, euid:egid 0:0 Groups 0 - root, To switch to effective sanityusr uid:gid 500:500 Groups 500 - sanityusr, Now root ruid:rgid 0:0, euid:egid 500:500 Groups 500 - sanityusr, File /mnt/lustre/d55.sanity-sec/sanityusr/testdir_groups/file successfully written 192.168.203.116@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt 0 0 Stopping client oleg316-client.virtnet /mnt/lustre (opts:) On MGS 192.168.203.116, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.203.116, active = nodemap.active=0 waiting 10 secs for sync Starting client: oleg316-client.virtnet: -o user_xattr,flock oleg316-server@tcp:/lustre /mnt/lustre PASS 55 (106s) == sanity-sec test 56: FIEMAP on encrypted file ========== 
18:07:27 (1713391647) SKIP: sanity-sec test_56 skip ZFS backend SKIP 56 (1s) == sanity-sec test 57: security.c/encryption.c xattr protection ========================================================== 18:07:29 (1713391649) SKIP: sanity-sec test_57 skip ZFS backend SKIP 57 (1s) == sanity-sec test 58: access to enc file's xattrs ======= 18:07:30 (1713391650) SKIP: sanity-sec test_58 skip ZFS backend SKIP 58 (1s) == sanity-sec test 59a: mirror resync of encrypted files without key ========================================================== 18:07:31 (1713391651) SKIP: sanity-sec test_59a client encryption not supported SKIP 59a (1s) == sanity-sec test 59b: migrate/extend/split of encrypted files without key ========================================================== 18:07:32 (1713391652) SKIP: sanity-sec test_59b client encryption not supported SKIP 59b (1s) == sanity-sec test 59c: MDT migrate of encrypted files without key ========================================================== 18:07:33 (1713391653) SKIP: sanity-sec test_59c client encryption not supported SKIP 59c (1s) == sanity-sec test 60: Subdirmount of encrypted dir ====== 18:07:34 (1713391654) SKIP: sanity-sec test_60 client encryption not supported SKIP 60 (1s) == sanity-sec test 62: e2fsck with encrypted files ======= 18:07:35 (1713391655) SKIP: sanity-sec test_62 skip ZFS backend SKIP 62 (1s) cleanup: ====================================================== running as uid/gid/euid/egid 500/500/500/500, groups: [ls] [/mnt/lustre] d17.sanity-sec d18.sanity-sec d21.sanity-sec d35.sanity-sec d51.sanity-sec d55.sanity-sec running as uid/gid/euid/egid 501/501/501/501, groups: [ls] [/mnt/lustre] d17.sanity-sec d18.sanity-sec d21.sanity-sec d35.sanity-sec d51.sanity-sec d55.sanity-sec == sanity-sec test complete, duration 3972 sec =========== 18:07:37 (1713391657)
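
The fileops tests above (16 "all_off" through 22 "mapped_trusted_admin") repeat the same client-side file operations while toggling the admin and trusted properties of the c0 nodemap, and test 15 adds and removes explicit uid/gid idmaps; that is why the log alternates c0.admin_nodemap between 0 and 1 so many times. As a rough sketch only, the MGS-side commands behind those property and idmap changes look approximately like the following; the c0 name and the 60003:60000 uid mapping are taken from the test 24 parameter dump above, while the property values chosen here are purely illustrative:

  # on the MGS node
  lctl nodemap_activate 1                                            # nodemap.active=1, as at the start of each fileops test
  lctl nodemap_modify --name c0 --property admin --value 0           # admin=0: root on matching clients is squashed
  lctl nodemap_modify --name c0 --property trusted --value 1         # trusted=1: client IDs pass through without mapping
  lctl nodemap_modify --name c0 --property squash_uid --value 99     # unmapped IDs squash to uid 99
  lctl nodemap_add_idmap --name c0 --idtype uid --idmap 60003:60000  # map client uid 60003 to filesystem uid 60000 (test 15)
  lctl get_param nodemap.c0.idmap nodemap.c0.admin_nodemap nodemap.c0.trusted_nodemap
  lctl nodemap_del_idmap --name c0 --idtype uid --idmap 60003:60000
  lctl nodemap_activate 0                                            # nodemap.active=0, as in the cleanup of each test

The "rm: cannot remove ... Permission denied" results for uid 60003 in tests 17, 18 and 21 (the noadmin cases), and their absence in tests 19, 20 and 22 (the admin cases), reflect this admin/trusted matrix.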