feat: add Public-Viewer support #443

Merged Sep 26, 2024 · 87 commits (merged into master from pv-532)

Commits
1cfcd94
fix: use UTC time in tests
filariow Jul 12, 2024
2c897f6
add public-viewer support
filariow Mar 18, 2024
52156fe
enable public-viewer support in spacelister_get
filariow Jul 9, 2024
573aecc
add unit tests for pkg/configuration
filariow Jul 12, 2024
fc99e4e
add tests for log
filariow Jul 12, 2024
962c87c
add unit tests for proxy
filariow Jul 12, 2024
ec8b470
Apply suggestions from code review
filariow Jul 15, 2024
8193b55
Merge branch 'master' into pv-532
filariow Jul 15, 2024
91ff1aa
fix linter
filariow Jul 15, 2024
6c2a595
return empty slice instead of nil
filariow Jul 15, 2024
ca4b882
simplify getUserSpaceBinding
filariow Jul 15, 2024
7f015b5
cleanup tests
filariow Jul 15, 2024
941e052
refactor spacelister_get tests
filariow Jul 15, 2024
4981f91
refactor
filariow Jul 15, 2024
76e43eb
fix spacelister list
filariow Jul 15, 2024
a85b104
enhance spacelister get tests
filariow Jul 15, 2024
75fec5c
Merge remote-tracking branch 'upstream/master' into pv-532
filariow Jul 16, 2024
0d3bfc5
remove unreachable check
filariow Jul 16, 2024
6941c15
Merge branch 'master' into pv-532
filariow Jul 22, 2024
9136bb0
add tests for cluster-service
filariow Jul 22, 2024
7ec2cb4
add user validation test
filariow Jul 22, 2024
f362ac5
add tests for validateSpace
filariow Jul 22, 2024
3b7d1c2
use ptr
filariow Jul 22, 2024
650a6a2
add test case for non-ready user
filariow Jul 22, 2024
e275514
Merge remote-tracking branch 'upstream/master' into pv-532
filariow Jul 23, 2024
4e51460
fix configuration tests comments
filariow Jul 25, 2024
dfa1533
Update pkg/proxy/handlers/spacelister_get.go
filariow Jul 25, 2024
69be0bf
refactor proxy.go
filariow Jul 25, 2024
ef8e333
refactor comments and function names
filariow Jul 25, 2024
8cf0f7b
refactor
filariow Jul 25, 2024
29dd7fe
fix comments
filariow Jul 25, 2024
00dd2f4
improve code readability
filariow Jul 25, 2024
f9b9b5a
improve cluster_service code reuse
filariow Jul 25, 2024
c683c65
Add pkg/context tests for public-viewer
filariow Jul 25, 2024
f533353
Merge remote-tracking branch 'upstream/master' into pv-532
filariow Jul 31, 2024
08cb567
Merge remote-tracking branch 'upstream/master' into pv-532
filariow Jul 31, 2024
43f6c58
Merge remote-tracking branch 'upstream/master' into pv-532
filariow Aug 5, 2024
3452b2b
improve tests
filariow Aug 12, 2024
ac24de1
Merge remote-tracking branch 'upstream/master' into pv-532
filariow Aug 20, 2024
4d863bb
Merge remote-tracking branch 'upstream/master' into pv-532
filariow Aug 26, 2024
2b62f6c
rollback changes to pkg/server/in_cluster_application.go
filariow Aug 26, 2024
3ccbee7
add PublicViewer's middleware
filariow Aug 1, 2024
276323b
remove ctx set in favor of middleware
filariow Aug 26, 2024
8d944fb
fix error message
filariow Aug 26, 2024
f74ee96
Update pkg/proxy/proxy.go
filariow Aug 27, 2024
d278ea7
remove not needed checks
filariow Aug 27, 2024
6f97b34
remove unused parameter
filariow Aug 27, 2024
7934119
Update pkg/proxy/proxy.go
filariow Aug 28, 2024
30b980b
fix tests
filariow Sep 3, 2024
51fe1a4
improve comments
filariow Sep 3, 2024
e27d558
remove redundant check
filariow Sep 3, 2024
96b1646
improve returned error and logging
filariow Sep 3, 2024
048f29b
improve comments
filariow Sep 3, 2024
d9299c5
improve comments
filariow Sep 3, 2024
6408242
check if public-viewer has access to space before forwarding
filariow Sep 3, 2024
76ca225
Merge branch 'master' into pv-532
filariow Sep 3, 2024
a2bc3f8
add test for not signed up user
filariow Sep 12, 2024
1995b76
Merge remote-tracking branch 'upstream/master' into pv-532
filariow Sep 12, 2024
47ab59e
fix typo in comment
filariow Sep 12, 2024
d4fee9b
refactor user names and cleanup test
filariow Sep 12, 2024
e614e38
rename notReadyUser to john
filariow Sep 12, 2024
bd91b58
rename notSignedUpUser to bob
filariow Sep 12, 2024
241fd5a
remove communityuser in favor of alice
filariow Sep 12, 2024
139809a
fix linter complaints
filariow Sep 12, 2024
a2a9837
Merge remote-tracking branch 'upstream/master' into pv-532
filariow Sep 12, 2024
b1e93fb
Merge remote-tracking branch 'upstream/master' into pv-532
filariow Sep 12, 2024
ea515a6
Add tests for banneduser in community tests
filariow Sep 12, 2024
3db6128
use ElementsMatch instead of Equal to reduce flakyness
filariow Sep 12, 2024
b18db56
remove unnecessary TODO comment
filariow Sep 13, 2024
d5c1503
refactor tests
filariow Sep 13, 2024
94e45a0
refactor tests
filariow Sep 13, 2024
306b6d6
revert unneeded changes to test/fake/proxy.go
filariow Sep 13, 2024
fd3e191
Merge remote-tracking branch 'upstream/master' into pv-532
filariow Sep 23, 2024
13d335a
check proxy has forwarded the call
filariow Sep 23, 2024
15fa0cd
add comments and fix typos
filariow Sep 23, 2024
83ea573
add more test cases
filariow Sep 23, 2024
5d0f604
remove dead code
filariow Sep 23, 2024
c071225
consolidate code
filariow Sep 23, 2024
a3eff03
Merge remote-tracking branch 'upstream/master' into pv-532
filariow Sep 23, 2024
e09cd75
fix linter complaints
filariow Sep 23, 2024
cf79085
Merge branch 'master' into pv-532
filariow Sep 24, 2024
599e2d2
add comment
filariow Sep 25, 2024
150dbd0
check proxy requires email in the JWT token
filariow Sep 25, 2024
c343167
remove publicViewer leftovers from test
filariow Sep 25, 2024
087b0dc
Merge remote-tracking branch 'upstream/master' into pv-532
filariow Sep 25, 2024
8ba3452
sort proxy middlewares
filariow Sep 25, 2024
0ffff03
remove unneeded comment
filariow Sep 26, 2024
244 changes: 216 additions & 28 deletions pkg/proxy/proxy.go
@@ -26,6 +26,7 @@
"github.com/codeready-toolchain/registration-service/pkg/proxy/access"
"github.com/codeready-toolchain/registration-service/pkg/proxy/handlers"
"github.com/codeready-toolchain/registration-service/pkg/proxy/metrics"
"github.com/codeready-toolchain/registration-service/pkg/signup"
commoncluster "github.com/codeready-toolchain/toolchain-common/pkg/cluster"
"github.com/labstack/echo/v4"
"github.com/labstack/echo/v4/middleware"
@@ -118,6 +119,7 @@
return next(ctx)
}
},
p.addPublicViewerContext(),
p.ensureUserIsNotBanned(),
)

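Note: the registration order matters here. addPublicViewerContext must run before any handler that reads the public-viewer flag from the context, and ensureUserIsNotBanned relies on the user's email having already been stored in the context by an earlier middleware (see its doc comment further down).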
@@ -142,6 +144,7 @@
// Space lister routes
wg.GET("/:workspace", handlers.HandleSpaceGetRequest(p.spaceLister, p.getMembersFunc))
wg.GET("", handlers.HandleSpaceListRequest(p.spaceLister))

router.GET(proxyHealthEndpoint, p.health)
// SSO routes. Used by web login (oc login -w).
// Here is the expected flow for the "oc login -w" command:
@@ -269,47 +272,211 @@
}

func (p *Proxy) processRequest(ctx echo.Context) (string, *access.ClusterAccess, error) {
// retrieve required information from the HTTP request
userID, _ := ctx.Get(context.SubKey).(string)
username, _ := ctx.Get(context.UsernameKey).(string)
proxyPluginName, workspaceName, err := getWorkspaceContext(ctx.Request())
if err != nil {
return "", nil, crterrors.NewBadRequest("unable to get workspace context", err.Error())
}

// set workspace context for logging
ctx.Set(context.WorkspaceKey, workspaceName)

// if the target workspace is NOT explicitly declared in the HTTP request,
// process the request against the user's home workspace
if workspaceName == "" {
cluster, err := p.processHomeWorkspaceRequest(ctx, userID, username, proxyPluginName)
if err != nil {
return "", nil, err
}
return proxyPluginName, cluster, nil
}

// if the target workspace is explicitly declared in the HTTP request,
// process the request against the declared workspace
cluster, err := p.processWorkspaceRequest(ctx, userID, username, workspaceName, proxyPluginName)
if err != nil {
return "", nil, err
}
return proxyPluginName, cluster, nil
}

// processHomeWorkspaceRequest processes an HTTP request targeting the user's home workspace.
func (p *Proxy) processHomeWorkspaceRequest(ctx echo.Context, userID, username, proxyPluginName string) (*access.ClusterAccess, error) {
// retrieves the ClusterAccess for the user and their home workspace
cluster, err := p.app.MemberClusterService().GetClusterAccess(userID, username, "", proxyPluginName, false)
if err != nil {
return nil, crterrors.NewInternalError(errs.New("unable to get target cluster"), err.Error())
}

// list all workspaces the user has access to
workspaces, err := handlers.ListUserWorkspaces(ctx, p.spaceLister)
if err != nil {
return nil, crterrors.NewInternalError(errs.New("unable to retrieve user workspaces"), err.Error())
}

// check whether the user has access to the home workspace
// and whether the requested namespace, if any, exists in the workspace.
requestedNamespace := namespaceFromCtx(ctx)
if err := validateWorkspaceRequest("", requestedNamespace, workspaces...); err != nil {
return nil, crterrors.NewForbiddenError("invalid workspace request", err.Error())
}

// return the cluster access
return cluster, nil
}

// processWorkspaceRequest processes an HTTP request targeting a specific workspace.
func (p *Proxy) processWorkspaceRequest(ctx echo.Context, userID, username, workspaceName, proxyPluginName string) (*access.ClusterAccess, error) {
// check that the user is provisioned and the space exists.
// if PublicViewer support is enabled, the user check is skipped.
if err := p.checkUserIsProvisionedAndSpaceExists(ctx, userID, username, workspaceName); err != nil {
return nil, err
}

// retrieve the requested Workspace with SpaceBindings
workspace, err := p.getUserWorkspaceWithBindings(ctx, workspaceName)
if err != nil {
return nil, err
}

// check whether the user has access to the workspace
// and whether the requested namespace, if any, exists in the workspace.
requestedNamespace := namespaceFromCtx(ctx)
if err := validateWorkspaceRequest(workspaceName, requestedNamespace, *workspace); err != nil {
return nil, crterrors.NewForbiddenError("invalid workspace request", err.Error())
}

// retrieve the ClusterAccess for the user and the target workspace
return p.getClusterAccess(ctx, userID, username, proxyPluginName, workspace)
}

// checkUserIsProvisionedAndSpaceExists checks that the user is provisioned and the Space exists.
// If PublicViewer support is enabled, the user check is skipped.
func (p *Proxy) checkUserIsProvisionedAndSpaceExists(ctx echo.Context, userID, username, workspaceName string) error {
if err := p.checkUserIsProvisioned(ctx, userID, username); err != nil {
return crterrors.NewInternalError(errs.New("unable to get target cluster"), err.Error())
}
if err := p.checkSpaceExists(workspaceName); err != nil {
return crterrors.NewInternalError(errs.New("unable to get target cluster"), err.Error())
}
return nil
}

// checkSpaceExists checks whether the Space exists.
func (p *Proxy) checkSpaceExists(workspaceName string) error {
if _, err := p.app.InformerService().GetSpace(workspaceName); err != nil {
// log the actual error but do not return it so that it doesn't reveal information about a space that may not belong to the requestor
log.Errorf(nil, err, "requested space '%s' does not exist", workspaceName)
return fmt.Errorf("access to workspace '%s' is forbidden", workspaceName)
}
return nil
}

// checkUserIsProvisioned checks whether the user is Approved; if they are not, an error is returned.
// If public-viewer is enabled, user validation is skipped.
func (p *Proxy) checkUserIsProvisioned(ctx echo.Context, userID, username string) error {
// skip if public-viewer is enabled: read-only operations on community workspaces are always permitted.
if context.IsPublicViewerEnabled(ctx) {
return nil
}

// retrieve the UserSignup for the requesting user.
//
// The UserSignup Complete status is not checked, since it might cause the proxy to block the request
// and return an error when quick transitions from ready to provisioning happen.
userSignup, err := p.app.SignupService().GetSignupFromInformer(nil, userID, username, false)
if err != nil {
return err

[Codecov: added line #L391 was not covered by tests]
}

// if the UserSignup is nil or does NOT have the CompliantUsername set,
// it means the MUR was NOT created and the UserAccount is NOT provisioned yet
if userSignup == nil || userSignup.CompliantUsername == "" {
cause := errs.New("user is not provisioned (yet)")
log.Error(nil, cause, fmt.Sprintf("signup object: %+v", userSignup))
return cause
}
return nil
}

// getClusterAccess retrieves the access to the cluster hosting the requested workspace,
// if the user has access to it.
// Access can be either direct (a SpaceBinding linking the user to the workspace exists)
// or community (a SpaceBinding linking the PublicViewer user to the workspace exists).
func (p *Proxy) getClusterAccess(ctx echo.Context, userID, username, proxyPluginName string, workspace *toolchainv1alpha1.Workspace) (*access.ClusterAccess, error) {
// retrieve cluster access as requesting user or PublicViewer
cluster, err := p.getClusterAccessAsUserOrPublicViewer(ctx, userID, username, proxyPluginName, workspace)
if err != nil {
return nil, crterrors.NewInternalError(errs.New("unable to get target cluster"), err.Error())

[Codecov: added line #L412 was not covered by tests]
}
return cluster, nil
}

// getClusterAccessAsUserOrPublicViewer returns the ClusterAccess impersonating the requesting user,
// if the requesting user exists and has direct access to the workspace.
// If PublicViewer support is enabled and the PublicViewer user has access to the workspace,
// it returns the ClusterAccess impersonating the PublicViewer user.
// If the requesting user does not exist, and PublicViewer is disabled or does not have access
// to the workspace, it returns an error.
func (p *Proxy) getClusterAccessAsUserOrPublicViewer(ctx echo.Context, userID, username, proxyPluginName string, workspace *toolchainv1alpha1.Workspace) (*access.ClusterAccess, error) {
// retrieve the requesting user's UserSignup
userSignup, err := p.app.SignupService().GetSignupFromInformer(nil, userID, username, false)
if err != nil {
log.Error(nil, err, fmt.Sprintf("error retrieving user signup for userID '%s' and username '%s'", userID, username))
return nil, crterrors.NewInternalError(errs.New("unable to get user info"), "error retrieving user")

[Codecov: added lines #L427-L428 were not covered by tests]
}

// proceed as PublicViewer if the feature is enabled and the user has no direct access to the workspace
publicViewerEnabled := context.IsPublicViewerEnabled(ctx)
if publicViewerEnabled && !userHasDirectAccess(userSignup, workspace) {
return p.app.MemberClusterService().GetClusterAccess(
toolchainv1alpha1.KubesawAuthenticatedUsername,
toolchainv1alpha1.KubesawAuthenticatedUsername,
workspace.Name,
proxyPluginName,
publicViewerEnabled)
}

// otherwise retrieve the ClusterAccess for the cluster hosting the workspace and the given user.
return p.app.MemberClusterService().GetClusterAccess(userID, username, workspace.Name, proxyPluginName, publicViewerEnabled)
}
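The impersonation decision above can be summarized as follows (an editorial summary derived from the code, not part of the diff):

PublicViewer enabled | user has direct SpaceBinding | impersonated identity
yes                  | no                           | PublicViewer (kubesaw-authenticated)
yes                  | yes                          | the requesting user
no                   | any                          | the requesting user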

// userHasDirectAccess checks whether a UserSignup has direct access to a workspace.
// The workspace's bindings are obtained from its `status.bindings` property.
func userHasDirectAccess(signup *signup.Signup, workspace *toolchainv1alpha1.Workspace) bool {
if signup == nil {
return false
}

return userHasBinding(signup.CompliantUsername, workspace)
}

func userHasBinding(username string, workspace *toolchainv1alpha1.Workspace) bool {
for _, b := range workspace.Status.Bindings {
if b.MasterUserRecord == username {
return true
}
}
return false
}

// getUserWorkspaceWithBindings retrieves the workspace with the SpaceBindings if the requesting user has access to it.
// User access to the Workspace is checked by getting all spacebindings recursively,
// starting from this workspace and going up to the parent workspaces till the "root" of the workspace tree.
func (p *Proxy) getUserWorkspaceWithBindings(ctx echo.Context, workspaceName string) (*toolchainv1alpha1.Workspace, error) {
workspace, err := handlers.GetUserWorkspaceWithBindings(ctx, p.spaceLister, workspaceName, p.getMembersFunc)
if err != nil {
return nil, crterrors.NewInternalError(errs.New("unable to retrieve user workspaces"), err.Error())
}
if workspace == nil {
// not found
return nil, crterrors.NewForbiddenError("invalid workspace request", fmt.Sprintf("access to workspace '%s' is forbidden", workspaceName))
}
// workspace was found means we can forward the request
return workspace, nil
}

func (p *Proxy) handleRequestAndRedirect(ctx echo.Context) error {
@@ -409,6 +576,18 @@
}
}

// addPublicViewerContext updates echo.Context with the configuration's PublicViewerEnabled value.
func (p *Proxy) addPublicViewerContext() echo.MiddlewareFunc {
return func(next echo.HandlerFunc) echo.HandlerFunc {
return func(ctx echo.Context) error {
publicViewerEnabled := configuration.GetRegistrationServiceConfig().PublicViewerEnabled()
ctx.Set(context.PublicViewerEnabled, publicViewerEnabled)

return next(ctx)
}
}
}
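Handlers and middlewares read this flag back through context.IsPublicViewerEnabled (used, for example, in checkUserIsProvisioned above). A minimal sketch of such an accessor, assuming pkg/context simply type-asserts the value stored under the same key; the real implementation may differ:

func IsPublicViewerEnabled(ctx echo.Context) bool {
	// value stored by the addPublicViewerContext middleware; false when missing
	publicViewerEnabled, _ := ctx.Get(PublicViewerEnabled).(bool)
	return publicViewerEnabled
}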

// ensureUserIsNotBanned rejects the request if the user is banned.
// This Middleware requires the context to contain the email of the user,
// so it needs to be executed after the `addUserContext` Middleware.
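The middleware body is collapsed in this view. A minimal sketch of a middleware honoring this contract; the context key for the email and the p.isUserBanned lookup are hypothetical stand-ins, not the actual merged code:

func (p *Proxy) ensureUserIsNotBanned() echo.MiddlewareFunc {
	return func(next echo.HandlerFunc) echo.HandlerFunc {
		return func(ctx echo.Context) error {
			// the email is expected to be stored in the context by addUserContext;
			// the key name "email" is an assumption for this sketch
			email, _ := ctx.Get("email").(string)
			if email == "" {
				return crterrors.NewInternalError(errs.New("unable to check ban status"), "email not found in context")
			}
			// hypothetical lookup; in practice this would query BannedUser resources by email
			banned, err := p.isUserBanned(ctx, email)
			if err != nil {
				return crterrors.NewInternalError(errs.New("unable to check ban status"), err.Error())
			}
			if banned {
				return crterrors.NewForbiddenError("user access is forbidden", "user is banned")
			}
			return next(ctx)
		}
	}
}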
@@ -503,11 +682,17 @@
func (p *Proxy) newReverseProxy(ctx echo.Context, target *access.ClusterAccess, isPlugin bool) *httputil.ReverseProxy {
req := ctx.Request()
targetQuery := target.APIURL().RawQuery
username, _ := ctx.Get(context.UsernameKey).(string)
// set username in context for logging purposes
ctx.Set(context.ImpersonateUser, target.Username())

director := func(req *http.Request) {
origin := req.URL.String()
req.URL.Scheme = target.APIURL().Scheme
req.URL.Host = target.APIURL().Host
req.URL.Path = singleJoiningSlash(target.APIURL().Path, req.URL.Path)
req.Header.Set("X-SSO-User", username)

if isPlugin {
// for non k8s clients testing, like vanilla http clients accessing plugin proxy flows, testing has proven that the request
// host needs to be updated in addition to the URL in order to have the reverse proxy contact the openshift
@@ -671,7 +856,10 @@
req.Header.Set(ph, strings.Join(protocols, ","))
}

// validateWorkspaceRequest checks whether the requested workspace is in the list of workspaces the user has visibility on (retrieved via the spaceLister).
// If `requestedWorkspace` is empty, this function looks for the home workspace (the one with `status.Type` set to `home`).
// If `requestedNamespace` is NOT empty, this function checks whether the namespace exists in the workspace.
Review thread on this namespace check:

Contributor: We really should stop checking whether the namespace belongs to the workspace. It should be up to the cluster RBAC to manage namespace access. If there is a corresponding SpaceBinding in the namespace then everything will work. It would solve problems when users have permissions to some special namespaces in the cluster which are not part of any workspace and are managed outside of kubesaw (like special global view-only namespaces for all authenticated users). We added this additional namespace check as a workaround for the UI bug which we fixed many moons ago.
This is not related to this PR though, so let's address it in a separate PR. Just wanted to mention it here.

filariow (Contributor, Author), Aug 27, 2024: IIUIC, if we remove this check we could enable unwanted cross-workspace requests for workspaces living in the same cluster.
Consider the scenario in which Alice has access to workspace alice (with namespace alice-tenant) and workspace bob (with namespace bob-tenant), both living in the same cluster. If we remove the check, Alice could explicitly target the workspace alice and access the namespace bob-tenant, with something like .../workspaces/alice/api/v1/namespaces/bob-tenant/pods. Am I correct?

Contributor: Not really. Our proxy will indeed redirect the request to the same cluster, as /api/v1/namespaces/bob-tenant/pods impersonating Alice. But since Alice does not have permissions to access that namespace, kube will reject it. It basically will still fail, but on the kube side, not on the proxy side.

filariow (Contributor, Author): OK, but if Alice has access to the bob-tenant namespace because Bob gave her access to his workspace, the request will not fail. Right?
If this is correct, Alice could set a workspace and perform cross-workspace requests by simply using a namespace from another workspace. The only requirements are:
  1. the workspaces are hosted on the same cluster
  2. Alice has access to both workspaces
So Alice can have a kubeconfig with the server URL set to https://cluster-domain/workspaces/alice-workspace and target namespaces from a different workspace. To me this is a problem, because as a user I expect my requests to be confined to the workspace I set in the URL.

alexeykazakov (Contributor), Aug 28, 2024: This is correct, but I still don't see a problem with that. It's not like Alice can hack the proxy and get access to a namespace she doesn't have access to. She could just create a weird workspace/namespace context and use /workspaces/alice/api/v1/namespaces/bob-tenant/pods instead of the more appropriate /workspaces/bob/api/v1/namespaces/bob-tenant/pods.
This is exactly why we introduced it in the first place. There was a bug in HAC/UI with workspace switching: the switcher would change the workspace but wouldn't change the namespace. So we hoped that setting such a limit would make it easy to spot bugs/mistakes on the client side (like in HAC): hey, you are using a weird context, make sure you really know what namespace in what workspace you want to access, otherwise you could be scratching your head wondering why you get weird results. So it is not a security problem.
However, it has significant side effects. We faced multiple cases where users try to access namespaces which do not belong to any workspace at all: global view-only namespaces which should be available to all authenticated users. Operators like Pipelines, Virtualization and many others use such an approach. With this limitation in place it's impossible to access such non-workspace namespaces through the proxy, even if they are totally accessible using the API server directly.

Contributor: Here is the followup PR: #471
func validateWorkspaceRequest(requestedWorkspace, requestedNamespace string, workspaces ...toolchainv1alpha1.Workspace) error {
// check workspace access
isHomeWSRequested := requestedWorkspace == ""

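// --- editorial sketch: the remainder of validateWorkspaceRequest is collapsed
// in this view; the following body matches the documented contract above but is
// NOT the actual merged code. The Status.Type and Status.Namespaces field names
// are assumptions based on the toolchain Workspace API.
for _, ws := range workspaces {
	// select the requested workspace, or the home workspace when none was requested
	if ws.Name != requestedWorkspace && !(isHomeWSRequested && ws.Status.Type == "home") {
		continue
	}
	// workspace found; if no namespace was requested we are done
	if requestedNamespace == "" {
		return nil
	}
	// otherwise the requested namespace must belong to the workspace
	for _, ns := range ws.Status.Namespaces {
		if ns.Name == requestedNamespace {
			return nil
		}
	}
	return fmt.Errorf("access to namespace '%s' in workspace '%s' is forbidden", requestedNamespace, ws.Name)
}
return fmt.Errorf("access to workspace '%s' is forbidden", requestedWorkspace)
}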