OPA for HTTP Authorization
Open Policy Agent[1] is a promising, lightweight and very generic policy engine for governing authorization in any type of domain. I found this comparison[2] very helpful when evaluating OPA for a project I am currently working on; it demonstrates how OPA can provide the same functionality as RBAC, RBAC with Separation of Duty, ABAC and XACML.
Here are the steps of a brief demonstration of OPA used for HTTP API authorization, based on the sample [3] and taking it a level further.
Running OPA Server

First we need to download OPA from [4], based on the operating system we are running on. For Linux:

curl -L -o opa https://github.com/open-policy-agent/opa/releases/download/v0.10.3/opa_linux_amd64

Make it executable:

chmod 755 ./opa

Once done, we can start the OPA policy engine as a server.
./opa run --server

Define Data and Rules

Next we need to load data and authorization rules into the server, so it can make decisions. OPA defines these in files in the .rego format. Below is a sample …
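While the sample itself is elided here, a minimal .rego policy of the kind used for HTTP API authorization could look like the following sketch. The package name, the finance/salary paths and the management_chain document are illustrative assumptions, modeled on the style of OPA's HTTP API tutorial, not the original sample:

```rego
package httpapi.authz

# Deny everything unless a rule below allows it
default allow = false

# Rule 1: a user may read their own salary record
allow {
  input.method == "GET"
  input.path == ["finance", "salary", input.user]
}

# Rule 2: a manager may read the salary record of a direct report,
# looked up in the management_chain data document
allow {
  input.method == "GET"
  input.path = ["finance", "salary", employee]
  data.management_chain[employee] == input.user
}
```

A policy file like this can be loaded into the running server, after which the server answers allow/deny queries against it.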
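To make the decision model concrete, here is a plain-Python sketch of the kind of allow/deny decision such a policy encodes. The finance/salary path shape and the manager relation are illustrative assumptions, not part of the original post; with a running server the same request document would instead be sent to OPA's Data API and the decision returned as JSON.

```python
# A plain-Python sketch of the allow/deny decision an OPA policy for
# HTTP API authorization typically encodes. The paths and the manager
# relation below are illustrative assumptions.

# Who manages whom: bob reports to alice (illustrative data)
MANAGEMENT_CHAIN = {"bob": "alice"}

def allow(request: dict) -> bool:
    """Decide whether `request` (method, path, user) is permitted.

    Rule 1: a user may GET their own salary record.
    Rule 2: a manager may GET the salary record of a direct report.
    """
    method, path, user = request["method"], request["path"], request["user"]
    # Only GET requests to a three-segment finance/salary path are considered
    if method != "GET" or len(path) != 3 or path[:2] != ["finance", "salary"]:
        return False
    employee = path[2]
    return employee == user or MANAGEMENT_CHAIN.get(employee) == user
```

For example, `allow({"method": "GET", "path": ["finance", "salary", "bob"], "user": "alice"})` is permitted because alice manages bob, while the same request from an unrelated user is denied.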

According to this approach, do the workloads themselves handle OAuth2 token retrieval and validation, or will a node agent (sidecar) take care of it? I think it would make more sense to delegate this to a sidecar/node agent, since workloads themselves should not be aware of how authentication and authorization take place in a complex service mesh.
If you are to use sidecars, then how would scopes be used and controlled? A sidecar will not have a clear idea of what actions the workload is trying to invoke, and what scopes should be available for the token while retrieving and validating it.
Thanks for the raised concern and valuable thoughts.
Yes, it really makes more sense to handle the OAuth-specific details in a sidecar than injecting that logic into the workload, keeping the separation of duties.
In the design we will consider the case of sidecars, but not be limited to it. Even legacy systems that support OAuth, but are not necessarily part of a modern architecture with SPIFFE, will exist and need to co-operate in the system. In my opinion, having a sidecar component that can understand the OAuth protocol is a step forward.
The ideal solution I am looking at is selecting the scopes attached to the OAuth token based on a set of policies defined in the authorization server using OPA; these policies can be consumed by the sidecar as well, to let resources be accessed based on the scopes attached to the token.
Appreciate your thoughts...
Thanks Pushpalanka Jayawardhana!!!
Some thoughts:
1) Will each replica have a different certificate? I mean, will replica1 get a different certificate than replica2?
2) Why do AWS workloads really need to get a certificate dynamically from a SPIFFE server, instead of having their own at deployment time?
Hi, thanks for your thoughts. Please find a few more details on the same.
(1) Yes, each replica will have its own unique key pair. However, whether we want to treat the replicas as the same or as different can be governed by how we instruct the SPIRE server to issue SPIFFE IDs. More details at https://pushpalankajaya.blogspot.com/2019/01/spiffe-in-nutshell.html
(2) The environment we are looking at here is larger in scale and dynamic in nature, so it needs to be well automated (as per the requirements of thousands of APIs serving in systems with microservices architectures). We will have little control over how these workloads are started, disappear, get rebooted, fail over, etc. Frequent key rotation, with no manual intervention, is another aspect supported by this solution.