Okctl provides two options for authentication to your AWS account:

- Named AWS profile, including AWS Single Sign-On (SSO) (default)
- IAM access key

In addition to logging in to the AWS account, you can give other users access to your Kubernetes cluster. NB! This is also required if you use several of the login methods above or are migrating to a new one (e.g. SSO), as only the user/role that created the cluster has admin access out of the box.
## IAM access key
You can use IAM access keys from your AWS user to authenticate against the AWS account. This is useful for service users and automated integrations where, for instance, you cannot supply multi-factor authentication.
By passing the flag `--aws-credentials-type access-key` to `okctl`, it will look for the following environment variables to be used for authentication:

- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
```
# Example
export AWS_ACCESS_KEY_ID=myid
export AWS_SECRET_ACCESS_KEY=mykey

okctl apply cluster --aws-credentials-type access-key -f cluster.yaml
```
For how to get an access key, see the AWS documentation.
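If you prefer the command line over the console, one way to create a key for an existing IAM user is sketched below. The user name `my-service-user` is just a placeholder, and you need the `iam:CreateAccessKey` permission:

```
# Hypothetical user name; replace with your own IAM user
aws iam create-access-key --user-name my-service-user
# The JSON response contains AccessKeyId and SecretAccessKey, which map to
# the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables described above.
```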
## Named AWS profile
You can also authenticate using a named AWS profile. This allows you to log in with AWS Single Sign-On (SSO) or IAM access keys without setting secrets in OS environment variables.
By passing the flag `--aws-credentials-type aws-profile` to `okctl`, it will look for the following environment variable to be used for authentication:

- `AWS_PROFILE`
```
# Example
export AWS_PROFILE=myprofile

okctl apply cluster --aws-credentials-type aws-profile -f cluster.yaml
```
This will fetch credentials from your AWS configuration files (located in `~/.aws/`).
For how to configure named AWS profiles, see the AWS documentation.
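For reference, a minimal named profile backed by static access keys could look like this in `~/.aws/credentials` (the profile name and key values are placeholders):

```
# ~/.aws/credentials
[myprofile]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = exampleSecretKey
```

SSO-based profiles are instead written to `~/.aws/config` by `aws configure sso`, as described in the next section.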
## AWS Single Sign-On (SSO)
One use case for AWS profiles is to log in with AWS SSO.
### Configure AWS SSO
Before logging in with AWS SSO, you must first configure your AWS CLI for use with AWS SSO:

```
aws configure sso

# Fill in the following (note that the SSO region is eu-west-1, not
# necessarily the same region where you are running your infrastructure):
SSO start URL: https://osloorigo.awsapps.com/start/
SSO Region: eu-west-1
```
You will be redirected to your web browser to log in. Afterwards, go back to the terminal and choose the account and role you want to log in with. Then select the default region for the AWS CLI (i.e. the region where you run your infrastructure), the CLI default output format (`json`), and a profile name of your choosing (which you use as your `AWS_PROFILE` as described above):
```
CLI default client Region [eu-west-1]: eu-north-1
CLI default output format [None]: json
CLI profile name [...]: myprofile
```
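Optionally, you can check that the new profile resolves to the expected account and role before involving Okctl (`myprofile` is the name chosen above):

```
aws sts get-caller-identity --profile myprofile
```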
### Logging in with AWS SSO
Before logging in, AWS SSO login must be enabled for your Kubernetes cluster. Follow the guide "Allow SSO logins to cluster" below.
To run Okctl, first specify the AWS profile to use (the same as when you configured SSO above) and log in to AWS SSO using the AWS CLI. Then run the `okctl` command with the `aws-profile` credentials type:
```
# Example
export AWS_PROFILE=myprofile

aws sso login
# This will redirect you to your web browser for login...

# Test that you have access
aws s3 ls

okctl --aws-credentials-type aws-profile -c cluster.yaml venv

# Or just:
okctl -a aws-profile -c cluster.yaml venv
```
### Allow SSO logins to cluster
If users other than yourself need access to the cluster, you need to enable SSO login for your cluster.

Note! Only the person who created the Kubernetes cluster has access to follow this guide.
Use `eksctl get cluster` to find the right cluster name, then set it:

```
CLUSTER="my-cluster"
```

Set the correct AWS account ID:

```
AWS_ACCOUNT_ID="123456789012"
```
In the following command, replace the 0s with the string you find in your AWS console under the profile information in the upper right corner:

```
ROLE_NAME="AWSReservedSSO_AWSAdministratorAccess_00000000000000"
```
Then construct the role ARN:

```
ROLE_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:role/${ROLE_NAME}"
```
Finally, grant the access:

```
eksctl create iamidentitymapping \
    --cluster ${CLUSTER} \
    --arn ${ROLE_ARN} \
    --group system:masters \
    --username "admin:{{SessionName}}"
```
Optional: To verify, you can run

```
kubectl edit configmap -n kube-system aws-auth
```

and it should look something like this:
```
apiVersion: v1
data:
  mapRoles: |
    ...
    - groups:
      - system:masters
      rolearn: arn:aws:iam::123456789012:role/AWSReservedSSO_AWSAdministratorAccess_000000000000000
      username: admin:{{SessionName}}
    ...
  mapUsers: |
    ...
```
#### Alternative: Give access manually
Run

```
kubectl edit configmap -n kube-system aws-auth
```

and give access to the role `arn:aws:iam::ACCOUNT_NUMBER:role/AWSReservedSSO_AWSAdministratorAccess_ROLE_ID`:
- Replace ACCOUNT_NUMBER with the AWS account number
- Replace ROLE_ID with the ID you find when logging in to the AWS console and clicking your user (the same string as described above)
## Setting default authentication method
Set the following environment variable to avoid having to specify the AWS credentials type on each command:
```
# Example
export OKCTL_AWS_CREDENTIALS_TYPE=access-key
```
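If you want this default to persist across shells, one option is to add the export to your shell profile (assuming Bash; adjust for your shell):

```
echo 'export OKCTL_AWS_CREDENTIALS_TYPE=access-key' >> ~/.bashrc
```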
## Give access to other users
Note: For this to work, you must either be the one who created the cluster, or the cluster creator must first give you access by following the description below.
When you create a Kubernetes cluster with Okctl, you are automatically assigned owner roles which give you access. To give access to other users, follow the instructions below.
Note: We will probably change the way access is given in the future, using roles instead of editing the Kubernetes config directly.
Log in to the cluster with `okctl venv` as described above, then run:

```
kubectl edit configmap -n kube-system aws-auth
```
For an IAM user, add an element to the `mapUsers` field; for roles, add to `mapRoles`. Multiple users or roles can be listed here.
```
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:masters
      rolearn: arn:aws:iam::123456789012:role/...
      username: admin:{{SessionName}}
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/someone@email.com
      username: someone@email.com
      groups:
      - system:masters
```
Replace:

- `123456789012` with the AWS account number
- `someone@email.com` with the e-mail of the user who wants access
- `role/...` with the full role name
NB! If the AWS role name contains a path, you must remove it, as paths are not supported. E.g. change `role/some/path/rolename` to `role/rolename` (see the EKS documentation for details).
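For example, with a hypothetical role ARN containing a path, you would trim it like this before putting it in the ConfigMap:

```
# ARN with a path (as displayed in the AWS console):
arn:aws:iam::123456789012:role/some/path/rolename
# ARN to use in aws-auth (path removed):
arn:aws:iam::123456789012:role/rolename
```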
### Verify
Now, the user who wants access can verify that things work by running:
```
export AWS_ACCESS_KEY_ID=someid
export AWS_SECRET_ACCESS_KEY=somesecret

okctl -a access-key venv -c my-cluster.yaml
kubectl get pods
```
This should give no errors: either a list of pods, or just the message

```
No resources found in default namespace.
```
That's it. Now you are able to run all Okctl commands with the `-a access-key` option, which tells Okctl to use the provided access key instead of the default authentication method.