Okctl provides three options for authenticating to your AWS account: SAML-based login, IAM access keys and a named AWS profile.

In addition to logging in to the AWS account, you can also give other users access to your Kubernetes cluster. NB! This is also required if you use several of the login methods above or are migrating to a new one (e.g. SSO), as only the user/role that created the cluster has admin access out of the box.

SAML-based login

This allows you to log in to AWS with the username, password and MFA token of your organization user (e.g. Oslo kommune AD user). If you haven't set up MFA yet, you can do that here.

This is the default authentication method, but you can also set it explicitly by passing the flag --aws-credentials-type saml to okctl. Login is embedded in the okctl command.

# Example
okctl apply cluster -f cluster.yaml
? Username: myuser
? Password: [? for help] ************
? Multi-factor authentication token: [? for help] ******

IAM access key

You can use IAM access keys from your AWS user to authenticate against the AWS account. This is useful for service users and automated integrations where you for instance cannot supply any multi-factor authentication.

When you pass the flag --aws-credentials-type access-key to okctl, it looks for the following environment variables to authenticate:

# Example
export AWS_ACCESS_KEY_ID=myid
export AWS_SECRET_ACCESS_KEY=mysecret

okctl apply cluster --aws-credentials-type access-key -f cluster.yaml

For how to get an access key, see the AWS documentation.

Named AWS profile

You can also authenticate using a named AWS profile. This allows you to login with AWS Single Sign-On (SSO) or using IAM access keys without the need for setting secrets in the OS environment variables.

When you pass the flag --aws-credentials-type aws-profile to okctl, it looks for the following environment variable to authenticate:

# Example
export AWS_PROFILE=myprofile

okctl apply cluster --aws-credentials-type aws-profile -f cluster.yaml

This will fetch credentials from your AWS configuration files (located in ~/.aws/).

For how to configure named AWS profiles, see the AWS documentation.
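For reference, a minimal pair of AWS configuration files for a named profile might look like the sketch below. The profile name myprofile and all key values are placeholders, not values from your account:

```ini
# ~/.aws/credentials
[myprofile]
aws_access_key_id     = AKIA...
aws_secret_access_key = ...

# ~/.aws/config
[profile myprofile]
region = eu-north-1
output = json
```

With these files in place, exporting AWS_PROFILE=myprofile as shown above is enough for okctl to resolve the credentials.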

AWS Single Sign-On (SSO)

One use case for AWS profiles is logging in with AWS SSO.

NOTE: SSO authentication is currently in early stages, and must be considered experimental. Okctl upgrades currently do not seem to work, but other commands do.

Configure AWS SSO

Before logging in with AWS SSO you must first configure your AWS CLI for use with AWS SSO:

aws configure sso

# Fill in the following (note that the SSO region is eu-west-1, not
# necessarily the same region where you are running your infrastructure):

SSO start URL: https://osloorigo.awsapps.com/start/
SSO Region: eu-west-1

You will be redirected to your web browser to log in. Afterwards, go back to the terminal and choose the account and role you want to log in with. Then select the default region for the AWS CLI (i.e. the region where you run your infrastructure) and specify the CLI default output format (json) and a profile name of your choosing (which you use as your AWS_PROFILE as described above):

CLI default client Region [eu-west-1]: eu-north-1
CLI default output format [None]: json
CLI profile name [...]: myprofile
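After the wizard completes, the AWS CLI persists your choices to ~/.aws/config. The resulting profile entry looks roughly like the sketch below (the account ID and role name are placeholders for whatever you selected):

```ini
[profile myprofile]
sso_start_url  = https://osloorigo.awsapps.com/start/
sso_region     = eu-west-1
sso_account_id = 123456789012
sso_role_name  = MyRole
region         = eu-north-1
output         = json
```

Note again that sso_region is the region of the SSO service itself (eu-west-1), while region is the default region for your infrastructure.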

Logging in with AWS SSO

To run Okctl, first specify the AWS profile to use (the same as when you configured SSO above) and log in to AWS SSO using the AWS CLI. Then run the okctl command with the aws-profile credentials type:

# Example
export AWS_PROFILE=myprofile

aws sso login
# This will redirect you to your web browser for login...

okctl apply cluster --aws-credentials-type aws-profile -f cluster.yaml

Setting default authentication method

Set the following environment variable to avoid having to specify the AWS credentials type on each command:

# Example

Give access to other users

Note: For this to work, you must either be the one creating the cluster, or the cluster creator must give you access by following the description below.

When you create a Kubernetes cluster with Okctl, you are automatically assigned owner roles, which give you access. To give other users access, follow the instructions below.

Note: We will probably change the way access is given in the future, using roles instead of editing the Kubernetes config directly.

Log in to the cluster with okctl venv as described above.

kubectl edit configmap -n kube-system aws-auth

For an IAM user, add an element to the mapUsers field; for roles, add to mapRoles. Multiple users or roles can be listed here.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
      - system:masters
      rolearn: arn:aws:iam::123456789012:role/...
      username: admin:{{SessionName}}
  mapUsers: |
    - groups:
      - system:masters
      userarn: arn:aws:iam::123456789012:user/someone@email.com
      username: someone@email.com


Replace:

  • 123456789012 with your AWS account number
  • someone@email.com with the e-mail of the user who wants access
  • role/... with the full role name

NB! If the AWS role name contains a path, you must remove it, as paths are not supported. E.g. change role/some/path/rolename to role/rolename (see the EKS documentation for details).


Now, the user who wants access can verify that things work by running:

export AWS_ACCESS_KEY_ID=someid
export AWS_SECRET_ACCESS_KEY=somesecret

okctl -a access-key venv -c my-cluster.yaml

kubectl get pods

This should give no errors - either a list of pods, or just the message:

No resources found in default namespace.

That's it. Now you are able to run all Okctl commands with the -a access-key option, which tells Okctl to use the provided access key instead of using the default authentication method.