## Introduction

This quickstart guide walks through the steps needed to create a Lokomotive cluster on AWS.

Lokomotive runs on top of [Flatcar Container Linux](https://www.flatcar-linux.org/). This guide uses the `stable` channel.

The guide uses [Amazon Route 53](https://aws.amazon.com/route53/) as a DNS provider. For more information on how Lokomotive handles DNS, refer to [this](../concepts/dns.md) document.

Lokomotive stores Terraform state [locally](../configuration-reference/backend/local.md) by default, or remotely within an [AWS S3 bucket](../configuration-reference/backend/s3.md).

[Lokomotive components](../concepts/components.md) complement the "stock" Kubernetes functionality by adding features such as load balancing, persistent storage and monitoring to a cluster. To keep this guide short, you will deploy a single component - `httpbin` - which serves as a demo application to verify the cluster behaves as expected.

By the end of this guide, you'll have a production-ready Kubernetes cluster running on AWS.

## Requirements

* Basic understanding of Kubernetes concepts.
* An AWS account and IAM credentials.
* An AWS [access key ID and a user's secret](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) with [permissions](https://github.com/kinvolk/lokomotive/blob/master/docs/concepts/dns.md#aws-route-53) to edit Route 53 records.
* An AWS Route 53 zone (can be a subdomain).
* An SSH key pair for management access (see the example below).
* `terraform` v0.13.x [installed](https://learn.hashicorp.com/terraform/getting-started/install.html#install-terraform).
* `kubectl` [installed](https://kubernetes.io/docs/tasks/tools/install-kubectl/).

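If you don't already have a key pair, you can generate one with `ssh-keygen`. A minimal sketch - the file path and comment are just examples:

```console
# Generate an SSH key pair.
ssh-keygen -t rsa -b 4096 -C "lokomotive-demo" -f ~/.ssh/id_rsa

# Print the public key; its contents go into the `ssh_pubkeys` parameter later in this guide.
cat ~/.ssh/id_rsa.pub
```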

>**NOTE:** The `kubectl` version used to interact with a Kubernetes cluster needs to be compatible with
>the version of the Kubernetes control plane. Ideally, you should install a `kubectl` binary whose
>version is identical to the Kubernetes control plane included with a Lokomotive release. However,
>some degree of version "skew" is tolerated - see the Kubernetes
>[version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/) document for
>more information. You can determine the Kubernetes control plane version included with a
>Lokomotive release by looking at the [release notes](https://github.com/kinvolk/lokomotive/releases).

## Steps

### Step 1: Install lokoctl

`lokoctl` is the command-line interface for managing Lokomotive clusters. You can follow the [installer guide](../installer/lokoctl.md) to install it locally for your OS.

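After installation, verify the binary is available on your `PATH`. A quick check, assuming your `lokoctl` build provides a `version` subcommand:

```console
# Print the installed lokoctl version.
lokoctl version
```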
### Step 2: Create a cluster configuration

Create a directory for the cluster-related files and navigate to it:

```console
mkdir lokomotive-demo && cd lokomotive-demo
```

Create a file named `cluster.lokocfg` with the following contents:

```hcl
cluster "aws" {
asset_dir = "./assets"
cluster_name = "lokomotive-demo"
controller_count = 1
dns_zone = "example.com"
dns_zone_id = "DNSZONEID"

region = "us-east-1"
ssh_pubkeys = ["ssh-rsa AAAA..."]

worker_pool "pool-1" {
count = 2
ssh_pubkeys = ["ssh-rsa AAAA..."]
}
}

# Needed for ingress to work.
component "contour" {
service_type = "NodePort"
}

# A demo application.
component "httpbin" {
ingress_host = "httpbin.lokomotive-demo.example.com"
}
```

Replace the parameters above using the following information:

- `dns_zone` - a Route 53 zone name. A subdomain will be created under this zone in the following
format: `<cluster_name>.<zone>`
- `dns_zone_id` - the Route 53 DNS zone ID, which can be found in your AWS console (see the example lookup command below).
- `ssh_pubkeys` - A list of strings representing the *contents* of the public SSH keys which should be authorized on cluster nodes.

The rest of the parameters may be left as-is. For more information about the configuration options, see the [configuration reference](../configuration-reference/platforms/aws.md).

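If you are unsure of the zone ID, you can look it up with the AWS CLI. A minimal sketch, assuming the AWS CLI is configured and `example.com` stands in for your zone name:

```console
# List hosted zones matching the given name; the "Id" field contains the zone ID.
aws route53 list-hosted-zones-by-name --dns-name example.com
```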
### Step 3: Deploy the cluster

>**NOTE:** If you have the AWS CLI installed and configured for an AWS account, you can skip setting
>the `AWS_*` variables below. `lokoctl` follows the standard AWS authentication methods, which
>means it will use the `default` AWS CLI profile if no explicit credentials are specified.
>Similarly, environment variables such as `AWS_PROFILE` can instruct `lokoctl` to use a
>specific AWS CLI profile for AWS authentication.

Set up your AWS credentials in your shell:

```console
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7FAKE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYFAKE
```

dns_zone = "dns-zone-name"
route53_zone_id = "zone-id-of-the-dns-zone"
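Alternatively, if you already use AWS CLI profiles, you can point `lokoctl` at one of them instead of exporting keys. The profile name below is just an example:

```console
# Use an existing AWS CLI profile for authentication.
export AWS_PROFILE=lokomotive-demo
```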
Add a private key corresponding to one of the public keys specified in `ssh_pubkeys` to your `ssh-agent`:

```bash
ssh-add ~/.ssh/id_rsa
ssh-add -L
```

Deploy the cluster:

```console
lokoctl cluster apply -v
```

The deployment process typically takes about 15 minutes. Upon successful completion, an output similar to the following is shown:

```
Your configurations are stored in ./assets

Now checking health and readiness of the cluster nodes ...

Node             Ready    Reason          Message

ip-10-0-11-66    True     KubeletReady    kubelet is posting ready status
ip-10-0-34-253   True     KubeletReady    kubelet is posting ready status
ip-10-0-92-177   True     KubeletReady    kubelet is posting ready status

Success - cluster is healthy and nodes are ready!
```

## Verification

Use the generated `kubeconfig` file to access the cluster:

```console
export KUBECONFIG=$(pwd)/assets/cluster-assets/auth/kubeconfig
kubectl get nodes
```

Sample output:

```
NAME             STATUS   ROLES    AGE    VERSION
ip-10-0-11-66    Ready    <none>   105s   v1.19.4
ip-10-0-34-253   Ready    <none>   107s   v1.19.4
ip-10-0-92-177   Ready    <none>   105s   v1.19.4
```

Verify all pods are ready:

```console
kubectl get pods -A
```

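All pods should eventually report a `Running` status. As a quick check, you can list any pods that are not running yet:

```console
# Show pods in any namespace that are not in the Running phase (completed jobs may also appear here).
kubectl get pods -A --field-selector=status.phase!=Running
```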
Verify you can access httpbin:

```console
HTTPBIN_HOST=$(kubectl get ing -n httpbin -o jsonpath='{.items[*].spec.rules[0].host}')

curl ${HTTPBIN_HOST}/get
```

Sample output:

```
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Content-Length": "0",
    "Host": "httpbin.lokomotive-demo.example.com",
    "User-Agent": "curl/7.70.0",
    "X-Envoy-Expected-Rq-Timeout-Ms": "15000",
    "X-Envoy-External-Address": "49.207.214.243"
  },
  "origin": "49.207.214.243",
  "url": "http://httpbin.lokomotive-demo.example.com/get"
}
```

## Using the cluster

At this point, you should have access to a Lokomotive cluster and can use it to deploy applications.

If you don't have any Kubernetes experience, you can check out the [Kubernetes
Basics](https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/) tutorial.

>**NOTE:** Lokomotive uses a relatively restrictive Pod Security Policy by default. This policy
>disallows running containers as root. Refer to the
>[Pod Security Policy documentation](../concepts/securing-lokomotive-cluster.md#cluster-wide-pod-security-policy)
>for more details.
> We also deploy a webhook server that disallows the use of the `default` service account. Refer to the
> [Lokomotive admission webhooks](../concepts/admission-webhook.md) documentation for more information.

## Cleanup

To destroy the Lokomotive cluster, execute the following command:

```console
lokoctl cluster destroy --confirm
```

You can safely delete the working directory created for this quickstart guide if you no longer require it.
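For example, assuming the directory created in Step 2:

```console
cd .. && rm -rf lokomotive-demo
```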

## Troubleshooting

If the deployment hangs with output similar to the following:

```console
module.aws-myawscluster.null_resource.copy-controller-secrets: Still creating...
...
```

The error probably happens because the `ssh_pubkeys` provided in the configuration is missing in the `ssh-agent`.

In case the deployment process seems to hang at the `copy-controller-secrets` phase for a long time, check the following:

- Verify the correct private SSH key was added to `ssh-agent`.
- Verify that you can SSH into the created controller node from the machine running `lokoctl`.
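For example, assuming `~/.ssh/id_rsa` is the matching private key and the placeholder below is replaced with the controller's public IP address from the AWS console:

```console
# Flatcar Container Linux accepts SSH logins as the `core` user.
ssh -i ~/.ssh/id_rsa core@<controller-public-ip>
```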

### IAM Permission Issues

- If the failure is due to insufficient permissions, check the [IAM troubleshooting guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot.html) or follow the IAM permissions specified in the [DNS documentation](../concepts/dns.md#aws-route-53).

## Conclusion

After walking through this guide, you've learned how to set up a Lokomotive cluster on AWS.

You can now start deploying your workloads on the cluster.

For more information on installing supported Lokomotive components, you can visit the [component configuration references](../configuration-reference/components).