This repository was archived by the owner on Jun 29, 2022. It is now read-only.

Commit 8c33ee2

Author: knrt10 (committed)

docs/quickstart: Refactor AWS guide

closes: #613
Signed-off-by: knrt10 <kautilya@kinvolk.io>

1 parent 9d2aeee, commit 8c33ee2

File tree

1 file changed: +151 -113 lines changed

docs/quickstarts/aws.md

Lines changed: 151 additions & 113 deletions
@@ -5,174 +5,212 @@ weight: 10

## Introduction

This quickstart guide walks through the steps needed to create a Lokomotive cluster on AWS.

Lokomotive runs on top of [Flatcar Container Linux](https://www.flatcar-linux.org/). This guide
uses the `stable` channel.

The guide uses [Amazon Route 53](https://aws.amazon.com/route53/) as the DNS provider. For more
information on how Lokomotive handles DNS, refer to the [DNS document](../concepts/dns.md).

Lokomotive can store Terraform state [locally](../configuration-reference/backend/local.md)
or remotely in an [AWS S3 bucket](../configuration-reference/backend/s3.md). By default, Lokomotive
stores Terraform state locally.

[Lokomotive components](../concepts/components.md) complement the "stock" Kubernetes functionality
by adding features such as load balancing, persistent storage and monitoring to a cluster. To keep
this guide short, you will deploy a single component - `httpbin` - which serves as a demo
application to verify the cluster behaves as expected.

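Not part of the original guide, but as a hedged sketch of the remote option mentioned above: a `backend "s3"` block can sit alongside the cluster configuration in a `.lokocfg` file (syntax assumed from the S3 backend reference; the bucket and table names below are hypothetical):

```hcl
# Hypothetical remote state configuration. By default Lokomotive keeps
# Terraform state in the local working directory instead.
backend "s3" {
  bucket         = "my-lokomotive-state"            # assumed bucket name
  key            = "lokomotive/terraform.tfstate"   # assumed state object key
  region         = "us-east-1"
  dynamodb_table = "my-lokomotive-lock"             # assumed table for state locking
}
```

See the [S3 backend reference](../configuration-reference/backend/s3.md) for the authoritative set of parameters.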

By the end of this guide, you'll have a production-ready Kubernetes cluster running on AWS.

## Requirements

* Basic understanding of Kubernetes concepts.
* An AWS account and IAM credentials.
* An AWS
  [access key ID and secret](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)
  of a user with
  [permissions](https://github.com/kinvolk/lokomotive/blob/master/docs/concepts/dns.md#aws-route-53)
  to edit Route 53 records.
* An AWS Route 53 zone (can be a subdomain).
* An SSH key pair for management access.
* `terraform` v0.13.x
  [installed](https://learn.hashicorp.com/terraform/getting-started/install.html#install-terraform).
* `kubectl` [installed](https://kubernetes.io/docs/tasks/tools/install-kubectl/).

>NOTE: The `kubectl` version used to interact with a Kubernetes cluster needs to be compatible with
>the version of the Kubernetes control plane. Ideally you should install a `kubectl` binary whose
>version is identical to the Kubernetes control plane included with a Lokomotive release. However,
>some degree of version "skew" is tolerated - see the Kubernetes
>[version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/) document for
>more information. You can determine the version of the Kubernetes control plane included with a
>Lokomotive release by looking at the [release notes](https://github.com/kinvolk/lokomotive/releases).
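
As an illustrative sketch (not part of the original guide), the skew rule - the `kubectl` minor version should be within one of the control plane's - can be expressed in plain shell; the version strings below are hypothetical placeholders:

```shell
#!/bin/sh
# Illustrative skew check with hypothetical version strings. On a real
# cluster these would come from `kubectl version`.
client="v1.19.4"
server="v1.19.4"

# Extract the minor version component, e.g. "19" from "v1.19.4".
minor() { echo "$1" | cut -d. -f2; }

diff=$(( $(minor "$client") - $(minor "$server") ))
if [ "$diff" -ge -1 ] && [ "$diff" -le 1 ]; then
  echo "OK: kubectl $client is within the supported skew of $server"
else
  echo "WARN: kubectl $client skews too far from $server"
fi
```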

## Steps

### Step 1: Install lokoctl

`lokoctl` is the command-line interface for managing Lokomotive clusters. Follow the
[installer guide](../installer/lokoctl.md) to install it locally for your OS.

### Step 2: Create a cluster configuration

Create a directory for the cluster-related files and navigate to it:

```console
mkdir lokomotive-demo && cd lokomotive-demo
```

Create a file named `cluster.lokocfg` with the following contents:

```hcl
variable "cluster_name" {
  default = "lokomotive-demo"
}

variable "dns_zone" {
  default = "example.com"
}

cluster "aws" {
  asset_dir        = "./assets"
  cluster_name     = var.cluster_name
  controller_count = 1
  dns_zone         = var.dns_zone
  dns_zone_id      = "ZQC39L1XVW0D"

  region      = "us-east-1"
  ssh_pubkeys = ["ssh-rsa AAAA..."]

  worker_pool "pool-1" {
    count       = 2
    ssh_pubkeys = ["ssh-rsa AAAA..."]
  }
}

# A demo application.
component "httpbin" {
  ingress_host = "httpbin.${var.cluster_name}.${var.dns_zone}"
}
```

Replace the parameters above using the following information:

- `dns_zone` - a Route 53 zone name. A subdomain will be created under this zone in the following
  format: `<cluster_name>.<zone>`.
- `dns_zone_id` - a Route 53 DNS zone ID, which can be found in your AWS console.
- `ssh_pubkeys` - a list of strings representing the *contents* of the public SSH keys which should
  be authorized on cluster nodes.

The rest of the parameters may be left as-is. For more information about the configuration options,
see the [configuration reference](../configuration-reference/platforms/aws.md).
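
To illustrate the subdomain format described above: the `httpbin` ingress host is just the component name prefixed to `<cluster_name>.<dns_zone>`. The values below mirror the example configuration:

```shell
# Illustrative only: shows how the ingress host in the example
# configuration is composed from cluster_name and dns_zone.
cluster_name="lokomotive-demo"
dns_zone="example.com"
ingress_host="httpbin.${cluster_name}.${dns_zone}"
echo "$ingress_host"
# -> httpbin.lokomotive-demo.example.com
```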

### Step 3: Deploy the cluster

>NOTE: If you have the AWS CLI installed and configured for an AWS account, you can skip setting
>the `AWS_*` variables below. `lokoctl` follows the standard AWS authentication methods, which
>means it will use the `default` AWS CLI profile if no explicit credentials are specified.
>Similarly, environment variables such as `AWS_PROFILE` can be used to instruct `lokoctl` to use a
>specific AWS CLI profile for AWS authentication.

Set up your AWS credentials in your shell:

```console
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7FAKE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYFAKE
```
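
Before running `lokoctl`, it can help to confirm the variables are actually exported. A small hypothetical pre-flight helper (not part of `lokoctl`):

```shell
#!/bin/sh
# Hypothetical pre-flight check: list any AWS credential variables
# missing from the current environment.
missing=""
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
  eval "val=\${$v:-}"
  if [ -z "$val" ]; then
    missing="$missing $v"
  fi
done
if [ -n "$missing" ]; then
  echo "Missing:$missing"
else
  echo "AWS credentials are set"
fi
```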

Add a private key corresponding to one of the public keys specified in `ssh_pubkeys` to your
`ssh-agent`:

```bash
ssh-add ~/.ssh/id_rsa
ssh-add -L
```

Deploy the cluster:

```console
lokoctl cluster apply -v
```

The deployment process typically takes about 15 minutes. Upon successful completion, an output
similar to the following is shown:

```
Your configurations are stored in ./assets

Now checking health and readiness of the cluster nodes ...

Node              Ready    Reason          Message

ip-10-0-11-66     True     KubeletReady    kubelet is posting ready status
ip-10-0-34-253    True     KubeletReady    kubelet is posting ready status
ip-10-0-92-177    True     KubeletReady    kubelet is posting ready status

Success - cluster is healthy and nodes are ready!
```

## Verification

Use the generated `kubeconfig` file to access the cluster:

```console
export KUBECONFIG=$(pwd)/assets/cluster-assets/auth/kubeconfig
kubectl get nodes
```

Sample output:

```
NAME             STATUS   ROLES    AGE    VERSION
ip-10-0-11-66    Ready    <none>   105s   v1.19.4
ip-10-0-34-253   Ready    <none>   107s   v1.19.4
ip-10-0-92-177   Ready    <none>   105s   v1.19.4
```

Verify all pods are ready:

```console
kubectl get pods -A
```

Verify you can access httpbin:

```console
kubectl -n httpbin port-forward svc/httpbin 8080

# In a new terminal.
curl http://localhost:8080/get
```

Sample output:

```
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "localhost:8080",
    "User-Agent": "curl/7.70.0"
  },
  "origin": "127.0.0.1",
  "url": "http://localhost:8080/get"
}
```

## Using the cluster

At this point you should have access to a Lokomotive cluster and can use it to deploy applications.

If you don't have any Kubernetes experience, you can check out the
[Kubernetes Basics](https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/) tutorial.

>NOTE: Lokomotive uses a relatively restrictive Pod Security Policy by default. This policy
>disallows running containers as root. Refer to the
>[Pod Security Policy documentation](../concepts/securing-lokomotive-cluster.md#cluster-wide-pod-security-policy)
>for more details.
>We also deploy a webhook server which disallows usage of the default service account. Refer to
>[Lokomotive admission webhooks](../concepts/admission-webhook.md) for more details.

## Cleanup

@@ -201,15 +239,15 @@ module.aws-myawscluster.null_resource.copy-controller-secrets: Still creating...
The error probably happens because the `ssh_pubkeys` provided in the configuration is missing in the
`ssh-agent`.

In case the deployment process seems to hang at the `copy-controller-secrets` phase for a long
time, check the following:

- Verify the correct private SSH key was added to `ssh-agent`.
- Verify that you can SSH into the created controller node from the machine running `lokoctl`.

### IAM Permission Issues

* If the failure is due to insufficient permissions, check the
  [IAM troubleshooting guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot.html)
  or follow the IAM permissions specified in the [DNS document](../concepts/dns.md#aws-route-53).

## Conclusion
