Commit 72100bc

Re-org and revamp k3s etcd-snapshot docs (#447)
* Re-org and improve k3s etcd-snapshot docs Signed-off-by: manuelbuil <[email protected]>
1 parent 4d922d9 commit 72100bc

File tree

3 files changed: +135 −133 lines changed

docs/cli/etcd-snapshot.md

Lines changed: 131 additions & 129 deletions
@@ -4,47 +4,88 @@ title: etcd-snapshot
 
 # k3s etcd-snapshot
 
-This page documents the management of etcd snapshots using the `k3s etcd-snapshot` CLI, as well as configuration of etcd scheduled snapshots for the `k3s server` process, and use of the `k3s server --cluster-reset` command to reset etcd cluster membership and optionally restore etcd snapshots.
+This page describes how to use the `k3s etcd-snapshot` CLI tool to manage etcd snapshots and how to restore from an etcd snapshot.
 
-## Creating Snapshots
+K3s etcd snapshots are stored on the node file system, and may optionally be uploaded to an S3-compatible object store for disaster recovery scenarios. Snapshots can be automated on a recurring schedule or taken manually on demand. The `k3s etcd-snapshot` CLI tool offers a set of subcommands that can be used to create, delete, and manage snapshots.
 
-Snapshots are saved to the path set by the server's `--etcd-snapshot-dir` value, which defaults to `${data-dir}/server/db/snapshots`. The data-dir value defaults to `/var/lib/rancher/k3s` and can be changed independently by setting the `--data-dir` flag.
+| Subcommand | Description |
+| ----------- | --------------- |
+| delete | Delete given snapshot(s) |
+| ls, list, l | List snapshots |
+| prune | Remove snapshots that exceed the configured retention count |
+| save | Trigger an on-demand etcd snapshot |
 
-### Scheduled Snapshots
+For additional information on the etcd snapshot subcommands, run `k3s etcd-snapshot --help`.
+
+## Creating Snapshots
 
-Scheduled snapshots are enabled by default, at 00:00 and 12:00 system time, with 5 snapshots retained. To configure the snapshot interval or the number of retained snapshots, refer to the [snapshot configuration options](#snapshot-configuration-options).
+<Tabs groupId="snapshots">
+<TabItem value="Scheduled">
 
-Scheduled snapshots have a name that starts with `etcd-snapshot`, followed by the node name and timestamp. The base name can be changed with the `--etcd-snapshot-name` flag in the server configuration.
+Scheduled snapshots are enabled by default, at 00:00 and 12:00 system time, with 5 snapshots retained. Scheduled snapshots have a name that starts with `etcd-snapshot`, followed by the node name and timestamp.
 
-### On-demand Snapshots
+The following options control the operation of scheduled snapshots:
 
-Snapshots can be saved manually by running the `k3s etcd-snapshot save` command.
+| Flag | Description |
+| ----------- | --------------- |
+| `--etcd-disable-snapshots` | Disable scheduled snapshots |
+| `--etcd-snapshot-name` | Sets the base name of etcd scheduled snapshots. (Default: `etcd-snapshot`) |
+| `--etcd-snapshot-compress` | Compress etcd snapshots |
+| `--etcd-snapshot-dir` | Directory to save db snapshots. (Default location: `${data-dir}/db/snapshots`) |
+| `--etcd-snapshot-retention` | Number of snapshots to retain (default: 5) |
+| `--etcd-snapshot-schedule-cron` | Snapshot interval time in cron spec, e.g. every 5 hours is `0 */5 * * *` (default: `0 */12 * * *`) |
 
-On-demand snapshots have a name that starts with `on-demand`, followed by the node name and timestamp. The base name can be changed with the `--name` flag when saving the snapshot.
+The data-dir value defaults to `/var/lib/rancher/k3s` and can be changed independently by setting the `--data-dir` flag.
 
-### Snapshot Configuration Options
+Scheduled snapshots are saved to the path set by the server's `--etcd-snapshot-dir` value. If you want them replicated to an S3-compatible object store, refer to the [S3 configuration options](https://docs.k3s.io/cli/etcd-snapshot#s3-compatible-object-store-support).
 
-These flags can be passed to the `k3s server` command to reset the etcd cluster, and optionally restore from a snapshot.
+</TabItem>
+<TabItem value="On-demand">
 
-| Flag | Description |
-| ----------- | --------------- |
-| `--cluster-reset`| Forget all peers and become sole member of a new cluster. This can also be set with the environment variable `[$K3S_CLUSTER_RESET]` |
-| `--cluster-reset-restore-path` | Path to snapshot file to be restored |
+Snapshots can be saved manually by running the `k3s etcd-snapshot save` command. There is no retention policy for these on-demand snapshots; the user must remove them manually with the `k3s etcd-snapshot delete` or `k3s etcd-snapshot prune` commands. On-demand snapshots have a name that starts with `on-demand`, followed by the node name and timestamp.
 
-These flags are valid for both `k3s server` and `k3s etcd-snapshot`, however when passed to `k3s etcd-snapshot` the `--etcd-` prefix can be omitted to avoid redundancy.
-Flags can be passed in with the command line, or in the [configuration file,](../installation/configuration.md#configuration-file ) which may be easier to use.
+The following options control the operation of on-demand snapshots:
 
 | Flag | Description |
 | ----------- | --------------- |
-| `--etcd-disable-snapshots` | Disable scheduled snapshots |
+| `--name` | Sets the base name of etcd on-demand snapshots. (Default: `on-demand`) |
 | `--etcd-snapshot-compress` | Compress etcd snapshots |
 | `--etcd-snapshot-dir` | Directory to save db snapshots. (Default location: `${data-dir}/db/snapshots`) |
-| `--etcd-snapshot-retention` | Number of snapshots to retain (default: 5) |
-| `--etcd-snapshot-schedule-cron` | Snapshot interval time in cron spec. eg. every 5 hours `0 */5 * * *` (default: `0 */12 * * *`) |
 
-### S3 Compatible Object Store Support
+The data-dir value defaults to `/var/lib/rancher/k3s` and can be changed independently by setting the `--data-dir` flag.
 
-K3s supports writing etcd snapshots to and restoring etcd snapshots from S3-compatible object stores. S3 support is available for both on-demand and scheduled snapshots.
+The `--name` flag can only be set when running the `k3s etcd-snapshot save` command. The other two flags can also be set in the `k3s server` [configuration file](../installation/configuration.md#configuration-file).
+
+On-demand snapshots are saved to the path set by the server's `--etcd-snapshot-dir` value. If you want them replicated to an S3-compatible object store, refer to the [S3 configuration options](https://docs.k3s.io/cli/etcd-snapshot#s3-compatible-object-store-support).
+
+</TabItem>
+</Tabs>
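The snapshot flags shown in the tables above map directly onto keys in the `k3s server` configuration file mentioned in the diff. A minimal sketch with illustrative values — the cron expression and retention count here are examples, not the defaults:

```yaml
# /etc/rancher/k3s/config.yaml -- illustrative values, not defaults
etcd-snapshot-schedule-cron: "0 */6 * * *"   # snapshot at minute 0 of every 6th hour
etcd-snapshot-retention: 10                  # keep the 10 most recent scheduled snapshots
etcd-snapshot-compress: true
etcd-snapshot-dir: /var/lib/rancher/k3s/server/db/snapshots
```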
+
+
+## Deleting Snapshots
+
+Scheduled snapshots are deleted automatically when the number of snapshots exceeds the configured retention count (5 by default). The oldest snapshots are removed first.
+
+To manually delete scheduled or on-demand snapshots, use the `k3s etcd-snapshot delete` command:
+
+```bash
+k3s etcd-snapshot delete <SNAPSHOT-NAME-1> <SNAPSHOT-NAME-2> ...
+```
+
+The `prune` subcommand removes snapshots that match the name prefix (`on-demand` by default) and exceed the configured retention count. Its `--snapshot-retention` flag sets the retention count: for scheduled snapshots it overrides the default retention policy, while on-demand snapshots have no retention policy, so the flag is required.
+
+Prune "on-demand" snapshots down to a smaller amount:
+```bash
+k3s etcd-snapshot prune --snapshot-retention <NUM-OF-SNAPSHOTS-TO-RETAIN>
+```
+Prune "scheduled" snapshots down to a smaller amount:
+```bash
+k3s etcd-snapshot prune --name etcd-snapshot --etcd-snapshot-retention <NUM-OF-SNAPSHOTS-TO-RETAIN>
+```
+
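The prune policy described above — keep only the newest N snapshots whose names match a prefix, delete the rest — can be sketched in a few lines of plain Python. This is a hypothetical illustration of the policy, not k3s's actual implementation; it assumes names end in a unix timestamp, as in the examples on this page:

```python
def prune(snapshots, prefix="on-demand", retention=5):
    """Mimic `k3s etcd-snapshot prune`: keep the newest `retention`
    snapshots whose names start with `prefix`, returning (kept, deleted).
    Snapshot names are assumed to end in a unix timestamp."""
    matching = sorted(
        (s for s in snapshots if s.startswith(prefix)),
        key=lambda s: int(s.rsplit("-", 1)[1]),  # sort by trailing timestamp
        reverse=True,                            # newest first
    )
    return matching[:retention], matching[retention:]
```

Snapshots that do not match the prefix (e.g. scheduled `etcd-snapshot-*` names when pruning `on-demand`) are left untouched, matching the prefix-scoped behavior described above.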
+## S3 Compatible Object Store Support
+
+K3s supports replicating etcd snapshots to and restoring etcd snapshots from S3-compatible object stores. S3 support is available for both on-demand and scheduled snapshots.
 
 | Flag | Description |
 | ----------- | --------------- |
@@ -62,29 +103,26 @@ K3s supports writing etcd snapshots to and restoring etcd snapshots from S3-comp
 | `--etcd-s3-timeout` | S3 timeout (default: `5m0s`) |
 | `--etcd-s3-config-secret` | Name of secret in the kube-system namespace used to configure S3, if etcd-s3 is enabled and no other etcd-s3 options are set |
 
-To perform an on-demand etcd snapshot and save it to S3:
+For example, this is how the creation and deletion of on-demand etcd snapshots in S3 would work:
 
-```bash
-k3s etcd-snapshot save \
---s3 \
---s3-bucket=<S3-BUCKET-NAME> \
---s3-access-key=<S3-ACCESS-KEY> \
---s3-secret-key=<S3-SECRET-KEY>
-```
+```shell-session
+$ k3s etcd-snapshot --s3 --s3-bucket=test-bucket --s3-access-key=test --s3-secret-key=secret save
+INFO[0155] Snapshot on-demand-server-0-1753178523 saved.
+INFO[0155] Snapshot on-demand-server-0-1753178523 saved.
 
-To perform an on-demand etcd snapshot restore from S3, first make sure that K3s isn't running. Then run the following commands:
+$ k3s etcd-snapshot --s3 --s3-bucket=test-bucket --s3-access-key=test --s3-secret-key=secret ls
+Name                          Location                                                                       Size    Created
+on-demand-server-0-1753178523 s3://test-bucket/test-folder/on-demand-server-0-1753178523                     5062688 2025-07-22T10:02:03Z
+on-demand-server-0-1753178523 file:///var/lib/rancher/k3s/server/db/snapshots/on-demand-server-0-1753178523  5062688 2025-07-22T10:02:03Z
 
-```bash
-k3s server \
---cluster-init \
---cluster-reset \
---etcd-s3 \
---cluster-reset-restore-path=<SNAPSHOT-NAME> \
---etcd-s3-bucket=<S3-BUCKET-NAME> \
---etcd-s3-access-key=<S3-ACCESS-KEY> \
---etcd-s3-secret-key=<S3-SECRET-KEY>
+$ k3s etcd-snapshot --s3 --s3-bucket=test-bucket --s3-access-key=test --s3-secret-key=secret delete on-demand-server-0-1753178523
+INFO[0000] Snapshot on-demand-server-0-1753178523 deleted.
+
+$ k3s etcd-snapshot --s3 --s3-bucket=test-bucket --s3-access-key=test --s3-secret-key=secret ls
+Name                          Location                                                                       Size    Created
 ```

125+
88126
### S3 Configuration Secret Support
89127

90128
:::info Version Gate
@@ -129,94 +167,6 @@ stringData:
   etcd-s3-proxy: ""
 ```
 
-## Managing Snapshots
-
-k3s supports a set of subcommands for working with your etcd snapshots.
-
-| Subcommand | Description |
-| ----------- | --------------- |
-| delete | Delete given snapshot(s) |
-| ls, list, l | List snapshots |
-| prune | Remove snapshots that exceed the configured retention count |
-| save | Trigger an immediate etcd snapshot |
-
-These commands will perform as expected whether the etcd snapshots are stored locally or in an S3 compatible object store.
-
-For additional information on the etcd snapshot subcommands, run `k3s etcd-snapshot --help`.
-
-Delete a snapshot from S3.
-
-```bash
-k3s etcd-snapshot delete \
---s3 \
---s3-bucket=<S3-BUCKET-NAME> \
---s3-access-key=<S3-ACCESS-KEY> \
---s3-secret-key=<S3-SECRET-KEY> \
-<SNAPSHOT-NAME>
-```
-
-Prune local snapshots with the default retention policy (5). The `prune` subcommand takes an additional flag `--snapshot-retention` that allows for overriding the default retention policy.
-
-```bash
-k3s etcd-snapshot prune
-```
-
-```bash
-k3s etcd-snapshot prune --snapshot-retention 10
-```
-
-### ETCDSnapshotFile Custom Resources
-
-:::info Version Gate
-ETCDSnapshotFiles are available as of the November 2023 releases: v1.28.4+k3s2, v1.27.8+k3s2, v1.26.11+k3s2, v1.25.16+k3s4
-:::
-
-Snapshots can be viewed remotely using any Kubernetes client by listing or describing cluster-scoped `ETCDSnapshotFile` resources.
-Unlike the `k3s etcd-snapshot list` command, which only shows snapshots visible to that node, `ETCDSnapshotFile` resources track all snapshots present on cluster members.
-
-```shell-session
-$ kubectl get etcdsnapshotfile
-NAME                                             SNAPSHOTNAME                        NODE           LOCATION                                                                            SIZE      CREATIONTIME
-local-on-demand-k3s-server-1-1730308816-3e9290   on-demand-k3s-server-1-1730308816   k3s-server-1   file:///var/lib/rancher/k3s/server/db/snapshots/on-demand-k3s-server-1-1730308816   2891808   2024-10-30T17:20:16Z
-s3-on-demand-k3s-server-1-1730308816-79b15c      on-demand-k3s-server-1-1730308816   s3             s3://etcd/k3s-test/on-demand-k3s-server-1-1730308816                                2891808   2024-10-30T17:20:16Z
-```
-
-```shell-session
-$ kubectl describe etcdsnapshotfile s3-on-demand-k3s-server-1-1730308816-79b15c
-Name:         s3-on-demand-k3s-server-1-1730308816-79b15c
-Namespace:
-Labels:       etcd.k3s.cattle.io/snapshot-storage-node=s3
-Annotations:  etcd.k3s.cattle.io/snapshot-token-hash: b4b83cda3099
-API Version:  k3s.cattle.io/v1
-Kind:         ETCDSnapshotFile
-Metadata:
-  Creation Timestamp:  2024-10-30T17:20:16Z
-  Finalizers:
-    wrangler.cattle.io/managed-etcd-snapshots-controller
-  Generation:        1
-  Resource Version:  790
-  UID:               bec9a51c-dbbe-4746-922e-a5136bef53fc
-Spec:
-  Location:   s3://etcd/k3s-test/on-demand-k3s-server-1-1730308816
-  Node Name:  s3
-  s3:
-    Bucket:           etcd
-    Endpoint:         s3.example.com
-    Prefix:           k3s-test
-    Region:           us-east-1
-    Skip SSL Verify:  true
-  Snapshot Name:      on-demand-k3s-server-1-1730308816
-Status:
-  Creation Time:  2024-10-30T17:20:16Z
-  Ready To Use:   true
-  Size:           2891808
-Events:
-  Type    Reason               Age   From            Message
-  ----    ------               ----  ----            -------
-  Normal  ETCDSnapshotCreated  113s  k3s-supervisor  Snapshot on-demand-k3s-server-1-1730308816 saved on S3
-```
-
-
 ## Restoring Snapshots
 
 K3s runs through several steps when restoring a snapshot:
@@ -235,7 +185,7 @@ K3s runs through several steps when restoring a snapshot:
 Select the tab below that matches your cluster configuration.
 
 <Tabs queryString="etcdsnap">
-<TabItem value="Single Server">
+<TabItem value="Single Server" default>
 
 1. Stop the K3s service:
 ```bash
@@ -305,3 +255,55 @@ In this example there are 3 servers, `S1`, `S2`, and `S3`. The snapshot is locat
 ```
 </TabItem>
 </Tabs>
+
+
+## ETCDSnapshotFile Custom Resources
+
+:::info Version Gate
+ETCDSnapshotFiles are available as of the November 2023 releases: v1.28.4+k3s2, v1.27.8+k3s2, v1.26.11+k3s2, v1.25.16+k3s4
+:::
+
+Snapshots can be viewed remotely using any Kubernetes client by listing or describing cluster-scoped `ETCDSnapshotFile` resources.
+Unlike the `k3s etcd-snapshot list` command, which only shows snapshots visible to that node, `ETCDSnapshotFile` resources track all snapshots present on cluster members.
+
+```shell-session
+$ kubectl get etcdsnapshotfile
+NAME                                             SNAPSHOTNAME                        NODE           LOCATION                                                                            SIZE      CREATIONTIME
+local-on-demand-k3s-server-1-1730308816-3e9290   on-demand-k3s-server-1-1730308816   k3s-server-1   file:///var/lib/rancher/k3s/server/db/snapshots/on-demand-k3s-server-1-1730308816   2891808   2024-10-30T17:20:16Z
+s3-on-demand-k3s-server-1-1730308816-79b15c      on-demand-k3s-server-1-1730308816   s3             s3://etcd/k3s-test/on-demand-k3s-server-1-1730308816                                2891808   2024-10-30T17:20:16Z
+```
+
+```shell-session
+$ kubectl describe etcdsnapshotfile s3-on-demand-k3s-server-1-1730308816-79b15c
+Name:         s3-on-demand-k3s-server-1-1730308816-79b15c
+Namespace:
+Labels:       etcd.k3s.cattle.io/snapshot-storage-node=s3
+Annotations:  etcd.k3s.cattle.io/snapshot-token-hash: b4b83cda3099
+API Version:  k3s.cattle.io/v1
+Kind:         ETCDSnapshotFile
+Metadata:
+  Creation Timestamp:  2024-10-30T17:20:16Z
+  Finalizers:
+    wrangler.cattle.io/managed-etcd-snapshots-controller
+  Generation:        1
+  Resource Version:  790
+  UID:               bec9a51c-dbbe-4746-922e-a5136bef53fc
+Spec:
+  Location:   s3://etcd/k3s-test/on-demand-k3s-server-1-1730308816
+  Node Name:  s3
+  s3:
+    Bucket:           etcd
+    Endpoint:         s3.example.com
+    Prefix:           k3s-test
+    Region:           us-east-1
+    Skip SSL Verify:  true
+  Snapshot Name:      on-demand-k3s-server-1-1730308816
+Status:
+  Creation Time:  2024-10-30T17:20:16Z
+  Ready To Use:   true
+  Size:           2891808
+Events:
+  Type    Reason               Age   From            Message
+  ----    ------               ----  ----            -------
+  Normal  ETCDSnapshotCreated  113s  k3s-supervisor  Snapshot on-demand-k3s-server-1-1730308816 saved on S3
+```
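As the `kubectl get etcdsnapshotfile` output above shows, one underlying snapshot may be tracked by several resources, one per storage location (local file and S3 copy). A plain-Python sketch of grouping such listings by snapshot name — an illustration, not a k3s API:

```python
from collections import defaultdict

def locations_by_snapshot(entries):
    """entries: (resource_name, snapshot_name, node, location) tuples,
    mirroring the NAME/SNAPSHOTNAME/NODE/LOCATION columns above.
    Returns a dict mapping each snapshot name to its storage locations."""
    groups = defaultdict(list)
    for _resource, snapshot, _node, location in entries:
        groups[snapshot].append(location)
    return dict(groups)
```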

docs/networking/basic-network-options.md

Lines changed: 2 additions & 2 deletions
@@ -79,7 +79,7 @@ cat /etc/cni/net.d/10-canal.conflist
 You should see that IP forwarding is set to true.
 
 </TabItem>
-<TabItem value="Calico" default>
+<TabItem value="Calico">
 
 Follow the [Calico CNI Plugins Guide](https://docs.tigera.io/calico/latest/reference/configure-cni-plugins). Modify the Calico YAML so that IP forwarding is allowed in the `container_settings` section, for example:
 
@@ -101,7 +101,7 @@ You should see that IP forwarding is set to true.
 
 
 </TabItem>
-<TabItem value="Cilium" default>
+<TabItem value="Cilium">
 
 Before running `k3s-killall.sh` or `k3s-uninstall.sh`, you must manually remove `cilium_host`, `cilium_net` and `cilium_vxlan` interfaces. If you fail to do this, you may lose network connectivity to the host when K3s is stopped
 
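For reference, the `container_settings` change that the Calico tab above refers to is a small fragment of the Calico CNI plugin configuration. Roughly (field name per the Calico CNI plugins guide; the surrounding plugin config is elided):

```json
{
  "type": "calico",
  "container_settings": {
    "allow_ip_forwarding": true
  }
}
```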
docs/networking/multus-ipams.md

Lines changed: 2 additions & 2 deletions
@@ -49,7 +49,7 @@ spec:
 ```
 
 </TabItem>
-<TabItem value="Whereabouts" default>
+<TabItem value="Whereabouts">
 [Whereabouts](https://github.com/k8snetworkplumbingwg/whereabouts) is an IP Address Management (IPAM) CNI plugin that assigns IP addresses cluster-wide.
 
 To use the Whereabouts IPAM plugin, deploy Multus with the following configuration:
@@ -102,7 +102,7 @@ spec:
 ```
 
 </TabItem>
-<TabItem value="Multus DHCP daemon" default>
+<TabItem value="Multus DHCP daemon">
 The dhcp IPAM plugin can be deployed when there is already a DHCP server running on the network. This daemonset takes care of periodically renewing the DHCP lease. For more information please check the official docs of [DHCP IPAM plugin](https://www.cni.dev/plugins/current/ipam/dhcp/).
 
 To use the DHCP plugin, deploy Multus with the following configuration:
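To make the Whereabouts tab above concrete, a `NetworkAttachmentDefinition` that hands IPAM to the Whereabouts plugin looks roughly like this. The attachment name, master interface, and address range are illustrative, not values from this commit:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-whereabouts   # illustrative name
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.2.0/24"
      }
    }
```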
