This page describes how to use the `k3s etcd-snapshot` CLI tool to manage etcd snapshots and how to restore from an etcd snapshot.

K3s etcd snapshots are stored on the node file system, and may optionally be uploaded to an S3-compatible object store for disaster recovery scenarios. Snapshots can be automated on a recurring schedule, or taken manually on demand. The `k3s etcd-snapshot` CLI tool offers a set of subcommands that can be used to create, delete, and manage snapshots.

| Subcommand | Description |
| ----------- | --------------- |
| delete | Delete given snapshot(s) |
| ls, list, l | List snapshots |
| prune | Remove snapshots that exceed the configured retention count |
| save | Trigger an on-demand etcd snapshot |

For additional information on the etcd snapshot subcommands, run `k3s etcd-snapshot --help`.

## Creating Snapshots

<Tabs groupId="snapshots">
<TabItem value="Scheduled">

Scheduled snapshots are enabled by default, at 00:00 and 12:00 system time, with 5 snapshots retained. Scheduled snapshots have a name that starts with `etcd-snapshot`, followed by the node name and timestamp.

The following options control the operation of scheduled snapshots:

| Flag | Description |
| ----------- | --------------- |
|`--etcd-snapshot-dir`| Directory to save db snapshots. (Default location: `${data-dir}/db/snapshots`) |
|`--etcd-snapshot-retention`| Number of snapshots to retain (default: 5) |
|`--etcd-snapshot-schedule-cron`| Snapshot interval time in cron spec, e.g. every 5 hours `0 */5 * * *` (default: `0 */12 * * *`) |

The data-dir value defaults to `/var/lib/rancher/k3s` and can be changed independently by setting the `--data-dir` flag.

Scheduled snapshots are saved to the path set by the server's `--etcd-snapshot-dir` value. If you want them replicated to an S3-compatible object store, refer to the [S3 configuration options](https://docs.k3s.io/cli/etcd-snapshot#s3-compatible-object-store-support).
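For example, a sketch of how these flags could be set in the server configuration file, assuming the default `/etc/rancher/k3s/config.yaml` location (values are illustrative):

```yaml
# /etc/rancher/k3s/config.yaml (illustrative values)
# Take a snapshot every 6 hours instead of every 12
etcd-snapshot-schedule-cron: "0 */6 * * *"
# Keep the 10 most recent scheduled snapshots
etcd-snapshot-retention: 10
```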

</TabItem>
<TabItem value="On-demand">

Snapshots can be saved manually by running the `k3s etcd-snapshot save` command. There is no retention policy for on-demand snapshots; they must be removed manually using the `k3s etcd-snapshot delete` or `k3s etcd-snapshot prune` commands. On-demand snapshots have a name that starts with `on-demand`, followed by the node name and timestamp.

The following options control the operation of on-demand snapshots:

| Flag | Description |
| ----------- | --------------- |
|`--etcd-snapshot-dir`| Directory to save db snapshots. (Default location: `${data-dir}/db/snapshots`) |

The data-dir value defaults to `/var/lib/rancher/k3s` and can be changed independently by setting the `--data-dir` flag.

The `--name` flag can only be set when running the `k3s etcd-snapshot save` command. The other two can also be set in the `k3s server` [configuration file](../installation/configuration.md#configuration-file).

On-demand snapshots are saved to the path set by the server's `--etcd-snapshot-dir` value. If you want them replicated to an S3-compatible object store, refer to the [S3 configuration options](https://docs.k3s.io/cli/etcd-snapshot#s3-compatible-object-store-support).
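For example, an on-demand snapshot with a custom base name could be taken like this (the name `pre-upgrade` is illustrative):

```bash
# Take an on-demand snapshot with a custom base name via the --name flag
k3s etcd-snapshot save --name pre-upgrade
```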

</TabItem>
</Tabs>

## Deleting Snapshots

Scheduled snapshots are deleted automatically when the number of snapshots exceeds the configured retention count (5 by default). The oldest snapshots are removed first.

To manually delete scheduled snapshot(s) or on-demand snapshot(s), you can use the `k3s etcd-snapshot delete` command:
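For example, using a snapshot name as reported by `k3s etcd-snapshot ls` (the name shown is illustrative):

```bash
# Delete a single snapshot by name
k3s etcd-snapshot delete on-demand-k3s-server-1-1730308816
```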

The `prune` subcommand removes snapshots that match a name prefix (`on-demand` by default) and exceed the configured retention count. It accepts a `--snapshot-retention` flag to set that count. For scheduled snapshots, this overrides the default retention policy; on-demand snapshots have no retention policy, so for them the flag is required.

Prune "on-demand" snapshots down to a smaller retention count:
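A sketch of such an invocation (the retention count is illustrative):

```bash
# Keep only the 3 most recent snapshots matching the default "on-demand" prefix
k3s etcd-snapshot prune --snapshot-retention 3
```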

## S3 Compatible Object Store Support

K3s supports replicating etcd snapshots to and restoring etcd snapshots from S3-compatible object stores. S3 support is available for both on-demand and scheduled snapshots.

| Flag | Description |
| ----------- | --------------- |
|`--etcd-s3-config-secret`| Name of secret in the kube-system namespace used to configure S3, if etcd-s3 is enabled and no other etcd-s3 options are set |

For example, this is how the creation and deletion of on-demand etcd snapshots in S3 would work:

```shell-session
$ k3s etcd-snapshot --s3 --s3-bucket=test-bucket --s3-access-key=test --s3-secret-key=secret save
$ k3s etcd-snapshot --s3 --s3-bucket=test-bucket --s3-access-key=test --s3-secret-key=secret ls
Name Location Size Created
```

### S3 Configuration Secret Support
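As a hedged sketch of such a secret, assuming its keys mirror the `--etcd-s3-*` flag names (only the `etcd-s3-proxy` key, the `stringData` form, and the kube-system namespace are confirmed by the surrounding text; the secret name and values are illustrative), it might look like:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Illustrative name; the secret is selected via --etcd-s3-config-secret
  name: k3s-etcd-snapshot-s3-config
  namespace: kube-system
stringData:
  etcd-s3-endpoint: "s3.example.com"
  etcd-s3-bucket: "test-bucket"
  etcd-s3-access-key: "test"
  etcd-s3-secret-key: "secret"
  etcd-s3-proxy: ""
```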
## Restoring Snapshots
K3s runs through several steps when restoring a snapshot. Select the tab below that matches your cluster configuration.

<Tabs queryString="etcdsnap">
<TabItem value="Single Server" default>

1. Stop the K3s service:
   ```bash
   systemctl stop k3s
   ```
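The restore itself is then performed by resetting the cluster and pointing it at a snapshot; a minimal sketch using the `--cluster-reset` and `--cluster-reset-restore-path` server flags (the snapshot path is a placeholder):

```bash
# Reset etcd cluster membership and restore from the given snapshot,
# then restart the K3s service normally once the reset completes
k3s server \
  --cluster-reset \
  --cluster-reset-restore-path=<PATH-TO-SNAPSHOT>
```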

</TabItem>
</Tabs>

## ETCDSnapshotFile Custom Resources

:::info Version Gate
ETCDSnapshotFiles are available as of the November 2023 releases: v1.28.4+k3s2, v1.27.8+k3s2, v1.26.11+k3s2, v1.25.16+k3s4
:::
Snapshots can be viewed remotely using any Kubernetes client by listing or describing cluster-scoped `ETCDSnapshotFile` resources.
Unlike the `k3s etcd-snapshot list` command, which only shows snapshots visible to that node, `ETCDSnapshotFile` resources track all snapshots present on cluster members.
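For example, assuming `kubectl` access to the cluster and the `k3s.cattle.io` API group for the resource (the snapshot name is illustrative):

```bash
# List all snapshots tracked across cluster members
kubectl get etcdsnapshotfiles.k3s.cattle.io
# Inspect a single snapshot, including its storage location and events
kubectl describe etcdsnapshotfile on-demand-k3s-server-1-1730308816
```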