If necessary, the checkpoint daemon also activates the Tarantool garbage collector
that deletes old snapshot and WAL files.

.. NOTE::

    This garbage collector is distinct from the `Lua garbage collector <https://www.lua.org/manual/5.1/manual.html#2.10>`_,
    which is for Lua objects, and distinct from the Tarantool garbage collector
    that specializes in :ref:`handling shard buckets <vshard-gc>`.

This garbage collector is called as follows:

* When the number of snapshots reaches the limit set by
  :ref:`snapshot.count <configuration_reference_snapshot_count>`.
  After a new snapshot is taken, the Tarantool garbage collector deletes
  the oldest snapshot file and any associated WAL files.

* When the total size of all WAL files created since the last snapshot reaches the limit set by
  :ref:`snapshot.by.wal_size <configuration_reference_snapshot_by_wal_size>`.
  Once this size is exceeded, the checkpoint daemon takes a snapshot, and the garbage
  collector then deletes the old WAL files (see the configuration sketch after this list).
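
The following is a minimal sketch of setting these two limits from Lua through the
long-standing ``box.cfg`` equivalents of these options, ``checkpoint_count`` and
``checkpoint_wal_threshold``; in a YAML-configured cluster you would set
``snapshot.count`` and ``snapshot.by.wal_size`` instead. The values here are
illustrative, not recommendations:

.. code-block:: lua

    -- Keep at most two snapshots: when a third one is taken, the
    -- oldest snapshot and the WAL files it covers become garbage.
    -- Also take a snapshot automatically once the WALs written
    -- since the last snapshot exceed 512 MB.
    box.cfg{
        checkpoint_count = 2,
        checkpoint_wal_threshold = 512 * 1024 * 1024,
    }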

If the checkpoint daemon deletes an old snapshot file, the Tarantool garbage collector also deletes
any :ref:`write-ahead log (.xlog) <internals-wal>` files that meet the following conditions:

* The WAL files are older than the snapshot file.
* The WAL files contain information present in the snapshot file.

The Tarantool garbage collector also deletes obsolete vinyl ``.run`` files.
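
Because all of this cleanup is tied to checkpoints, taking a checkpoint by hand also
lets the garbage collector reclaim eligible files sooner. A minimal sketch using the
standard ``box.snapshot()`` call:

.. code-block:: lua

    -- Take a checkpoint immediately instead of waiting for the
    -- checkpoint daemon; the garbage-collection rules described
    -- above then apply to the snapshot and WAL files left behind.
    box.snapshot()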

The checkpoint daemon and the Tarantool garbage collector don't delete a file
in the following cases (see the inspection sketch after this list):

* A **backup** is running, and the file has not been backed up
  (see :ref:`"Hot backup" <admin-backups-hot_backup_vinyl_memtx>`).

* **Replication** is running, and the file has not been relayed to a replica
  (see :ref:`"Replication architecture" <replication-architecture>`).
* A replica is connecting.

* A replica has fallen behind.
  The progress of each replica is tracked; if a replica's position is far
  from being up to date, the server stops deleting the files that the replica
  still needs, giving it a chance to catch up.
  If an administrator concludes that a replica is permanently down, the
  correct procedure is to restart the server or, preferably,
  :ref:`remove the replica from the cluster <replication-remove_instances>`.
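
To see what the garbage collector is currently retaining and why, inspect its state
with ``box.info.gc()``. The sketch below assumes the ``checkpoints`` and ``consumers``
fields (and their ``name`` and ``signature`` members) reported by recent Tarantool
versions; treat the exact output shape as illustrative:

.. code-block:: lua

    -- Inspect garbage-collector state: how many checkpoints are
    -- still held, and which consumers (replicas, backups) are
    -- pinning older files.
    local gc = box.info.gc()
    print('checkpoints kept: ' .. #gc.checkpoints)
    for _, consumer in ipairs(gc.consumers) do
        -- Each consumer records the point up to which it still
        -- needs the WAL files.
        print(consumer.name, consumer.signature)
    end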