This module defines a file-system that already exists (i.e. it does not create a new file system) in a way that can be shared with other modules. This allows a compute VM to mount a filesystem that is not part of the current deployment group.
Pre-existing network storage can be referenced in the same way as any Cluster Toolkit-supported file system, such as filestore.
For more information on network storage options in the Cluster Toolkit, see the extended Network Storage documentation.
```yaml
  - id: homefs
    source: modules/file-system/pre-existing-network-storage
    settings:
      server_ip: ## Set server IP here ##
      remote_mount: nfsshare
      local_mount: /home
      fs_type: nfs
```

This creates a pre-existing-network-storage module in terraform at the
provided IP in `server_ip` of type nfs that will be mounted at `/home`. Note
that the `server_ip` must be known before deployment.
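For orientation, the settings above ultimately translate into an `/etc/fstab` entry. A minimal sketch of that entry, assuming a hypothetical `server_ip` of `10.0.0.2` (the real value must be supplied in the blueprint):

```shell
#!/bin/sh
# Hypothetical values standing in for the blueprint settings above;
# 10.0.0.2 is a made-up server_ip.
server_ip="10.0.0.2"
remote_mount="nfsshare"
local_mount="/home"
fs_type="nfs"
mount_options="defaults,_netdev"

# An nfs fstab entry takes the form <server>:/<export> <mountpoint> <type> <options> 0 0
echo "${server_ip}:/${remote_mount} ${local_mount} ${fs_type} ${mount_options} 0 0"
```

The exact entry written by the module may differ; this only illustrates how the settings map onto the standard fstab fields.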
The following is an example of using pre-existing-network-storage with a GCS
bucket:
```yaml
  - id: data-bucket
    source: modules/file-system/pre-existing-network-storage
    settings:
      remote_mount: my-bucket-name
      local_mount: /data
      fs_type: gcsfuse
      mount_options: defaults,_netdev,implicit_dirs
```

The `implicit_dirs` mount option allows object paths to be treated as if they
were directories. This is important when working with files that were created by
another source, but it may have performance impacts. The `_netdev` mount option
denotes that the storage device requires network access.
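As a sketch, the corresponding fstab entry uses the bucket name itself as the device (no `gs://` prefix), with `gcsfuse` as the filesystem type:

```shell
#!/bin/sh
# Values from the example above; my-bucket-name is a placeholder bucket.
bucket="my-bucket-name"
local_mount="/data"
mount_options="defaults,_netdev,implicit_dirs"

# For gcsfuse the fstab "device" is simply the bucket name (no gs:// prefix)
echo "${bucket} ${local_mount} gcsfuse ${mount_options} 0 0"
```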
The following is an example of using pre-existing-network-storage with the lustre
filesystem:
```yaml
  - id: lustrefs
    source: modules/file-system/pre-existing-network-storage
    settings:
      fs_type: lustre
      server_ip: 192.168.227.11@tcp
      local_mount: /scratch
      remote_mount: /exacloud
```

Note the use of the MGS NID (Network ID) in the `server_ip` field, in
particular the `@tcp` suffix.
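A sketch of the device string this produces for the mount: the MGS NID and the remote fs name are joined with `:` (here `remote_mount` already carries its leading slash):

```shell
#!/bin/sh
# Values from the lustre example above.
server_ip="192.168.227.11@tcp"   # MGS NID, including the @tcp suffix
remote_mount="/exacloud"
local_mount="/scratch"

# For lustre the fstab device is <MGS NID>:<fsname>
echo "${server_ip}:${remote_mount} ${local_mount} lustre defaults,_netdev 0 0"
```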
The following is an example of using pre-existing-network-storage with the
managed_lustre filesystem:
```yaml
  - id: lustrefs
    source: modules/file-system/pre-existing-network-storage
    settings:
      fs_type: managed_lustre
      server_ip: 192.168.227.11@tcp
      local_mount: /scratch
      remote_mount: /mg_lustre
```

This is similar to the lustre filesystem, except that it connects to a
managed Lustre instance hosted by GCP. Currently only Rocky Linux 8,
Ubuntu 20.04, and Ubuntu 22.04 are supported.
The following is an example of using pre-existing-network-storage with the daos
filesystem. To use an existing Parallelstore instance, `fs_type` must be set
explicitly in the blueprint. The `remote_mount` option refers to the access
points of the Parallelstore instance.
```yaml
  - id: parallelstorefs
    source: modules/file-system/pre-existing-network-storage
    settings:
      fs_type: daos
      remote_mount: "[10.246.99.2,10.246.99.3,10.246.99.4]"
      mount_options: disable-wb-cache,thread-count=16,eq-count=8
```

Parallelstore supports additional options for its mountpoints under the
`parallelstore_options` setting.
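Loosely, each comma-separated entry in `mount_options` becomes a flag on the dfuse command line. The translation can be sketched as follows; this is a simplification of what the real mount runner does, and `/data` is a hypothetical mountpoint:

```shell
#!/bin/sh
# Sketch: turn the comma-separated mount_options into dfuse-style flags.
mount_options="disable-wb-cache,thread-count=16,eq-count=8"

flags=""
for opt in $(echo "${mount_options}" | tr ',' ' '); do
  flags="${flags}--${opt} "
done

# Hypothetical invocation; the actual runner supplies further arguments.
echo "dfuse ${flags}-m /data"
```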
Use daos_agent_config to provide additional configuration for daos_agent, for example:
```yaml
  - id: parallelstorefs
    source: modules/file-system/pre-existing-network-storage
    settings:
      fs_type: daos
      remote_mount: "[10.246.99.2,10.246.99.3,10.246.99.4]"
      mount_options: disable-wb-cache,thread-count=16,eq-count=8
      parallelstore_options:
        daos_agent_config: |
          credential_config:
            cache_expiration: 1m
```

Use `dfuse_environment` to provide additional environment variables for the dfuse process, for example:
```yaml
  - id: parallelstorefs
    source: modules/file-system/pre-existing-network-storage
    settings:
      fs_type: daos
      remote_mount: "[10.246.99.2,10.246.99.3,10.246.99.4]"
      mount_options: disable-wb-cache,thread-count=16,eq-count=8
      parallelstore_options:
        dfuse_environment:
          D_LOG_FILE: /tmp/client.log
          D_APPEND_PID_TO_LOG: 1
          D_LOG_MASK: debug
```

For the fs_type values listed below, this module provides `client_install_runner`
and `mount_runner` outputs. These can be used to create a startup script that
mounts the network storage system.
Supported fs_type:
- nfs
- lustre
- managed_lustre
- gcsfuse
- daos
`scripts/mount.sh` is used as the contents of `mount_runner`. This script
updates `/etc/fstab` and mounts the network storage. It will fail if the
specified `local_mount` is already in use by another entry in `/etc/fstab`.
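The duplicate-mount check can be sketched like this; it is a simplification (the real logic lives in `scripts/mount.sh`), and the fstab content here is invented for illustration:

```shell
#!/bin/sh
# Simplified sketch of the guard: flag a conflict when local_mount already
# appears as the mount point (second field) of an existing fstab entry.
fstab_contents="10.0.0.2:/nfsshare /home nfs defaults,_netdev 0 0"
local_mount="/home"

if printf '%s\n' "${fstab_contents}" | awk -v m="${local_mount}" '$2 == m { found = 1 } END { exit !found }'; then
  result="conflict: ${local_mount} already present in /etc/fstab"
else
  result="ok to mount"
fi
echo "${result}"
```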
Both of these steps are handled automatically when the `use` keyword is applied
in a selection of Cluster Toolkit modules. See the compatibility matrix in
the network storage doc for a complete list of supported modules.
| Name | Version |
|---|---|
| terraform | >= 0.14.0 |
No providers.
No modules.
No resources.
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| fs_type | Type of file system to be mounted (e.g., nfs, lustre). | string | "nfs" | no |
| local_mount | The mount point where the contents of the device may be accessed after mounting. | string | "/mnt" | no |
| managed_lustre_options | Managed Lustre specific options: gke_support_enabled (bool, default = false). Note: gke_support_enabled does not work with Slurm; the Slurm image must be built with the correct compatibility. | object({...}) | {} | no |
| mount_options | Options describing various aspects of the file system. Consider setting to 'defaults,_netdev,implicit_dirs' when using gcsfuse. | string | "defaults,_netdev" | no |
| parallelstore_options | Parallelstore specific options. | object({...}) | {} | no |
| remote_mount | Remote FS name or export. This is the exported directory for nfs, fs name for lustre, and bucket name (without gs://) for gcsfuse. | string | n/a | yes |
| server_ip | The device name as supplied to fs-tab, excluding remote fs-name (for nfs, that is the server IP; for lustre, `<MGS NID>[:<MGS NID>]`). This can be omitted for gcsfuse. | string | "" | no |
| Name | Description |
|---|---|
| client_install_runner | Runner that performs client installation needed to use file system. |
| mount_runner | Runner that mounts the file system. |
| network_storage | Describes a remote network storage to be mounted by fs-tab. |