Commit 13f129f

docs: more docs
1 parent 71757bd commit 13f129f

File tree

1 file changed: +14 -5 lines changed

docs/src/tutorials/sharding.md

Lines changed: 14 additions & 5 deletions
@@ -82,12 +82,10 @@ compiled_big_sin_sharded_both = @compile big_sin(x_sharded_both)
 compiled_big_sin_sharded_both(x_sharded_both)
 ```
 
-Sharding in reactant requires you to specify how the data is s
+Sharding in Reactant requires you to specify how the data is sharded across devices on a mesh. We start by specifying the mesh [`Sharding.Mesh`](@ref), which is a collection of the devices reshaped into an N-D grid. Additionally, we can specify names for each axis of the mesh, which can then be referenced when specifying how the data is sharded.
 
 <!-- TODO describe how arrays are the "global" data arrays, even though the data itself is stored only on the relevant devices and computation is performed only on devices holding the required data (effectively showing under the hood how execution occurs) -->
 
-<!-- TODO simple case that demonstrates send/recv within (e.g. a simple neighbor add) -->
-
 <!-- TODO make a simple conway's game of life, or heat equation using sharding simulation example to show how a ``typical MPI'' simulation can be written using sharding. -->
 
 ## Simple 1-Dimensional Heat Equation
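
The added paragraph above introduces the mesh. As a minimal sketch of what constructing a mesh and sharding an array onto it might look like, assuming eight available devices and the `Sharding.NamedSharding` and `Reactant.to_rarray` entry points (names not shown in this diff, so adjust to the API as actually documented):

```julia
using Reactant

# Reshape the first 8 devices into a 2×4 grid and name the mesh axes
# :x and :y; shardings refer to these axis names later.
mesh = Sharding.Mesh(reshape(Reactant.devices()[1:8], 2, 4), (:x, :y))

# Shard a 4×8 matrix along both named axes: dimension 1 is split across
# :x and dimension 2 across :y, so each device owns a 2×2 tile of the
# global array.
x = reshape(collect(Float32, 1:32), 4, 8)
x_sharded = Reactant.to_rarray(x; sharding=Sharding.NamedSharding(mesh, (:x, :y)))
```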
@@ -187,11 +185,22 @@ end
 @jit simulate(data, 100)
 ```
 
+
+## Devices
+
+You can query the available devices that Reactant can access as follows:
+
+```
+TODO
+```
+
+You can inspect the type of the device, as well as its properties.
+
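
The fenced block above is left as the author's `TODO`. As a hedged sketch of such a query, assuming a `Reactant.devices()` accessor (the name is an assumption, not confirmed by this diff):

```julia
using Reactant

# List every device visible to the Reactant runtime.
devs = Reactant.devices()
println(length(devs), " device(s) available")

# Printing a device shows its kind (CPU/GPU/TPU) and index; the exact
# fields are backend-dependent, so inspect the values interactively.
foreach(println, devs)
```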
+One nice feature of how Reactant handles sharding across multiple devices is that you don't need to change your code as the set of devices changes.
 :::
 
-<!-- TODO describe generation of distributed array by concatenating local-worker data -->
+## Generating Distributed Data by Concatenating Local-Worker Data
 
-<!-- TODO more complex tutorial describing replicated -->
+## Handling Replicated Tensors
 
 ## Sharding in Neural Networks
 
