docs: Pipelines guide #1405


Merged: 17 commits, Jul 17, 2025
5 changes: 5 additions & 0 deletions apps/typegpu-docs/astro.config.mjs
@@ -96,6 +96,11 @@ export default defineConfig({
slug: 'fundamentals/tgsl',
badge: { text: 'new' },
},
{
label: 'Pipelines',
slug: 'fundamentals/pipelines',
badge: { text: 'new' },
},
{
label: 'Buffers',
slug: 'fundamentals/buffers',
@@ -117,7 +117,7 @@ Input values are accessible through the `in` keyword, while the automatically cr

## Usage in pipelines

Typed functions are crucial for simplified pipeline creation offered by TypeGPU. You can define and run pipelines as follows:
Typed functions are crucial for simplified [pipeline](/TypeGPU/fundamentals/pipelines) creation offered by TypeGPU. You can define and run pipelines as follows:

```ts
const pipeline = root['~unstable']
305 changes: 305 additions & 0 deletions apps/typegpu-docs/src/content/docs/fundamentals/pipelines.mdx
@@ -0,0 +1,305 @@
---
title: Pipelines
description: A guide on how to use TypeGPU render and compute pipelines.
---

:::caution[Experimental]
Pipelines are an *unstable* feature. The API may be subject to change in the near future.
:::

:::note[Recommended reading]
It is assumed that you are familiar with the following concepts:
- <a href="https://webgpufundamentals.org/webgpu/lessons/webgpu-fundamentals.html" target="_blank" rel="noopener noreferrer">WebGPU Fundamentals</a>
- [TypeGPU Functions](/TypeGPU/fundamentals/functions)
:::

TypeGPU introduces a custom API for easily defining and executing render and compute pipelines.
It abstracts away the standard `device.createRenderPipeline`/`device.createComputePipeline`, `device.createCommandEncoder`, `encoder.beginRenderPass`/`encoder.beginComputePass`... procedures,
offering a convenient, type-safe way to run shaders on the GPU.

## Creating pipelines

A pipeline definition starts with the [root](/TypeGPU/fundamentals/roots) object and follows a builder pattern.

```ts
const renderPipeline = root['~unstable']
.withVertex(mainVertex, {})
.withFragment(mainFragment, { format: navigator.gpu.getPreferredCanvasFormat() })
.createPipeline();

const computePipeline = root['~unstable']
.withCompute(mainCompute)
.createPipeline();
```

### withVertex

Creating a render pipeline starts with the `withVertex` method, which accepts a tgpu vertexFn and vertex attributes.
The attributes are passed in a record, whose keys match the vertex function's input parameters and whose values are attributes retrieved from a specific [tgpu.vertexLayout](/TypeGPU/fundamentals/vertex-layouts).
If the vertex shader does not use vertex attributes, pass an empty object as the second argument.
The compatibility between vertex input types and vertex attribute formats is validated at the type level.

```ts
.withVertex(mainVert, {
v: vertexLayout.attrib,
center: instanceLayout.attrib.position,
velocity: instanceLayout.attrib.velocity,
})
```

### withFragment

The next step is calling the `withFragment` method, which accepts a tgpu fragmentFn and a *targets* argument defining the formats and behaviors of the color targets the pipeline writes to.
Each target is specified the same as in the WebGPU API (*GPUColorTargetState*).
The difference is that when there are multiple targets, they should be passed in a record, not an array.
This way each target is identified by a name and can be validated against the outputs of the fragment function.

```ts
const fragmentFn = tgpu['~unstable'].fragmentFn({
out: {
color: d.vec4f,
shadow: d.vec4f,
},
})`(...)`;

const renderPipeline = root['~unstable']
.withVertex(vertexFn, {})
.withFragment(fragmentFn, {
color: {
format: 'rg8unorm',
blend: {
color: {
srcFactor: 'one',
dstFactor: 'one-minus-src-alpha',
operation: 'add',
},
alpha: {
srcFactor: 'one',
dstFactor: 'one-minus-src-alpha',
operation: 'add',
},
},
},
shadow: { format: 'r16uint' },
})
.createPipeline();
```

### Type-level validation

Pipelines ensure the compatibility of the vertex output and fragment input at the type level --
`withFragment` only accepts fragment functions whose non-builtin parameters are all returned by the vertex stage.
These parameters are matched by name, not by `location` index.
In general, when using vertex and fragment functions with TypeGPU pipelines, it is not necessary to set locations on the IO struct properties.
The library automatically matches up the corresponding members (by name) and assigns common locations to them.
When the user provides a custom location (via the `d.location` attribute function), the automatic assignment procedure respects it,
as long as the vertex and fragment location values do not conflict.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

const vertex = tgpu['~unstable'].vertexFn({
out: {
pos: d.builtin.position,
},
})`(...)`;
const fragment = tgpu['~unstable'].fragmentFn({
in: { uv: d.vec2f },
out: d.vec4f,
})`(...)`;

const root = await tgpu.init();

// @errors: 2554
root['~unstable']
.withVertex(vertex, {})
.withFragment(fragment, { format: 'bgra8unorm' });
// ^?
```

### Additional render pipeline methods

After calling `withFragment`, but before `createPipeline`, additional pipeline settings can be applied.
This is done through builder methods like `withDepthStencil`, `withMultisample`, and `withPrimitive`,
which accept the same arguments as their corresponding descriptors in the WebGPU API.

```ts
const renderPipeline = root['~unstable']
.withVertex(vertexShader, modelVertexLayout.attrib)
.withFragment(fragmentShader, { format: presentationFormat })
.withDepthStencil({
format: 'depth24plus',
depthWriteEnabled: true,
depthCompare: 'less',
})
.withMultisample({
count: 4,
})
.withPrimitive({ topology: 'triangle-list' })
.createPipeline();
```

### withCompute

Creating a compute pipeline is even easier -- the `withCompute` method accepts just a tgpu computeFn, with no additional parameters.
Please note that compute pipelines are entities separate from render pipelines. You cannot combine the `withVertex` and `withFragment` methods with `withCompute` in a single pipeline.
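
For illustration, a complete compute pipeline definition might look like this (the entry point body and the workgroup size are placeholders, not taken from the guide):

```typescript
import tgpu from 'typegpu';

const root = await tgpu.init();

// A placeholder compute entry point -- the body and the
// workgroup size are assumptions made for this example.
const mainCompute = tgpu['~unstable'].computeFn({
  workgroupSize: [64],
})`{ /* ... */ }`;

const computePipeline = root['~unstable']
  .withCompute(mainCompute)
  .createPipeline();
```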

### createPipeline

The creation of a TypeGPU pipeline ends with calling the `createPipeline` method on the builder.

:::caution
The underlying WebGPU resource is created lazily, that is, just before the first execution or as part of a `root.unwrap` call, not immediately after the `createPipeline` invocation.
:::
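
Creation can also be forced eagerly, which is handy for surfacing validation errors early. A minimal sketch, assuming a `root` and an already defined `pipeline` like the ones above:

```typescript
// Unwrapping builds the underlying GPURenderPipeline immediately
// (otherwise it would be created lazily, on first use).
const rawPipeline = root.unwrap(pipeline);
```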

## Execution

```ts
renderPipeline
.withColorAttachment({
view: context.getCurrentTexture().createView(),
loadOp: 'clear',
storeOp: 'store',
})
.draw(3);

computePipeline.dispatchWorkgroups(16);
```

### Attachments

Render pipelines require specifying a color attachment for each target.
Attachments are specified the same way as in the WebGPU API
(but accept TypeGPU resources as well as regular WebGPU ones).
However, akin to the *targets* argument, multiple attachments must be passed in a record, where each target is identified by name.

Similarly, when using `withDepthStencil` it is necessary to pass in a depth stencil attachment, via the `withDepthStencilAttachment` method.

```ts
renderPipeline
.withColorAttachment({
color: {
view: msaaTextureView,
resolveTarget: context.getCurrentTexture().createView(),
loadOp: 'clear',
storeOp: 'store',
},
shadow: {
view: shadowTextureView,
clearValue: [1, 1, 1, 1],
loadOp: 'clear',
storeOp: 'store',
},
})
.withDepthStencilAttachment({
view: depthTextureView,
depthClearValue: 1,
depthLoadOp: 'clear',
depthStoreOp: 'store',
})
.draw(vertexCount);
```

### Resource bindings

Before executing pipelines, it is necessary to bind all of the utilized resources, like bind groups, vertex buffers and slots. This is done using the `with` method. It accepts a pair of arguments: [a bind group layout and a bind group](/TypeGPU/fundamentals/bind-groups) (render and compute pipelines) or [a vertex layout and a vertex buffer](/TypeGPU/fundamentals/vertex-layouts) (render pipelines only).

```ts
// vertex layout
const vertexLayout = tgpu.vertexLayout(
(n) => d.disarrayOf(d.float16, n),
'vertex',
);
const vertexBuffer = root
.createBuffer(d.disarrayOf(d.float16, 8), [0, 0, 1, 0, 0, 1, 1, 1])
.$usage('vertex');

// bind group layout
const bindGroupLayout = tgpu.bindGroupLayout({
size: {
uniform: d.vec2u,
},
});

const sizeBuffer = root
.createBuffer(d.vec2u, d.vec2u(64, 64))
.$usage('uniform');

const bindGroup = root.createBindGroup(bindGroupLayout, {
size: sizeBuffer,
});

// binding and execution
renderPipeline
.with(vertexLayout, vertexBuffer)
.with(bindGroupLayout, bindGroup)
.draw(8);

computePipeline
.with(bindGroupLayout, bindGroup)
.dispatchWorkgroups(1);
```

### Timing performance

Pipelines also expose the `withPerformanceCallback` and `withTimestampWrites` methods for measuring execution time on the GPU.
For more info about them, refer to the [Timing Your Pipelines guide](/TypeGPU/fundamentals/timestamp-queries/).
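
A minimal sketch of the performance callback; the callback signature (receiving the start and end GPU timestamps of the execution) is an assumption here -- see the linked guide for the exact API:

```typescript
const timedPipeline = computePipeline
  // Assumed signature: the callback receives the measured
  // start and end timestamps of the pipeline's GPU execution.
  .withPerformanceCallback((start, end) => {
    console.log(`GPU time: ${end - start}`);
  });
```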

### draw, dispatchWorkgroups

After creating the render pipeline and setting all of the attachments, it can be put to use by calling the `draw` method.
It accepts the number of vertices and, optionally, the instance count, the first vertex index, and the first instance index.
Calling the method submits the draw call for execution immediately.

Compute pipelines are executed using the `dispatchWorkgroups` method, which accepts the number of workgroups in each dimension.
Unlike render pipelines, running this method does not submit the work to the GPU immediately.
To do so, `root['~unstable'].flush()` needs to be run.
However, that is usually not necessary, as it happens automatically when the results are read.
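
Putting the above together for a compute pipeline (`resultBuffer` is a hypothetical buffer that the shader writes its results to):

```typescript
computePipeline
  .with(bindGroupLayout, bindGroup)
  .dispatchWorkgroups(16);

// Optional: submit the queued work to the GPU explicitly...
root['~unstable'].flush();

// ...though reading a buffer would flush automatically anyway.
const result = await resultBuffer.read();
```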

## Low-level render pipeline execution API

The higher-level API has several limitations, so TypeGPU also exposes another way of executing pipelines for custom, more demanding scenarios. For example, with the high-level API it is not possible to execute multiple pipelines in a single render pass. It may also be missing some of the more niche features of the WebGPU API.

`root['~unstable'].beginRenderPass` is a method that mirrors the WebGPU API, but enriches it with direct TypeGPU resource support.

```ts
root['~unstable'].beginRenderPass(
{
colorAttachments: [{
...
}],
},
(pass) => {
pass.setPipeline(renderPipeline);
pass.setBindGroup(layout, group);
pass.draw(1);
},
);

root['~unstable'].flush();
```

It is also possible to access the underlying WebGPU resources of TypeGPU pipelines by calling `root.unwrap` and passing the tgpu pipelines as arguments.
That way, they can be used with the regular WebGPU API; unlike the `root['~unstable'].beginRenderPass` API, however, this also requires unwrapping all the necessary resources and does not allow using fixed tgpu bufferUsages occupying an automatic bind group.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

const root = await tgpu.init();

const pipeline = root['~unstable']
.withVertex(
tgpu['~unstable'].vertexFn({ out: { pos: d.builtin.position } })`...`,
{},
).withFragment(
tgpu['~unstable'].fragmentFn({ out: d.vec4f })`...`,
{ format: 'rg8unorm' },
)
.createPipeline();

const rawPipeline = root.unwrap(pipeline);
// ^?
```

@@ -274,8 +274,8 @@ function frame(timestamp: DOMHighResTimeStamp) {
p.backgroundColor.z,
1,
],
loadOp: 'clear' as const,
storeOp: 'store' as const,
loadOp: 'clear',
storeOp: 'store',
})
.withDepthStencilAttachment({
view: depthTexture.createView(),
@@ -297,8 +297,8 @@ function frame(timestamp: DOMHighResTimeStamp) {
p.backgroundColor.z,
1,
],
loadOp: 'load' as const,
storeOp: 'store' as const,
loadOp: 'load',
storeOp: 'store',
})
.withDepthStencilAttachment({
view: depthTexture.createView(),
@@ -57,8 +57,8 @@ export async function loadModel(
);

return {
vertexBuffer: vertexBuffer,
polygonCount: polygonCount,
texture: texture,
vertexBuffer,
polygonCount,
texture,
};
}
@@ -1,5 +1,7 @@
<canvas data-fit-to-container></canvas>
<p class="absolute px-2 py-1 rounded-xl top-2 mx-auto text-lg text-center text-white bg-black/50">
<p
class="absolute px-2 py-1 rounded-xl top-2 mx-auto text-lg text-center text-white bg-black/50"
>
Port of "Centrifuge 2" by XorDev (<a
class="text-purple-300"
href="https://xordev.com"
@@ -1,5 +1,7 @@
<canvas data-fit-to-container></canvas>
<p class="absolute px-2 py-1 rounded-xl top-2 mx-auto text-lg text-center text-white bg-black/50">
<p
class="absolute px-2 py-1 rounded-xl top-2 mx-auto text-lg text-center text-white bg-black/50"
>
Port of "Runner" by XorDev (<a
class="text-purple-300"
href="https://xordev.com"