This is a proposal for using 2D and 3D <canvas> to customize the rendering of HTML content.
This is a living explainer which is continuously updated as we receive feedback.
The APIs described here are implemented behind a flag in Chromium and can be enabled with chrome://flags/#canvas-draw-element.
There is no web API to easily render complex layouts of text and other content into a <canvas>. As a result, <canvas>-based content suffers in accessibility, internationalization, performance, and quality.
- Styled, Laid Out Content in Canvas. There’s a strong need for better styled text support in Canvas. Examples include chart components (legend, axes, etc.), rich content boxes in creative tools, and in-game menus.
- Accessibility Improvements. There is currently no guarantee that the fallback content used for <canvas> accessibility always matches the rendered content, and such fallback content can be hard to generate. With this API, elements drawn into the canvas will match their corresponding canvas fallback.
- Composing HTML Elements with Effects. A limited set of CSS effects, such as filter, backdrop-filter, and mix-blend-mode, is already available, but there is a desire to use general WebGL shaders with HTML.
- HTML Rendering in a 3D Context. 3D aspects of sites and games need to render rich 2D content into surfaces within a 3D scene.
The solution introduces three main primitives: an attribute to opt-in canvas elements, methods to draw child elements into the canvas, and an event which fires to handle updates.
The layoutsubtree attribute on a <canvas> element opts canvas descendants in to layout and hit testing. It causes the direct children of the <canvas> to establish a stacking context, become a containing block for all descendants, and have paint containment. Canvas element children behave as if they are visible, but their rendering is not visible to the user unless and until they are explicitly drawn into the canvas via a call to drawElementImage() (see below).
The drawElementImage() method draws a child of the canvas into the canvas, and returns a transform that can be applied to element.style.transform to align its DOM location with its drawn location. A snapshot of the rendering of all children of the canvas is recorded just prior to the paint event. When called during the paint event, drawElementImage() will draw the child as it would appear in the current frame. When called outside the paint event, the previous frame's snapshot is used. An exception is thrown if drawElementImage() is called with a child before an initial snapshot has been recorded.
Requirements & Constraints:
- `layoutsubtree` must be specified on the `<canvas>` in the most recent rendering update.
- The element must be a direct child of the `<canvas>` in the most recent rendering update.
- The element must have generated boxes (i.e., not `display: none`) in the most recent rendering update.
- Transforms: The canvas's current transformation matrix is applied when drawing into the canvas. CSS transforms on the source element are ignored for drawing (but continue to affect hit testing/accessibility, see below).
- Clipping: Overflowing content (both layout and ink overflow) is clipped to the element's border box.
- Sizing: The optional `width`/`height` arguments specify a destination rect in canvas coordinates. If omitted, the `width`/`height` arguments default to sizing the element so that it has the same on-screen size and proportion in canvas coordinates as it does outside the canvas.
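The default sizing rule can be illustrated with plain arithmetic. The sketch below (with an illustrative helper name, `defaultDestSize`, that is not part of the API) shows how a destination rect could be derived so the element keeps its on-screen size in canvas grid coordinates:

```javascript
// Illustrative arithmetic for the default sizing rule: with no width/height
// arguments, the element is sized so it occupies the same on-screen area in
// canvas grid coordinates as it does in the page.
function defaultDestSize(elem, canvasGrid, canvasCss) {
  // CSS px → canvas grid px scale factors for each axis.
  const sx = canvasGrid.width / canvasCss.width;
  const sy = canvasGrid.height / canvasCss.height;
  return { dwidth: elem.cssWidth * sx, dheight: elem.cssHeight * sy };
}

// A 100×50 CSS px child in a canvas with a 400×400 grid shown at 200×200 CSS px
// maps to a 200×100 destination rect.
```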
WebGL/WebGPU Support:
Similar methods are added for 3D contexts: WebGLRenderingContext.texElementImage2D and GPUQueue.copyElementImageToTexture.
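As a sketch of how the proposed WebGL method might be used (the helper name `uploadElementTexture` is illustrative, not part of the proposal), a paint handler could upload a changed canvas child as a texture:

```javascript
// Hedged sketch: upload a canvas child as a WebGL texture using the proposed
// texElementImage2D, which mirrors texImage2D but sources from an element
// snapshot instead of an image.
function uploadElementTexture(gl, element) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // Proposed API: target, level, internalformat, format, type, element.
  gl.texElementImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, element);
  return tex;
}
```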
A paint event is added to canvas elements and fires if the rendering of any canvas children has changed. This event fires just after intersection observer steps have run during update-the-rendering. The event contains a list of the canvas children which have changed. Because CSS transforms on canvas children are ignored for rendering, changing the transform does not cause the paint event to fire in the next frame.
To support application patterns which update every frame, a new requestPaint() function is added which will cause the paint event to fire once, even if no children have changed (analogous to requestAnimationFrame()).
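A per-frame update pattern built on requestPaint() might look like the following sketch (the `startPaintLoop` helper and the injected `raf` parameter are illustrative, not part of the proposal):

```javascript
// Hedged sketch: drive canvas redraws every frame by pairing requestPaint()
// with a requestAnimationFrame() loop. The paint handler itself does the
// actual drawing; this loop only guarantees the event keeps firing.
function startPaintLoop(canvas, raf = globalThis.requestAnimationFrame) {
  function tick() {
    // Force the paint event to fire for the next frame even if no canvas
    // children have changed.
    canvas.requestPaint();
    raf(tick);
  }
  raf(tick);
}
```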
The paint event also fires for OffscreenCanvas (main thread or worker) after the canvas element children have been prepared for rendering.
Browser features like hit testing, intersection observer, and accessibility rely on an element's DOM location. To ensure these work, the element's transform property should be updated so that the DOM location matches the drawn location.
Calculating a CSS transform to match a drawn location
The general formula for the CSS transform is:

$$T_{\text{css}} = T_{\text{origin}}^{-1} \cdot S_{\text{css} \to \text{grid}}^{-1} \cdot T_{\text{draw}} \cdot S_{\text{css} \to \text{grid}} \cdot T_{\text{origin}}$$

Where:
- $$T_{\text{draw}}$$ : Transform used to draw the element in the canvas grid coordinate system. For `drawElementImage`, this is $$CTM \cdot T_{(\text{x}, \text{y})} \cdot S_{(\text{destScale})}$$ , where $$CTM$$ is the Current Transformation Matrix, $$T_{(\text{x}, \text{y})}$$ is a translation from the x and y arguments, and $$S_{(\text{destScale})}$$ is a scale from the width and height arguments.
- $$T_{\text{origin}}$$ : Translation matrix of the element's computed `transform-origin`.
- $$S_{\text{css} \to \text{grid}}$$ : Scaling matrix converting CSS pixels to Canvas Grid pixels.
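As an illustration, the formula can be evaluated with plain 2D affine matrices. This is a sketch under stated assumptions (uniform CSS-to-grid scale, matrices laid out as `[a, b, c, d, e, f]` as in canvas `setTransform()`); real code would use DOMMatrix multiplication, and `cssTransformForDraw` is a made-up helper name:

```javascript
// Multiply two 2D affine matrices [a, b, c, d, e, f]: mul(m, n) applies n,
// then m, matching the left-to-right order of the formula above.
function mul(m, n) {
  return [
    m[0]*n[0] + m[2]*n[1],
    m[1]*n[0] + m[3]*n[1],
    m[0]*n[2] + m[2]*n[3],
    m[1]*n[2] + m[3]*n[3],
    m[0]*n[4] + m[2]*n[5] + m[4],
    m[1]*n[4] + m[3]*n[5] + m[5],
  ];
}
const translate = (x, y) => [1, 0, 0, 1, x, y];
const scale = (sx, sy) => [sx, 0, 0, sy, 0, 0];

// T_css = T_origin⁻¹ · S_css→grid⁻¹ · T_draw · S_css→grid · T_origin
function cssTransformForDraw(tDraw, origin, cssToGrid) {
  const S = scale(cssToGrid, cssToGrid);
  const Sinv = scale(1 / cssToGrid, 1 / cssToGrid);
  const Oinv = translate(-origin.x, -origin.y);
  const O = translate(origin.x, origin.y);
  return mul(mul(mul(mul(Oinv, Sinv), tDraw), S), O);
}
```

For example, drawing at grid position (20, 40) on a canvas with 2 grid pixels per CSS pixel corresponds to a CSS translation of (10, 20) CSS pixels.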
To assist with synchronization, drawElementImage() returns the CSS transform which can be applied to the element to keep its location synchronized. For 3D contexts, the getElementTransform(element, drawTransform) helper method returns the CSS transform for a given draw transformation matrix.
The transform used to draw the element on the worker thread needs to be synced back to the DOM, and can simply be postMessage()'d back to the main thread.
<canvas id="canvas" style="width: 200px; height: 200px;" layoutsubtree>
<div id="form_element">
name: <input>
</div>
</canvas>
<script>
const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');
canvas.onpaint = () => {
ctx.reset();
let transform = ctx.drawElementImage(form_element, 0, 0);
form_element.style.transform = transform.toString();
};
</script>

In this example, OffscreenCanvas in a worker is used. The canvas child elements are represented as ElementImage objects in the paint event, and are distinguished by their IDs.
<!DOCTYPE html>
<canvas id="canvas" style="width: 300px; height: 200px;" layoutsubtree>
<div id="label">enter your full name:</div>
<input id="input">
</canvas>
<script>
// 1. Setup worker thread.
const worker = new Worker("worker.js");
// 2. Transfer control to the worker.
const offscreen = canvas.transferControlToOffscreen();
worker.postMessage({ canvas: offscreen }, [offscreen]);
// 3. Synchronize the element's CSS transform to match its drawn location.
worker.onmessage = ({data}) => {
  document.getElementById(data.id).style.transform = data.transform.toString();
};
</script>

worker.js:
onmessage = ({data}) => {
  const ctx = data.canvas.getContext('2d');
  data.canvas.onpaint = (event) => {
    const changedLabel = event.changedElements.find(item => item.id === 'label');
    if (changedLabel) {
      let transform = ctx.drawElementImage(changedLabel, 0, 0);
      self.postMessage({id: 'label', transform: transform});
    }
    const changedInput = event.changedElements.find(item => item.id === 'input');
    if (changedInput) {
      let transform = ctx.drawElementImage(changedInput, 0, 100);
      self.postMessage({id: 'input', transform: transform});
    }
  };
};

partial interface HTMLCanvasElement {
[CEReactions, Reflect] attribute boolean layoutSubtree;
attribute EventHandler onpaint;
void requestPaint();
DOMMatrix getElementTransform((Element or ElementImage) element, DOMMatrix drawTransform);
};
partial interface OffscreenCanvas {
attribute EventHandler onpaint;
void requestPaint();
DOMMatrix getElementTransform((Element or ElementImage) element, DOMMatrix drawTransform);
};
partial interface CanvasRenderingContext2D {
DOMMatrix drawElementImage((Element or ElementImage) element,
unrestricted double x, unrestricted double y);
DOMMatrix drawElementImage((Element or ElementImage) element,
unrestricted double x, unrestricted double y,
unrestricted double dwidth, unrestricted double dheight);
};
partial interface OffscreenCanvasRenderingContext2D {
DOMMatrix drawElementImage((Element or ElementImage) element,
unrestricted double x, unrestricted double y);
DOMMatrix drawElementImage((Element or ElementImage) element,
unrestricted double x, unrestricted double y,
unrestricted double dwidth, unrestricted double dheight);
};
partial interface WebGLRenderingContext {
void texElementImage2D(GLenum target, GLint level, GLint internalformat,
GLenum format, GLenum type, (Element or ElementImage) element);
};
partial interface GPUQueue {
void copyElementImageToTexture((Element or ElementImage) source,
GPUImageCopyTextureTagged destination);
};
[Exposed=(Window,Worker)]
interface PaintEvent : Event {
constructor(DOMString type, optional PaintEventInit eventInitDict);
readonly attribute FrozenArray<(Element or ElementImage)> changedElements;
};
dictionary PaintEventInit : EventInit {
sequence<(Element or ElementImage)> changedElements = [];
};
[Exposed=(Window,Worker)]
interface ElementImage {
// dimensions in device pixels
readonly attribute unsigned long width;
readonly attribute unsigned long height;
// value of `id` attribute on element, or the empty string
readonly attribute DOMString id;
};
A demo of the same thing using an experimental extension of three.js is here. Further instructions and context are here.
The drawElementImage() method and any other methods that draw element image snapshots, as well as the paint event, must not reveal any security- or privacy-sensitive information that isn't otherwise observable to author code.
Both painting (via canvas pixel readbacks or timing attacks) and invalidation (via onpaint) have the potential to leak sensitive information, and this is prevented by excluding sensitive information when painting and invalidating.
Sensitive information includes:
- Cross-origin data in embedded content (e.g., `<iframe>`, `<img>`), `<url>` references (e.g., `background-image`, `clip-path`), and SVG (e.g., `<use>`). Note that same-origin iframes would still paint, but cross-origin content in them would not.
- System colors, themes, or preferences.
- Spelling and grammar markers.
- Visited link information.
- Pending form autofill information not otherwise available to JavaScript.
The following new information is not considered sensitive:
- Search text (find-in-page) and text-fragment (fragment url) markers.
- Form element appearance.
- Caret blink rate.
The HTML-in-Canvas features may be enabled with chrome://flags/#canvas-draw-element in Chrome Canary.
We are most interested in feedback on the following topics:
- What content works, and what fails? Which failure modes are most important to fix?
- How does the feature interact with accessibility features? How can accessibility support be improved?
Please file bugs or design issues here.
A new paint event is needed to give developers an opportunity to update their canvas rendering in response to paint changes. This is integrated into update the rendering so that canvas updates can occur in sync with the DOM.
There are several opportunities in the update the rendering steps where the paint event could fire:
- 14. Run animation frame callbacks.
- 16.2.1. Recalculate styles and update layout.
- 16.2.6. Deliver resize observers, looping back to 16.2.1 if needed.
  - Option A: Fire `paint` at resize observer timing, looping back to 16.2.1 if needed.
- 19. Run the update intersection observations steps.
- Paint, where the painted output of elements is calculated. This is not an explicitly named step in update the rendering.
  - Option B: Fire `paint` immediately after Paint, looping back to 16.2.1 if needed.
  - Option C: Fire `paint` immediately after Paint.
- Commit / thread handoff, where the painted output is sent to another process. This is not an explicitly named step in update the rendering.
Note that the paint event is the new event on canvas introduced in this proposal, and the Paint step is the existing operation that browsers perform to record the painted output of the rendering tree following paint order.
Similar to resize observer, a looping approach is needed to handle cases where the paint event performs modifications (including of elements outside the canvas). There is no mechanism for preventing arbitrary JavaScript from modifying the DOM. Looping will be required for more conditions than those required by ResizeObserver, such as background style changes. A downside of looping is that the user's canvas code may need to run multiple times per frame.
One option is to do a synchronous Paint step to snapshot the painted output of canvas children. A downside of this approach is that the Paint step may be expensive to run, and may need to be run multiple times. This approach has unique implementation challenges in Gecko, and possibly other engines, due to architectural limitations.
A second option is to not run the Paint step synchronously, but instead record a placeholder representing how an element will appear on the next rendering update (see design). This model can be implemented with 2D canvas by buffering the canvas commands until the next Paint step. When the next Paint step occurs, the placeholders would then be replaced with the actual rendering. Canvas operations such as getImageData require synchronous flushing of the canvas command buffer and would need to show blank or stale data for the placeholders. Unfortunately, this approach has a fundamental flaw for WebGL because many APIs require flushing (e.g., getError(), see callsites of WaitForCmd), and calling any of these APIs would result in a deadlock or inconsistent rendering. Therefore, we must run the paint event at a time where we have the complete painted display list of an element already available.
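The placeholder model above can be sketched as a toy data structure (illustrative only, not the actual implementation): 2D canvas commands are buffered, drawElementImage records a placeholder, and the next Paint step substitutes the real element snapshot before the buffer is replayed.

```javascript
// Toy model of the command-buffering approach: placeholders stand in for
// element renderings until the next Paint step resolves them.
class BufferedContext {
  constructor() { this.buffer = []; }
  fillRect(...args) { this.buffer.push({ op: 'fillRect', args }); }
  drawElementImage(elementId, x, y) {
    // No rendering is available yet; record a placeholder instead.
    this.buffer.push({ op: 'placeholder', elementId, x, y });
  }
  // Called at the next Paint step, when real renderings exist.
  resolve(snapshots) {
    return this.buffer.map(cmd => cmd.op === 'placeholder'
      ? { op: 'drawImage', image: snapshots[cmd.elementId], x: cmd.x, y: cmd.y }
      : cmd);
  }
}
```

Operations like getImageData, which need the resolved pixels synchronously, are exactly the cases where this model breaks down, as noted above.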
See above for the reasons and downsides of looping when there are modifications made during the paint event.
The upside of option B as compared with option A is that it does not require partial Paint of canvas children. An additional downside is that even more steps of update the rendering need to run on each iteration of the loop.
This is the design approach taken for the API.
This approach only runs paint once per frame, similar to the browser's own Paint step. To solve the issue of JavaScript being able to perform arbitrary modifications, it is important to ensure that before paint runs we have locked in the contents of the rendering update, except for one intentional carve-out: the drawn content of the canvas. DOM invalidations that may occur in the paint event apply to the subsequent frame, not the current frame.
To support threaded effects, we explored a design where canvas children "snapshots" are sent to a worker thread. In response to threaded scrolling and animations, the worker thread could then render the most up-to-date rendering of the snapshots into OffscreenCanvas. This model requires that javascript can be synchronously called on scroll and animation updates, which is difficult for architectures that perform threaded scroll updates in a restricted process.
To support threaded effects such as scrolling and animations, we are considering a future "auto-updating canvas" mode.
In this model, drawElementImage records a placeholder representing the latest rendering. Canvas retains a command buffer which can be automatically replayed following every scroll or animation update. This allows the canvas to re-rasterize with updated placeholders that incorporate threaded scrolling and animations, without needing to block on script. This would enable visual effects that stay perfectly in sync with native scrolling or animations within the canvas, independent of the main thread. This design is viable for 2D contexts, and may be viable for WebGPU with some small API additions.