Recently I spent quite a while writing a PBR (Physically Based Rendering) renderer with IBL (Image Based Lighting) in WebGPU. There's nothing novel about PBR itself: it's a technique that has been around since 2014, and all modern graphics engines use it. But what I haven't seen yet is any tutorial for doing it in WebGPU directly, without using any library or engine. So I decided to write one.
I was tempted to create a full-blown PBR guide with a proper explanation of the theory, but there is already plenty of material for that on the Internet and it would take me months to recreate it. Instead, I decided to write up just the missing parts I had to figure out myself, compared to what is available online as of now. So for any "why are we doing this?" questions, refer to the rich resources already out there; I am going to answer the "how" questions for WebGPU.
Drag to rotate. Top row: metallic surface. Bottom row: dielectric surface (plastic). The X axis is roughness, ranging from 0 to 1. See it on a separate page to switch between different tone mapping formulas and environment maps.
All HDRIs are from Poly Haven.
NOTE: One thing to keep in mind: I am not sure what I am doing. I have quite a good grasp of GPU APIs, but I am new to WebGPU and not an expert in computer graphics programming, so some things I do might not really make sense. But it seems to work!
All my work here is based on the excellent Learn OpenGL tutorials. I have implemented basic PBR shading with Lambertian diffuse, GGX specular, and the Fresnel-Schlick approximation. I implemented IBL using a prefiltered environment map and an irradiance map.
Physically Based Rendering is the most modern approach to rendering in computer graphics. It has been a research topic since the 1980s, and around the 2010s it became efficient enough to be used in real-time game engines. Two famous publications about it are Real Shading in Unreal Engine 4 by Brian Karis and Moving Frostbite to PBR by Sébastien Lagarde and Charles de Rousiers.
The basic idea is to finally model the actual physical properties of light instead of the very rough approximations used previously. To achieve real-time performance it's still necessary to approximate and cache a lot of calculations, but the results are much more realistic than before.
PBR divides materials into two categories: metals and dielectrics. Metals have lots of free electrons, so the light they reflect is almost entirely specular and tinted by the metal's own color, and they have essentially no diffuse reflection. Dielectrics (everything non-metallic, like plastic) reflect only a small, untinted specular portion of the light and scatter the rest as diffuse reflection (don't quote me on the exact physics, though).
IBL (Image Based Lighting) is a technique that uses environment maps to light the scene. It approximates the light bouncing off all the surroundings and coming from all directions, subtly lighting up the scene. It's often used as part of PBR.
WebGPU, simply put, is a successor to WebGL. It reflects the evolution of computer graphics APIs and follows the approach of Vulkan, Metal, and DirectX 12 with their lower-level, more explicit designs.
If you are curious about the history of graphics APIs and would like to know more about how WebGPU came to life and what compromises were made along the way, check out this wonderful article: I want to talk about WebGPU.
There are a couple of things that might not be obvious to people new to WebGPU, and they certainly weren't obvious to me. When translating the code from OpenGL, these are the parts I struggled with.
For parsing the HDR texture I used the simple but excellent parse-hdr library by Marcin Ignac. I only had to modify one thing: I ran into a situation where my platform was refusing to handle rgba32float, but the library outputs its data as a Float32Array. I found out that there's a library, @petamoriken/float16, which implements Float16Array, so the data can be stored as 16-bit floats and uploaded as rgba16float instead. I patched the library to use it and was good to go.
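Roughly, the resulting upload path looks like this. This is a simplified sketch rather than the exact patch: the env.hdr URL, the device handle, and the exact shape of parse-hdr's return value are assumptions here.
import parseHdr from "parse-hdr";
import { Float16Array } from "@petamoriken/float16";

// Fetch and parse the .hdr file; parse-hdr is assumed to return
// { shape: [width, height], data: Float32Array } with RGBA pixels.
const response = await fetch("env.hdr");
const hdr = parseHdr(await response.arrayBuffer());
const [width, height] = hdr.shape;

// Convert the 32-bit floats to 16-bit halfs so the texture can be rgba16float.
const pixels = new Float16Array(hdr.data);

const hdrTexture = device.createTexture({
  size: [width, height],
  format: "rgba16float",
  usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST,
});

device.queue.writeTexture(
  { texture: hdrTexture },
  pixels.buffer,
  { bytesPerRow: width * 4 * 2 }, // 4 channels x 2 bytes per half float
  [width, height],
);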
I used the Cubemap example from the WebGPU samples. Nothing particularly surprising there.
I don't remember exactly how I found it, but the trick for rendering into a single face of the cubemap is to pass baseArrayLayer and arrayLayerCount when creating the view. Other than that, it requires the same math as the corresponding OpenGL solution, like preparing six view matrices, one for each face of the cubemap (see the sketch after the code below).
const passEncoder = commandEncoder.beginRenderPass({
colorAttachments: [
{
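// i is the index of the cubemap face (0-5) being rendered by this pass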
view: cubemapTexture.createView({
baseArrayLayer: i,
arrayLayerCount: 1,
}),
loadOp: "load",
storeOp: "store",
},
],
// ...
});
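For reference, here is what the per-face view setup can look like. This is a sketch using gl-matrix (an assumption, the tutorial's code may differ), and the exact up vectors depend on your coordinate conventions.
import { mat4, vec3 } from "gl-matrix";

const origin = vec3.fromValues(0, 0, 0);

// Look direction and up vector for each cubemap face: +X, -X, +Y, -Y, +Z, -Z.
const faces = [
  [vec3.fromValues(1, 0, 0), vec3.fromValues(0, -1, 0)],
  [vec3.fromValues(-1, 0, 0), vec3.fromValues(0, -1, 0)],
  [vec3.fromValues(0, 1, 0), vec3.fromValues(0, 0, 1)],
  [vec3.fromValues(0, -1, 0), vec3.fromValues(0, 0, -1)],
  [vec3.fromValues(0, 0, 1), vec3.fromValues(0, -1, 0)],
  [vec3.fromValues(0, 0, -1), vec3.fromValues(0, -1, 0)],
];

// 90 degree FOV and square aspect ratio so each face covers exactly one cube side;
// perspectiveZO (recent gl-matrix versions) matches WebGPU's 0..1 clip-space depth.
const projection = mat4.perspectiveZO(mat4.create(), Math.PI / 2, 1, 0.1, 10);

const viewMatrices = faces.map(([dir, up]) =>
  mat4.lookAt(mat4.create(), origin, dir, up),
);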
For that I used the WebGPU Fundamentals tutorial, specifically the guide on generating mips on the GPU.
To render into a specific mip level of the prefiltered map, the trick is to also specify baseMipLevel and mipLevelCount in addition to the two parameters from before.
const passEncoder = commandEncoder.beginRenderPass({
colorAttachments: [
{
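// i is the cubemap face index and mip is the current mip level, both from outer loops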
view: prefilterTexture.createView({
baseArrayLayer: i,
arrayLayerCount: 1,
baseMipLevel: mip,
mipLevelCount: 1,
}),
clearValue: [0.3, 0.3, 0.3, 1],
loadOp: "load",
storeOp: "store",
},
],
// ...
});
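For context, the loops around that render pass look roughly like this. It's a sketch: PREFILTER_SIZE, MIP_LEVEL_COUNT, and renderFace are placeholder names, and the roughness-per-mip mapping follows the Learn OpenGL approach.
const PREFILTER_SIZE = 128; // size of mip 0 of the prefiltered cubemap
const MIP_LEVEL_COUNT = 5;

for (let mip = 0; mip < MIP_LEVEL_COUNT; mip++) {
  const size = PREFILTER_SIZE >> mip; // each mip is half the previous size
  const roughness = mip / (MIP_LEVEL_COUNT - 1); // roughness goes 0..1 across the mip chain

  for (let i = 0; i < 6; i++) {
    // one render pass per (face, mip) pair, using the view from the snippet above
    renderFace(commandEncoder, { face: i, mip, size, roughness });
  }
}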
It also turns out it's possible to sample a specific mip level using the textureSampleLevel function. That was the part I worried about the most, as I knew there wouldn't really be any way around it if it wasn't directly available in the API. Fortunately, WebGPU is now a mature and serious API and handles things like that.
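// MAX_REFLECTION_LOD is the highest mip index of the prefiltered map (mip count - 1 in the Learn OpenGL setup)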
let prefilteredColor = textureSampleLevel(
prefilterMap, // texture
ourSampler, // sampler
r, // texture coordinate
roughness * MAX_REFLECTION_LOD, // mip level
).rgb;
See the full source code on GitHub or the live demo.
There's so much more to say about the rendering techniques, optimizations, compromises, and so on, but I am not the best person to talk about that. If you want to learn more, check the recommended reading section below. I hope this article and the source code will help you get started with the topic if you wish to try it.
Sometimes I write blog posts. It doesn't happen very often or at regular intervals, so subscribing to my newsletter might come in handy if you enjoy what I write about.