Metal by Tutorials

Third Edition · macOS 12 · iOS 15 · Swift 5.5 · Xcode 13

14. Deferred Rendering
Written by Caroline Begbie & Marius Horga

Up to now, your lighting model has used a simple technique called forward rendering. With traditional forward rendering, you draw each model in turn, and for every fragment you write, you process every light in the scene, even point lights that don't affect that fragment. This quickly becomes a quadratic runtime problem that seriously degrades your app's performance.

Now imagine a scene with a hundred models and a hundred lights. Picture a metropolitan downtown, where buildings and street lights quickly add up to numbers like these. At that point, you'd be looking for an alternative rendering technique.

Deferred rendering, also known as deferred shading or deferred lighting, does two things:

  • In the first pass, it collects information such as material, normals and positions from the models and stores them in a special buffer for later processing in the fragment shader. Unnecessary calculations don’t occur in this first pass. The special buffer is named the G-buffer, where G is for Geometry.
  • In the second pass, it processes all lights in a fragment shader, but only where the light affects the fragment.

This approach takes the runtime from quadratic down to linear, since the light-processing loop runs just once, over the captured G-buffer, rather than once for every model you draw.

Look at the forward rendering algorithm:

// single pass
for each model {
  for each fragment {
    for each light {
      if directional { accumulate lighting }
      if point { accumulate lighting }
      if spot { accumulate lighting }
    }
  }
}

You implemented this algorithm in Chapter 10, “Lighting Fundamentals”.

Point lights affecting fragments

In forward rendering, you process both lights for the magnified fragments in the image above even though the blue light on the right won’t affect them.

Now, compare it to the deferred rendering algorithm:

// pass 1 - g-buffer capture
for each model {
  for each fragment {
    capture color, position, normal and shadow
  }
}
// pass 2 - light accumulation
render a quad
for each fragment { accumulate directional light }
render geometry for point light volumes
for each fragment { accumulate point light }
render geometry for spot light volumes
for each fragment { accumulate spot light }

Four textures comprise the G-buffer

While deferred rendering uses more render passes, you process far fewer lights per fragment. Every fragment processes the directional light, which shades the albedo and applies the directional light's shadow. For each point light, though, you render special geometry that covers only the area that light affects, so the GPU processes only the affected fragments.

Here are the steps you’ll take throughout this chapter:

  • The first pass renders the shadow map. You’ve already done this.
  • The second pass constructs G-buffer textures containing these values: material color (or albedo) with shadow information, world space normals and positions.
  • Using a full-screen quad, the third and final pass processes the directional light. The same pass then renders point light volumes and accumulates point light information. If you have spotlights, you would repeat this process.

Note: Apple GPUs can combine the second and third passes. Chapter 15, “Tile-Based Deferred Rendering”, will revise this chapter’s project to take advantage of this feature.
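
To see the shape of a frame before diving in, here's a rough sketch of how these passes chain together. It isn't the finished Renderer code; the pass draw signature matches the one used throughout this chapter, while shadowRenderPass and the exact wiring are assumptions carried over from the previous chapter's project.

// A sketch of the per-frame pass order only, not the project's final code.
// Pass 1: render the shadow map from the sun's point of view.
shadowRenderPass.draw(
  commandBuffer: commandBuffer,
  scene: scene,
  uniforms: uniforms,
  params: params)

// Pass 2: capture albedo + shadow, normals and positions into the G-buffer.
gBufferRenderPass.draw(
  commandBuffer: commandBuffer,
  scene: scene,
  uniforms: uniforms,
  params: params)

// Pass 3: read the G-buffer and composite the lighting into the drawable.
lightingRenderPass.descriptor = view.currentRenderPassDescriptor
lightingRenderPass.draw(
  commandBuffer: commandBuffer,
  scene: scene,
  uniforms: uniforms,
  params: params)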

The Starter Project

➤ In Xcode, open the starter project for this chapter. The project is almost the same as the end of the previous chapter, with some refactoring and reorganization. There’s new lighting, with extra point lights. The camera and light debugging features from the previous chapter are gone.

Take note of the following additions:

  • In the Game group, in SceneLighting.swift, createPointLights(count:min:max:) creates multiple point lights.
  • Since you’ll deal with many lights, the light buffer is larger than 4KB, which means you can no longer use setFragmentBytes(_:length:index:). Instead, scene lighting is now split into three light buffers: one for sunlight, one for point lights and one containing both sun and point lights, so that forward rendering still works as it did before. Spotlighting isn’t implemented here. (The sketch after this list contrasts the two binding APIs.)
  • In the Render Passes group, GBufferRenderPass.swift is a copy of ForwardRenderPass.swift and is already set up in Renderer. You’ll work on this render pass and change it to suit deferred rendering.
  • In the app, a radio button below the metal view gives you the option to switch between render pass types. There won’t be any difference in the render at this point.
  • For simplicity, the renderer reverts to Phong shading rather than processing textures for PBR.
  • In the Shaders group, in Lighting.metal, phongLighting’s conditional code is refactored into separate functions, one for each lighting method.
  • icosphere.obj is a new model you’ll use later in the chapter.
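
As a reminder of why that buffer split matters, here's a minimal sketch, not the project's code, contrasting the two binding APIs. It assumes a hypothetical bindLights helper and reuses the project's Params, Light and buffer index names; the 4KB limit applies only to the setFragmentBytes path.

import MetalKit

// A sketch only: bindLights is a hypothetical helper, not part of the project.
func bindLights(
  encoder: MTLRenderCommandEncoder,
  device: MTLDevice,
  params: Params,
  lights: [Light]
) {
  // Small, single-use data (under 4KB) can go straight into the command stream.
  var params = params
  encoder.setFragmentBytes(
    &params,
    length: MemoryLayout<Params>.stride,
    index: ParamsBuffer.index)

  // A large array of lights won't fit, so it lives in an MTLBuffer.
  // (The starter's SceneLighting creates its light buffers up front.)
  let lightsBuffer = device.makeBuffer(
    bytes: lights,
    length: MemoryLayout<Light>.stride * lights.count,
    options: [])
  encoder.setFragmentBuffer(
    lightsBuffer,
    offset: 0,
    index: LightBuffer.index)
}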

➤ Build and run the app, and ensure that you know how all of the code fits together.

The starter app

The twenty point lights are placed randomly, so your render may look slightly different.

Note: To visualize where the point lights are, uncomment the DebugLights draw at the end of ForwardRenderPass.swift. You’ll see the point light positions when you choose the Forward option in the app.

The G-buffer Pass

All right, time to build up that G-buffer!

// G-buffer render targets, plus a depth texture for the pass
var albedoTexture: MTLTexture?
var normalTexture: MTLTexture?
var positionTexture: MTLTexture?
var depthTexture: MTLTexture?

// Recreate the textures whenever the drawable size changes
albedoTexture = Self.makeTexture(
  size: size,
  pixelFormat: .bgra8Unorm,
  label: "Albedo Texture")
normalTexture = Self.makeTexture(
  size: size,
  pixelFormat: .rgba16Float,
  label: "Normal Texture")
positionTexture = Self.makeTexture(
  size: size,
  pixelFormat: .rgba16Float,
  label: "Position Texture")
depthTexture = Self.makeTexture(
  size: size,
  pixelFormat: .depth32Float,
  label: "Depth Texture")
// Indices for the G-buffer color attachments, shared with the shaders
typedef enum {
  RenderTargetAlbedo = 1,
  RenderTargetNormal = 2,
  RenderTargetPosition = 3
} RenderTargetIndices;

// Swift convenience for using the enum values as attachment indices
extension RenderTargetIndices {
  var index: Int {
    return Int(rawValue)
  }
}

// Configure a pipeline descriptor's color attachments to match the G-buffer
extension MTLRenderPipelineDescriptor {
  func setGBufferPixelFormats() {
    colorAttachments[RenderTargetAlbedo.index]
      .pixelFormat = .bgra8Unorm
    colorAttachments[RenderTargetNormal.index]
      .pixelFormat = .rgba16Float
    colorAttachments[RenderTargetPosition.index]
      .pixelFormat = .rgba16Float
  }
}
// The G-buffer pass no longer writes to the view's drawable, so in the
// G-buffer pipeline state, this line:
pipelineDescriptor.colorAttachments[0].pixelFormat
  = colorPixelFormat
// becomes:
pipelineDescriptor.colorAttachments[0].pixelFormat
  = .invalid
// ...followed by the three G-buffer attachment formats:
pipelineDescriptor.setGBufferPixelFormats()
// The pipeline's fragment function is now:
"fragment_gBuffer"
// Build the G-buffer render pass descriptor, attaching the three color
// textures and the depth texture
descriptor = MTLRenderPassDescriptor()
let textures = [
  albedoTexture,
  normalTexture,
  positionTexture
]
for (index, texture) in textures.enumerated() {
  let attachment =
    descriptor?.colorAttachments[RenderTargetAlbedo.index + index]
  attachment?.texture = texture
  attachment?.loadAction = .clear
  // Store the results so the lighting pass can read them later
  attachment?.storeAction = .store
  attachment?.clearColor =
    MTLClearColor(red: 0.73, green: 0.92, blue: 1, alpha: 1)
}
descriptor?.depthAttachment.texture = depthTexture
descriptor?.depthAttachment.storeAction = .dontCare
renderEncoder.setFragmentBuffer(
  scene.lighting.lightsBuffer,
  offset: 0,
  index: LightBuffer.index)
gBufferRenderPass.descriptor = descriptor
#import "Vertex.h"
#import "Lighting.h"

fragment float4 fragment_gBuffer(
  VertexOut in [[stage_in]],
  depth2d<float> shadowTexture [[texture(ShadowTexture)]],
  constant Material &material [[buffer(MaterialBuffer)]])
{
  return float4(material.baseColor, 1);
}
The current drawable contains randomness

Frame capture with G-buffer textures

// Writing to multiple render targets requires a struct with one member
// per color attachment
struct GBufferOut {
  float4 albedo [[color(RenderTargetAlbedo)]];
  float4 normal [[color(RenderTargetNormal)]];
  float4 position [[color(RenderTargetPosition)]];
};

// 1 - fragment_gBuffer now returns GBufferOut, one value per render
// target, replacing the earlier stub
fragment GBufferOut fragment_gBuffer(
  VertexOut in [[stage_in]],
  depth2d<float> shadowTexture [[texture(ShadowTexture)]],
  constant Material &material [[buffer(MaterialBuffer)]])
{
  GBufferOut out;
  // 2 - write the material's base color to the albedo target
  out.albedo = float4(material.baseColor, 1.0);
  // 3 - stash the shadow factor in the albedo's alpha channel
  out.albedo.a = calculateShadow(in.shadowPosition, shadowTexture);
  // 4 - store the world-space normal and position for the lighting pass
  out.normal = float4(normalize(in.worldNormal), 1.0);
  out.position = float4(in.worldPosition, 1.0);
  return out;
}
G-buffer textures containing data

The Lighting Pass

Up to this point, you rendered the scene to multiple render targets, saving them for later use in the fragment shader. By rendering a full-screen quad, you can cover every pixel on the screen. This lets you process each fragment from your three textures and calculate lighting for each fragment. The results of this composition pass will end up in the view’s drawable.

// LightingRenderPass reads the G-buffer textures and composites the
// lighting into the view's drawable
import MetalKit

struct LightingRenderPass: RenderPass {
  let label = "Lighting Render Pass"
  var descriptor: MTLRenderPassDescriptor?
  var sunLightPSO: MTLRenderPipelineState
  let depthStencilState: MTLDepthStencilState?
  weak var albedoTexture: MTLTexture?
  weak var normalTexture: MTLTexture?
  weak var positionTexture: MTLTexture?

  func resize(view: MTKView, size: CGSize) {}

  func draw(
    commandBuffer: MTLCommandBuffer,
    scene: GameScene,
    uniforms: Uniforms,
    params: Params
  ) {
  }
}
"vertex_quad"
"fragment_deferredSun"
pipelineDescriptor.vertexDescriptor =
  MTLVertexDescriptor.defaultLayout
// LightingRenderPass's initializer creates the sun light pipeline state
// and a depth stencil state
init(view: MTKView) {
  sunLightPSO = PipelineStates.createSunLightPSO(
    colorPixelFormat: view.colorPixelFormat)
  depthStencilState = Self.buildDepthStencilState()
}
// In draw(commandBuffer:scene:uniforms:params:), create the render
// command encoder from the pass descriptor
guard let descriptor = descriptor,
  let renderEncoder =
    commandBuffer.makeRenderCommandEncoder(
    descriptor: descriptor) else {
      return
}
renderEncoder.label = label
renderEncoder.setDepthStencilState(depthStencilState)
var uniforms = uniforms
renderEncoder.setVertexBytes(
  &uniforms,
  length: MemoryLayout<Uniforms>.stride,
  index: UniformsBuffer.index)

// Bind the G-buffer textures so the lighting shaders can read them
renderEncoder.setFragmentTexture(
  albedoTexture,
  index: BaseColor.index)
renderEncoder.setFragmentTexture(
  normalTexture,
  index: NormalTexture.index)
renderEncoder.setFragmentTexture(
  positionTexture,
  index: NormalTexture.index + 1)
// Draw a full-screen quad that accumulates the directional (sun) light
func drawSunLight(
  renderEncoder: MTLRenderCommandEncoder,
  scene: GameScene,
  params: Params
) {
  renderEncoder.pushDebugGroup("Sun Light")
  renderEncoder.setRenderPipelineState(sunLightPSO)
  var params = params
  params.lightCount = UInt32(scene.lighting.sunlights.count)
  renderEncoder.setFragmentBytes(
    &params,
    length: MemoryLayout<Params>.stride,
    index: ParamsBuffer.index)
  renderEncoder.setFragmentBuffer(
    scene.lighting.sunBuffer,
    offset: 0,
    index: LightBuffer.index)
  renderEncoder.drawPrimitives(
    type: .triangle,
    vertexStart: 0,
    vertexCount: 6)
  renderEncoder.popDebugGroup()
}
// Back in draw(commandBuffer:scene:uniforms:params:), call the new
// method, then end encoding
drawSunLight(
  renderEncoder: renderEncoder,
  scene: scene,
  params: params)
renderEncoder.endEncoding()

Updating Renderer

You’ll now add the new lighting pass to Renderer and pass in the necessary textures and render pass descriptor.

// In Renderer, add the new pass as a property and initialize it
var lightingRenderPass: LightingRenderPass
lightingRenderPass = LightingRenderPass(view: metalView)

// Resize it along with the other passes
lightingRenderPass.resize(view: view, size: size)

// In draw(scene:in:), hand the G-buffer textures to the lighting pass,
// point its descriptor at the view's current render pass descriptor so
// the result lands in the drawable, then draw the pass
lightingRenderPass.albedoTexture = gBufferRenderPass.albedoTexture
lightingRenderPass.normalTexture = gBufferRenderPass.normalTexture
lightingRenderPass.positionTexture = gBufferRenderPass.positionTexture
lightingRenderPass.descriptor = descriptor
lightingRenderPass.draw(
  commandBuffer: commandBuffer,
  scene: scene,
  uniforms: uniforms,
  params: params)

The Lighting Shader Functions

First, you'll create a vertex function that positions a quad. You'll be able to reuse this function whenever you want to render a full-screen quad.

// Two triangles that cover the full screen in normalized device coordinates
constant float3 vertices[6] = {
  float3(-1,  1,  0),    // triangle 1
  float3( 1, -1,  0),
  float3(-1, -1,  0),
  float3(-1,  1,  0),    // triangle 2
  float3( 1,  1,  0),
  float3( 1, -1,  0)
};
vertex VertexOut vertex_quad(uint vertexID [[vertex_id]])
{
  VertexOut out {
    .position = float4(vertices[vertexID], 1)
  };
  return out;
}
// A first stub for the sun light fragment function: return solid red to
// confirm that the quad covers the whole screen
fragment float4 fragment_deferredSun(
  VertexOut in [[stage_in]],
  constant Params &params [[buffer(ParamsBuffer)]],
  constant Light *lights [[buffer(LightBuffer)]],
  texture2d<float> albedoTexture [[texture(BaseColor)]],
  texture2d<float> normalTexture [[texture(NormalTexture)]],
  texture2d<float> positionTexture [[texture(NormalTexture + 1)]])
{
  return float4(1, 0, 0, 1);
}
Returning red from the fragment function

// Replace the red return value with the real lighting calculation.
// Read the G-buffer values for this screen position:
uint2 coord = uint2(in.position.xy);
float4 albedo = albedoTexture.read(coord);
float3 normal = normalTexture.read(coord).xyz;
float3 position = positionTexture.read(coord).xyz;
Material material {
  .baseColor = albedo.xyz,
  .specularColor = float3(0),
  .shininess = 500
};
float3 color = phongLighting(
  normal,
  position,
  params,
  lights,
  material);
// The shadow factor was stored in the albedo's alpha channel
color *= albedo.a;
return float4(color, 1);
Accumulating the directional light and shadows

Adding Point Lights

So far, you’ve drawn the plain albedo and shaded it with directional light. You need a second fragment function for calculating point lights.

Blending the light volume

Icosphere and UV sphere

// Load the light volume model and add a pipeline state for point lights
var icosphere = Model(name: "icosphere.obj")
var pointLightPSO: MTLRenderPipelineState

// Create the point light pipeline state alongside sunLightPSO
pointLightPSO = PipelineStates.createPointLightPSO(
  colorPixelFormat: view.colorPixelFormat)

// Draw instanced icosphere volumes, one per point light
func drawPointLight(
  renderEncoder: MTLRenderCommandEncoder,
  scene: GameScene,
  params: Params
) {
  renderEncoder.pushDebugGroup("Point lights")
  renderEncoder.setRenderPipelineState(pointLightPSO)
  // The vertex function positions each volume using the point lights,
  // and the fragment function uses them for the lighting calculation
  renderEncoder.setVertexBuffer(
    scene.lighting.pointBuffer,
    offset: 0,
    index: LightBuffer.index)
  renderEncoder.setFragmentBuffer(
    scene.lighting.pointBuffer,
    offset: 0,
    index: LightBuffer.index)
}

// Still inside drawPointLight, bind the icosphere's vertex buffers
guard let mesh = icosphere.meshes.first,
  let submesh = mesh.submeshes.first else { return }
for (index, vertexBuffer) in mesh.vertexBuffers.enumerated() {
  renderEncoder.setVertexBuffer(
    vertexBuffer,
    offset: 0,
    index: index)
}

Instancing

If you had one thousand point lights, a draw call for each light volume would bring your system to a crawl. Instancing is a great way to tell the GPU to draw the same geometry a specific number of times. The GPU informs the vertex function which instance it’s currently drawing so that you can extract information from arrays containing instance information.
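
For contrast, here's a sketch of the naive approach, one encoded draw call per light, using the mesh and submesh you just set up. It's here only to show what instancing avoids; the single instanced draw call you'll actually add follows.

// A sketch only: one draw call per point light quickly adds CPU overhead,
// and you'd still need some way to tell the shader which light each call draws.
for _ in scene.lighting.pointLights {
  renderEncoder.drawIndexedPrimitives(
    type: .triangle,
    indexCount: submesh.indexCount,
    indexType: submesh.indexType,
    indexBuffer: submesh.indexBuffer,
    indexBufferOffset: submesh.indexBufferOffset)
}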

// Complete drawPointLight with a single instanced draw call: one
// icosphere instance per point light
renderEncoder.drawIndexedPrimitives(
  type: .triangle,
  indexCount: submesh.indexCount,
  indexType: submesh.indexType,
  indexBuffer: submesh.indexBuffer,
  indexBufferOffset: submesh.indexBufferOffset,
  instanceCount: scene.lighting.pointLights.count)
renderEncoder.popDebugGroup()

// In draw(commandBuffer:scene:uniforms:params:), call drawPointLight
// after drawSunLight and before endEncoding()
drawPointLight(
  renderEncoder: renderEncoder,
  scene: scene,
  params: params)

Creating the Point Light Shader Functions

➤ Open Deferred.metal, and add the new structures that the vertex function will need:

struct PointLightIn {
  float4 position [[attribute(Position)]];
};

struct PointLightOut {
  float4 position [[position]];
  // [[flat]] stops the rasterizer from interpolating the integer id
  uint instanceId [[flat]];
};

vertex PointLightOut vertex_pointLight(
  PointLightIn in [[stage_in]],
  constant Uniforms &uniforms [[buffer(UniformsBuffer)]],
  constant Light *lights [[buffer(LightBuffer)]],
  // 1 - the GPU supplies the index of the instance being drawn
  uint instanceId [[instance_id]])
{
  // 2 - look up this instance's light; w is 0 so the add below is a translation
  float4 lightPosition = float4(lights[instanceId].position, 0);
  float4 position =
    uniforms.projectionMatrix * uniforms.viewMatrix
  // 3 - move the icosphere to the light's position before projecting
    * (in.position + lightPosition);
  PointLightOut out {
    .position = position,
    .instanceId = instanceId
  };
  return out;
}
fragment float4 fragment_pointLight(
  PointLightOut in [[stage_in]],
  texture2d<float> normalTexture [[texture(NormalTexture)]],
  texture2d<float> positionTexture
    [[texture(NormalTexture + 1)]],
  constant Light *lights [[buffer(LightBuffer)]])
{
  // Look up the light this instance represents
  Light light = lights[in.instanceId];
  // Read the G-buffer values for this screen position
  uint2 coords = uint2(in.position.xy);
  float3 normal = normalTexture.read(coords).xyz;
  float3 position = positionTexture.read(coords).xyz;

  // A plain white material: the point light contributes only its color
  Material material {
    .baseColor = 1
  };
  float3 lighting =
    calculatePoint(light, position, normal, material);
  // Tone the contribution down so the accumulated lights aren't too intense
  lighting *= 0.5;
  return float4(lighting, 1);
}
Point light volume drawing

// LightingRenderPass needs its own depth stencil state: depth writes are
// off, and the default compare function (always) lets the quad and the
// light volumes render regardless of depth
static func buildDepthStencilState() -> MTLDepthStencilState? {
  let descriptor = MTLDepthStencilDescriptor()
  descriptor.isDepthWriteEnabled = false
  return Renderer.device.makeDepthStencilState(descriptor: descriptor)
}
Rendering icospheres

Blending

➤ Open Pipelines.swift. In createPointLightPSO(colorPixelFormat:), add this code before return:

// Enable additive blending so each point light's contribution is added
// on top of the color already in the drawable
let attachment = pipelineDescriptor.colorAttachments[0]
attachment?.isBlendingEnabled = true
attachment?.rgbBlendOperation = .add
attachment?.alphaBlendOperation = .add
// result.rgb = source.rgb * 1 + destination.rgb * 1
attachment?.sourceRGBBlendFactor = .one
attachment?.sourceAlphaBlendFactor = .one
attachment?.destinationRGBBlendFactor = .one
attachment?.destinationAlphaBlendFactor = .zero
A few point lights rendering

// In SceneLighting.swift, replace the existing point light setup:
pointLights = Self.createPointLights(
  count: 20,
  min: [-3, 0.1, -3],
  max: [3, 0.3, 3])
// ...with two hundred lights spread over a wider area:
pointLights = Self.createPointLights(
  count: 200,
  min: [-6, 0.1, -6],
  max: [6, 0.3, 6])
Two hundred point lights

Render algorithm comparison

Key Points

  • Forward rendering processes all lights for all fragments.
  • Deferred rendering captures albedo, position and normals for later light calculation. For point lights, only the necessary fragments are rendered.
  • The G-buffer, or Geometry Buffer, is the conventional term for the albedo, position and normal textures, plus any other information you capture during the first pass.
  • An icosphere model provides a volume for rendering the shape of a point light.
  • Using instancing, the GPU can efficiently render the same geometry many times.
  • The pipeline state object specifies whether the result from the fragment function should be blended with the currently attached texture.
Have a technical question? Want to report a bug? You can ask questions and report bugs to the book authors in our official book forum here.
© 2024 Kodeco Inc.
