Metal by Tutorials

Second Edition · iOS 13 · Swift 5.1 · Xcode 11

5. Lighting Fundamentals
Written by Caroline Begbie

In this chapter, you’ll learn basic lighting. However, more importantly, you’ll learn how to manipulate data in shaders and be on the path to mastering shader artistry. Lighting, shadows, non-photorealistic rendering — these are all techniques that start with the methods you’ll learn in this chapter.

One of the simplest methods of lighting is the Phong reflection model. It’s named after Bui Tuong Phong, who published a paper in 1975 extending older lighting models. The idea is not to attempt to duplicate the physics of light and reflection but to generate pictures that look realistic.

This model has been popular for over 40 years, and it’s a great place to start learning how to fake lighting using a few lines of code. All computer images are fake, but more modern real-time rendering methods model the physics of light more closely.

In Chapter 7, “Maps & Materials,” you’ll briefly look at Physically Based Rendering (PBR), the lighting technique that your renderer will eventually use.

The starter project

Open the starter project for this chapter. There’s substantial refactoring, but no new Metal code. The resulting render is the same as from the end of the previous chapter, but the refactored code makes it easier to render more than one model.

  • Node.swift: This defines the base class for everything that needs a transform matrix. Models, cameras and lights all need position information, so they’ll all eventually be subclasses of Node. The transform information in Node abstracts away the matrices: you just set the position, rotation and scale of a Node, and Node automatically updates its model matrix (see the sketch after this list).

  • Camera.swift: All the code dealing with the view matrix and the projection matrix is now abstracted into a Camera class. Because Camera is a subclass of Node, you can give the camera a position, which automatically updates the view matrix. This will make it easier, later on, to move about the scene or to add a first- or third-person camera. In Camera, you can also update the field of view and aspect ratio properties, which affect the projection matrix. A new Camera subclass, ArcballCamera, with some fancy matrix calculation, allows you to rotate the scene and zoom into it so that you’ll be able to fully appreciate your new lighting.

  • Model.swift: Most of the code from Renderer’s init(metalView:) that set up the train model is now in the Model class. You simply create a Model instance with a name, and Model loads the model file into a mesh with submeshes. You’re not restricted to .obj files anymore, either; Model I/O will also read .usdz files. Try importing the wheelbarrow from Apple’s AR samples at https://developer.apple.com/augmented-reality/quick-look/. You’ll need to change the model’s scale to about 0.01. Not all USDZ files will work; you’ll be able to import animated models after Chapter 8, “Character Animation”.

  • Mesh.swift: So far, you have been taking the first MDLMesh and converting it to the MTKMesh that goes into the Metal vertex buffer. Some models will have more than one MDLMesh, so Mesh uses zip() to combine all the MDLMeshes and MTKMeshes to create a Mesh array held by Model.

  • Submesh.swift: Submeshes are in a class of their own. Submesh will later hold surface material and texture information.

  • VertexDescriptor.swift: Vertex descriptor creation is now an extension on MDLVertexDescriptor.
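
To make the Node abstraction concrete, here’s a minimal sketch of the idea. It isn’t the starter project’s exact code, and it assumes the project’s float3 typealias and the float4x4 translation, rotation and scaling initializers from its math library:

import simd

class Node {
  var position: float3 = [0, 0, 0]
  var rotation: float3 = [0, 0, 0]
  var scale: float3 = [1, 1, 1]

  // The model matrix is rebuilt from the transform properties,
  // so you never manipulate matrices directly.
  var modelMatrix: float4x4 {
    let translateMatrix = float4x4(translation: position)
    let rotateMatrix = float4x4(rotation: rotation)
    let scaleMatrix = float4x4(scaling: scale)
    return translateMatrix * rotateMatrix * scaleMatrix
  }
}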

Take some time to review the above changes, as these will persist throughout the book.

Renderer now has a models property, which is an array of Models. You’re no longer limited to just one train. To render a second model, you create a new instance of Model, specifying the filename. You can then append the model to Renderer’s models array and change the new model’s position, rotation and scale at any time, as in the sketch below.
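
For example, a minimal sketch, reusing the starter project’s existing train.obj and its degreesToRadians helper:

// In Renderer's init(metalView:), after setting up the first model:
let secondTrain = Model(name: "train.obj")
secondTrain.position = [1.0, 0, 0.5]
secondTrain.rotation = [0, Float(45).degreesToRadians, 0]
models.append(secondTrain)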

You can now rotate the camera using your mouse or trackpad. ViewControllerExtension.swift exists in two versions: one for the macOS target and one for iOS. Each adds the appropriate gestures to the view and contains the handler functions that do the zooming and rotating. These handlers update the camera’s position and rotation, which in turn updates the scene’s view matrix. On macOS, you can scroll to zoom, and click and drag to rotate the scene. On iOS, you can pinch to zoom and pan to rotate.
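
As a rough idea of what one of these handlers looks like, here’s a hypothetical sketch for the iOS target. The actual handler names and tuning values in the starter project differ, and it assumes the view controller has a renderer property:

import UIKit

extension ViewController {
  // Hypothetical pan handler: drag distance becomes a rotation delta,
  // which updates the camera and, through it, the view matrix.
  @objc func handlePan(gesture: UIPanGestureRecognizer) {
    let sensitivity: Float = 0.01
    let translation = gesture.translation(in: gesture.view)
    renderer?.camera.rotation.x += Float(translation.y) * sensitivity
    renderer?.camera.rotation.y += Float(translation.x) * sensitivity
    gesture.setTranslation(.zero, in: gesture.view)
  }
}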

The project also contains an extra model: a tree. You’ll add this to your scene during the chapter.

DebugLights.swift contains some code that you’ll use later for debugging where lights are located. Point lights will draw as dots and the direction of the sun will draw as a line.

Familiarize yourself with the code, then build and run the project. Rotate and zoom the train with your mouse or trackpad.

Note: Experiment with projection too. When you run the app and rotate the train, you’ll see that the distant pair of wheels is much smaller than the front ones. In Renderer, in mtkView(_:drawableSizeWillChange:), change the projection field of view from 70° to 40° and rerun the app. You’ll see that the size difference is a lot less due to the narrower field of view. Remember to change the field of view back to 70°.

Representing color

The physics of light is a vast, fascinating topic with many books and a large part of the internet dedicated to it. However, in this book, you’ll learn the necessary basics to get you rendering light, color and simple shading. You can find further reading in references.markdown in the resources directory for this chapter.

Colors here are RGB values with components between 0 and 1, and multiplying two colors multiplies them component by component:

let result = float3(1.0, 0.0, 0.0) * float3(0.5, 0.5, 0.5)
// result is float3(0.5, 0.0, 0.0): pure red lit by mid-gray gives a darker red

Normals

The slope of a surface determines how much light that surface reflects.

Add normals to the vertex descriptor

To be able to assess the slope of the surface in the fragment function, you’ll need to send the vertex normal to the fragment function via the vertex function. You’ll add the normals to the vertex descriptor so that the vertex function can process them.

// In VertexDescriptor.swift, add the normal attribute after the position attribute:
vertexDescriptor.attributes[1] =
      MDLVertexAttribute(name: MDLVertexAttributeNormal,
                         format: .float3,
                         offset: offset,
                         bufferIndex: 0)
offset += MemoryLayout<float3>.stride

// Buffer 0 now interleaves position and normal for each vertex:
// [0:position, 0:normal, 1:position, 1:normal, ...]

Update the shader functions

Remember that the pipeline state uses this vertex descriptor so that the vertex function can process the attributes. You added another attribute to the vertex descriptor, so in Shaders.metal, add this to the VertexIn struct:

float3 normal [[attribute(1)]];

// Pass the normal on to the rasterizer by adding it to VertexOut:
struct VertexOut {
  float4 position [[position]];
  float3 normal;
};

// Update vertex_main to fill in the new member:
vertex VertexOut 
       vertex_main(const VertexIn vertexIn [[stage_in]],
                   constant Uniforms &uniforms [[buffer(1)]])
{
  VertexOut out {
    .position = uniforms.projectionMatrix * uniforms.viewMatrix
                  * uniforms.modelMatrix * vertexIn.position,
    .normal = vertexIn.normal
  };
  return out;
}

// Temporarily visualize the normals as colors:
fragment float4 fragment_main(VertexOut in [[stage_in]]) {
  return float4(in.normal, 1);
}

Depth

You may remember from Chapter 3, “The Rendering Pipeline,” that during the rendering pipeline, the Stencil Test unit checks whether fragments are visible after the fragment function runs. If a fragment is determined to be behind another fragment, it’s discarded. You’ll give the render encoder an MTLDepthStencilState property that describes how this testing should be done.

// In init(metalView:), have the view create a depth texture:
metalView.depthStencilPixelFormat = .depth32Float
// The pipeline's depth attachment must use a matching pixel format:
pipelineDescriptor.depthAttachmentPixelFormat = .depth32Float

// Add a property to Renderer:
let depthStencilState: MTLDepthStencilState

static func buildDepthStencilState() -> MTLDepthStencilState? {
  // 1: create a descriptor that configures depth and stencil testing
  let descriptor = MTLDepthStencilDescriptor()
  // 2: keep fragments whose depth is less than (closer than) the stored depth
  descriptor.depthCompareFunction = .less
  // 3: write the depth values of accepted fragments to the depth texture
  descriptor.isDepthWriteEnabled = true
  return
      Renderer.device.makeDepthStencilState(
          descriptor: descriptor)
}

// In init(metalView:):
depthStencilState = Renderer.buildDepthStencilState()!
// In draw(in:), before drawing the models:
renderEncoder.setDepthStencilState(depthStencilState)

Hemispheric lighting

Hemispheric lighting is where half of a scene is lit in one color, and the other half in another. Picture a sphere where the sky lights the top half and the ground lights the bottom half.

fragment float4 fragment_main(VertexOut in [[stage_in]]) {
  float4 sky = float4(0.34, 0.9, 1.0, 1.0);
  float4 earth = float4(0.29, 0.58, 0.2, 1.0);
  // Remap normal.y from [-1, 1] to [0, 1] so that up-facing
  // surfaces blend toward sky and down-facing toward earth:
  float intensity = in.normal.y * 0.5 + 0.5;
  return mix(earth, sky, intensity);
}

Light types

There are several standard light options in computer graphics, each of which has its origin in the real world.

Directional light

A scene can have many lights. In fact, in studio photography, it would be highly unusual to have just a single light. By putting lights into a scene, you control where shadows fall and the level of darkness. You’ll add several lights to your scene throughout the chapter.

// In Common.h:
typedef enum {
  unused = 0,
  Sunlight = 1,
  Spotlight = 2,
  Pointlight = 3,
  Ambientlight = 4
} LightType;

typedef struct {
  vector_float3 position;
  vector_float3 color;
  vector_float3 specularColor;
  float intensity;
  vector_float3 attenuation;
  LightType type;
} Light;

// In Renderer.swift:
func buildDefaultLight() -> Light {
  var light = Light()
  light.position = [0, 0, 0]
  light.color = [1, 1, 1]
  light.specularColor = [0.6, 0.6, 0.6]
  light.intensity = 1
  light.attenuation = float3(1, 0, 0)
  light.type = Sunlight
  return light
}

lazy var sunlight: Light = {
  var light = buildDefaultLight()
  light.position = [1, 2, -2]
  return light
}()

var lights: [Light] = []
// In init(metalView:):
lights.append(sunlight)

// Back in Common.h:
typedef struct {
  uint lightCount;
  vector_float3 cameraPosition;
} FragmentUniforms;

// In Renderer:
var fragmentUniforms = FragmentUniforms()

// In draw(in:):
fragmentUniforms.lightCount = UInt32(lights.count)
renderEncoder.setFragmentBytes(&lights,
         length: MemoryLayout<Light>.stride * lights.count,
         index: 2)
renderEncoder.setFragmentBytes(&fragmentUniforms, 
         length: MemoryLayout<FragmentUniforms>.stride, 
         index: 3)

The Phong reflection model

In the Phong reflection model, there are three types of light reflection: diffuse, specular and ambient. You’ll calculate each of these, and then add them up to produce a final color.

The dot product

Fortunately, there’s a straightforward mathematical operation, called the dot product, for discovering the angle between two vectors.
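
For two unit vectors, the dot product is the cosine of the angle between them: 1 when they point the same way, 0 when they’re perpendicular, and -1 when they point in opposite directions. A quick sketch using Swift’s simd module:

import simd
import Foundation

let up = simd_float3(0, 1, 0)
let diagonal = normalize(simd_float3(1, 1, 0))

let d = dot(up, diagonal)   // ≈ 0.707, the cosine of the angle
let angle = acos(d)         // ≈ 0.785 radians (45°)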

Diffuse reflection

In this app, shading from the sun doesn’t depend on where the camera is. When you rotate the scene, you’re rotating the world, including the sun. The sun’s position will be in world space, and you’ll put the model’s normals into the same world space to be able to calculate the dot product against the sunlight direction. You can choose any space, as long as you’re consistent and calculate with vectors and positions in the same space.

// In Shaders.metal, update VertexOut to carry world-space values:
struct VertexOut {
  float4 position [[position]];
  float3 worldPosition;
  float3 worldNormal;
};

// In Common.h, add a normal matrix to Uniforms:
matrix_float3x3 normalMatrix;

// In Renderer's draw(in:), build it from the model matrix
// using the project's upperLeft extension:
uniforms.normalMatrix = uniforms.modelMatrix.upperLeft

// In vertex_main, replace this line:
.normal = vertexIn.normal
// with:
.worldPosition = (uniforms.modelMatrix * vertexIn.position).xyz,
.worldNormal = uniforms.normalMatrix * vertexIn.normal

fragment float4 fragment_main(VertexOut in [[stage_in]],
// 1: receive the lights array and fragment uniforms from the render encoder
    constant Light *lights [[buffer(2)]],
    constant FragmentUniforms &fragmentUniforms [[buffer(3)]]) {
  float3 baseColor = float3(0, 0, 1);
  float3 diffuseColor = 0;
  // 2: interpolation can denormalize the normal, so renormalize it
  float3 normalDirection = normalize(in.worldNormal);
  for (uint i = 0; i < fragmentUniforms.lightCount; i++) {
    Light light = lights[i];
    if (light.type == Sunlight) {
      float3 lightDirection = normalize(-light.position);
      // 3: the dot product is negative where the surface faces the light,
      // so negate it; saturate() clamps the result between 0 and 1
      float diffuseIntensity = 
              saturate(-dot(lightDirection, normalDirection));
      // 4: modulate the base color by the light color and intensity
      diffuseColor += light.color 
                        * baseColor * diffuseIntensity;
    }
  }
  // 5: for now, the final color is just the diffuse contribution
  float3 color = diffuseColor;
  return float4(color, 1);
}
// In draw(in:), debug-draw the sun's direction as a line:
debugLights(renderEncoder: renderEncoder, lightType: Sunlight)

Ambient reflection

In the real world, surfaces are rarely pure black; there’s light bouncing about all over the place. To simulate this, you can use ambient lighting: find an average color of the lights in the scene and apply it to all of the surfaces in the scene.

lazy var ambientLight: Light = {
  var light = buildDefaultLight()
  light.color = [0.5, 1, 0]
  light.intensity = 0.1
  light.type = Ambientlight
  return light
}()

// In init(metalView:):
lights.append(ambientLight)

// In fragment_main, add an accumulator and handle the new light type:
float3 ambientColor = 0;

else if (light.type == Ambientlight) {
  ambientColor += light.color * light.intensity;
}

// Then change:
float3 color = diffuseColor;
// to:
float3 color = diffuseColor + ambientColor;

Specular reflection

Last, but not least, is the specular reflection. Your train is starting to look great, but now you have a chance to put a coat of shiny varnish on it and make it spec(-tac-)ular. The specular highlight depends upon the position of the observer. If you pass a shiny car, you’ll only see the highlight at certain angles.

// In draw(in:), send the camera position for calculating the view vector:
fragmentUniforms.cameraPosition = camera.position

// In fragment_main, add an accumulator and material constants:
float3 specularColor = 0;
float materialShininess = 32;
float3 materialSpecularColor = float3(1, 1, 1);

// Inside the Sunlight branch, after calculating diffuseIntensity:
if (diffuseIntensity > 0) {
  // 1 (R): reflect the incoming light ray around the surface normal
  float3 reflection = 
      reflect(lightDirection, normalDirection);
  // 2 (V): the direction from the camera to the fragment
  float3 cameraDirection = 
      normalize(in.worldPosition 
        - fragmentUniforms.cameraPosition); 
  // 3: raise the clamped dot product to the shininess power;
  //    higher shininess gives a smaller, tighter highlight
  float specularIntensity = 
      pow(saturate(-dot(reflection, cameraDirection)), 
          materialShininess);
  specularColor += 
      light.specularColor * materialSpecularColor 
        * specularIntensity;
}

// Change:
float3 color = diffuseColor + ambientColor;
// to:
float3 color = diffuseColor + ambientColor + specularColor;

// In Renderer's init(metalView:), add the tree model:
let fir = Model(name: "treefir.obj")
fir.position = [1.4, 0, 0]
models.append(fir)

// And reposition the camera to frame both models:
lazy var camera: Camera = {
  let camera = ArcballCamera()
  camera.distance = 2.5
  camera.target = [0.5, 0.5, 0]
  camera.rotation.x = Float(-10).degreesToRadians
  return camera
}()

Point lights

As opposed to the sunlight, where you converted the position into parallel direction vectors, point lights shoot out light rays in all directions.
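
Because the rays spread out, a point light’s effect fades with distance. The shader code you’ll add below divides by a quadratic attenuation polynomial; here’s a quick sanity check of that formula in Swift, using the red light’s attenuation values from the code that follows:

import simd

// 1 / (x + y*d + z*d*d), matching the shader's attenuation calculation.
func attenuation(_ att: simd_float3, distance d: Float) -> Float {
  1 / (att.x + att.y * d + att.z * d * d)
}

let atLight = attenuation([1, 3, 4], distance: 0)      // 1.0 (full intensity)
let atHalf = attenuation([1, 3, 4], distance: 0.5)     // ≈ 0.29
let atOne = attenuation([1, 3, 4], distance: 1)        // 0.125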

lazy var redLight: Light = {
  var light = buildDefaultLight()
  light.position = [0, 0.5, -0.5]
  light.color = [1, 0, 0]
  light.attenuation = float3(1, 3, 4)
  light.type = Pointlight
  return light
}()

// In init(metalView:):
lights.append(redLight)

// In draw(in:), debug-draw the point light as a dot as well:
debugLights(renderEncoder: renderEncoder, lightType: Sunlight)
debugLights(renderEncoder: renderEncoder, lightType: Pointlight)

// In fragment_main's loop:
else if (light.type == Pointlight) {
  // 1: distance from the light to the fragment
  float d = distance(light.position, in.worldPosition);
  // 2: direction from the light to the fragment
  float3 lightDirection = normalize(in.worldPosition 
                                    - light.position);
  // 3: attenuation falls off with distance: 1 / (x + y*d + z*d*d)
  float attenuation = 1.0 / (light.attenuation.x + 
      light.attenuation.y * d + light.attenuation.z * d * d);

  float diffuseIntensity = 
      saturate(-dot(lightDirection, normalDirection));
  float3 color = light.color * baseColor * diffuseIntensity;
  // 4: scale the light's contribution by the attenuation
  color *= attenuation;
  diffuseColor += color;
}

// Change baseColor in fragment_main to white so that the light colors show clearly:
float3 baseColor = float3(1, 1, 1);

Spotlights

The last type of light you’ll create in this chapter is the spotlight. This sends light rays in limited directions. Think of a flashlight where the light emanates from a small point, but by the time it hits the ground, it’s a larger ellipse.
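
A fragment is inside the cone when the angle between the ray that reaches it and the cone’s axis is smaller than coneAngle. Since cosine decreases as the angle grows, the shader code below compares cosines instead of angles. A quick sketch of the test in Swift, using the spotlight values from the code that follows:

import simd
import Foundation

let coneAngle: Float = 40 * .pi / 180   // the cone's half-angle, in radians
let coneDirection = normalize(simd_float3(-2, 0, -1.5))
// Direction of the ray from the light to some fragment:
let rayDirection = normalize(simd_float3(-1, -0.1, -1))

// Inside the cone when the cosine of the ray's angle from the axis
// is greater than the cosine of the cone angle.
let insideCone = dot(rayDirection, coneDirection) > cos(coneAngle)  // true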

// Add the cone properties to the Light struct in Common.h:
float coneAngle;
vector_float3 coneDirection;
float coneAttenuation;

// In Renderer:
lazy var spotlight: Light = {
  var light = buildDefaultLight()
  light.position = [0.4, 0.8, 1]
  light.color = [1, 0, 1]
  light.attenuation = float3(1, 0.5, 0)
  light.type = Spotlight
  light.coneAngle = Float(40).degreesToRadians
  light.coneDirection = [-2, 0, -1.5]
  light.coneAttenuation = 12
  return light
}()

// In init(metalView:):
lights.append(spotlight)

// In draw(in:):
debugLights(renderEncoder: renderEncoder, lightType: Spotlight)
// In fragment_main's loop:
else if (light.type == Spotlight) {
  // 1: distance and direction, as for the point light
  float d = distance(light.position, in.worldPosition);
  float3 lightDirection = normalize(in.worldPosition 
                                    - light.position);
  // 2: the cosine of the angle between the ray and the cone axis
  float3 coneDirection = normalize(light.coneDirection);
  float spotResult = dot(lightDirection, coneDirection);
  // 3: the fragment is lit only if it's inside the cone
  if (spotResult > cos(light.coneAngle)) {
    float attenuation = 1.0 / (light.attenuation.x +
        light.attenuation.y * d + light.attenuation.z * d * d);
    // 4: fade the light toward the cone's edge
    attenuation *= pow(spotResult, light.coneAttenuation);
    float diffuseIntensity = 
             saturate(dot(-lightDirection, normalDirection));
    float3 color = light.color * baseColor * diffuseIntensity;
    color *= attenuation;
    diffuseColor += color;
  }
}

Challenge

You’re currently using hard-coded magic numbers for all the buffer indices and attributes. As your app grows, these indices and attributes will be much harder to keep track of. Your challenge for this chapter is to hunt down all of the magic numbers and give them names. Just as you did for LightType, you’ll create an enum in Common.h.

// Common.h
typedef enum {
  BufferIndexVertices = 0,
  BufferIndexUniforms = 1
} BufferIndices;

// Swift
renderEncoder.setVertexBytes(&uniforms,
                  length: MemoryLayout<Uniforms>.stride,
                  index: Int(BufferIndexUniforms.rawValue))

// Shader Function
vertex VertexOut 
    vertex_main(const VertexIn vertexIn [[stage_in]],
                constant Uniforms &uniforms 
                        [[buffer(BufferIndexUniforms)]])

Where to go from here?

You’ve covered a lot of lighting information in this chapter. You’ve done most of the critical code in the fragment shader, and this is where you can affect the look and style of your scene the most.
