Building and Rendering SimCity (2013)
By Ryan Ingram, Maxis Graphics Developer
Hi everyone – I’m Ryan Ingram, Graphics Developer on SimCity. I’m excited to present our building rendering technology, the part of the game I’ve personally poured my heart and soul into over the past few years. I’m proud of how it looks and I hope you’ll enjoy this peek into the mechanics of SimCity’s rendering. Here’s the building we’re going to look at – it’s a mansion that your high-wealth Sims might live in.
How do we get there? Well, it all starts with triangles (http://stackoverflow.com/questions/6100528/why-are-there-always-triangles-used-in-a-3d-engine). One thousand, two hundred twenty triangles, to be specific.
Not much to look at yet. Let’s fix that.
Here are the texture pages that the art team put together for this class of building. They have all the architectural elements needed for a particular building style.
To get the best performance, we re-use the same facade texture set on all the buildings of that style. This lets us render lots of buildings in the same batch, which is crucial given just how many buildings there might be in your city.
The first texture is the color map, which we palettize. Palettizing is a technique that allows re-use of the same texture with all sorts of different colors. The colors can be changed on different buildings, or even on different parts of the same building.
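The idea behind palettizing can be sketched in a few lines. This is a toy Python illustration, not the game's actual shader code – `colorize`, the index texture, and both palettes are invented for the example:

```python
# Palettizing sketch: the color map stores a palette index per texel,
# and a small per-building palette maps each index to a final RGB color.
# Swapping the palette recolors the building without a new texture.

def colorize(index_texture, palette):
    """Replace each palette index in the texture with its RGB color."""
    return [[palette[texel] for texel in row] for row in index_texture]

# A tiny 2x3 "texture" of palette indices...
index_texture = [
    [0, 1, 2],
    [2, 1, 0],
]

# ...and two palettes the game might assign to different buildings.
cream_walls = [(240, 230, 200), (120, 80, 50), (40, 40, 60)]
blue_walls  = [(200, 210, 240), (90, 90, 120), (40, 40, 60)]

print(colorize(index_texture, cream_walls)[0][0])  # (240, 230, 200)
print(colorize(index_texture, blue_walls)[0][0])   # (200, 210, 240)
```

In the real game this lookup happens per pixel on the GPU, which is what lets many differently colored buildings share one texture set and render in the same batch.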
The second texture is a normal map (http://en.wikipedia.org/wiki/Normal_(geometry)). You can kind of see the shape of the texture in the normal map already, and it’s used for exactly that reason.
Here’s how this building looks before and after it’s been colorized with one of its possible palettes:
I use interior mapping for the insides of the buildings – you can see lots of examples of interior mapping in Creative Director Ocean Quigley’s night videos where the lights come on inside skyscrapers. The building is split into rooms and I render the appearance of the floor, ceiling, side walls, and rear walls into each room.
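The core of interior mapping is a small ray intersection: for a pixel on a window, trace the view ray into a virtual grid of rooms and find which wall, floor, or ceiling plane it hits first. Here's a rough sketch of that math in Python, with made-up names and a unit room size – a simplified illustration, not the game's shader:

```python
import math

def interior_hit(pos, view_dir, room_size=1.0):
    """Trace a view ray from a window fragment into a virtual room grid.
    For each axis, find the distance to the next room plane along the
    view direction; the nearest plane is the surface the eye sees.
    Returns the hit point and the axis (0=x side wall, 1=y floor/ceiling,
    2=z rear wall) so the shader knows which interior texture to sample."""
    best_t, best_axis = math.inf, None
    for axis in range(3):
        d = view_dir[axis]
        if abs(d) < 1e-9:
            continue  # ray is parallel to this pair of planes
        if d > 0:
            plane = (math.floor(pos[axis] / room_size) + 1) * room_size
        else:
            plane = math.floor(pos[axis] / room_size) * room_size
            if plane == pos[axis]:
                plane -= room_size
        t = (plane - pos[axis]) / d
        if t < best_t:
            best_t, best_axis = t, axis
    hit = tuple(pos[i] + best_t * view_dir[i] for i in range(3))
    return hit, best_axis

# Looking through a window at (0.3, 0.4) on the facade, angled to the right:
hit, axis = interior_hit((0.3, 0.4, 0.0), (0.5, 0.0, 1.0))
# hits the rear wall (axis 2) at (0.8, 0.4, 1.0)
```

Because the rooms only exist in this per-pixel calculation, the interiors cost no extra geometry at all.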
The mushy grey areas are actually lots and lots of very tiny rooms – the art team didn’t tune those areas since they have no windows and aren’t visible. Putting this all together, a building finally emerges! A pretty flat looking one, but it’s starting to look recognizable.
The next step is lighting. Using the normal map to figure out reflection angles, our light model gives us the contribution from the atmospheric light (moonlight, in this case), along with the lights on the building itself. This also includes our ambient occlusion and shadowing.
Putting it all together, it’s starting to look kind of pretty:
You can see a nice specular highlight on the windows from this angle.
We weren’t satisfied with just this, though. The stairs felt flat, and the windows and doors were missing something. So if you are running on our recommended spec (“Medium” lighting or higher), it’s time to bring out the big guns: relief mapping.
Here’s what I want to show:
We didn’t want to store a high resolution mesh for every building though – for one thing, that’s a lot of vertex memory, and it’s wasteful to repeat that same mesh for bricks, stairs, windows, doors, and so on. Not only that, but storing highly detailed geometry makes it a lot harder to scale the memory use down for lower-spec machines.
Instead, we kept all of that data in a texture so that a high-resolution mesh can be reconstructed on the fly. The flat low-polygon mesh is ray-traced into beautiful high-resolution geometry with hundreds of thousands of tiny virtual polygons.
We have a debug mode that lets me take the texture and reconstruct the 3D mesh it was originally exported from. Here I’ve taken the door on the bottom right of the original texture and turned it into a real 3D mesh:
This isn’t just a lighting trick. For example, if we’re looking up at the door from the ground, you can see the underside of the doorframe:
Here’s that earlier screenshot with relief mapping turned on. Notice that the building’s facade has actual surface relief.
These two zoom boxes are showing exactly the same location relative to the camera and the building polygon. The center of the “flat” one shows the window, but with relief mapping you see past the recessed window to the edge of the door.
So to find the point where the view ray hits the detailed surface, we step along that ray from the viewpoint, asking the mesh “what do you look like here?” There’s a tradeoff, however – querying the mesh takes time, so we want to do it as few times as possible.
To facilitate this, we pre-calculated a cone for every point in the mesh. We made the cone wide enough to intersect with the geometry around it, and maybe a little bit more. It’s a cone of “guaranteed coverage” because no ray coming down and going through the cone’s center-line can pass all the way through the surface mesh and come out on the other side without hitting the insides of the cone.
In this case, the cone just barely intersects the surface. Of course, we want the widest cones that we can make – wider cones allow for bigger steps, making the technique more efficient. Big steps mean fewer samples are required to find the target, making the shader less taxing on your graphics card.
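The stepping loop itself is compact. Here's a sketch of it in Python over a 1-D heightfield – a simplified stand-in for the real per-pixel shader, with invented names and a toy texel lookup:

```python
def cone_step_march(heights, cones, x0, dir_x, dir_z, max_steps=32):
    """March a view ray across a 1-D heightfield using precomputed
    cone ratios. heights[i] is the surface height at texel i, and
    cones[i] is that texel's cone ratio: the region
    |x - i/n| < cones[i] * (z - heights[i]) above the texel is
    guaranteed empty, so the ray can safely jump to where it exits
    that cone. The ray starts at (x0, 1.0), with dir_z < 0."""
    n = len(heights)
    x, z = x0, 1.0
    for _ in range(max_steps):
        i = min(max(int(x * n), 0), n - 1)   # nearest texel under the ray
        h, c = heights[i], cones[i]
        if z <= h:
            return x, z                      # inside the surface: done
        # Largest step that cannot skip past geometry: the distance at
        # which the ray leaves this texel's guaranteed-empty cone.
        t = c * (z - h) / (abs(dir_x) - c * dir_z)
        x += t * dir_x
        z += t * dir_z
    return x, z  # out of steps; accept the closest point found

# A flat surface at height 0.5 gets very wide cones, so the ray
# converges on it in just a few big steps:
x, z = cone_step_march([0.5] * 16, [10.0] * 16, 0.0, 0.5, -1.0)
```

Notice how the step size `t` scales with the cone ratio `c`: over flat areas the cones are wide and the ray leaps, while near steep edges the cones narrow and the steps shrink – exactly the behavior the paragraph above describes.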
Once the ray goes inside the mesh, we have both our old point that’s on the outside and the new point that’s on the inside. Now we know that the mesh surface is somewhere between them. To find out exactly where that surface is, we use a binary search. With each step, we divide the space between the points in half, always keeping one point on the inside and the other one outside. This narrows down the space really fast; it only takes a few steps to get as close to the surface as we need to.
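That refinement step can be sketched like so – again a toy Python illustration with made-up names, where the "mesh query" is just a height function:

```python
def refine(surface_height, outside, inside, steps=8):
    """Binary-search between a point known to be above the surface and
    one known to be below it, halving the bracket each step.
    Points are (x, z) pairs; surface_height(x) is the mesh query."""
    for _ in range(steps):
        mid = tuple((a + b) / 2 for a, b in zip(outside, inside))
        if mid[1] > surface_height(mid[0]):
            outside = mid   # midpoint is still above the surface
        else:
            inside = mid    # midpoint is at or below the surface
    return tuple((a + b) / 2 for a, b in zip(outside, inside))

# A sloped surface z = 0.5 * x, with a ray bracket straddling it.
# The true crossing is at (2/3, 1/3); eight halvings land within ~0.003:
hit = refine(lambda x: 0.5 * x, outside=(0.0, 1.0), inside=(1.0, 0.0))
```

Each iteration halves the uncertainty, so eight steps shrink the bracket by a factor of 256 – which is why only a handful of extra mesh queries are needed after the cone steps.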
This is what the whole process looks like:
Zoomed in, notice that the center of the light blue cube is almost exactly on the edge of the surface:
Now from the top, the target has been hit exactly at the right spot.
Here’s another example – this time we run out of cone tests before quite reaching the edge, so we gave up at that point. This can lead to minor artifacts, but it’s not usually noticeable in-game. It’ll look like the surface curves towards the edge, instead of being a clean cut.
The amazing thing is that your graphics card will do all this calculation for every single pixel of every building in the game, every frame.
That’s it – I hope you enjoyed reading about how we render buildings in the game.