Fake it till you make it – simulating a longer draw distance in mobile games
Optimization is a cornerstone of mobile game development. With thousands of phone models in circulation, many with outdated chipsets, every game must target a reasonable lowest common denominator, and one of the most consistent ways to optimize performance in 3D games is to manage draw distance.
The draw distance should be as short as possible to achieve a stable FPS. But what about open worlds, where players need to be able to see the entire map from any point? This is the challenge we faced at Cubic Games during the development of Block City Wars, and below we’ll explore the solution we arrived at, and the strengths of this particular approach.
The problem:
In a game like Block City Wars, each player must be able to see the entire map from any position or they are at a disadvantage, and simply increasing the far clip plane won’t work. Raising the draw distance increases the number of triangles that survive every culling stage: more objects undergo bounding-box checks on the CPU, and more fragments are drawn on the GPU.
Using a separate camera for the background with a longer draw distance complicates camera management and adds unnecessary overhead. Finally, experiments with HLOD (Hierarchical Level of Detail) also proved unsuitable for this problem. While some of these solutions may apply to other games, they did not meet our needs. When all else fails, shader magic saves the day.
The essence of the solution:
The solution we settled on combines shader tricks with our existing simple fog effect to provide useful but largely faked detail. With a shader we can create the illusion that an object is far away when in reality it is close to the player. This lets us choose which objects will always be visible, regardless of distance.
It makes sense to use only objects tall enough for players to orient themselves by on the map, which lets us strip all visual clutter from the final render. To ensure a seamless transition between ‘fake’ objects and real ones, we render the silhouettes in the fog color, which also allows us to significantly reduce their detail. It looks like this:
Before
After
Deceptive CPU culling:
To achieve this effect we can use the tools Unity gives us. For a mesh to be submitted for rendering, its bounds must intersect the camera’s frustum, which can easily be arranged with, for example, the MonoBehaviour below (the wrapping class is illustrative). We do this in Start() because Unity recalculates the bounds when the mesh is initialized. For our purposes, we set the size so that the player’s camera always sits inside the bounds; the mesh is then always sent to the GPU for rendering, easing the load on older CPU models.
using UnityEngine;

public class AlwaysVisibleBounds : MonoBehaviour
{
    [SerializeField] private MeshFilter selectedMeshFilter;
    [SerializeField] private Vector3 newCenter;
    [SerializeField] private Vector3 newSize;

    void Start()
    {
        // Inflate the bounds so the camera always sits inside them and the
        // mesh is never rejected by CPU-side frustum culling.
        Mesh mesh = selectedMeshFilter.sharedMesh;
        Bounds bounds = mesh.bounds;
        bounds.center = newCenter;
        bounds.size = newSize;
        mesh.bounds = bounds;
    }
}
Deceptive GPU culling:
Once the mesh is on the GPU, there is another phase of frustum culling: primitives outside the view frustum are clipped between the vertex and fragment stages. To get around this, we transform the vertex coordinates so that all vertices fall within the camera’s field of view while maintaining perspective.
v2f vert (appdata v)
{
    v2f o;
    float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    // Direction from the camera to the original vertex position.
    float3 directionToOriginal = normalize(worldPos - _WorldSpaceCameraPos);
    // Pull the vertex onto a sphere of radius _ScaleDownFactor around the
    // camera: the on-screen silhouette is unchanged, but the geometry now
    // sits inside the far clip plane.
    float3 scaledPos = _WorldSpaceCameraPos + directionToOriginal * _ScaleDownFactor;
    float3 objectPos = mul(unity_WorldToObject, float4(scaledPos, 1)).xyz;
    o.vertex = UnityObjectToClipPos(objectPos);
    return o;
}
_ScaleDownFactor is the distance from the camera at which all vertices end up. It needs to be tuned to the fog distance so that the transition stays hidden.
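As a rough sketch of that tuning, a hypothetical helper like the one below could drive the shader property from the camera’s far plane; the class name, field names, and margin value are our illustrative assumptions rather than code from the game:

using UnityEngine;

// Hypothetical helper: keeps the silhouette sphere just inside the far
// clip plane, deep enough in the fog that the cutoff stays masked.
[ExecuteAlways]
public class ScaleDownFactorSync : MonoBehaviour
{
    [SerializeField] private Material silhouetteMaterial;
    [SerializeField] private Camera targetCamera;
    [SerializeField] [Range(0.5f, 1.0f)]
    private float farPlaneMargin = 0.95f; // illustrative value, tune to taste

    void Update()
    {
        if (silhouetteMaterial == null || targetCamera == null) return;
        // Sit just inside the far plane; with linear fog this could instead
        // be clamped to RenderSettings.fogEndDistance.
        float distance = targetCamera.farClipPlane * farPlaneMargin;
        silhouetteMaterial.SetFloat("_ScaleDownFactor", distance);
    }
}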
In the fragment shader, all we need to do is draw the fog color, which masks the geometry cutoff.
fixed4 frag (v2f i) : SV_Target
{
    // The silhouette is drawn in pure fog color, blending into the distance.
    return unity_FogColor;
}
Example with an island mesh:
This effect is easy to see in Blender. If you place the camera at the origin and point it at a cube, then duplicate the cube and scale it relative to the origin, there will be no difference between the two cubes from the camera’s perspective: perspective projection divides by depth, so scaling a point’s offset and distance by the same factor leaves its screen position unchanged. It’s obviously a trick that won’t hold up in VR, but we’re developing for mobile here, so stereo depth perception isn’t something we need to work around.
In our case, an extra step is added: the mesh is ‘squished’ so that it sits exactly at the edge of the camera’s draw distance. This prevents it from competing in the z-buffer with real objects that need to be closer to the player. With these kinds of ‘impostor’ detail objects, one small rendering error is enough to break the illusion and draw attention to background objects that should read as seamless.
We also need to consider cases where the camera can end up inside the silhouette mesh. The vertices of a single triangle can land on opposite sides of the camera, causing the triangle to smear across the entire screen. This should be taken into account when authoring the silhouette mesh: either ensure the camera can never enter it, or disable the mesh as the camera approaches.
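As a minimal sketch of the second option, a hypothetical component like this (the names and radius are illustrative, not from the shipped game) could disable the silhouette before the camera gets inside it:

using UnityEngine;

// Hypothetical guard: hides the silhouette renderer before the camera can
// slip inside the mesh and catch a triangle spanning the whole screen.
public class SilhouetteProximityToggle : MonoBehaviour
{
    [SerializeField] private Renderer silhouetteRenderer;
    [SerializeField] private Camera targetCamera;
    [SerializeField] private float disableRadius = 50f; // illustrative, tune per mesh

    void LateUpdate()
    {
        if (silhouetteRenderer == null || targetCamera == null) return;
        float sqrDistance =
            (targetCamera.transform.position - transform.position).sqrMagnitude;
        // Re-enables automatically once the camera backs away again.
        silhouetteRenderer.enabled = sqrDistance > disableRadius * disableRadius;
    }
}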
Conclusion
While this approach won’t be applicable to all games, it fits perfectly with Block City Wars and its existing fog effects. It makes it possible to quickly increase the effective draw distance using ‘spoofed’ silhouette detail under severe performance constraints, taking advantage of the existing fog to hide the smoke and mirrors. It is easily reproduced in any rendering pipeline and engine, and requires no changes to existing code.
Even if many of the fine details are faked and hidden behind fog effects, the distant silhouettes still provide players with useful gameplay information at minimal performance cost. A net win for players on all platforms, especially older hardware.