
RICO: London

Overview

RICO: London (2021) is an arcade first-person shooter, featuring some of Ground Shatter's signature rogue-like mechanics. Karen Redfern (Amelia Tyler) defies her orders to patrol the streets at the turn of the millennium, instead following up on leads of a terrorist threat in the heart of London. Alone, or with a fellow officer, breach a procedurally-generated high rise and scale its floors, kicking doors and taking names as you fight your way to the top. Use slow motion to your advantage and keep your score multiplier high to earn cash for new weapons and attachments.
 
Developer: Ground Shatter
Publisher: Numskull Games
Platforms: Steam, Xbox, PlayStation, Nintendo Switch

My Involvement

My responsibilities included:

  • Building, maintaining and iterating upon all of the game's systems

  • Providing tools to facilitate the creation of game content and promotional material

Base Game

When I joined Ground Shatter in 2019, I remember seeing RICO: London in its infancy - a grey room with debug projectiles firing at a blank wall from an empty point in space, as our lead programmer laid the foundations of its core systems.

As a professional new to the industry, I started by working on UI, including the front-end menus and in-game shop, and my responsibilities grew over the course of the project. When our lead programmer left the company at the end of 2020, responsibility for all of RICO: London's code fell to me. By the time the game shipped, I had worked on entities, weapons, AI behaviour, environments, animations, audio, procedural generation, front-end and gameplay UI, saving and loading, editor tools, and back-end code for managing leaderboards. All of these elements had to work in single player, as well as in both offline splitscreen and online multiplayer co-op.

Much of RICO: London was developed simultaneously with Fights in Tight Spaces. As part of a small programming team, I spent a great deal of time working on both games.

The Procedural Lighting Problem

One of the most memorable areas of work related to procedural generation. Levels in RICO: London are composed from a palette of rooms built by our artists and designers. We created data assets defining each room's floor space, its height, and the positions of all the points where doors could be placed. At runtime, the game places rooms one by one, aligning them by their doors and ensuring that their volumes don't overlap.
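The placement step can be sketched roughly as follows. This is a minimal Python illustration, not the game's actual code: the `Room`, `overlaps` and `try_place` names are hypothetical, footprints are reduced to 2D axis-aligned boxes, and door orientation and rotation are ignored for brevity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Room:
    # Worldspace axis-aligned footprint: (min_x, min_z, max_x, max_z).
    bounds: tuple
    # Worldspace (x, z) positions of this room's doorways.
    doors: tuple

def overlaps(a: Room, b: Room) -> bool:
    """Strict AABB intersection test between two room footprints."""
    ax0, az0, ax1, az1 = a.bounds
    bx0, bz0, bx1, bz1 = b.bounds
    return ax0 < bx1 and bx0 < ax1 and az0 < bz1 and bz0 < az1

def translated(room: Room, dx: float, dz: float) -> Room:
    """Return a copy of the room shifted by (dx, dz)."""
    x0, z0, x1, z1 = room.bounds
    return Room((x0 + dx, z0 + dz, x1 + dx, z1 + dz),
                tuple((x + dx, z + dz) for x, z in room.doors))

def try_place(placed, candidate: Room, open_doors):
    """Align the candidate's first door with each open door in turn and
    accept the first position where it overlaps no already-placed room."""
    cx, cz = candidate.doors[0]
    for ox, oz in open_doors:
        moved = translated(candidate, ox - cx, oz - cz)
        if not any(overlaps(moved, p) for p in placed):
            return moved
    return None
```

A room connects wherever a door-to-door alignment leaves its footprint clear; the real system also matches door orientations and tests full 3D volumes.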

We targeted the Nintendo Switch for launch, so it was especially important to make the game performant on lower-end hardware. One of our areas of focus was lighting: we used realtime lights sparingly, putting a greater emphasis on baked lightmaps and light probes. Each of the available rooms was created as its own Unity scene, which allowed us to properly visualise and bake its lighting. The game loads each room's scene using asynchronous level streaming, then moves all of the game objects within it to the desired worldspace position.


For the game's lighting to work, the lighting of each room instance must be translated and rotated independently. However, in Unity, light probe groups cannot be moved once they are baked. While the geometry and even the light sources can be repositioned correctly, the baked probes stay where they were, so dynamic entities receive the wrong lighting and the game looks broken.

I solved this problem by writing an automated process that copies the values of the light probes out to a separate file whenever a lighting bake is performed. Now, when a room is placed at runtime, it also loads this additional light probe data, translates the probe positions, and rotates their spherical harmonic data to align correctly with the room. Once all the rooms are placed, we end up with a cloud of new light probes. All that remains is to connect them.
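The interesting detail is rotating the spherical harmonic data. The constant (L0) band is rotation-invariant, and the three linear (L1) coefficients per colour channel rotate exactly like a direction vector, so for a yaw-only room rotation the transform is cheap. A hedged Python sketch of that step - the `rotate_probe` helper and the (x, y, z) coefficient layout are illustrative assumptions, and the quadratic L2 band, which needs a full SH rotation, is omitted:

```python
import math

def rotate_y(v, angle):
    """Rotate a 3-vector (x, y, z) about the vertical axis by `angle` radians."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def rotate_probe(local_pos, l1_coeffs, room_origin, yaw):
    """Move one baked probe into a room instance's worldspace.

    local_pos: probe position in the room's local space.
    l1_coeffs: per-channel linear SH coefficients, stored here as (x, y, z)
               direction weights - these rotate exactly like vectors.
    The constant L0 term needs no change; the quadratic L2 band would need
    a full spherical harmonic rotation and is left out of this sketch."""
    rotated = rotate_y(local_pos, yaw)
    world_pos = tuple(o + p for o, p in zip(room_origin, rotated))
    world_l1 = [rotate_y(channel, yaw) for channel in l1_coeffs]
    return world_pos, world_l1
```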

Unity's light probe system works by dividing 3D space into tetrahedral volumes, each defined by four probes. Lighting for a dynamic mesh is then determined by finding the volume it is in, fetching the lighting data from the four probes that form that volume, and weighting them based on how close the mesh is to each. To create these volumes for our combined probe cloud, I used a process called Delaunay tetrahedralisation. This is an intensive process that can take a few seconds to complete, so it had to run on a separate thread as a Unity Job to ensure it didn't impact the frame rate.
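The per-mesh weighting is barycentric interpolation within the containing tetrahedron. A small self-contained Python sketch of that step (function names are illustrative, not from the actual codebase):

```python
def barycentric(p, tet):
    """Barycentric weights of point p within a tetrahedron of 4 vertices.

    Solves T * (w1, w2, w3) = p - v0 by Cramer's rule, where the columns
    of T are the edge vectors from v0; then w0 = 1 - w1 - w2 - w3."""
    v0, v1, v2, v3 = tet
    a = [v1[i] - v0[i] for i in range(3)]
    b = [v2[i] - v0[i] for i in range(3)]
    c = [v3[i] - v0[i] for i in range(3)]
    d = [p[i] - v0[i] for i in range(3)]

    def det3(u, v, w):
        # Determinant of the 3x3 matrix with rows u, v, w.
        return (u[0] * (v[1] * w[2] - v[2] * w[1])
                - u[1] * (v[0] * w[2] - v[2] * w[0])
                + u[2] * (v[0] * w[1] - v[1] * w[0]))

    volume = det3(a, b, c)
    w1 = det3(d, b, c) / volume
    w2 = det3(a, d, c) / volume
    w3 = det3(a, b, d) / volume
    return (1 - w1 - w2 - w3, w1, w2, w3)

def sample_lighting(p, tet, probe_values):
    """Blend the four corner probes' lighting by barycentric weight."""
    return sum(w * v for w, v in zip(barycentric(p, tet), probe_values))
```

A point at a probe gets that probe's lighting exactly; a point at the centroid blends all four equally, which is what gives the smooth transitions between probes.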

Every time the players reach the end of a section of gameplay, we unload old rooms and place new ones, creating an endless conveyor belt of gameplay. When this happens, we run the tetrahedralisation process in the background, preserving the old light probe volumes until the new ones are calculated. There are always enough prepared rooms ahead that players can carry on without ever stopping to wait for the lighting to catch up.
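This is effectively a double-buffering scheme: keep sampling the old volumes until the rebuilt set is ready, then swap. A minimal Python sketch of the pattern, using a thread in place of a Unity Job (all names are hypothetical):

```python
import threading

class ProbeVolumeCache:
    """Double-buffered light probe volumes.

    Gameplay keeps sampling the current tetrahedralisation while a worker
    rebuilds it from the new probe cloud; the finished result is swapped
    in atomically once it is ready."""

    def __init__(self, initial_volumes):
        self._lock = threading.Lock()
        self._current = initial_volumes

    def current(self):
        # The old volumes stay valid for the whole of a rebuild.
        with self._lock:
            return self._current

    def rebuild_async(self, probe_cloud, tetrahedralise):
        """Run the (slow) tetrahedralisation off the main thread."""
        def worker():
            result = tetrahedralise(probe_cloud)
            with self._lock:
                self._current = result  # swap only once complete
        thread = threading.Thread(target=worker, daemon=True)
        thread.start()
        return thread
```

The key design choice is that readers never block on the rebuild: `current()` always returns a complete set of volumes, old or new.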

The result of this work was performant lighting, authored by artists and compatible with our procedural generation architecture, that not only worked around a limitation in Unity's lighting code but also allowed smooth light blending between rooms with varied lighting conditions.
