Life is Strange: Double Exposure

Published: October 2024, Square Enix

After finishing up my last remaining work on the DLC for The Expanse, I moved on to a project that had been in development since I started at the studio - Life is Strange: Double Exposure. I originally joined for a temporary period while a different game (which got canned basically as soon as I started working on it) was in pre-pre-production. My initial assignment was to maintain and revamp the locomotion system, as its original owner had left the company.

Within a short amount of time, I realized that I was out of my depth in trying to understand the system as it stood. The previous feature owner had set up Double Exposure with the Advanced Locomotion System (ALS) plugin, and I had never used it before. All my locomotion experience up to then was with proprietary nodes that handled a couple of animations at a time, and this was using layers and state machines and who knows WHAT else. What I did know was that the system was more powerful than we needed for our purposes, and we weren't even using it to the extent we needed. So I was able to convince the locomotion strike team - let me rebuild it.

This proposal was accepted, and I started on the long journey of once again building a locomotion system from scratch. This time, though, I wanted to do it on Unreal's terms, and better understand its built-in solutions to these problems.

Making a new start (walk)

In True Colors and The Expanse, the locomotion system was handled with single animation nodes, with the animation states and state requests handled directly in code. When moving to Double Exposure, I decided I would explore working within Unreal itself. This meant making a more complicated system on the anim BP, but one that would be much easier to maintain within the engine.

I hadn't used the state machine before, so I started out by making the basics - Idle, Start Move, Move, and Stop Move. I originally sampled current velocity data within the transition conditions, but then decided I could use the event graph to handle the state machine requests instead. That way, each transition condition just checks the desired state.
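To make that split concrete, here's a minimal, engine-free Python sketch of the idea (all names and thresholds are my own invention for illustration): the event-graph side derives a single "desired state" from movement data each tick, and the transition rules reduce to simple comparisons against it.

```python
from enum import Enum, auto

class LocoState(Enum):
    IDLE = auto()
    START_MOVE = auto()
    MOVE = auto()
    STOP_MOVE = auto()

def compute_desired_state(current: LocoState, has_input: bool, speed: float) -> LocoState:
    # "Event graph" side: sample movement data once per tick and publish
    # a single desired state for the state machine to read.
    if has_input:
        if current in (LocoState.IDLE, LocoState.STOP_MOVE):
            return LocoState.START_MOVE
        return LocoState.MOVE
    # No input: play the stop until velocity dies out, then idle.
    if speed > 0.0:
        return LocoState.STOP_MOVE
    return LocoState.IDLE

def transition_allowed(current: LocoState, desired: LocoState) -> bool:
    # "Transition condition" side: no velocity sampling here, just an
    # equality check against the published desired state.
    return desired != current
```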


I decided again to try out multiple start walk directions, which led me to discover the state alias and conduit system - I was able to keep the simple high-level state machine diamond, while the more complicated sections were relegated off somewhere else.


One issue with this was that we wanted to be able to interrupt a start walk with a new start walk. This proved pretty difficult, as a state cannot interrupt itself. It might not be a super great solution, but to fix this I created a second set of start walk states that were exact copies of the first set, except you could go between the same animation state over and over again - i.e., turning left 180 degrees, then triggering another 180 left turn from within that turn. Every start walk would then funnel into the single movement state.


I then wanted to create a simple version of motion matching. The new UE5 motion matching hadn't been released yet, so I had to find a different way. To handle this, I worked out a simple solution - if we break our animations into chunks based on which foot is down, we can roughly figure out the desired start time for the next animation state from the current animation time.
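A rough sketch of that chunk-matching idea in Python (the chunk boundaries, function name, and wrapping behavior are all my own assumptions for illustration, not the shipped implementation):

```python
def matched_start_time(current_time: float,
                       current_chunks: list[float],
                       next_chunks: list[float]) -> float:
    """Pick a start time in the next animation that lands in the matching
    foot-down chunk. Chunk lists are boundary times in seconds, e.g.
    [0.0, 0.4, 0.8, 1.2] describes three chunks."""
    # Find which chunk of the current animation we are in.
    idx = 0
    for i in range(len(current_chunks) - 1):
        if current_chunks[i] <= current_time < current_chunks[i + 1]:
            idx = i
            break
    # How far through that chunk are we?
    span = current_chunks[idx + 1] - current_chunks[idx]
    frac = (current_time - current_chunks[idx]) / span if span > 0 else 0.0
    # Start the next animation in the same chunk (wrapping if it has
    # fewer chunks), at the same fraction through it.
    nidx = idx % (len(next_chunks) - 1)
    nspan = next_chunks[nidx + 1] - next_chunks[nidx]
    return next_chunks[nidx] + frac * nspan
```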

Both this and the multiple start walks required more connection between the animation state machine and the event graph. I then learned of the hidden power of the Unreal animation system - hooking up anim nodes to event functions. This let me track an animation's current time, so I knew which "chunk" of time I was in and which chunk to start in for the next anim. It also meant that at transition ends or state starts, I could set and reset data for the event graph to better handle state transitions. There were a lot of timing issues with this, but since there are a lot of ways to handle these hookups within Unreal, there's a lot of room for trial and error to figure out where the correct hookups need to be set up.


One thing we had always avoided but wanted to make work was stair locomotion. To get it working, we needed some hard constraints and some simple smoke and mirrors. We were able to avoid dealing with the feet by simply constraining the camera - you could only look down so far.

Otherwise, I created a big restriction - you can only move in one direction along the stairs, with no diagonal movement. I created a trigger volume that can be placed on stairs, whose forward vector decides both the locked direction along the stairs and the direction for upward-facing stairs animations.

Instead of allowing turning, when the desired world-space movement direction opposed the direction the player was currently moving along the stairs, we would trigger a 180 turn animation. This meant utilizing all the tools at our disposal - doubled-up state sets, state aliases, and function hookups - to create a working stairs system.
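A sketch of that stairs rule, assuming the trigger volume hands us its 2D forward vector (function and parameter names are hypothetical): input is collapsed onto the stair axis via a dot product, and a 180 fires when it opposes the current travel direction.

```python
def stair_move_request(stair_forward: tuple[float, float],
                       desired_dir: tuple[float, float],
                       moving_up: bool) -> tuple[bool, bool]:
    """Return (should_move, trigger_180).

    Movement is locked to the stair volume's forward axis; input is
    collapsed onto that axis. If the input points the opposite way from
    the direction we are currently moving, fire a 180 turn instead."""
    dot = stair_forward[0] * desired_dir[0] + stair_forward[1] * desired_dir[1]
    if abs(dot) < 1e-3:
        # Input is perpendicular to the stairs: no movement along them.
        return (False, False)
    wants_up = dot > 0.0
    return (True, wants_up != moving_up)
```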

After all this, we had an issue - there were a lot of places we wanted small, precise movement that our multitude of start walks did not help with. While the animation itself looked great, the movement in enclosed spaces was cumbersome. I tested taking out all the directions, save for 180s during hard direction changes during stop walks, and it felt much better. One day we'll get it!

From here, we then pass the pose from this locomotion through a whole system of additions and tweaks to create our full gameplay animation system.

Customizing character movement

Start walks introduced a different problem as well. We preferred to use a root motion system, but part of root motion in Unreal is that it's always motion relative to the mesh and not the player's input.

With the directional start walks removed, starting in the opposite direction from where the player is facing caused a big swing around, depending on how fast the player rotated to match the desired direction. We could fix this by removing root motion, but then the movement wouldn't match the feet and might feel pretty robotic.

Instead of deciding between the two, I decided we could merge them. In our custom character movement component, I had it handle velocity and direction the same way it would without root motion, but used the root motion's magnitude to clamp the speed of the movement. This way, start walks can still rotate quickly toward the desired direction, while the movement itself stays in the input direction. With a couple of exceptions, I also made sure that stop walks still applied their root motion movement and direction once input is removed.
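A minimal sketch of that merge, assuming a simple 2D velocity model (everything here is a toy stand-in for the custom character movement component, not the shipped code): direction comes from input, while the per-frame root motion delta caps the speed so the translation keeps pace with the feet.

```python
def blended_velocity(input_dir: tuple[float, float],
                     root_motion_delta: tuple[float, float],
                     dt: float,
                     max_speed: float) -> tuple[float, float]:
    """Move along the player's input direction, but cap the speed at
    whatever speed the root motion is actually delivering this frame,
    so the feet and the translation stay in sync."""
    rm_speed = (root_motion_delta[0] ** 2 + root_motion_delta[1] ** 2) ** 0.5 / dt
    speed = min(max_speed, rm_speed)
    length = (input_dir[0] ** 2 + input_dir[1] ** 2) ** 0.5
    if length == 0.0:
        return (0.0, 0.0)  # no input: in the sketch, no movement
    return (input_dir[0] / length * speed, input_dir[1] / length * speed)
```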

The Bagstrap

I will not mince words - I'm not a fan of the messenger bag. Aesthetically, I like it; it's a great part of Max's character design and I really like how it looks.

However.

The bag strap itself, while it looked perfectly fine in each individual animation, would clip through the body at various points when blending between animations. This was a hard issue to fix, but I feel like the final result turned out really great!

To fix this, I ended up creating a control rig to handle it. I initially tried putting in multiple sphere controls along the back that the bag strap joints would stay on the outside of, but that looked terrible and was very inconsistent. So instead, I tried this:

  • Create a sequential list of the bag strap joints
  • Create a position for each bag strap joint based on its relative position to the clavicle bone (from the reference pose)
  • Create a curve with a value at each point along the X axis that determines a blend between the animation and the reference pose
  • With each joint being a sample along the X axis of the curve, blend the position of the bag strap between the reference pose location relative to the clavicle bone and the original bag strap animation

Doing this, I was able to ensure that the bag strap stayed steady around the shoulder area (blended to the reference pose) and animated as expected near the bag (blended to the animation).
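The steps above can be sketched as a small, engine-free Python version of the control-rig logic (all function and parameter names are my own; in the game this runs inside a Control Rig, and the blend curve is an authored asset rather than a lambda):

```python
def solve_strap(anim_positions: list[tuple[float, float, float]],
                ref_offsets: list[tuple[float, float, float]],
                clavicle_pos: tuple[float, float, float],
                blend_curve) -> list[tuple[float, float, float]]:
    """For each strap joint, blend between its animated position and its
    reference-pose position re-rooted on the current clavicle.
    blend_curve maps the joint's normalized index (0..1 along the strap)
    to a weight: 1.0 = pinned to the reference pose, 0.0 = pure animation."""
    n = len(anim_positions)
    out = []
    for i, (anim, off) in enumerate(zip(anim_positions, ref_offsets)):
        w = blend_curve(i / (n - 1) if n > 1 else 0.0)
        # Reference-pose position, carried along with the clavicle bone.
        ref = tuple(c + o for c, o in zip(clavicle_pos, off))
        out.append(tuple(a * (1.0 - w) + r * w for a, r in zip(anim, ref)))
    return out
```

With a curve that falls off from 1 at the shoulder to 0 at the bag, the first joints stick to the clavicle while the last ones follow the animation untouched.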

This worked mostly well, but the mesh being used for the player at the time would affect the way it blended. Since we had logic to switch between outfits, I was able to use that to select between multiple curves based on each given outfit mesh.

When it came to the big hooded coat, we had more issues - the strap in the model was threaded between the shoulder and the hoodie. To fix that, I worked with the person in charge of Max's textures to add a cutout along the strap texture that made it invisible in the area under the hoodie - that way, if it clipped through, we'd never see it!

(Hidden) Emotions

While this feature was fully developed for the final game, I'm not sure it ever got utilized. But! It's still something I worked on and still something I'm going to talk about.

When I started working on the new locomotion system, I was also tasked with getting the look ats working - I rebuilt a lot of the system we used for The Expanse, and then added one more thing on top. In True Colors, I implemented a facial animation system that allowed us to have an ambient facial idle play on top of the locomotion animations, in a way that made the face carry over between animation blends to ensure some continuity between animations. I rebuilt a version of that system in Double Exposure, but with the added ability to ALSO include emotions - emotions that could be triggered by the look ats!

I built out the animation handling and face request calls in Max’s BPs, and then added a drop down on the look ats where you could select the emotion, how strong it would be, and how long it would linger.

NPC behavior state placement

One big cinematic bug I needed to deal with had to do with popping NPCs out of their cinematic animations and into gameplay animations- there was always a single frame pop before the gameplay animation actually started playing.

After a lot of digging, I was able to find the issue. When setting an NPC to their gameplay animation state, we tell their behavior state tree to load up the desired idle animation. The problem is, the behavior state tree did not actually set the animation until the next frame, leaving us with a full frame of whatever animation was playing before it - or worse, a T-pose. Fixing this meant bypassing the behavior tree to pre-load the animation onto the character. Once that was fixed, the system worked flawlessly!

… Mostly. When putting an NPC into gameplay mode, they also stop taking in keyframe positioning data. With some finagling, I was able to create a system that allowed cinematics artists to apply a position key on the same frame as a state change, meaning you could, in the same sequence, change the gameplay idle animation and position as many times as you wanted.

Various other things

One bug I was sent that took me for a ride was in cinematics: even if a prop and an actor were keyed to exactly the same position, they would have huge mismatches. In test scenes there was no weird offset, so finding the offset within the scene meant digging pretty deep.

As it turns out, the issue had to do with how our keyed positions were actually applied to characters and props. When an object is keyed in cinematics, the position is where we want the skeleton to be, but the rotation is based around where the actor is located. This is not an issue for props - those two are always the same. The same can't be said for the actors, whose root bones sit 90 units below their actor location at the center of their collision capsules. The actor and the prop get rotated the same amount, but around different pivot points, causing a mismatch. The easiest way to fix this was to just tell cinematics that actors and props should only have yaw rotations, which bypasses the issue altogether and cleared out completely unnecessary roll and pitch rotations.
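As a toy 2D illustration of the mismatch (the 90-unit root offset is from the bug itself; the specific coordinates, angle, and function are made up for the demo), here is the same pitch rotation applied around the two different pivots:

```python
import math

def rotate_about(point: tuple[float, float],
                 pivot: tuple[float, float],
                 degrees: float) -> tuple[float, float]:
    """Rotate a 2D point (x, z) about a pivot; stands in for a pitch rotation."""
    rad = math.radians(degrees)
    dx, dz = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + dx * math.cos(rad) - dz * math.sin(rad),
            pivot[1] + dx * math.sin(rad) + dz * math.cos(rad))

# Both root bones start at the same spot on the ground.
root = (100.0, 0.0)
# The prop pivots at its own root; the actor pivots at its capsule
# center, 90 units above the root bone.
prop_root = rotate_about(root, (100.0, 0.0), 10.0)
actor_root = rotate_about(root, (100.0, 90.0), 10.0)
# prop_root stays put; actor_root swings away -> a visible mismatch.
# A pure yaw rotates about the vertical axis through both pivots, so
# the offset never appears - hence the yaw-only fix.
```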

In the second episode of the game, there is a section where Max switches between timelines multiple times to complete a stealth mission. The main issue here was that we needed the ability to switch nicely between crouching and standing. That meant finding a way to plug in a transition animation that we didn't really have a place for. I was able to create a little overlay node in the player gameplay animation layer that we could fire on transition calls from cinematics. This required some finagling to make sure that during the blend into the transition animation, we held whatever animation we were currently in, so we didn't blend to standing in the middle of an animation where Max tries to stand.

I was tasked with getting the peeks and shifts working much more smoothly, including doing a lot of work on the peek animation handling, working with design to tweak the peek behavior on the code side, and creating a system to stop the player and reset the camera when starting a shift, to make sure the transition looked consistent.

When I received the camera system, we were using a custom extension of Unreal's camera arm. It did not have any smoothing at the time, so I was able to add that, as well as other features: an idle camera bob, position lag, smoother collision that doesn't clip through the player's head, camera look ats when activating interactable objects, and smooth blends for leaving cinematics.