Code Does Not Control Everything… Mostly.

There was a discussion on /r/battlefield3 some time ago, when the latest tentative changelog was announced, about third-person player animations and how they sync up with what is happening in first person.

In short, the focus was on the reload animations; anyone who has played Battlefield 3 can tell you that the first-person reload animations don't match up well with their third-person counterparts, often leading to silly situations where a player appears to be firing their gun and reloading it at the same time.

During this discussion, several suggestions were made to remedy it, including using the short reload animation at all times instead of the current long one. However, the DICE developer, user name "Demize99", stated plainly that it's too late to go back. Even novice game developers would understand why. But the misinformed comments flooded in anyway…

"it can't be done" = don't have the time/allocated resources to rewrite the code.

The thing is, programming and code have very little to do with the actual animation, which is performed by a mesh of shapes, polygons, and skeletal bones. At best, programming can control the speed and timing of an animation, and how and which bones react, but the bones can only perform the motions they've been assigned. The sketch below illustrates the split.
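To make that concrete, here is a minimal C++ sketch (all names hypothetical; this is not engine code from DICE or anyone else): the keyframes are data baked by an artist's tool, and the runtime's only levers are which clip to play, how fast, and when.

```cpp
// Minimal sketch of why code can retime an animation but not re-author it:
// the keyframes are fixed data created offline, and the runtime only samples them.
#include <cstddef>
#include <vector>

struct Keyframe {
    float time;          // seconds into the clip, set at authoring time
    float boneRotation;  // one bone, one channel, for brevity
};

struct AnimationClip {
    std::vector<Keyframe> frames;  // authored offline; code never edits this
    float duration;
};

// The runtime's levers: playback position and speed.
float sampleClip(const AnimationClip& clip, float playbackTime, float speed) {
    float t = playbackTime * speed;            // code CAN retime...
    if (t >= clip.duration) t = clip.duration;
    // ...but it can only interpolate between poses the artist created.
    for (std::size_t i = 0; i + 1 < clip.frames.size(); ++i) {
        const Keyframe& a = clip.frames[i];
        const Keyframe& b = clip.frames[i + 1];
        if (t >= a.time && t <= b.time) {
            float alpha = (t - a.time) / (b.time - a.time);
            return a.boneRotation + alpha * (b.boneRotation - a.boneRotation);
        }
    }
    return clip.frames.back().boneRotation;
}
```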

So in the case of the reload animation, the best the DICE developers could do is speed up the animation or cut it off midway through. Both solutions would look wrong; the proper fix is to redesign the whole animation. But that is a time-consuming process, while rewriting some code is not. If they had the option, they probably would have taken it, but it isn't as simple as changing some values; it's an actual reworking of a model and its animations. On top of that, the reload animation for every single weapon would have to be redone. Hopefully you can see how this would eat up a lot of time very quickly.
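As a toy illustration of the desync itself (made-up names and numbers, and only my reading of the bug, not DICE's): the gameplay reload timer and the third-person clip length are two independent values, and nothing forces them to agree. Code can trivially change the first number; making the second match means re-authoring the clip.

```cpp
// Hypothetical numbers showing the "shooting while reloading" artifact:
// the player can fire again before the third-person clip has finished.
#include <cstdio>

int main() {
    const float reloadTimeSeconds = 2.5f;       // gameplay value: when you can fire again
    const float thirdPersonClipSeconds = 3.4f;  // authored animation length

    float overhang = thirdPersonClipSeconds - reloadTimeSeconds;
    if (overhang > 0.0f) {
        std::printf("Player can fire %.1fs before the reload animation ends\n",
                    overhang);
    }
    return 0;
}
```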

BUT, on the contrary…

Valve Software has put out some impressive technology in their Source engine, and one of the most profound parts is the animation system. There is a particular tool in the Source SDK (the set of programs available through Steam that allows for Source game/engine modding) called Faceposer, which lets you animate an NPC's mouth, facial expressions, and eye movements all from a recorded voice clip. Somehow, the software recognizes the speech and replicates the mouth movements on the NPC. Granted, it's not always 100% accurate and often requires a bit of custom tweaking every few frames, but at the least it does a great job as a baseline.

How did they pull this one off? Doesn’t this contradict what I mentioned?

Well, it would seem so, and it's impressive stuff for sure. My best guess is that they have pre-computed facial positions for the various sounds the human mouth makes when talking, and then chain them together into a smooth animation by interpreting the audio data. That's putting it in the simplest terms, of course, and I don't even know if this is how it really works, but it's the best guess I can make. Just looking through the source files supports it a bit, as ALL talking NPCs have to include some basic skeletons when the model is compiled. A rough sketch of the idea follows.
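Here is that guess sketched in C++ (purely speculative, not Valve's actual implementation; every name is made up): segment the audio into timed phonemes offline, map each phoneme to a pre-authored mouth pose (a "viseme"), and blend between neighboring poses so the mouth flows smoothly through the line.

```cpp
// Speculative sketch: phoneme track -> viseme poses -> blended mouth pose.
#include <map>
#include <string>
#include <vector>

struct MouthPose { float jawOpen; float lipPucker; };  // toy 2-channel pose

struct TimedPhoneme {
    float startTime;      // seconds into the voice clip
    std::string phoneme;  // e.g. "AA", "M", "F" (assumed labels)
};

MouthPose lipSyncPose(const std::vector<TimedPhoneme>& track,
                      const std::map<std::string, MouthPose>& visemes,
                      float audioTime) {
    // Find the phoneme we're currently in.
    std::size_t i = 0;
    while (i + 1 < track.size() && track[i + 1].startTime <= audioTime) ++i;
    const MouthPose& a = visemes.at(track[i].phoneme);
    if (i + 1 == track.size()) return a;  // last phoneme: hold its pose
    const MouthPose& b = visemes.at(track[i + 1].phoneme);

    // Crossfade toward the next pose so transitions look continuous.
    float span = track[i + 1].startTime - track[i].startTime;
    float alpha = (audioTime - track[i].startTime) / span;
    if (alpha < 0.0f) alpha = 0.0f;
    if (alpha > 1.0f) alpha = 1.0f;
    return { a.jawOpen + alpha * (b.jawOpen - a.jawOpen),
             a.lipPucker + alpha * (b.lipPucker - a.lipPucker) };
}
```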

More content to come in the following days. Stay tuned!
