There’s a little feature of Blender that most game devs I know never touch: the physics engine.
Why do we need a physics engine in Blender when our Unitys and Unreals already have physics built in? We don’t need our modeling program to simulate rigidbodies. This is probably why Maya LT, which is aimed at game devs, leaves out the physics simulation of full-blown Maya.
But did you know you can use it to create intricate, baked physics animations and play them back in your game? Imagine showing the collapse of a building by shattering the mesh into hundreds of tiny fragments. Sounds expensive on the CPU, right? With physics as an animation, you can calculate all those interactions offline and simply play the result back!
I’m a huge Battlefield fan. I love the teamwork, enormous battles, and yes, the testosterone-fueled carnage and gun porn. When Battlefield 4 was released, I was only slightly impressed by the “levolution” events, such as the centerpiece Shanghai skyscraper falling into the water. They happened the same way, every single time. It was clear to me and millions of other players that it was pre-baked. The skyscraper falling, while visually amazing, was not simulated. The levolution event was merely the playback of an animation!
I wanted to bring that into my own work. It’s a great method: take the strain off the CPU by baking the amazing physics into an animation!
Blender comes with this feature. You can add hundreds of meshes to a Blender scene, make them all rigidbodies, and then bake the physics to keyframes for an animation. You can then take that mesh and that animation and play it back in the game engine of your choice.
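The game-side half of this is conceptually trivial: instead of stepping a physics solver every frame, you just sample the baked transforms. Here’s a minimal sketch in plain Python of that sampling, with linear interpolation between keys. The frame-to-position dictionary is a hypothetical data layout for illustration, not Blender’s or any engine’s actual export format (a real bake also stores rotation, and one track per fragment):

```python
# Baked keyframes for one fragment: frame number -> world position.
# (Hypothetical layout; a real exporter writes rotation and one
# track like this per shattered piece.)
keyframes = {
    0: (0.0, 0.0, 10.0),
    30: (0.0, 0.0, 5.0),
    60: (2.0, 0.0, 0.0),
}

def sample(keyframes, frame):
    """Return the position at `frame`, linearly interpolating
    between the two nearest baked keys. No solver involved —
    this is all the CPU does at runtime."""
    frames = sorted(keyframes)
    # Clamp outside the baked range.
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    # Find the surrounding pair of keys and blend between them.
    for lo, hi in zip(frames, frames[1:]):
        if lo <= frame <= hi:
            t = (frame - lo) / (hi - lo)
            a, b = keyframes[lo], keyframes[hi]
            return tuple(x + (y - x) * t for x, y in zip(a, b))

print(sample(keyframes, 45))  # halfway between frames 30 and 60 -> (1.0, 0.0, 2.5)
```

However many fragments you bake, playback cost stays a cheap lookup-and-lerp per piece, which is the whole appeal compared to simulating hundreds of rigidbodies live.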
Too good to be true, clearly. A free, open-source tool handing you this immensely performance-friendly happiness effortlessly! Of course, this method has its downsides. I’ll elaborate on a few that have bitten me personally.
You can’t interact with the physics. It’s completely static. Since it’s an animation, it will play out the same way every time, and it will clip through anything in its path that wasn’t part of the original simulation.
You can’t iterate on it quickly. At least, not as quickly as with in-engine physics. You need to jump back into your modeling program, make your changes, then re-simulate, re-bake, and re-import. That’s a lot of overhead for someone like me.
It doesn’t reduce any GPU load, only CPU load. In graphics, draw calls are expensive; you want as few as possible. On a lot of machines, Zarvot is GPU-bound, and trying to render hundreds of meshes through Unity’s flaky batching system means stressing a slow notebook with hundreds of additional draw calls. Fortunately, I came up with a great way to reduce draw calls by combining meshes. I hope to cover that in a future post.
Physics baking is not a magic bullet. It’s a fit for setpieces with minimal interactivity. It can be pushed a little to work around some of its shortcomings, but it will always take a back seat in game FX.