Like most university degrees, the end of my third year in Computer Games Design hinged largely on producing and submitting a final thesis/dissertation to my lecturers.
The decision of what to write about was a no-brainer, as I’d spent over seven years at that point producing Machinima. All the years I’d spent with other gamers online, playing games like Halo 2 and finding ways to work around the HUD systems and film ourselves in-game, had been leading up to this final moment, where I could write 10,000 words about it and earn myself the fanciest and most expensive piece of paper I’d ever own.
I’m getting ahead of myself, though. Most people in the modern age of YouTube believe that Machinima is a YouTube channel, not a noun for a concept. You see, whilst Machinima is now indeed a YouTube channel, it used to be a website portal for hosting Machinima content, much like how YouTube is a portal for hosting general video content. So what is Machinima? Paul Marino describes it best as “animated filmmaking within a real-time virtual 3D environment”. Again, what does that mean? In its simplest terms: if you play a game in real-time and film the gameplay process or its demo replay, it’s Machinima. Whether you have a structured narrative, remove the HUD, and have in-game actors play out certain roles, or whether you just film yourself playing a game: they’re all Machinima.
The first argument a lot of people might make is: “Well, if I film myself playing a game, isn’t that just a ‘Let’s Play’ video?”. If we’re going to be all academic about this, then it’s technically both. “Let’s Play” is a term for a certain style of documentary filmmaking through the medium of Machinima: recording yourself playing a game in real-time is pretty much a loose form of documentary about you playing the game. So “Let’s Play” simply defines a certain kind of documentary-style, Machinima-based filmmaking. The narrative types are the ones you’ll be more familiar with, like the popular web series Red Vs Blue. This is traditional filmmaking in the sense that there is a structured narrative, characters, and actors with voice-overs (or dialogue recorded using the in-game voice chat).
“In its simplest terms: if you play a game in real-time and film the gameplay process or its demo replay, it’s Machinima”
Machinima was fairly limited back in the days of Halo 2 and other games of that era. Graphics were decent, but not yet good enough that you could pass something off as an animated, pre-rendered 3D film (at least not without heavily editing the footage in a clever manner). So when you watched a Machinima, you knew full well that it was a game and not a pre-rendered 3D animation. These days the line is more blurred, with Battlefield and Call of Duty Machinima being quite hard to tell apart from amateur/medium-quality 3D animation videos, thanks to their realistic graphics. Because animations have also become more dynamic and randomised as part of making the world more believable in motion, Machinima has become more interesting to watch: you no longer see the same repeated animations played over in a clip, and characters can do more than simply strafe, jump, and shoot. As graphics continue to improve, there’ll be a point where it’s quicker and more efficient to render a 3D scene in real-time with a game engine than with a traditional 3D animation package like Maya, Max, or Blender. In popular 3D films, a single frame of animation (most films play 24/25 frames per second) can take hours, if not days, to render. So being able to render something in a real-time 3D engine, as games do, becomes an appealing option once graphical technology reaches a certain level of quality.
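To put those render times in perspective, here’s a back-of-envelope comparison. Every number below is an illustrative assumption (runtime, cost per frame), not a figure from any particular production:

```python
# Rough comparison: offline film rendering at hours-per-frame
# versus a real-time game engine at 30fps. All numbers are
# illustrative assumptions for the sake of the arithmetic.

FPS = 24                      # typical film frame rate
RUNTIME_MIN = 90              # assumed feature-length runtime
total_frames = FPS * RUNTIME_MIN * 60

HOURS_PER_FRAME = 2           # assumed offline render cost per frame
offline_hours = total_frames * HOURS_PER_FRAME

REALTIME_FPS = 30             # engine draws 30 frames every second
realtime_hours = total_frames / REALTIME_FPS / 3600

print(f"{total_frames:,} frames in total")
print(f"offline:   {offline_hours:,} hours on one machine")
print(f"real-time: {realtime_hours:.1f} hours")
```

At those assumed rates, a single machine rendering offline would need roughly 30 years of wall-clock time (which is exactly why studios use render farms), while a real-time engine would finish in the film’s own 1.2-hour runtime.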
In some ways, we’re seeing the result of this already. Here are two videos of real-time animations played out in Unreal Engine 4. The Unreal Engine cheats a bit with Machinima’s definition, however, because you can render a camera track frame by frame at full quality, avoiding dropped frames rather than playing the 3D scene out in real-time, game-style. This is handled by the engine’s built-in Matinee system, which manages these kinds of in-engine animations and events, rendering the scene at full quality without having to run it in real-time like a video game. You can imagine that with a very complicated, heavily populated scene and the graphics turned up to their full settings, you might not even get 1fps at runtime, let alone 30fps or 60fps. So Unreal simply waits until each frame finishes rendering, then renders the next one, and produces a recording of the scene that you can play back at regular speed once the rendering cycle is done. In a sense this is the traditional method of rendering 3D graphics, but it’s a lot quicker: a game-quality frame renders in a few seconds, versus hours or even days.
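The core trick here is a fixed timestep: the scene clock advances by exactly 1/fps per captured frame, no matter how long each frame actually takes to render. Here’s a minimal Python sketch of that idea; `render_frame` is a hypothetical stand-in (it just records the scene time it was asked to draw), where a real engine would rasterise the scene and dump an image to disk:

```python
# Sketch of fixed-timestep offline capture, Matinee-style.
# render_frame is a placeholder for "draw the scene at time t";
# the point is that scene time advances uniformly regardless of
# how long each individual frame takes to render.

def capture_sequence(duration_s, fps, render_frame):
    dt = 1.0 / fps                       # scene time per frame
    captured = []
    t = 0.0
    for _ in range(int(duration_s * fps)):
        captured.append(render_frame(t))  # may take seconds per frame
        t += dt                           # clock steps uniformly anyway
    return captured

# Stand-in renderer: just returns the scene time it was given.
frames = capture_sequence(duration_s=2.0, fps=30, render_frame=lambda t: t)
print(len(frames))            # 60 frames for a 2-second shot at 30fps
```

Because the clock is decoupled from wall-clock render time, the compiled sequence plays back perfectly smoothly at the target framerate even if some frames took ages to produce.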
So the appeal is that since video games achieve impressive graphical results in real-time, at 30fps, we can simply film them as we play, and get the same results as if we were shooting a film in the real world. We can take advantage of all the benefits 3D films offer, like not having to set up a scene each time we want a take; mistakes can be undone with the press of a button, and the budget for complicated effects like explosions or grand, epic sets is non-existent beyond the salary of a 3D artist and animator. Even if we can’t render high-quality scenes in a game engine in real-time, we can still take the Unreal Engine 4 approach and render the game/scene at 1fps instead, maintaining a quick output of rendered frames from a pipeline of rendering techniques that’s more restricted than a fully-featured 3D software package’s.
Even a simple scene in Blender, such as a cube in a room with a ceiling light casting shadows, can take a whole minute to render, depending on how many samples you want to use for the scene (basic terminology for how pretty you want it to be at the end of its processing). Even when you turn the samples right down to a dozen or so and make all kinds of efficiency workarounds and restrictions for the renderer, you still only achieve fuzzy, blurred shadows and low-quality lighting. Video games use different rendering methods, designed to look good in milliseconds of rendering time. Because game engines focus so heavily on this, no 3D software package has yet managed to achieve similarly high-quality results in a fraction of the time 3D artists and animators are used to waiting. Traditional 3D rendering will always achieve the best results, because it can calculate any amount of complexity within a scene, provided you’re willing to wait for however long those calculations take. Game engines, meanwhile, always provide real-time playback of your scene: if you’ve animated a scene that’s 20 seconds long, it will only take 20 seconds to play back and render. If you want to turn up the graphical settings and use a more complicated, populated scene, just take snapshots of the game running at a lower framerate and have them compiled into a video at the end; simple.
To summarise an entire article in a sentence, then: Machinima is a cost-effective and time-effective way of rendering 3D animated films, whether the Director wants to act out the scenes in real-time using in-game actors (players playing out the roles in-game), or have their 3D animations rendered through a game engine rather than a traditional 3D package. So where does that leave us? Currently, game engines are starting to use DX12, and have never looked better. Things like real-time reflections on surfaces, texture techniques such as subsurface scattering, and physically-based rendering pipelines mean we can create incredibly convincing 3D scenes in real-time that are beginning to truly rival pre-rendered 3D films. We may even reach a point where, instead of receiving video files to play on your computer, you’ll receive data files containing all of the 3D models and animations, so the film renders in real-time on your machine, and you can fly a camera around the scene if you want to detach yourself from the pre-choreographed cinematography the Director and Editor have prepared for you (think of an in-game, in-engine cutscene).
I’ll discuss filming techniques for traditional, in-game acted Machinima in my next article, and talk you through the processes I’ve used over the years in various games to achieve certain results. I’ll leave you with a few real-time animations made in the Unity engine: real-time playbacks that you could run on your computer right now, much like a video game, if you were able to get your hands on the master files.