In Part 1 of my Overview Of Machinima series, I gave an overview of what Machinima was as a medium, and what it holds the potential to become, through a few example videos I provided. In Part 2, I’m going to discuss the actual practicality and production techniques behind creating a Machinima, used by amateur filmmakers during the underground-inception of the medium.
The processes used pre-Red vs Blue were fairly universal, with little variation between each creator's take on the same core techniques. Halo had not yet established itself as the game of choice for budding Machinima creators, and RvB had yet to become the phenomenon it did. The most popular method for recording Machinima was to use the Quake-engine based games of the time and play back what is known as a demo file. Much like most modern games with replay systems, such as the more recent Halo series (starting with Halo 3), GTA V, and some of the more recent Call of Duty games, the idea was that if you enjoyed your performance, or found a specific moment where your skills were exceptional and wanted to show off to other players, you would record the match inside a demo file.
A demo file is not the same as a video replay. A replay in the traditional sense is video footage recorded onto some form of media, which can be re-viewed at any point after the original footage took place, from that particular perspective/angle. In video games, a replay usually involves recording gameplay, which is not a visual artefact that can be stored in a format such as a video file. Gameplay is composed of a game engine performing various calculations, methods/functions, and input/output operations that result in a frame of the on-screen action being rendered. It's all binary digits; 1s and 0s. Basically, without getting technical and using in-industry terminology, a demo file recording simply stores the values of object positions, rotations, animations, states, and properties, at a certain rate per second.
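To make that concrete, here's a minimal, hypothetical sketch of the kind of per-tick data a demo file holds. All of the names here are invented for illustration; no real engine uses this exact layout, but the principle is the same: numbers and asset references, not pixels.

```python
from dataclasses import dataclass

# Hypothetical, simplified demo data: just the values needed for the
# engine to re-simulate the scene using its own installed assets.
@dataclass
class ObjectSnapshot:
    object_id: int        # reference into the game's own asset library
    position: tuple       # (x, y, z) world coordinates
    rotation: tuple       # (pitch, yaw, roll) in degrees
    animation_state: str  # e.g. "running", "idle"

@dataclass
class DemoTick:
    time: float       # seconds since the match started
    snapshots: list   # one ObjectSnapshot per tracked object

# One tick of a two-player match is only a few dozen bytes on disk,
# regardless of the resolution it is later played back at.
tick = DemoTick(
    time=0.1,
    snapshots=[
        ObjectSnapshot(1, (10.0, 0.0, 5.0), (0.0, 90.0, 0.0), "running"),
        ObjectSnapshot(2, (12.5, 0.0, 7.0), (0.0, 270.0, 0.0), "idle"),
    ],
)
print(len(tick.snapshots))  # 2 tracked objects this tick
```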
Some game engines will record and store the values of objects every 1/10th of a second and interpolate between them (a smooth transition from an object's position/rotation to its next recorded value); others might record them every 1/120th of a second (double the usual 60fps framerate gamers aim for). Alternatively, an engine may create an artificial timeline where smooth, interpolatable movement and rotation are recorded at a specific refresh rate, while methods/functions are triggered at the exact moment they were called in the original match. This means characters shoot on the frame they shot at in the original match, and not once the refresh rate has caught up and recorded that change; a hybrid of refresh-rate-based updates and instant, real-time data recording.
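The hybrid approach above can be sketched in a few lines. This is an illustrative toy, not any real engine's code: positions are interpolated between snapshots taken at a fixed rate, while discrete events (like a shot being fired) keep their exact original timestamps.

```python
def lerp(a, b, t):
    """Linear interpolation between two recorded values (t in 0..1)."""
    return a + (b - a) * t

# Position snapshots recorded every 1/10th of a second: (time, x_position)
snapshots = [(0.0, 0.0), (0.1, 4.0), (0.2, 4.0)]

# Discrete events stored with their exact timestamp, independent of the
# snapshot rate -- so a shot plays back on the frame it was fired.
events = [(0.137, "fire_weapon")]

def position_at(t):
    # Find the two snapshots surrounding time t and blend between them.
    for (t0, x0), (t1, x1) in zip(snapshots, snapshots[1:]):
        if t0 <= t <= t1:
            return lerp(x0, x1, (t - t0) / (t1 - t0))
    return snapshots[-1][1]

print(position_at(0.05))  # 2.0 -- halfway between the first two snapshots
```

Playback simply steps time forward, asking `position_at` for smooth movement and firing any `events` whose timestamp has been reached.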
A demo file is also much smaller than a video file of the same playback. Because you're only storing number values and references to the original game objects and assets (since you'll be using the game engine to play back the demo file, it will re-use the original assets from its own installed library), an entire 15-minute match might only be a handful of megabytes, as opposed to the gigabytes a high-definition video file might take up. The luxury of a demo file is that since it's played back using the game engine, not only can you view it at different resolutions, with the engine scaling the UI and the rest of its assets natively to your screen, but you're also not constrained to one perspective. You can explore the 3D scene, move around, pause and view projectiles in mid-flight; you have time to find the perfect moments within the demo, and record a video yourself from various angles. All of this control is offered to you via the small container of a 1–5MB demo file that plays back in the game engine.
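A quick back-of-envelope calculation shows why the sizes differ so dramatically. The figures below are illustrative assumptions, not measurements from any real engine:

```python
# Rough, assumed figures for a demo file -- not measured from a real game.
objects = 16            # players, projectiles, vehicles being tracked
bytes_per_object = 32   # id + position + rotation + state flags
ticks_per_second = 10   # snapshot rate
duration = 15 * 60      # a 15-minute match, in seconds

demo_bytes = objects * bytes_per_object * ticks_per_second * duration
print(demo_bytes / 1_000_000)   # ~4.6 MB, before any compression

# Compare with standard-definition video at roughly 2 Mbit/s:
video_bytes = (2_000_000 / 8) * duration
print(video_bytes / 1_000_000)  # 225 MB for the same 15 minutes
```

Even with generous assumptions, the demo comes out around fifty times smaller, which is consistent with the "handful of megabytes" versus hundreds of megabytes described above.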
Quake-engine based games were used for this feature because of the small file size of demo files compared to recorded video, which also carried the restriction of being taken from one camera angle/perspective. Many gamers chose instead to share their demo files for others to play back on their own computers within the game engine. Think of how you could share your replays in Halo 3 with your Xbox Live friends, and they could download an entire match within 10 seconds or so, play back the entire match on their own Xbox, and fly around to view the action from whatever angle they desired. This is the power of demo replay within a game-engine context. As long as the person playing the demo file has the 3D objects/assets and code needed to play the original game that recorded it, their own copy of the game will simply lend its assets for use in replaying the demo file.
Some pioneers of the genre decided they would create entire narrative Machinimas within this context: act out a scene or story in-engine in real-time, with custom cameras programmed to teleport around the level to different angles. This meant that an entire 10-minute comedy sketch shot in Quake might only be 1MB, as opposed to the 500MB it could have been in standard-definition video. The restrictions of this were obvious, of course. How would you voice act? Would you give the voice clips as modifications for the game to install in a mod folder, and have your demo file reference them for playback alongside the action? This meant you would need the demo file at 1MB or so, and would also need to download the voice clips, which for a 10-minute Machinima could range anywhere from 10MB to over 100MB depending on how much dialogue there was. At this point, you might as well record it in a video format and upload that online instead. This also meant no special effects: editing was entirely reliant on the rhythm of the acting in-game, and there could be awkward pauses, or moments that couldn't be cut ahead to, because the entire performance needed to be played back in real-time. Then there's the requirement that anyone who wanted to watch this Machinima would need to own the game as well! Not much Machinima was made in this format because of these obvious limitations. It was far easier to record the footage yourself and create a traditional video of your Machinima, which let you edit freely.
Then along came Red vs Blue. Its good editing, writing, and voice acting opened up the floodgates for imitators and gave widespread awareness of Machinima as an art form/medium. With nobody considering the future of demo/replay files and their place within a narrative, storytelling-based context, there was no innovation, evolution, or experimentation from game developers with the demo recording systems included in their game engines. They were primarily seen as a method for recording matches and your best moments in-game, and never given much thought beyond that. This allowed traditional filmmaking techniques to take over, and led Machinima to grow mainly into the traditional form of filmmaking that we know today. RvB found limitations and exploits within the Halo engine, and found ways to circumvent those limitations and use those exploits to create a platform that would lend itself well to filmmaking. First of all, Halo has a well-known bug where if you lower your gun all the way downwards towards the floor, your gun is pointed down, but your face is looking straight ahead. If you moved your gun slightly up and down in a fast, alternating motion, the character's head would bob, as if they were talking. This simple bug played a major part in popularising Machinima, as it took away the aggressive stance of the characters in Halo and allowed them to be filmed in a more passive, friendlier role, which lent itself more towards creating a narrative with no expectation of action/fighting for the viewer.
The crosshair was not removable on the Xbox version of Halo, so that unfortunately remained. Equipping a pistol (the smallest on-screen weapon in the game) and placing black bars at the top and bottom of the screen to hide the UI elements and weapon model gave the illusion that the series was being filmed from a camera, and not a player's perspective. The camera was obviously limited by the constraints of the player's movement, so it would essentially have to walk everywhere, or use clever editing tricks to conceal the limitation of literally being a camera-man. One effect Red vs Blue might have used was to stand on top of a vehicle to get a higher camera viewing angle, and have the vehicle drive slowly along the camera's planned path. Speed up the footage where needed, and you have a method for creating smooth camera movement where the camera man could pan effectively without breaking the illusion that it was all just a player running along the ground.
It was only when Red vs Blue and the Machinima community got their hands on Halo 2 that amateur Machinima making really kicked off. A new bug was found in Halo 2, where charging up a Plasma Pistol and then picking up an objective-based object such as a bomb or flag meant that you would hold the flag/bomb and drop the Plasma Pistol. Once you pulled the right trigger to drop the flag/bomb on the floor, your character wouldn't be holding any equipment or weapons at all. The screen was now completely void of crosshairs or any on-screen weapon/equipment model to block the camera's view. The only limitation with this bug was that, now the game knew your character had nothing equipped, it constantly searched your immediate area for something to pick up as a default weapon, so walking over any weapon, bomb, or flag on the floor would make your character automatically pick it up and equip it. You had to plan your camera paths ahead of time, or have someone pick up the weapon and hope it didn't respawn while you were filming; otherwise you would have to die and re-perform the bug/glitch again. Another useful bug was to waste your grenades until they were all gone; by holding the left trigger (throw grenade), you would lower your gun into a folded-arm/at-ease stance, which again gave the impression that you were talking, just as in the original Halo. This was likely implemented on purpose, specifically for Rooster Teeth to create Red vs Blue, as well as for the use of the wider Machinima community. Once again, moving your view up and down would create a convincing head-bobbing movement for dialogue scenes.
Since home movie-making software wasn't cheap or very powerful at the time, very little polished Halo Machinima was made, because you needed to crop the screen or place an overlaid alpha/chroma-keyed image to block out the UI elements. These were features you would find in more expensive film editing software, so most amateur filmmakers, dependent on Windows Movie Maker (the free video editing software bundled with Windows XP), would simply leave the UI elements on screen and hope their viewers would overlook this visual annoyance. Of course, this also meant that mainstream viewing of Machinima wouldn't happen, as the only viewers of Machinima in this state would be other Machinima creators themselves, or very devout/niche groups of gamers who were loyal fans of the game being used. There were exceptions, but generally, there wouldn't be a mass audience for Machinima until this was fixed, and it was presented in a more comfortable and familiar visual style that viewers associate with regular TV/film.
This began to change slowly, over time. While loyal Machinima makers would go out of their way to find mods and other work-arounds to remove HUD elements and weapon models from the screen, when programs such as Sony Vegas and Adobe Premiere came down to affordable prices, providing the means of adding black bars or cropping to a certain area of the screen, quality filmmaking became accessible to more Machinima creators. Now that Machinima was finally looking like a proper TV show or film, with no health bars, crosshairs, or weapons on screen, it was finally on a level playing field with traditional filmmaking from a technical standpoint. Websites and forums were flooded with people recruiting actors for their Halo 2 Machinima projects. With Halo 2 the most popular title of the era, Machinima began to pick up around 2004-5, and creators would use any game that offered a first-person perspective and the ability to remove the HUD to bring their projects to life.
That became the criterion for a Machinima-enabled game engine: screen-space viability. Since consoles weren't really in high definition quite yet, Machinima was still stuck in 480p standard definition, which meant screen-space real estate was at a premium. If your game couldn't make its screen completely blank and first-person, then you would hope there was a blank portion of the screen that you could crop or zoom into for your project. PC gaming was already running at higher resolutions at this point, so this wasn't a problem for PC Machinima creators. For console Machinima creators, however, it meant that their already low-resolution Machinima might need to go even lower to maintain that film-quality look, cropping out the game elements of the screen that gave it away.
Assuming, however, that you had worked all of this out, found a suitable game, and had something a little more powerful than Windows Movie Maker on hand to create your films, you would use video capturing software for your PC games, or, for consoles, what is known as a capture card. PC game recording was fairly well supported: with programs such as FRAPS, or by using desktop recording software, the process was fairly straightforward. For console gamers, this meant finding a USB capture card to plug into a computer, plugging your console's video and audio output cables into the card, and your computer would then recognise the capture card as a webcam-style device that you could record from in your video editing software.
Capture cards were expensive, and I remember saving up for a while to get one. For UK Machinima makers, this was a horrible setup to figure out, because of the PAL60 video output signals used by a portion of games at the time. If your game outputted an NTSC-style 60Hz signal from your console, most cheap capture cards would not accept it as compatible for recording on your PC, and would just record a black screen (the audio still worked fine, though!). As most teenagers getting into Machinima at the time would find out the hard way, this meant you had to check whether your UK copy of the game would output its signal as PAL60 or PAL50 (the signal supported by most UK-sold capture cards). Even then, some capture cards supporting PAL60 input would be lower quality than their PAL50 equivalents, and the compression codecs they used to send the video to the PC might be blurry/fuzzy compared to others. Finding the right capture card was a nightmare, resulting in most Machinima creators flooding the forums with questions on what to buy, usually answered with a Turtle Beach capture card (yes, the headset company!) or a Dazzle DVC. A few years later, with high definition entering the fray, HD options became available for recording from an Xbox 360 or PS3, one of the more popular being a BlackMagic card, which came in both external (USB device) and internal (PCI slot card) varieties to record consoles at 720p/1080p. All of this came, of course, after you worked out what type of cables your console outputted – composite, component, or HDMI – which influenced which capture card you would opt for as well.
Once video was taken care of, there was audio. Would you use voice chat and record the dialogue in real-time alongside the acting, capturing the voices as part of the recorded game audio? Cutting to a new camera angle would mean the audio changed drastically within a split-second, as the wind speed changed, new sound effects appeared, or some other time-sensitive audio cue was cut out abruptly, or included suddenly halfway through playback. This was unprofessional, but quick. Because the dialogue was included in the game's recording, it didn't need much editing; it was a case of simply ordering the scenes in your editing software and trimming them to the correct lengths. If you wanted to make sure the background ambience was correct, however, then you would have to use only the video recordings of the game and place your own dubs/sound effects onto the footage in the editing process, as is done in real filmmaking. This is obviously much more tedious, and requires a huge amount of effort from the editor. It makes a real difference, however, and places your work into a much more professional area of quality and polish. It's the kind of thing that goes unnoticed when done correctly, but is noticed when absent. For most people wanting to have fun with their friends, just recording the in-game audio and voice chat was fine for their needs and audience.
So, for most games, running around with a blank screen and the UI/HUD removed was the way most creators filmed their masterpieces. While Halo remained the most popular choice due to Red vs Blue and everyone trying to emulate its success (as well as Halo's popularity in general), other games were frequently used too. Titles such as World of Warcraft involved very different methods, such as green-screening the model viewer's characters onto background images/videos recorded in-game, but you could ultimately see the seams, as characters weren't standing on the ground at the correct angles, or looked out of place in the environment. The Sims 2 had an actual free-cam mode that would remove UI elements and allow you to record the game at high resolutions, which is still present in The Sims 3 and The Sims 4, I believe. The processes used are virtually endless, as they are tailored and adapted to each game's needs and restrictions, but having read how Halo 2 creators managed these problems with the bugs/glitches I mentioned, you hopefully have an idea of the kinds of work-arounds Machinima creators used to achieve their filmmaking dreams. Anyone with a video game and video recording software/hardware could be their own Michael Bay without having to dream up a budget of millions to make their visions a reality. With everyone else online willing to act for free, you could avoid having to animate your own Machinima and rely on in-game players to act out the various roles in real-time for you. This meant that if you were able to pick up a controller and play a game, you were qualified enough to film with it, provided you found ways of clearing the screen of the UI, or ways around that limitation.
In Part 3, we'll look at some of the Machinima communities that were around at the time Machinima began to become really popular, and what websites were available when YouTube hadn't really found its footing quite yet. We'll also take a look at some important examples of Machinima that stood apart from the typical amateur affair, and began to showcase the potential for Machinima as a commercial medium with their high-quality production values.