How AI can Revolutionise the Game Industry

When Lee Sedol, the world champion Go player, defeated the AI AlphaGo in their fourth match, the people of South Korea rejoiced.

Go, an ancient strategy board game, is integral to South Korean culture, and Sedol is one of the greatest players in the country’s history. He had already lost the five-game series against AlphaGo, having resigned the first three games, but the South Koreans didn’t care – he was the human champion who had scored a win against an AI that had seemed omnipotent at the ‘most complex game devised by man.’

Lee Sedol, one of the greatest Go players in South Korean history (Courtesy AP)

Sedol lost the fifth game too and three years later in 2019, he retired from the professional circuit, stating that even if he was number one, there was an ‘entity that could not be defeated.’ He now trains other AI Go programs.

AI in the Game Industry: An Overview

Today’s deep-learning neural networks, loosely modelled on how humans learn, can be trained on vast data sets to achieve superhuman proficiency at specific tasks – AlphaGo learned to play Go, and then mastered it. Generative AIs use such neural networks to create new content in response to a textual or visual prompt, or even certain contextual cues.

In this blog we will explore various types of AI tool sets that are applicable to game development. Each game contains thousands of models, textures and other assets, and AI can be harnessed to generate these at scale, at a fraction of the cost and time currently spent developing them. We will also discuss companies that are working on, or already offering, AI solutions for key parts of the game asset pipeline.

We will also delve into the use of AI in game testing and playtesting – AI can potentially automate quality assurance. As games have grown bigger and bigger, quality assurance has become increasingly challenging, and AI can help spare developers the thankless, time-consuming tasks of playtesting and bug-fixing.

AI is thus both a literal and figurative game changer for developers, and in the following sections we deal with the main contexts in which AI is being used to help streamline how games are made – from the creation of game assets to the testing of games in the development phase. 

The Generative Revolution in Game Development

According to venture firm Andreessen Horowitz, even small game studios can finally achieve high quality without punitive cost and time requirements, because they can harness generative AI tools to create game content with unprecedented ease.

Generative AI thus holds great promise for gaming because the AAA game industry has a steep barrier to entry – consider the budget, the man-hours, and the crunch behind games like Red Dead Redemption 2 (RDR 2, 2018) and other large-scale games. In fact, RDR 2’s estimated budget of $540 mn comfortably exceeds that of the most expensive Hollywood film, Pirates of the Caribbean: On Stranger Tides ($379 mn).

To compete with the likes of Rockstar, developers need to find cost-effective tools for the game development pipeline, and even giants like Rockstar or Ubisoft can benefit from such solutions – in fact, Ubisoft is working on both an AI-powered animation system, and an AI bug-fixing tool. Quite a few studios are hence already trying to enhance their workflows with AI, as we will discuss below.

2D Assets and Concept Art

AI-powered programs such as MidJourney, Stable Diffusion and Dall-E 2 can generate high-quality image assets, such as concept art and 2D game content, from text prompts, and they have already found a place in game asset production – one developer, working alongside a professional artist, used these AI generators in tandem to create concept art within days rather than weeks.

These aren’t enterprise tools – they are available to enthusiasts as well, and YouTube has videos on how to generate concept art or any other type of 2D image using such AI generators for free.

Character Concept Created by MidJourney in Response to a Text Prompt

Ludo, in turn, offers an image-generation solution geared towards studios. It is an AI-powered game ideation and creation platform intended to streamline the creative process of game development, and one of the ways it helps developers is by using Stable Diffusion to generate images during the ideation phase, and even to create high-quality 2D artwork and assets further down the pipeline.

Concept Art Generated by Ludo’s Image Generator (Courtesy Ludo)

In fact, the content created by Ludo’s image generator can be fine-tuned to the studio’s needs. Developers can use keywords to generate game images, icons and even more detailed assets like character concepts, in-game items and more. The image generator can also transform one image to another, essentially creating variations of the input, or rendering it with different art styles. Developers can also condition Ludo to use specific colours, styles and themes to get results that are consistent with their art design.

As is perhaps evident, AI-powered 2D art generation is quite mature already, and can be deployed not just by studios but even by hobbyists who want to use these tools to generate images, or even use such images as references for their original artwork.

3D Artwork and Models

AI-generated 3D artwork is yet to be wholly integrated into the asset creation pipeline, but Nvidia is to some extent leading the charge on this aspect of game development with its Omniverse. In fact, the very purpose of the Omniverse, per Nvidia, is to help individuals and teams develop seamless, AI-enhanced 3D workflows.

Developers using the Omniverse can deploy Lumirithmic to generate high-fidelity, movie-grade 3D head models from facial scans. Lumirithmic is a scalable solution that can be used in just about any digital content creation pipeline.

Facial Scans Turned into High-Quality 3D Models (Courtesy Lumirithmic)

Elevate3D uses 360-degree videos of products to make highly accurate 3D models, which can then be used in product presentations, demos and even animations. In this video, the user captures a 360-degree video of a hair-care product on a turntable using their cellphone and feeds it to Elevate3D – and that’s all it takes to create a full 3D model that can be modified and rendered inside a 3D application. Elevate3D could be perfect for creating in-game props and items, especially for realistic games set in the present. A game like Grand Theft Auto V (2013) has countless items and props, and Elevate3D could allow the studio to devote more time to hero assets – the ones the player focuses on and interacts with – rather than to the mundane props that flesh out the game world.

Perhaps the most tantalising Omniverse solution is Get3D, an AI tool that can generate detailed models with textures using just 2D images, text prompts and random number seeds. The tool can also generate variations for its models, apply multiple textures to a model on the fly, and can even interpolate between two generated models – for example, morph a fox into a dog and then into an elephant and so on. 

Get3D can Dynamically Morph 3D objects from One Form to Another (Courtesy nVidia Labs)

One can only imagine what the industry could achieve with something like Get3D. The morphing feature could prove incredibly powerful in asset creation – imagine a game world where every creature is generated at runtime from a single 3D base. Currently, game models are carefully constructed and textured by hand, and then optimised for use in-game. If Get3D matures into a scalable solution, developers could experiment with every imaginable 3D creature concept and simply feed their concepts into Get3D to bring an entire ecosystem of creatures into the game.

Level Design and World Building

Level design and world building have become increasingly relevant – and challenging – as game worlds have grown larger and more complex. One of the more promising companies in this space is Promethean AI. The company aims to address the challenge of creating large and detailed game worlds at scale. 

It was founded by Andrew Maximov, a former lead artist who collaborated with hundreds of other colleagues while working on the game worlds of the Uncharted series – a process he characterises as ‘overwhelming’ at times. 

Promethean AI is intended to make world-building a less onerous task, and seems absurdly easy to use. A human artist tells it to build a bedroom, and it does. They then ask it to add a desk, and it does, and so on – the AI keeps plugging pre-created assets into the 3D space as needed and as specified. This patent-pending machine-learning solution spares artists the drudgery of adding and removing 3D assets into the environment, and allows them to concentrate on the virtual world as a whole. 

A Room Generated Purely via Audio Prompts to an AI (Courtesy Promethean AI)

Promethean AI’s output can be fine-tuned to the artist’s style, and it can generate much of the game world, allowing the artist to polish and tweak its output into a high-quality game environment. This process is scalable, allowing developers to make ever larger games. Promethean AI is also integrated into the Unreal Engine, allowing even enthusiasts to experiment with its capabilities.

Promethean AI can replace and improve upon procedural generation – a core component of games like No Man’s Sky (2016). Procedural techniques enabled indie developer Hello Games to create a vast space exploration game with limited resources. However, No Man’s Sky did not quite live up to the hype at launch – lacking key features promised by the developer – and incurred severe backlash from gamers. If Hello Games had had a tool like Promethean AI at their disposal when they were building a universe with 256 galaxies to explore, they may well have been ready at launch with all the features they had promised.

No Man’s Sky, a Game that Uses Procedural Content Generation (Courtesy Hello Games)

Dynamic In-Game Music

Numerous companies are at work on AI music generators that can change tracks on the fly, in real time – perfect for games, where in-game music is meant to change with the context and even transition seamlessly from one track to another, serving as an audible cue that tells the gamer what to expect in a given setting.

Activision Blizzard has a solid head start in this department. In 2022, the company patented a new AI-driven music generation system, which goes beyond just randomising music or generating musical cues procedurally. 

The AI creates music specific to each player using machine learning trained on contextual data such as the player’s actions, their in-game behaviour patterns, their skill level and the in-game situation. Most games do use specific musical cues for different contexts (combat music vs exploration music in a game like The Witcher 3), but Blizzard’s AI can create new music (or at least variations on a theme) for any in-game context based on the data on which it is trained. The AI can even modulate the music tracks’ beat, tempo, volume, and length based on the player’s actions. 
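
The patent’s internals are not public, but the general shape of such a system – game state in, track parameters out – is easy to sketch. Below is a minimal, hand-written Python stand-in for what would, in Blizzard’s case, be a learned model; every input, threshold and parameter here is an illustrative assumption:

```python
# Illustrative sketch of context-driven music modulation. In a real system a
# trained model would map player data to musical parameters; this hand-written
# mapping merely shows the shape of the idea (all values are assumptions).

def music_params(in_combat, player_health, player_skill):
    """Map game context to track parameters such as tempo and volume."""
    tempo = 90 + (60 if in_combat else 0)       # bpm rises in combat...
    tempo += int(30 * (1.0 - player_health))    # ...and as health drops
    return {
        "tempo_bpm": tempo,
        "volume": 0.8 if in_combat else 0.5,
        # Denser musical layers for more skilled players.
        "intensity_layer": min(3, int(player_skill * 4)),
    }

print(music_params(in_combat=True, player_health=0.35, player_skill=0.9))
# {'tempo_bpm': 169, 'volume': 0.8, 'intensity_layer': 3}
```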

Realistic AI-Based Animations

Several companies in the game industry are working hard on streamlining the process of creating seamless animations – many games suffer from stiff transitions, awkward rag-dolls and other immersion-breaking glitches because animation for video games is complex, and constrained by hardware limitations as well. 

Move.ai is an application that allows for motion capture (mocap) in any setting using any camera, including phone cameras, and uses deep learning to turn the captured motion into an animation. It is also integrated with the Omniverse – developers in Nvidia’s ecosystem are spared at least some of the complexity of animating game characters.

Perhaps Electronic Arts’ in-house tool HyperMotion constitutes the most robust use of machine learning for animation creation. EA had 22 professional footballers play in mocap suits and fed 8.7 million frames of motion capture into a machine learning algorithm, which then learned to create animations in real time, making every interaction on the field more realistic. The ML-Flow algorithm’s animations allow players to strike and control the ball with ease and precision.

EA’s researchers are also working on a deep-learning solution for fluid movements and transitions in particularly challenging animations, such as martial arts manoeuvres. A martial arts game – or a Mortal Kombat title – will simply break if its animations are crude, so animators must carefully edit, mix, blend and layer motion sources to produce seamless results. The deep-learning framework is meant to automate this manual layering using neural networks.

Neural Layering for Creating Seamless, Complex Animations (Courtesy EA)

Ubisoft is also trying to solve the problem of creating seamless animations at scale. The company’s Learned Motion Matching System uses AI to improve animations created by using motion capture as a base. Motion matching is behind some of the best animation systems achieved in games, and it essentially allows mocap to be utilised in creating realistic game animations. Mocap can be highly detailed and realistic, but is raw, unstructured data – a digital recreation of movement, like a person walking in a circle, cannot simply be plugged into a game’s animation system without manual editing.

Raw, Unstructured Mocap Animation (Courtesy Ubisoft)

Motion matching is the process by which such data is translated into other movements – like making the character walk back and forth, instead of in circles. This is done by hand-picking motion data and adding various subtle tweaks manually to interpolate mocap data with other animations. 

Mocap Translated into Animations Usable in-Game (Courtesy Ubisoft)

Motion matching systems, however, can hog system memory and scale poorly. Ubisoft’s Learned Motion Matching uses machine learning not only to automate motion matching, but also to let it scale without straining memory resources.

AI in Playtesting and Quality Assurance

Generative AI is an alluring prospect for developers, especially considering the reduction in the cost and time of making quality game assets. But AI can also assist in yet another time-consuming, costly aspect of game development: quality assurance (QA). As games grow bigger and bigger, QA has become increasingly challenging.

Studios have two options for bug testing or playtesting – bots or human play testers (or both in tandem). Humans are far better at identifying problems, but they are also prone to exhaustion and distraction: they must play the same levels over and over, repeatedly checking for exploits, unexpected behaviours, random instability and more, in a draining process designed to weed out every possible bug. Bots never get exhausted or distracted, no matter how many times they play a level, and they scale easily – but they can’t match a human’s capacity to identify bugs.

The QA testers for Fallout 76 (2018) were essentially put through the grinder because of the game’s troubled development cycle and bad launch. AI can spare humans the thankless task of bug testing, playtesting, and patching buggy code. 

Fallout 76’s QA Testers Endured Severe Crunch During Development and After Launch (Courtesy Bethesda)

AI That Learns to Playtest

EA’s researchers have achieved promising results with AI playtesting by using a technique called reinforcement learning (RL), in which the AI is trained through rewards and punishments – rewarded for desired behaviour and penalised for undesired outcomes.

RL agents master games by modelling their actions on the rewards and punishments they receive. Losing territory in Go is a punishment, while gaining ground is the way to victory. In video games, levelling up or killing a boss is a reward, but dying is a punishment. As the RL agent continues to play the game, it rapidly learns to avoid punishment and seek rewards. It soon achieves superhuman proficiency at the game and can start identifying bugs and other in-game problems. However, an AI trained to master a particular game has a very narrow range – it can achieve superhuman results at Go or Dota, but not much else, and cannot playtest another game unless it goes through the exact same process of reinforcement learning all over again. 
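
As a concrete illustration of the reward-and-punishment loop described above, here is a minimal Q-learning agent in Python that learns to reach a goal state in a toy ‘level’. The environment, rewards and hyperparameters are assumptions for illustration – EA’s agents are far more sophisticated:

```python
# Minimal Q-learning sketch: the agent learns which action each state favours
# by balancing immediate rewards against discounted future rewards.
import random

N_STATES, GOAL = 6, 5      # toy level: states 0..5, state 5 'defeats the boss'
ACTIONS = [-1, +1]         # step left / step right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        # Reward shaping: reaching the goal is rewarded, wasted steps punished.
        reward = 10.0 if next_state == GOAL else -1.0
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (reward + gamma * best_next
                                             - q_table[(state, action)])
        state = next_state

# After training, the learned policy marches straight toward the goal.
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES)})
```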

Researchers at EA essentially used reinforcement learning to make AIs better at playtesting rather than playing, by pitting two sub-AIs against each other. One AI creates levels or environments, and the other tries to ‘solve’ these challenges. The solver is rewarded for successfully completing a task, challenge or level. The AI making the environments is rewarded for creating a challenging level that still remains playable.  

The AI’s range is thereby widened – it is trained to generate more, and more complex, levels, and also to become more versatile at testing them. Essentially, this technique allows a developer to test the game even during the development stage, by letting EA’s AI duo create and test maps based on game assets and code.
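
EA has not published its exact reward functions, but the paired scheme the researchers describe can be sketched in a few lines – the generator is punished for unplayable levels and paid more for harder ones, while the solver is simply paid for finishing (the numbers below are illustrative assumptions):

```python
# Sketch of the adversarial reward pairing: one agent builds levels, the
# other solves them, and their rewards pull in opposite directions.

def generator_reward(solved, steps_taken, step_budget):
    if not solved:
        return -1.0                    # unplayable levels are punished
    return steps_taken / step_budget   # harder-but-solvable levels pay more

def solver_reward(solved):
    return 1.0 if solved else -1.0     # the solver just wants to finish

print(generator_reward(solved=True, steps_taken=45, step_budget=60))  # 0.75
```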

EA’s research is still in a nascent stage, and it may be a few years before it is implemented. But a sufficiently advanced AI for playtesting can allow human QA testers to focus on issues that cannot be easily identified by AI. Fallout 76’s QA testers may have been spared a lot of toil if they had had such AI tools at their disposal.

Ubisoft’s Bug-Preventive AI

Ubisoft has taken a different but equally novel approach to bug fixing – squashing bugs before they are even coded. Ubisoft fed its Commit Assistant AI ten years’ worth of code from its software library, training the AI to identify where bugs were historically introduced and how and when they were fixed, and then to predict when a coder is about to write buggy code – essentially creating a ‘super-AI’ for its programmers.

Ubisoft claims that bug-fixing during the development phase can swallow up to 70% of costs. The Commit Assistant has not been integrated into the coding pipeline but is being shared with select teams, as there are concerns that programmers may baulk at an AI that is telling them they are doing their job wrong. Ubisoft wants the AI to speed up the coding process – it wants its coders to treat the AI as a useful tool rather than a hindrance, and intends to be completely transparent about how the AI was trained. 
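
Ubisoft has not disclosed the Commit Assistant’s internals, but a bug-risk predictor trained on commit history can be sketched with off-the-shelf tools. The features, toy data and model choice below are all assumptions, purely to show the shape of the approach:

```python
# Hedged sketch of commit-level bug prediction: train a classifier on
# historical commits labelled by whether they later needed a bug fix.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-commit features: lines changed, files touched,
# complexity delta, hour of day, author's recent bug rate.
X_train = [
    [120, 4,  9, 23, 0.30],
    [ 10, 1,  1, 10, 0.02],
    [340, 9, 15,  2, 0.25],
    [ 25, 2,  2, 14, 0.05],
]
y_train = [1, 0, 1, 0]  # 1 = commit was later implicated in a bug fix

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Flag a risky-looking commit before it is merged.
risk = model.predict_proba([[200, 6, 11, 1, 0.20]])[0][1]
print(f"estimated bug risk: {risk:.0%}")
```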

A limitation that can plague any AI-based bug fixing is the problem of bug reporting. AIs can be trained on data sets to master games, and even become proficient at identifying bugs. But how would an AI report bugs, considering that one of the crucial aspects of bug fixing is having recourse to a well-written bug report?

OpenAI’s ChatGPT can converse with humans, answer their questions and is also proficient at fixing code. It is even better at bug-fixing when engaged in natural-language dialogue with a human coder. Ubisoft’s Commit Assistant could well be trained, like ChatGPT, to communicate with coders and build trust, and playtesting AIs may need natural-language dialogue capabilities to tell humans about game bugs – or even fix them while conversing with programmers, as ChatGPT does.

What AI Implies for the Game Industry’s Future

We have discussed various AI tools that can assist studios in the game development pipeline. In a sense, many of these solutions are meant to reduce manual work and allow developers to focus on the things that really count. 

However, AI can also be used in novel ways that have nothing to do with the asset pipeline or game development, and can also empower very small teams to create ambitious games, democratising the game industry. But this revolution can end before it begins as legal issues loom over AI generators – and we will deal with these issues in brief, before discussing how AI can change the game industry’s landscape.

The Legal Wrangle over Generative AI

Multiple lawsuits have been filed against AI-powered image generators. One of the plaintiffs is none other than Getty Images, a behemoth that owns one of the world’s largest repositories of images, vector graphics, videos and other media, and predominantly provides stock photos for corporations and the news media. 

Getty’s suit contends that Stability AI, the creator of Stable Diffusion, copied over 12 million images from its stock library without ‘permission or compensation’, as part of an effort to ‘build a competing business’. As we have said before, generative AI tools are trained on vast datasets. But if such data is copyrighted and trademarked, then they arguably need to be paid for – whether the image is supposed to fill out a newspaper column or a corporate brochure, or an AI’s training dataset. 

This lawsuit, and another filed by three artists, threaten the continued existence of Stable Diffusion or MidJourney unless the courts rule that providing an AI with copyrighted images solely for training purposes constitutes fair use, especially as AI generators arguably transform the data they are fed with to create original content. 

One can argue that such lawsuits have already done enough damage. Litigation takes years, and game studios, filmmakers and other media houses that could use such AI tools will be wary of integrating them into the pipeline until the legal tangle is resolved. However, AIs such as Ubisoft’s Commit Assistant and EA’s HyperMotion are arguably immune from litigation, as their training data is generated in-house – legal issues over copyright can be circumvented by sourcing data through the right means.

AI-Powered Innovation in Gaming

As early as 2018, Activision was using machine learning to help players improve their gaming skills. Activision’s tool was integrated into Alexa and helped train gamers to get better at Call of Duty: Black Ops 4 (2018). The tool is no longer available, but it was an interesting experiment in using machine learning and a human-like interlocutor, such as Alexa, to guide gamers through a play session.

Activision Created a Short-Lived AI Coach to Guide Players through Call of Duty: Black Ops 4 (Courtesy Activision)

While Activision’s experiment was short-lived, Ludo’s solution for de-risking the gaming industry may well become integral to the game maker’s toolkit. As we have mentioned above, Ludo helps developers ideate and develop 2D assets with its image generator. It also has a market analysis tool that can help studios get a good sense of how their game might perform. 

The developer can feed the AI-based tool a proposed game concept, and Ludo will scour its vast database to determine if the idea has been thought of before. This is critical for mobile game developers, who work in a crowded field where games struggle to rise to the top. Ludo can also identify trending genres and top charts, helping developers model their game on ideas and titles that are performing well. Since its launch in 2021, Ludo has attracted more than 8,000 developers.

Conclusion

In recent years, academic papers about generative AI have been published at an exponential rate and many companies are now working on AI-based solutions not only for gaming, but for other industries too. This spike in research and development has been called a ‘Cambrian explosion’, likening the emergence of generative AI toolsets to the proliferation of animal species during the Cambrian Period 539 million years ago. The game industry stands to benefit immensely from this surge in practical AI solutions.

However, using AI to enhance game development is not without its challenges – generative AI is a nascent field and legal issues loom over it already. Even Ubisoft is treading lightly with its bug-preventive AI so that programmers can gradually accept the tool as a benefit rather than a hindrance. 

Despite these challenges, AI has the potential to democratise the gaming industry and act as a force multiplier for small developer teams, empowering them to make ambitious games by using AI to streamline the game development process, circumvent budget constraints, and even innovate with AI tools to create truly unique gaming experiences. 

Gameopedia can provide tailored solutions to meet your particular data needs. Contact us to receive useful information and derive actionable insights regarding generative AI techniques, and their impact on the game industry in general and game development in particular.



Game AI: Breathing Life into Digital Denizens

The outlaw Arthur Morgan has waylaid a rich-looking man, who is sprawled on the grass. Morgan fires a warning shot in the air to assert his dominance. A moment later, a bird flops to the ground, felled by his bullet. 

A gamer inadvertently fired this one-in-a-million shot in Red Dead Redemption 2 (RDR 2), and his clip of the scene went viral, with fans wondering if the bird’s death at the hands of RDR 2’s protagonist was a scripted event. It is unlikely to have been scripted, but is rather the result of the interplay between complex AI systems in RDR 2 (2018). 

When attacked, the AI-driven NPC responds realistically, trying to fend off the player; in response, the player fires a warning shot, and as a result another AI-driven NPC – a hapless bird – meets an untimely demise while flying directly overhead. The bird’s flight path was not scripted so that it would be shot down by the player; the bird was merely following its own routine. RDR 2 endows both human and animal NPCs with complex behaviours and schedules, and the bird’s death is just one of the outcomes that arise when such complex AI systems intersect.

In this blog we will explore key aspects of game AI and the development of seemingly intelligent behaviours in NPCs across various games and franchises. Game AI has evolved from the simple computer-operated opponents of Pong and early arcade games to increasingly complex NPC agents in games such as RDR 2, the Halo franchise and Bethesda’s Elder Scrolls games. Developers have contributed significantly to defining, and redefining, game AI, and games such as RDR 2 have pushed game AI to its limits, creating NPCs so convincing that they seem to have a life of their own.

What is Game AI?

Artificial intelligence in games – or game AI – is used to generate apparently intelligent and responsive behaviours mostly in non-player characters (NPCs, including human, humanoid and animal agents), allowing them to behave naturally in various game contexts, with human or humanoid characters even performing human-like actions. 

AI has been integral to gaming from the arcade age – AI opponents became prominent during this period with the introduction of difficulty scaling, discernible enemy movement patterns and the triggering of in-game events based on the player’s input. 

Game AI is distinct from the sort of AI we have become familiar with today, which is powered by machine learning and uses artificial neural networks. A key reason why in-game AI has remained distinct from today’s AI constructs is that game AI needs to be predictable to some degree. A deep learning AI can rapidly become unpredictable as it learns and evolves, whereas game AI should be controlled by algorithms that give the player a clear sense of how to interact with NPCs to achieve their in-game goals.  

According to Tanya Short, game designer and co-founder of Kitfox Games, game AI is to some extent “smoke and mirrors” – complex enough to make players think they are interacting with a responsive intelligence, yet controlled and predictable enough that gameplay doesn’t go awry.

Within this relatively narrow scope, however, in-game AI can be quite complex, and game developers expertly fake the illusion of intelligence with various clever tricks – some developers have even experimented with giving more freedom to game AI, leading to interesting and unforeseen results.

What Types of AI are Used in Gaming?

Arcade games were the first to use stored patterns to direct enemy movements, and advances in microprocessor technology allowed for more randomness and variability, as seen in the iconic Space Invaders (1978). Stored patterns in this game randomised alien movements, so that each new game had the potential to be different. In Pac-Man (1980), the ghosts’ distinct movement patterns made players think they had unique traits, and made them feel they were up against four distinct entities.

Space Invaders Used Stored Patterns to Randomise Enemy Movements (Courtesy Taito)


Over the years, certain key game AI techniques, such as pathfinding, finite state machines and behaviour trees, have been crucial in making games more playable and NPCs more responsive and intelligent. We delve into these below.

Pathfinding

A relatively simple problem for humans – getting from point A to point B – can be quite challenging for AI-driven agents.

The answer to this problem is the pathfinding algorithm, which directs NPCs through the shortest and most efficient path between two parts of the game world. The game map itself is turned into a machine-readable scene graph with waypoints, the best route is calculated, and NPCs are set along this path. 

Such an algorithm is particularly important and prevalent in real-time strategy (RTS) games, where player-controlled units need pathfinding to follow commands, and enemy-controlled units need the algorithm to respond to the player.
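
The best-known pathfinding algorithm in games is A* search, which expands the cheapest known route first while a heuristic pulls the search toward the goal. Here is a minimal grid-based sketch in Python (the grid is an illustrative assumption – production games typically search navmeshes or waypoint graphs, as described above):

```python
# Minimal A* pathfinding on a grid, where 1 marks a blocked cell. The
# Manhattan-distance heuristic steers the search toward the goal.
import heapq

def astar(grid, start, goal):
    def h(cell):  # admissible heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (estimate, cost, cell, path)
    visited = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in visited):
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1,
                                          nxt, path + [nxt]))
    return None  # no route exists between the two points

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the wall of 1s
```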

Early pathfinding algorithms used in games such as StarCraft (1998) ran into a problem – every unit lined up and took the same path, slowing down the movement of the entire cohort. Games then tried various solutions – Age of Empires (1997) simply turned cohorts into actual formations to navigate the best route, while StarCraft II (2010) used ‘flocking’ movement, or ‘swarming’, an algorithm devised by AI pioneer Craig Reynolds in 1986. Flocking simulates the movement of real-life groups such as flocks of birds, human crowds moving through a city, military units and even schools of fish.
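
Reynolds’ flocking model drives each ‘boid’ with three steering rules – cohesion, alignment and separation. The sketch below applies those rules in their simplest form; the weights and comfort radius are illustrative assumptions:

```python
# One simulation step of Reynolds-style flocking: each boid steers toward the
# group (cohesion), matches neighbours' velocity (alignment) and avoids
# crowding (separation).

def flock_step(boids, dt=0.1):
    """boids: list of dicts with 'pos' and 'vel' as (x, y) tuples."""
    updated = []
    for b in boids:
        others = [o for o in boids if o is not b]
        n = len(others)
        # Cohesion: vector toward the centre of the other boids.
        cx = sum(o["pos"][0] for o in others) / n - b["pos"][0]
        cy = sum(o["pos"][1] for o in others) / n - b["pos"][1]
        # Alignment: difference from the flock's average velocity.
        ax = sum(o["vel"][0] for o in others) / n - b["vel"][0]
        ay = sum(o["vel"][1] for o in others) / n - b["vel"][1]
        # Separation: push away from boids inside the comfort radius.
        sx = sy = 0.0
        for o in others:
            dx = b["pos"][0] - o["pos"][0]
            dy = b["pos"][1] - o["pos"][1]
            if (dx * dx + dy * dy) ** 0.5 < 1.0:
                sx, sy = sx + dx, sy + dy
        vx = b["vel"][0] + 0.01 * cx + 0.05 * ax + 0.1 * sx
        vy = b["vel"][1] + 0.01 * cy + 0.05 * ay + 0.1 * sy
        updated.append({"pos": (b["pos"][0] + vx * dt, b["pos"][1] + vy * dt),
                        "vel": (vx, vy)})
    return updated

flock = [{"pos": (0.0, 0.0), "vel": (1.0, 0.0)},
         {"pos": (0.5, 0.5), "vel": (0.0, 1.0)},
         {"pos": (2.0, 1.0), "vel": (1.0, 1.0)}]
print(flock_step(flock)[0])  # the first boid, nudged by all three rules
```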

StarCraft II Used the Flocking Algorithm to Refine Pathfinding During Gameplay (Courtesy Blizzard)

Finite State Machines

Finite state machines (FSMs) are algorithms that determine how NPCs react to various player actions and environmental contexts. At its simplest, a finite state machine defines various ‘states’ for the NPC AI to inhabit, based on in-game events. NPCs can transition from one state to another based on the context and act accordingly. If the NPC is designed to be hostile to the player character, seeing the player may lead it to run toward them and attack. Defeating this NPC may impel it to run away, go into a submissive mode, or simply enter the death state (i.e., die).

In fact, FSMs can ‘tell’ an NPC to ‘hunt’ players based on cues like audible or visible disturbances to the environment – this is a staple of stealth games, and the Metal Gear Solid franchise has used the hunting mechanic to create tense situations between the player and the NPC. Finite-state machines can also tell NPCs how to survive – under attack, they can take cover to improve health levels, reload ammunition or search for more weapons, and generally take action to evade death at the player’s hands. 
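
The mechanics are easy to see in code. Below is a minimal FSM for a hostile NPC, written as a transition table; the states and events are illustrative assumptions, not taken from any shipped game:

```python
# A hostile NPC's finite state machine as a (state, event) -> state table.
TRANSITIONS = {
    ("idle",   "sees_player"): "attack",
    ("idle",   "hears_noise"): "hunt",     # the stealth-game 'hunting' cue
    ("hunt",   "sees_player"): "attack",
    ("hunt",   "lost_player"): "idle",
    ("attack", "low_health"):  "flee",     # take cover and try to survive
    ("attack", "player_dead"): "idle",
    ("flee",   "healed"):      "attack",
    ("attack", "killed"):      "dead",     # the death state
}

def next_state(state, event):
    # Unknown (state, event) pairs leave the NPC in its current state.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["hears_noise", "sees_player", "low_health", "healed"]:
    state = next_state(state, event)
    print(f"{event} -> {state}")   # hunt, attack, flee, attack
```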

The Metal Gear Solid Franchise Uses ‘Hunting’ Behaviours to Lend Realism to Stealth Gameplay (Courtesy Konami)

Behaviour Trees

Unlike finite state machines, a behaviour tree controls the flow of decisions made by an AI agent rather than the states it inhabits. The tree comprises nodes arranged in a hierarchy. At the far ends of this hierarchy are ‘leaves’ that constitute commands that NPCs follow. Other nodes form the tree’s branches, which the AI selects and traverses based on the game context to give NPCs the best sequence of commands in any particular situation. 

Behaviour trees can be extremely complex, with nodes attached to entire sub-trees that perform specific functions, and such nested trees enable the developer to create whole collections of actions that can be daisy-chained together to simulate very believable AI behaviour. As such, they are more powerful than finite state machines, which can become unmanageably complex as the number of possible states grows.
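
A behaviour tree’s two workhorse nodes are the sequence (run children until one fails) and the selector (try children until one succeeds). The toy tree below is an illustrative assumption, not a reconstruction of any shipped game’s AI:

```python
# Bare-bones behaviour tree: composite nodes route control, leaves act.

class Sequence:                       # succeeds only if every child succeeds
    def __init__(self, *children): self.children = children
    def tick(self, npc): return all(c.tick(npc) for c in self.children)

class Selector:                       # succeeds as soon as one child succeeds
    def __init__(self, *children): self.children = children
    def tick(self, npc): return any(c.tick(npc) for c in self.children)

class Leaf:                           # a condition check or an NPC command
    def __init__(self, fn): self.fn = fn
    def tick(self, npc): return self.fn(npc)

can_see_player = Leaf(lambda npc: npc["sees_player"])
has_ammo       = Leaf(lambda npc: npc["ammo"] > 0)
shoot          = Leaf(lambda npc: print("shoot") or True)
take_cover     = Leaf(lambda npc: print("take cover") or True)
patrol         = Leaf(lambda npc: print("patrol") or True)

# Root: fight if possible, otherwise hide, otherwise patrol.
root = Selector(
    Sequence(can_see_player, has_ammo, shoot),
    Sequence(can_see_player, take_cover),
    patrol,
)

root.tick({"sees_player": True, "ammo": 0})  # prints 'take cover'
```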

Finite State Machines Can Grow Increasingly Tangled as the Number of States Grows (Courtesy Unreal Engine)

Behaviour trees are also easier to fine-tune and can often be altered using visual editors. You can even create behaviour trees in the Unreal Engine using a visual editor.

Notably used in the Halo franchise, behaviour trees have been part of developers’ AI toolkit for a while, and were used effectively in Alien: Isolation (2014). We will discuss their implementation both in Halo 2 and Alien: Isolation below.

How does Game AI Make NPCs Act Intelligently?

Various developers largely make use of the same fundamental concepts and techniques in creating game AI, but now employ them at much larger scales thanks to greater processing power. According to Julian Togelius, a New York University computer science professor, game AI in modern titles is far more complex than the models discussed above, but its techniques are essentially variations on such core principles. In this section, we discuss some games that used AI inventively to create immersive encounters with responsive, intelligent NPCs.

Tactical Communications in F.E.A.R

First Encounter Assault Recon, or F.E.A.R (2005), created the illusion of tactical AI combatants using mainly finite state machines, with a twist – the developers gave enemies combat dialogue that broadcast their strategy, which changed based on the game context. These ‘communications’ made players think they were up against situationally aware enemies working together to defeat them.

F.E.A.R Verbalised its AI to Make it Seem Like Enemies were Working as a Team (Courtesy Vivendi Games)

The dialogue merely ‘verbalised’ the algorithms that directed NPC behaviour, but it added realism to enemy encounters – in real-life combat, soldiers do call out to their comrades to coordinate tactics, and in F.E.A.R, NPC soldiers would tell others to flank the player when possible, and even call for backup if the player was slaughtering them with ease. No real communication was taking place, but the NPC dialogue during combat gave the impression that the enemies were acting in concert. 

F.E.A.R helped pioneer ‘lightweight AI’ and added nuance by giving voice to the AI’s ‘inner thoughts’. Snippets of combat dialogue beguiled players into thinking they were working against organised, tactical squads.

Halo 2: Aliens that Behave Sensibly

A key feature of the Halo franchise is the enemy – alien NPCs who have formed an alliance to defeat humankind. These visually unique NPCs give cues to the player about how to take them down. Grunts are small and awkward, and may flee from the player, but elites and larger NPCs may take on even the Master Chief in direct combat. 

Halo’s Masterful Use of Behaviour Trees Made Each Alien Enemy Behave Uniquely in Context (Courtesy Bungie)

Rather than using finite state machines, Bungie used behaviour trees in Halo 2 to direct the actions of enemy aliens, because of the range of tactics made possible by sufficiently detailed behaviour trees. 

At a very abstract level, Bungie’s behaviour trees have various conditional nodes that determine NPC actions. But a lot happens at any point in the game and the ‘conditions’ for many nodes may be satisfied, leading to game-breaking ‘dithering’, when an NPC rapidly alternates between various actions that are all deemed relevant. To prevent this, Bungie used ‘smart systems’ that enabled game AI to think in context. 

Based on contextual cues (like the NPC type, its proximity to the Master Chief, whether it is on foot or in a vehicle), a system blocks off whole sections of the behaviour tree, restricting the NPC to a relatively small but relevant range of actions. Some of these remaining actions are prioritised over others, fostering sensible behaviour.

‘Stimulus behaviours’ then shift these priorities based on in-game triggers. If the Master Chief gets in a vehicle, an enemy will seek a vehicle of its own, attempting to level the playing field. Grunts will flee in the middle of combat if their captain is killed by the player – nodes that tell them to attack or take cover are simply overridden.

This can lead to repeated and predictable behaviour – the player might see the grunts fleeing and choose to target their captain the next time. But that strategy won’t work either: after a high-priority action such as fleeing is executed, a delay is injected to this behaviour to stop the NPC from repeating it immediately – the next time you take out the captain, the grunts may choose to stand their ground. 

Bungie’s expert use of behaviour trees has led developers to adapt this game AI technique and it has since been used in several games, such as Bioshock Infinite (2013), Far Cry 4 (2014), Far Cry Primal (2016) and Alien: Isolation (2014).

Bethesda’s Radiant and Murderous NPCs

Halo’s enemy NPCs were essentially reacting to the player, but what if a developer wants to give the impression that NPCs are living lives of their own? Such an AI system would be especially useful in an open-world game, where a lot of NPCs may never engage in direct combat with the player, and exist to flesh out the world. 

In Bethesda’s The Elder Scrolls III: Morrowind (2002), NPCs would pretty much ‘roam on rails’, lacking even the semblance of a routine. For The Elder Scrolls IV: Oblivion (2006), Bethesda attempted to create complex NPCs with daily habits using Radiant AI, which had to be dumbed down considerably due to its unexpected in-game results.

In Oblivion, Radiant AI prescribes various daily tasks for the NPC, such as sleeping, eating and doing an in-game job. These tasks comprise the NPC’s daily routine, and the AI allows the NPC to decide how to perform its tasks.

Oblivion’s NPCs Murdered Each Other to Complete their Tasks Before their AI was Fine-Tuned (Courtesy Bethesda)

Most games do feature NPCs with schedules; the key difference in Oblivion was the ‘free choice’ given to NPCs over how to do what they had to do. Playtesting revealed a rather big problem with this AI system – NPCs prone to murder.

As part of a certain quest, the player character needs to meet a dealer of Skooma – a highly-narcotic in-game potion. But the player would find this Skooma dealer dead because other NPCs designed to be ‘addicted’ to Skooma would simply kill the dealer to get to the drug, breaking the quest. In another case, a gardener could not find a rake, and so murdered another NPC, took his tools and went about raking leaves. When a hungry town guard left his post to hunt for food, other guards went along and the town’s malcontents started thieving indiscriminately as the law was nowhere in sight. 

Many of these criminals had low ‘responsibility’, an in-game NPC attribute that determines how likely they are to behave in unlawful ways. An NPC with higher responsibility would buy food, but one with low responsibility might steal it – both are trying to fulfil the eating task, but are just going about it in radically different ways.  

Of course, high-responsibility NPCs like guards won’t let crimes go unpunished, and an NPC who has stolen a loaf of bread can get killed. In the game, the player incurs a bounty if they commit a crime, and can pay instead of getting into a lethal encounter. This alternative was not granted to NPCs, so minor theft escalated to murder.
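
The logic of the responsibility attribute is simple enough to sketch. The threshold, costs and probabilities below are illustrative assumptions – Bethesda’s actual implementation is not public:

```python
# Two NPCs fulfil the same 'eat' goal in radically different ways,
# depending on their 'responsibility' attribute.
import random

def fulfil_eat_task(npc):
    # High-responsibility NPCs buy their food if they can afford it...
    if npc["responsibility"] >= 50 and npc["gold"] >= 2:
        npc["gold"] -= 2
        return "buys bread"
    # ...low-responsibility (or broke) NPCs usually just steal it.
    return "steals bread" if random.random() < 0.8 else "goes hungry"

print(fulfil_eat_task({"responsibility": 80, "gold": 10}))  # buys bread
print(fulfil_eat_task({"responsibility": 10, "gold": 10}))  # likely steals
```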

Designer Emil Pagliarulo had to take steps to tone down Radiant AI, so that NPCs wouldn’t slaughter each other to complete their daily tasks, and describes Oblivion’s original Radiant AI as a sentient version of the holodeck (the holodeck is a life simulator from the Star Trek franchise). 

But even in the finished version of the game, one can exploit Radiant AI in interesting ways. Oblivion’s game world features poisoned apples. If the player places these apples in a public place, an NPC will likely eat them and die. This has no connection to any quest – it is just a simple player action with disastrous consequences for an NPC. 

Even in Skyrim (2011), Bethesda’s fifth instalment in the Elder Scrolls franchise, the fine-tuned version of Radiant AI makes for lethal stand-offs between NPCs. In this video, NPCs fight to the death to claim certain valuable items dropped by the player, and in fact, they might do the same even if the items aren’t particularly valuable. This behaviour is driven by Bethesda’s Radiant Story system, which creates random quests based on certain parameters (like the quest giver, the guild they belong to, and other contexts) and also makes NPCs react dynamically to player actions.

Skyrim’s Radiant Story System Creates Dynamic Events Based on the Player’s Actions (Courtesy Bethesda)

NPCs will ask the player character if they can keep any item the player has dropped (or fight other NPCs to claim it). A guard may berate the player if he drops weapons, pointing out that someone could get hurt, and even fine the player if they disregard the guard’s warning. Completing a quest for an NPC makes them friendly towards you, and you can take their items instead of robbing them. In fact, friendly NPCs will also attend your wedding if you get married.

Meeting and Making Your Nemesis in the Mordor Games

The Nemesis System used in Middle Earth: Shadow of Mordor (2014) and Shadow of War (2017) is perhaps the one AI system that practically ensures that every player will encounter different villains in every playthrough. 

Shadow of Mordor Creates Unique Enemies in Each Playthrough with its Nemesis System (Courtesy Warner Bros)

The developer, Monolith Productions, essentially created a dynamic ‘villain generator’: the player’s hostile encounters with an orc change the orc’s status in the enemy hierarchy, his attitude towards the player, his powers, and more. Set an orc on fire and he will hate you forever – and develop a phobia of fire; run away from a fight and he will taunt you the next time you confront him. In effect, the Nemesis System turns a generic enemy into a named villain with unique traits.

The Nemesis System is largely made possible because Talion, the protagonist, cannot die, as he is in a state between life and death – lore is used to weave player death into the narrative and gameplay. This mechanic allows orcs and other enemies to remember Talion, hate him for what he has done to them, rise up the ranks by killing him and even gain a following because of their exploits – this can make them even harder to kill.

The Nemesis System is also built on the idea that the orcs in Sauron’s Army are a bunch of back-stabbing, infighting brutes, who rise to alpha dog status by challenging and killing orcs higher up the hierarchy – orcs can become named villains not only by facing off against you, but also by taking on their commanders.

One of the late game objectives is to sow discord in Sauron’s Army and dismantle it thereby, and the best way to achieve this is by recruiting low-level orcs using a special power. Such allies will spy for you, betray and supplant enemy leaders, and even join you in fights against powerful named villains. This is part of Nemesis too – orcs can rise up, but can also lose status if you beat them or if their bid for more power backfires. ‘Turning’ such a weakened orc – or recruiting him – allows you to thin your enemy’s ranks: the game discourages indiscriminate killing. 

The ability to recruit orcs, even high-level ones such as captains and warchiefs, was expanded in Shadow of War to let the player build up a veritable army of their own. Even the orcs in the game have complex relationships and are less prone to butchering each other – an orc you kill may have a friend who will hunt you down to avenge his brother-in-arms.

Shadow of War Uses the Nemesis System to Help The Player Build an Army (Courtesy Warner Bros)

Warner Bros, the publisher of the Mordor games, chose to patent the Nemesis System, preventing other developers from building on Monolith’s achievements. If the patent had not been granted, developers could have used Nemesis, or developed a system based on it, to create true drama between the player and their enemy, whose personalities grow every time they face off against each other.

The Perfect Monster in Alien: Isolation

Alien: Isolation developer Creative Assembly faced an unprecedented challenge when designing the game – how could game AI be implemented to recreate the perfect killing machine, the xenomorph, from the Alien movies?

The Xenomorph AI in Alien: Isolation Strikes a Perfect Balance between Fear and Opportunity (Courtesy Sega)

As Ian Holm’s character says in the first film, Alien (1979), the xenomorph is the “perfect organism. Its structural perfection is matched only by its hostility… [It is] a survivor…unclouded by conscience, remorse, or delusions of morality.” 

The AI for such an entity has to be near-perfect as well – the horror game’s immersion would have been utterly broken if some bug made the xenomorph run around in circles, or behave like one of Oblivion’s Skooma-addicted NPCs. Every interaction between the player and the xenomorph had to be scary, believable and unpredictable. 

The developers adopted a design mantra called ‘psychopathic serendipity’, where the xenomorph somehow seems to be at the right place at the right time, and foils your plans even when you successfully hide from it. While you can’t kill it, it can kill you instantly.

The developers used a two-tiered AI system to foster these ‘serendipitous’ encounters: a director-AI always knows your location and your actions, and periodically drops the alien-AI hints about where to look for you. But the alien-AI is never allowed to cheat – it can only work with the clues it is given, and you can always evade it, hide from it or take it by surprise. This makes the game unpredictable, both for the alien and for the player.
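
The division of labour between the two tiers can be sketched in a few lines. The zone names, hint cadence and search behaviour below are illustrative assumptions about the design Creative Assembly has described:

```python
# Two-tier 'serendipity': an omniscient director leaks occasional, coarse
# hints to the alien, which must otherwise hunt on its own.
import random

ZONES = ["medbay", "corridor", "vents", "cargo_hold"]

def director_hint(player_zone, tick):
    # Only every tenth tick, and only a zone -- never exact coordinates --
    # so the alien is steered toward the player but can never 'cheat'.
    return player_zone if tick % 10 == 0 else None

def alien_next_zone(hint):
    if hint is not None:
        return hint                  # investigate the director's tip
    return random.choice(ZONES)      # otherwise keep searching on its own

for tick in range(1, 21):
    zone = alien_next_zone(director_hint("medbay", tick))
    if tick % 10 == 0:
        print(f"tick {tick}: the alien closes in on the {zone}")
```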

The alien-AI has an extremely complex behaviour tree that determines the actions it takes, and some of its nodes are unlocked only after certain conditions are met, making the xenomorph exhibit unnerving traits that suggest it is learning from your actions. Other behaviours are unlocked as you get better at the game, enabling the xenomorph to keep you on your toes.

A dynamic ‘menace gauge’, meanwhile, rises based on certain in-game contexts, estimating how tense the player is. When it reaches a certain threshold, the xenomorph will back off, giving the player some breathing space.

The alien’s pathfinding algorithm is also tweaked to make it look like it’s hunting, searching or even backtracking, suggesting that it’s revising strategies on the fly. Such behaviour is activated either by giving the xenomorph areas of interest to explore, or by making it respond to loud noises made by the player. The intentionally sub-optimal pathfinder makes the xenomorph stop at all points of interest, ramping up the tension and making the player wonder what it is up to. However, the alien will never look in certain areas of the game, as doing so would shift the game balance unfairly in its favour. Throughout the game, the alien never spawns or teleports anywhere (except in two cutscenes), but it can sneak around so well that players think it’s teleporting.

The AI in Alien: Isolation creates a macabre game of hide-and-seek with one of cinema’s most fearsome creatures, whose animal cunning keeps you guessing throughout the game. 

Peak Game AI in Red Dead Redemption 2

It is difficult to capture the full complexity of the in-game AI in Rockstar’s Red Dead Redemption 2 (RDR 2), but like Alien: Isolation, the game represents a novel layering of multiple AI systems.

Red Dead Redemption 2 is so Complex that its AI will Surprise Players for Years to Come (Courtesy Rockstar)

In this game, you can interact with every NPC in a variety of ways, and they will react and comment on what they notice about you, such as the blood stains on your shirt, how drunk you are, or your ‘Honor’ level, which gauges the good you have done, and also affects how the player character Arthur Morgan behaves. 

NPCs may ridicule your choice of clothing, and keep their distance if you are dirty. They also have their own complex schedules, not just restricted to doing their jobs. They will start looking over their shoulder if you follow them along their routine and may flee if you persist. In Grand Theft Auto V (GTA V), attacking an NPC might trigger various reactions like fleeing or even a counter-attack. An NPC in RDR 2, however, may not immediately draw their gun, but try to address the situation with dialogue, allowing for more believable interplay between the player and the NPC. 

During actual combat, NPCs will act intelligently – they will dive for cover, grip wounded areas and even try to take down Morgan with a melee weapon when possible. Enemies under fire will behave differently from calm ones who are not in the thick of combat. If you take refuge in a building, NPCs will cover all exits before a coordinated attack.

The sparse wild-west world of RDR 2 meant that each NPC had to be given a unique personality and mood states. Rockstar engaged 1,200 actors from the American Screen Actors Guild to flesh out the NPCs, each of whom had an 80-page script, and captured the actors’ demeanour and mannerisms over 2,200 days of mo-cap sessions. 

Even the wilderness teems with 200 animal species that interact with each other and the player, and are found only in their natural habitats – the vast open world features multiple ecosystems and animals react realistically to creatures higher up the food chain. Herbivores will flee at the sight of wolves, wolves themselves will flee from a grizzly bear and a vulture will swoop down on an abandoned carcass.

Rockstar also overhauled the animation system to create more accurate human and animal mannerisms in the game world, generating fluid movements without stiff transitions. The animation overhaul allows NPCs to react to the nuances of your facial expressions, your posture and mannerisms, especially as they all change due to the dynamic nature of the game world. A well-rested, well-fed Arthur Morgan looks different from one who is half-starved and muddy after an all-night trek, and NPCs will note this. 

Rockstar also completely recreated horse animations from scratch and even allowed the horse AI to decide how to move based on the player’s input. As a result, your horse is practically a supporting character in the game, and there’s a YouTube video devoted just to how Rockstar made the ‘ultimate video game horse’.

One day, a fan exploring the game found a mounted NPC wearing the exact same clothes as the player character. In games with less variety in outfits this would happen often, but RDR 2’s numerous outfits make such a coincidence unlikely – until you consider the sheer number of complex NPCs who share the world with the player. By pure chance, an NPC had ended up in the same clothes as the player – it is unlikely that the NPC’s choice of clothing was scripted to surprise anyone.

Simply put, RDR 2’s AI is massively complex and will surprise players for years to come with its emergent gameplay.

Conclusion

We have seen how game AI can create complex interactions with NPCs who can act quite intelligently in context. 

In RDR 2’s case, NPCs are so complex that one can sense they have their own lives, not just centred around the player. We may also assume that Rockstar did not use neural networks to power its game AI – the implementation of state-of-the-art AI would surely have made it into promotional materials. Every game discussed above uses traditional techniques at greater and greater scales, and game AI may eventually reach a plateau. What, then, is its future?

AI powered by machine learning and neural networks may soon become a viable means for playtesting. What if an AI such as this were allowed to play a million games of RDR 2 or Skyrim to fine-tune NPC behaviours, while still maintaining the balance of predictability and randomness game AI requires? 

Most machine learning systems and neural networks work on vast datasets. Developers could perhaps take game AI to the next level by training a deep learning AI to amass and parse game AI behaviour, and improve it further still, and then create game AI – and new game AI techniques – for a subsequent game. Elder Scrolls VI, and Rockstar’s next open-world game, could perhaps benefit greatly from an AI created by AI.

Gameopedia offers extensive coverage of the AI used in all types of games. Contact us to gain actionable insights about how industry game AI techniques make games more believable and immersive. 
