Memory Management

Good practices and guidelines for memory management in AI for games.

Introduction

The aim of this tutorial is to explain good practices for memory management in the Utility AI, and to present different options for implementing memory in an AI for games.

Memory management is an important part of any Apex Utility AI implementation. An intelligently behaving Utility AI agent will most likely need access to some form of storage for observations. Additionally, by storing a custom observation type with relevant meta-data, the Utility AI can be designed to work on partial or incomplete knowledge and still behave realistically.

In the simplest implementation, the memory consists solely of a list of observations. In its most elaborate form, the memory is a dedicated class exposing a set of methods for adding, updating, getting and removing observations.

Storing the Data

First off, the memory needs to be stored somewhere. There are multiple options, so identifying the right one means considering what would be easiest to use and best performing in the context of a specific game.

| Memory Design | How does it work? | When should I use this design? |
| --- | --- | --- |
| Directly in Context | Information is stored directly in the Context object | For simple AIs, for reducing complexity, or where the overhead of additional subclasses is undesirable |
| Sub-class in Context | Information is stored in a subclass referenced from the Context | For simple AIs where the Context object has become rather large; subclasses referenced from the Context can facilitate a better overview |
| Custom Type | The collection is stored in its own memory wrapper class, which can be referenced directly from the Context, from a subclass of the Context, or from a third place | For most AI implementations where there are set ways to handle adding, updating, getting and/or removing observations, which can be encapsulated in a dedicated type |
| Memory Manager / Blackboard | The collection is stored in a centralized location which all AI agents access when adding and querying memory observations | For special needs where memory sharing between AIs is dominant, e.g. 'hive minds' where all agents contribute to a common memory |
| Multiple subclasses / memories | The memory is split into multiple separate collections, which can be distributed across multiple classes, typically referenced through the Context | For complex AI setups where different types of observations or entities require different handling, so the memory is split into different collections, possibly in different locations |

The simplest option is to store the list of observations directly in the Context object. The Context is unique to each AI unit, and as long as the same Context object is returned by the IContextProvider, the memory will be persistent. This is probably the simplest solution and a fine starting point, and it is the approach used by the ExampleContext object introduced in the Scripting Guide. However, most often the Context object will also have a range of other fields and properties, which means it can grow to unmaintainable sizes.
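The original ExampleContext listing is not reproduced here; a minimal sketch along the lines of the Scripting Guide's version could look as follows (the IAIContext interface name and the member names are assumptions):

```csharp
using System.Collections.Generic;
using UnityEngine;
using Apex.AI; // assumed Apex Utility AI namespace

// A minimal context storing the memory directly, as a list of game objects.
public class ExampleContext : IAIContext
{
    public ExampleContext(GameObject agent)
    {
        this.agent = agent;
        // Pre-allocate to avoid list growth allocations at run-time
        this.observations = new List<GameObject>(50);
    }

    // The AI unit that this context belongs to
    public GameObject agent { get; private set; }

    // The memory: observed game objects stored directly in the context
    public List<GameObject> observations { get; private set; }
}
```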

The above code shows snippets from the ExampleContext earlier introduced in the ‘Scripting Guide’. It shows the simplest case where observations are stored as game objects in a list directly in the Context.

Sometimes the Context object is kept small by introducing subclasses inside it to handle different parts of the stored data. The list of observations can then be implemented in one such subclass, keeping the outer Context object small. However, this solution can also reduce clarity if the Context has many subclasses and it is not obvious which of them holds the memory.

The observations list can also be implemented on a GameObject-wrapping class. Typically an interface or base class is introduced, e.g. IEntity, which has a reference to its associated game object, along with any other game-relevant data, e.g. colors, names, weapons, health, ammunition, position, etc. This IEntity interface can also declare the list of observations. This means that the Context object is kept minimal and clean, as it needs only a reference to the IEntity. However, the IEntity interface is now in danger of growing excessively large, which reduces clarity. Additionally, even if the implementation of memory management is likely to be the same for all AI units in a game, how the units utilize the memory can still differ vastly.
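A sketch of such a GameObject-wrapping interface could look like this (the member names beyond the observations list are illustrative assumptions):

```csharp
using System.Collections.Generic;
using UnityEngine;

// A GameObject-wrapping entity interface that also declares the memory.
public interface IEntity
{
    // Reference to the associated game object
    GameObject gameObject { get; }

    // Game-relevant data (illustrative examples)
    float health { get; }
    Vector3 position { get; }

    // The memory for this entity's AI
    IList<GameObject> observations { get; }
}
```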

Above is an example for a GameObject-wrapping interface, IEntity, which holds the memory for an associated game object.

The memory management could also be wrapped in its own custom type, which handles adding, updating, getting and removing observations. Wrapping the memory management in its own class means that it will never require more than one property or field to implement, regardless of whether it is placed within the Context, in a subclass of the Context or on the AI agent class or interface (e.g. IEntity). Any logic needed to handle the memory can be implemented inside the memory class itself. This solution will probably be the most commonly used, unless the implementation of memory is really simple.
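A minimal sketch of such a memory-wrapping class follows. The ExampleObservation type is reconstructed here along the lines of the 'Vision and Hearing' tutorial; its members are assumptions:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Assumed minimal observation type from the 'Vision and Hearing' tutorial
public class ExampleObservation
{
    public GameObject gameObject;
    public Vector3 position;
    public float timestamp;
    public bool isVisible;
}

// A memory-wrapping class handling adding/updating and removal, ensuring
// at most one observation per observed game object.
public class ExampleMemory
{
    private readonly List<ExampleObservation> _observations = new List<ExampleObservation>(50);

    public IList<ExampleObservation> allObservations
    {
        get { return _observations; }
    }

    public void AddOrUpdateObservation(ExampleObservation observation)
    {
        // Ensure uniqueness: update the existing entry if one exists
        for (int i = 0; i < _observations.Count; i++)
        {
            if (_observations[i].gameObject == observation.gameObject)
            {
                _observations[i] = observation;
                return;
            }
        }

        _observations.Add(observation);
    }

    public void RemoveObservation(GameObject gameObject)
    {
        // Iterate backwards so removal does not shift unvisited indices
        for (int i = _observations.Count - 1; i >= 0; i--)
        {
            if (_observations[i].gameObject == gameObject)
            {
                _observations.RemoveAt(i);
                return;
            }
        }
    }
}
```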

The ExampleMemory class above is an example of a memory-wrapping class, which handles adding, updating and removing observations from memory. It is a minimal example, based on the ExampleObservation class introduced in the 'Vision and Hearing' tutorial.

For some games it might also be desirable to split the memory into multiple collections based on some criterion, e.g. the observation's type or a defining trait. This mostly makes sense if the handling of observations varies depending on their type. The resulting collections can then potentially be placed inside subclasses, referenced through the Context. The idea with this approach is that e.g. all enemy observations can be added to a dedicated 'enemy observations' list, which is then known to always hold only enemies. Another collection could hold all allied observations, which may be handled completely differently from enemies with respect to adding, updating or removing observations.
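A sketch of such a split memory, with separate collections for enemies and allies (the class and member names are illustrative assumptions):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Memory split into separate collections per observation category.
// Each list can be added to, updated and pruned with its own rules.
public class SplitMemory
{
    // Known to always hold only enemies
    public readonly List<GameObject> enemyObservations = new List<GameObject>(50);

    // Allies may be handled completely differently from enemies
    public readonly List<GameObject> allyObservations = new List<GameObject>(50);
}
```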

Finally, the memory management could also be centralized in a 'memory manager' (sometimes referred to as a 'blackboard'), typically a static or singleton class, which handles memory internally for all AI agents in the game. This requires that any Utility AI actions, scorers, etc. that need access to the memory query the memory manager, probably providing an identifying key for their particular memory. This method could be desirable in cases where a lot of memory is shared between AI agents, as the sharing can be handled internally by the manager. However, there will most often be a performance overhead in getting the reference to the memory manager, which can impact performance negatively if done excessively.
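A minimal singleton sketch of such a memory manager, keyed per agent (the class shape and member names are assumptions; a shared collection could be added for 'hive mind' setups):

```csharp
using System.Collections.Generic;
using UnityEngine;

// A centralized 'blackboard' holding memory for all agents.
public sealed class MemoryManager
{
    public static readonly MemoryManager instance = new MemoryManager();

    // Each agent's memory, keyed by the agent's game object
    private readonly Dictionary<GameObject, List<GameObject>> _memories =
        new Dictionary<GameObject, List<GameObject>>();

    private MemoryManager() { }

    // Actions and scorers query the manager with their agent as the key
    public List<GameObject> GetMemory(GameObject agent)
    {
        List<GameObject> memory;
        if (!_memories.TryGetValue(agent, out memory))
        {
            // Allocate once per agent and reuse thereafter
            memory = new List<GameObject>(50);
            _memories.Add(agent, memory);
        }

        return memory;
    }
}
```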

Thus, memory can be stored in the Context, through a reference in the Context, on the AI agent, through its own memory management class or even through an external, centralized memory manager. The correct solution for any particular game may be any one of these solutions, a mixture or something completely different.

Accessing Memory

When designing how to store the Utility AI memory, it is important to consider what the most common use case for accessing the memory will be. This is highly dependent not only on the game genre, but on the particular use of a Utility AI in a specific game. Performance considerations should be prioritized when designing the memory data storage, as it can often become a performance bottleneck. It is especially important to avoid allocating new collections at run-time, as this results in reduced performance or occasional framerate drops due to garbage collection. A few guidelines for memory data types follow here.

Probably the simplest solution is to use a generic List, e.g. (assuming a custom 'Observation' class) List<Observation>. This works well if the most common use case for memory retrieval is getting all observations, unfiltered. In these cases, the entire list of observations can be returned directly, which means no memory allocation and easy access to indexing, adding, removing, etc. However, special care should be taken when adding new observations. Most commonly it is not desirable to have multiple observations regarding the same GameObject, as this can increase memory usage drastically with little or no benefit. Therefore, it may be desirable to implement explicit methods for adding and updating observations, which ensure that all observations reference unique GameObjects (no duplicates). It should be noted that lists are inefficient when uniqueness of added elements is required or when elements are often removed, and this inefficiency grows with the size of the list.

If the most common use case for memory retrieval is getting a single observation for a particular, known game object, then it would probably be more efficient to utilize a lookup table. One example could be a Dictionary<GameObject, Observation> which would facilitate quickly getting the observation regarding any game object (if it exists). Also, dictionaries are generally more efficient at unique inserts and removals than lists.
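A minimal sketch of such a lookup-table memory (the Observation class here is an assumed stand-in):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Assumed minimal observation type
public class Observation
{
    public GameObject gameObject;
    public float timestamp;
}

// Lookup-table memory: at most one observation per observed game object.
public class LookupMemory
{
    private readonly Dictionary<GameObject, Observation> _observations =
        new Dictionary<GameObject, Observation>(50);

    public void AddOrUpdateObservation(Observation observation)
    {
        // The dictionary indexer inserts or overwrites, guaranteeing uniqueness
        _observations[observation.gameObject] = observation;
    }

    public Observation GetObservation(GameObject gameObject)
    {
        Observation observation;
        _observations.TryGetValue(gameObject, out observation);
        return observation; // null if the game object has never been observed
    }

    public bool RemoveObservation(GameObject gameObject)
    {
        return _observations.Remove(gameObject);
    }
}
```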

In some cases, the most common pattern for retrieving memory will be to apply some sort of filtering in order to get a subset of the observations. For example, AI agents may most commonly want to get all observations where the entity is of a particular type, e.g. 'Player' or 'Sniper'. A set of scorers can then be applied to the filtered list of observations to ensure that certain constraints are adhered to. For these use cases, it will typically also be beneficial to use a Dictionary as the data type for memory storage, e.g. (assuming an enum called 'EntityType') Dictionary<EntityType, List<Observation>>. This increases the complexity of all memory operations, which in turn means that it may be beneficial to wrap the memory management in its own class, which can then expose methods for adding, updating, getting and removing observations.
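A sketch of a type-partitioned memory along these lines (the EntityType values and the Observation class are assumptions):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Assumed entity categories for filtering
public enum EntityType { Player, Sniper, Grunt }

// Assumed minimal observation type carrying its entity's type
public class Observation
{
    public GameObject gameObject;
    public EntityType entityType;
}

// Memory partitioned by entity type for fast filtered retrieval.
public class FilteredMemory
{
    private readonly Dictionary<EntityType, List<Observation>> _observations =
        new Dictionary<EntityType, List<Observation>>();

    public List<Observation> GetObservations(EntityType type)
    {
        List<Observation> list;
        if (!_observations.TryGetValue(type, out list))
        {
            // Allocate the sub-list once and reuse it thereafter
            list = new List<Observation>(20);
            _observations.Add(type, list);
        }

        return list;
    }

    public void AddObservation(Observation observation)
    {
        // Uniqueness per game object must still be ensured here if required
        GetObservations(observation.entityType).Add(observation);
    }
}
```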

| Data Type | Best for Most Common Use Case |
| --- | --- |
| List<TObservation> | Retrieving all observations without filtering. |
| Dictionary<TEntity, TObservation> | Retrieving a specific observation for a specific entity/GameObject. |
| Dictionary<T, List<TObservation>> | Retrieving a subset of observations based on a specific criterion, typically a type. |

The table shows a simplified, and by no means exhaustive, list of data types and the use cases for which they are superior.

Conclusively, the choice of data type for memory storage is highly dependent on the most common use case for memory retrieval. Consider the strengths and weaknesses of the different collection types, and pick the data type that best affords the most common use case.

Validating Memory

It is important to ensure that the memory of any given AI agent is not 'corrupted', in the sense that there are invalid entries in the memory or other constraint-breaking cases. The challenge typically arises when observed game objects can be killed and subsequently destroyed. The Observation class, if used, will still be valid, but its 'GameObject' property will no longer hold a valid reference and will instead point to null. If GameObjects are stored in memory directly, without a wrapping class, the collection will contain null entries for the destroyed GameObjects.

There are three typical approaches to ensuring valid memory. The first and perhaps simplest is to build it into the scanning process. This will typically involve iterating through all observations before or after a new scan, and removing any null or otherwise invalid entries. However, this can result in multiple iterations through all observations, which can take time, especially if there are many observations.

Another option is to have a separate Utility AI running at a set interval, typically at the same or a lower frequency than the scanning, whose sole purpose is to iterate through all observations and remove invalid ones. This means there is a possibility for invalid observations to show up in actions or scorers, because the AIs probably do not run in perfect synchronicity. However, if that is tolerable, this is a quite simple and well-performing method.
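A reconstruction of such a cleanup action could look as follows; the ActionBase base class, the IAIContext interface and the ExampleContext members follow the naming used elsewhere in these tutorials and should be treated as assumptions:

```csharp
using Apex.AI; // assumed Apex Utility AI namespace

// An action whose sole purpose is removing destroyed (null) game objects
// from the memory list.
public class ExampleMemoryCleanup : ActionBase
{
    public override void Execute(IAIContext context)
    {
        var c = (ExampleContext)context;
        var observations = c.observations;

        // Iterate backwards so removal does not shift unvisited indices.
        // Destroyed Unity objects compare equal to null.
        for (int i = observations.Count - 1; i >= 0; i--)
        {
            if (observations[i] == null)
            {
                observations.RemoveAt(i);
            }
        }
    }
}
```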

The ExampleMemoryCleanup action class, found above, is a minimal example of removing observed game objects that have become null from the memory list.

A third option is to use an event handler model, where storing an observation of another game object also entails subscribing to the death 'event' for that game object. When the game object dies and is destroyed, any AI agents with the dying GameObject in their memory will be notified of the death event and will thus be able to remove that particular, now invalid, entry from their memory at the exact time of death. This approach is quite robust, and with it certain assumptions can be made about the state of memory, namely that there will never be dead game objects in memory. However, there is a performance overhead to the event-subscription model, especially in cases with many AI agents all scanning and observing each other.

If a custom Observation type is utilized for the memory, and it includes a visibility flag or similar that needs to be updated in real-time, it can be a challenge to find the right paradigm for this. One option is to implement it as part of the scanning, typically involving multiple iterations through all observations, e.g. any observations not scanned this frame are set to not visible. Another approach is to run it as part of the memory cleanup AI: a simple 'visibility expiration threshold' can be used, so that e.g. if an observation has not been updated for 2 seconds (checked through its timestamp), its visibility is set to false. This approach is not entirely accurate and means that an observation can be considered visible even though it has not been for some milliseconds.
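The expiration-threshold approach can be sketched as follows; the ExampleObservation type is assumed to expose timestamp and isVisible members, as in the 'Vision and Hearing' tutorial:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Marks observations as not visible once they have not been updated within
// the expiration threshold. Intended to run as part of the cleanup AI.
public static class VisibilityExpiration
{
    // Assumed threshold: observations older than this are no longer visible
    public const float visibilityExpirationThreshold = 2f; // seconds

    public static void UpdateVisibility(List<ExampleObservation> observations)
    {
        var now = Time.time;
        for (int i = 0; i < observations.Count; i++)
        {
            var obs = observations[i];
            if (obs.isVisible && (now - obs.timestamp) > visibilityExpirationThreshold)
            {
                // Not scanned recently, so assume no longer visible
                obs.isVisible = false;
            }
        }
    }
}
```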

It is important to ensure that the memory is valid and upholds the defined constraints, e.g. if actions do not expect memory entries to be null, then they should never be null, etc. Additionally, special considerations may need to be taken into account when maintaining ‘visibility’ state of each observation or similar transient data that requires real-time updating.

Testing the AI

A very beneficial approach for testing the memory management is to use Gizmos drawing through contextual visualizer components, as explained in depth in the 'Visualizers' tutorial. Assuming that a context visualizer base class has been implemented, as shown in 'Visualizers', the following visualizer class serves as a nice starting point for visualizing memory:
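A reconstruction of such a visualizer could look like this; ContextVisualizerComponent is assumed to be the base class from the 'Visualizers' tutorial, calling DrawGizmos with the agent's context, and ExampleContext is assumed to expose the agent and its observations list:

```csharp
using UnityEngine;

// Draws a Gizmos line from the agent to each observed game object in memory.
public class MemoryVisualizerComponent : ContextVisualizerComponent
{
    protected override void DrawGizmos(ExampleContext context)
    {
        if (context == null || context.observations == null)
        {
            return;
        }

        Gizmos.color = Color.yellow;
        var from = context.agent.transform.position;

        for (int i = 0; i < context.observations.Count; i++)
        {
            var obs = context.observations[i];
            if (obs != null) // skip destroyed game objects
            {
                Gizmos.DrawLine(from, obs.transform.position);
            }
        }
    }
}
```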

The above code shows a small example for visualizing all observations in memory with Gizmos lines.

The observation visualizer could be enhanced with specific color coding depending on the visibility of the observation, the timestamp or any other relevant, available factors. Thus, a simple contextual visualizer can be a really powerful tool for debugging and finding issues in the implementation.

Extensions

It is also possible to combine several of the previously described methods within one solution. It may be that AI agents can communicate their memory to each other within short ranges, but at the same time they report their memory back to some type of ‘overmind’, which has the combined knowledge of all units. The overmind will then be able to perform more complex evaluations taking into account the entire situation involving all relevant AI agents.

Another extension could be to limit the maximum time an observation can live in memory, e.g. after sixty (60) minutes the observation is too old and can be removed.

Conclusion

Memory management is usually an integral part of any Apex Utility AI implementation. Often it will be beneficial to base all other targeting, selection and movement logic on the current state of the AI's memory, as this allows the AI agent to act intelligently despite operating on limited or partial knowledge, typically also resulting in more emergent and believable behaviour.