Vision and Hearing

Implementing sensory scanners with the Apex Utility AI

Introduction

In this tutorial we explain how scanners that provide sensory input to the Utility AI can be implemented.

This tutorial shows how scanning can be implemented as an action controlled by the Apex Utility AI, which allows the AI to control when and how scanning is performed. Scanning can be a central part of gameplay, e.g. in stealth games or other games where sensory input is varied and complex. The Apex Utility AI makes it easy to handle any number of different sensory input types.

The tutorial starts with a very minimal example that stores GameObjects as observations. A slightly more complex example follows, in which a dedicated Observation type is introduced, which affords storing observational metadata, i.e. timestamp, location and visibility.

Scanner

There are numerous ways to scan as a part of the Apex Utility AI. The examples presented here are meant as introductory and explanatory, and should be customized to suit the specific needs of your game.

The first minimal scanner class, SimpleScanForObservations, is implemented as follows:
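The sketch below assumes the Apex.AI namespace, that ActionBase exposes an Execute(IAIContext) override, and that the ExampleContext used throughout this tutorial exposes the observing game object plus a List<GameObject> named observedGameObjects; those member names are illustrative rather than prescribed.

using Apex.AI;
using UnityEngine;

// Minimal scanner action: finds colliders on the observation layer(s) within
// range and stores the corresponding game objects in the context.
public sealed class SimpleScanForObservations : ActionBase
{
    // Illustrative defaults; expose these for editing as appropriate
    public float range = 10f;
    public LayerMask observationsLayer;

    public override void Execute(IAIContext context)
    {
        var c = (ExampleContext)context;
        var self = c.gameObject;

        // Check for intersection with any colliders on the observation layer(s) within range
        var colliders = Physics.OverlapSphere(self.transform.position, this.range, this.observationsLayer);
        for (int i = 0; i < colliders.Length; i++)
        {
            var go = colliders[i].gameObject;
            if (go == self)
            {
                // never observe ourselves
                continue;
            }

            // Only add game objects that are not already in the list
            if (!c.observedGameObjects.Contains(go))
            {
                c.observedGameObjects.Add(go);
            }
        }
    }
}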

The class inherits from the ActionBase class, as we plan to run the scanner as an action in the Utility AI. This scanner is very simple and basically checks for intersections with any colliders on the configured observationsLayer within the configured range. The game objects discovered are added to a list in the context, if they are not already present in that list.

The simple scanner has some limitations though. Since the stored type is a game object, there is no way to record any other relevant observational data, e.g. the timestamp, location or visibility at the time of the observation. In order to achieve this, we introduce a sample observation class, called ExampleObservation:
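A sketch of such an observation class is shown below; the fields mirror the metadata mentioned above (game object, position, timestamp and visibility), while the exact member names are illustrative.

using UnityEngine;

// Holds a single observation along with the metadata recorded at the time
// the observation was made.
public sealed class ExampleObservation
{
    public ExampleObservation(GameObject gameObject, Vector3 position, float timestamp, bool isVisible)
    {
        this.gameObject = gameObject;
        this.position = position;
        this.timestamp = timestamp;
        this.isVisible = isVisible;
    }

    // The observed game object
    public GameObject gameObject { get; private set; }

    // Where the game object was observed, at the time of observation
    public Vector3 position { get; private set; }

    // Time.time at the moment of observation
    public float timestamp { get; private set; }

    // Whether the game object was actually visible when observed
    public bool isVisible { get; private set; }
}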

With this class in place, the ScanForObservations action can be extended to also record the time, the visibility and the location of all scanned game objects. First, the ExampleContext must be extended to handle this new observation type, e.g. by adding:
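A sketch of the extended context is shown below, assuming ExampleContext implements the IAIContext interface; members belonging to the earlier minimal example are omitted for brevity, and the constructor and property shapes are illustrative.

using System.Collections.Generic;
using Apex.AI;
using UnityEngine;

// The context, extended with a memory of observations.
public sealed class ExampleContext : IAIContext
{
    public ExampleContext(GameObject gameObject)
    {
        this.gameObject = gameObject;
        this.observations = new List<ExampleObservation>();
    }

    // The game object that owns this context (the observer)
    public GameObject gameObject { get; private set; }

    // Contextual memory of all current observations
    public List<ExampleObservation> observations { get; private set; }

    // Ensures that there is never more than one observation per game object:
    // an existing observation is replaced, otherwise the new one is added.
    public void AddOrUpdateObservation(ExampleObservation observation)
    {
        for (int i = 0; i < this.observations.Count; i++)
        {
            if (this.observations[i].gameObject == observation.gameObject)
            {
                this.observations[i] = observation;
                return;
            }
        }

        this.observations.Add(observation);
    }
}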

We use the AddOrUpdateObservation() method to easily ensure that we never have two observations for one game object.

Thus, our ScanForObservations becomes:
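A sketch of the extended scanner could look as follows; it relies on the context members sketched above and on a Utilities.IsVisible() helper, an example of which is given further down.

using Apex.AI;
using UnityEngine;

// Extended scanner action: records an ExampleObservation, including position,
// timestamp and visibility, for each game object found within range.
public sealed class ScanForObservations : ActionBase
{
    public float range = 10f;
    public LayerMask observationsLayer;

    public override void Execute(IAIContext context)
    {
        var c = (ExampleContext)context;
        var self = c.gameObject;

        var colliders = Physics.OverlapSphere(self.transform.position, this.range, this.observationsLayer);
        for (int i = 0; i < colliders.Length; i++)
        {
            var col = colliders[i];
            if (col.gameObject == self)
            {
                continue;
            }

            // Record the metadata at the time of observation
            var visible = Utilities.IsVisible(self, col);
            var observation = new ExampleObservation(col.gameObject, col.transform.position, Time.time, visible);

            // Never store two observations for the same game object
            c.AddOrUpdateObservation(observation);
        }
    }
}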

The Utilities.IsVisible() method basically just casts a ray to see if anything blocks the line between ‘self’ and ‘col’. Any other visibility check would also work. Thus, for each observation a new observation object is created (pooling techniques could be applied here for performance optimization) and added to the contextual memory, so that it may be accessed later. This model affords much more intelligent behaviour based on observations, e.g. units can investigate the last known visible position of a game object, rather than its current position, which the AI may not legitimately know.
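As a concrete example, such a visibility check could be sketched as below. This particular implementation is an assumption: it casts a ray from the observer towards the collider and treats the target as visible only if nothing else is hit first.

using UnityEngine;

public static class Utilities
{
    // Returns true if nothing blocks the straight line between the observer
    // and the observed collider.
    public static bool IsVisible(GameObject self, Collider col)
    {
        var origin = self.transform.position;
        var direction = col.transform.position - origin;

        RaycastHit hit;
        if (Physics.Raycast(origin, direction, out hit, direction.magnitude))
        {
            // Visible only if the observed collider is the first thing the ray hits
            return hit.collider == col;
        }

        // Nothing was hit at all, so nothing blocks the line of sight
        return true;
    }
}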

Often, there will be multiple scanning processes running simultaneously. It is, for example, common to store observations of other game objects, but it might also be useful to store a range of valid, walkable positions in the AI’s vicinity.

Tip
Storing a list of sampled positions opens up the possibility of scoring all sampled positions and choosing the highest scoring one as a movement destination.

The ScanForPositions class, which is a minimal example of position sampling using Unity’s NavMesh solution, is responsible for sampling a range of positions in the vicinity of the AI unit, exemplified as follows:
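The sketch below assumes Unity’s NavMesh API (UnityEngine.AI in recent Unity versions) and an additional List<Vector3> named sampledPositions on the ExampleContext; the sampling range and density fields are illustrative.

using Apex.AI;
using UnityEngine;
using UnityEngine.AI;

// Samples walkable positions in a square around the unit using the NavMesh
// and stores them in the context for later use by actions and scorers.
public sealed class ScanForPositions : ActionBase
{
    // Width of the sampled square and the spacing between samples
    public float samplingRange = 12f;
    public float samplingDensity = 2f;

    public override void Execute(IAIContext context)
    {
        var c = (ExampleContext)context;
        var center = c.gameObject.transform.position;

        // Re-sample from scratch each time this action runs
        c.sampledPositions.Clear();

        var halfRange = this.samplingRange * 0.5f;
        for (var x = -halfRange; x <= halfRange; x += this.samplingDensity)
        {
            for (var z = -halfRange; z <= halfRange; z += this.samplingDensity)
            {
                var candidate = new Vector3(center.x + x, center.y, center.z + z);

                // Only keep positions that lie on the NavMesh, i.e. are walkable
                NavMeshHit hit;
                if (NavMesh.SamplePosition(candidate, out hit, this.samplingDensity, NavMesh.AllAreas))
                {
                    c.sampledPositions.Add(hit.position);
                }
            }
        }
    }
}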

This class also inherits from ActionBase, but instead of looking for colliders, it samples a range of positions in a square around the unit and stores them in the context, for later use by actions and scorers. By storing the sampled positions in the context, any number of scorers and actions can run evaluations or execute using them, and the performance cost is minimal because the sampling happens only once, at a relatively low frequency (e.g. once per second). The sampled positions facilitate creating ‘MoveToBestPosition’ actions (ActionWithOptions<Vector3>), where individual option scorers control how each position is scored, as sketched below.
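The sketch below illustrates the idea. It assumes that ActionWithOptions<Vector3> provides a GetBest() helper which applies the option scorers attached in the editor, and that option scorers derive from OptionScorerBase<Vector3>, as in the Apex Utility AI samples; the NavMeshAgent-based movement and the scorer itself are examples only.

using Apex.AI;
using UnityEngine;
using UnityEngine.AI;

// Picks the highest scoring sampled position and uses it as the movement destination.
public sealed class MoveToBestPosition : ActionWithOptions<Vector3>
{
    public override void Execute(IAIContext context)
    {
        var c = (ExampleContext)context;
        if (c.sampledPositions.Count == 0)
        {
            return;
        }

        // Let the attached option scorers decide which sampled position is best
        var best = this.GetBest(c, c.sampledPositions);

        // How the unit actually moves is game specific; a NavMeshAgent is one option
        var agent = c.gameObject.GetComponent<NavMeshAgent>();
        if (agent != null)
        {
            agent.SetDestination(best);
        }
    }
}

// An example option scorer: prefers positions close to the unit.
public sealed class PreferNearbyPositions : OptionScorerBase<Vector3>
{
    public float maxScore = 20f;

    public override float Score(IAIContext context, Vector3 position)
    {
        var c = (ExampleContext)context;
        var distance = (position - c.gameObject.transform.position).magnitude;

        // Closer positions score higher, never below zero
        return Mathf.Max(0f, this.maxScore - distance);
    }
}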

Testing the AI

To test whether scanning and observations work as intended, it is a good idea to create a Visualizer component (see the ‘Visualizers’ tutorial) that visualizes all observations in memory through the context, optionally color coding each observation by its visibility or adding any other relevant information. In some cases it can also be beneficial to create a base class that facilitates writing context-based visualizers.
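A simple, illustrative sketch of such a component is shown below. It is a plain MonoBehaviour that draws a gizmo for each observation, color coded by visibility; how the component obtains a reference to the context is game specific and is assumed here to be assigned by the code that creates the context. The Apex-provided visualizer base classes described in the ‘Visualizers’ tutorial can be used instead.

using UnityEngine;

// Draws a gizmo for each observation in the context's memory,
// green for visible observations and red for non-visible ones.
public sealed class ObservationVisualizerComponent : MonoBehaviour
{
    // Assumed to be assigned by whatever code creates the context
    public ExampleContext context { get; set; }

    private void OnDrawGizmos()
    {
        if (this.context == null)
        {
            return;
        }

        var observations = this.context.observations;
        for (int i = 0; i < observations.Count; i++)
        {
            var obs = observations[i];

            // Color code by visibility at the time of observation
            Gizmos.color = obs.isVisible ? Color.green : Color.red;
            Gizmos.DrawWireSphere(obs.position, 0.5f);
        }
    }
}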

Extensions

The scanners can be extended to other senses, such as x-ray vision or smell.
The implementation in this tutorial only takes range into account when deciding whether the scanner picks up an observation. Other parameters, such as line-of-sight, light & darkness or the surroundings, can also be taken into account.

For line-of-sight, simple raycasts can be used to identify whether the observation is blocked by e.g. an obstacle. For light & darkness, the position of the observation identified by Physics.OverlapSphere() can be compared against e.g. a map containing information about light and darkness in the scene. For sensors registering noise or scent, the surroundings can be factored in, such as the general noise level at the sample point or the general smells in that particular area. For x-ray vision, certain layers can be ignored, while other layers for e.g. objects made of lead or other metals can be used to block vision. The evaluations can also be implemented using scorers, if a more intelligent evaluation is desired.
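As an illustration of the x-ray idea, a visibility check could use a dedicated blocking layer mask so that only colliders on those layers (e.g. lead or metal objects) block the line of sight, while all other layers are ignored; the layer setup in this sketch is an assumption.

using UnityEngine;

public static class XrayUtilities
{
    // Visible unless something on the blocking layers (e.g. lead/metal objects)
    // lies between the observer and the target; all other layers are ignored.
    public static bool IsVisibleToXray(GameObject self, Collider col, LayerMask xrayBlockingLayers)
    {
        var from = self.transform.position;
        var to = col.transform.position;

        return !Physics.Linecast(from, to, xrayBlockingLayers);
    }
}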

Conclusion

In this tutorial we saw how scanners for vision and hearing can be implemented using the Apex Utility AI. We saw how a simple scanning AI can be implemented, and how this can be extended to a more complex AI that keeps track of observations for use by higher-level AI reasoning. The principles around scanning can be extended to other senses, such as smell or touch. Finally, we saw how the scanning AI can be tested using visual debugging.