07 June 2025

How to implement a vision cone in Unity

A vision cone
A vision cone in video games is a mechanism primarily used in stealth or strategy games to simulate the field of vision of a non-playable character (NPC), such as an enemy or a guard. It is represented as a conical area originating from the NPC's eyes and extending forward at a certain angle, defining the range and direction in which the NPC can detect the player or other objects. If there are no obstacles in the way, every object inside the NPC's vision cone is visible to that NPC.

You can find some famous examples of this concept in games like Commandos or Metal Gear Solid. In Commandos, the enemies' vision cone is visible in the main window to show the surveillance area of enemy soldiers.

Vision cones in Commandos

In Metal Gear Solid, the vision cones are not shown in the main window but in the minimap in the upper right corner, allowing the player to plan their movements to navigate the scene without being detected.

Vision cones in Metal Gear Solid

In general, the vision cone is key to creating stealth mechanics, as it forces the player to plan movements, use the environment (such as cover or distractions), and manage time to avoid detection. It also adds realism, as it simulates how humans or creatures have a limited, non-omnidirectional field of vision.

In this article, we will see how we can implement a vision cone in Unity. The idea is to create a sensor that simulates this detection mode, so we can add it to our NPCs.

Main Characteristics of Vision Cones

  • Angle: The cone usually has the shape of a triangle or circular sector in 2D, or a three-dimensional cone in 3D games. The viewing angle (e.g., 60° or 90°) determines the width of the field that the NPC can "see". 
  • Distance: The cone has a maximum range, beyond which the NPC will not detect the player, even if they are within the viewing angle. 
You can add more embellishments, but a vision cone is defined only by these two factors.

In many games, the vision cone is graphically shown to the player (especially in top-down view or specific interfaces) to indicate where they should avoid being seen. It can change color (e.g., green for "no alert" and red for "alert"). In this article, I will not cover the visual part because it doesn't add much. I want to focus on implementing what the NPC can see and what it cannot, not on the representation of the cone.

In Unity, the component that implements the vision cone usually exposes these two characteristics in the inspector, as seen in the following screenshot:

Basic fields to implement a vision cone

In my case, detectionRange (line 15) implements the distance, while detectionSemiconeAngle (line 18) implements the angle.

In the case of the angle, my code is based on some premises that need to be considered. The first is that I used a [Range] attribute (line 17) to configure this field with a slider and to limit the possible values to the interval between 0 and 90 degrees. Although a person's real field of vision extends more than 90° to each side, in a game it would be too difficult to avoid a character with such a wide vision cone, so it is normal not to exceed 90°, with 45° being the most common value. The second premise is that I treat the angle as a semi-angle. That is, I measure it from the direction I consider frontal (Forward, in my case) to one side, and it is then mirrored to the other side to generate a symmetrical cone.

The two parameters that define a vision cone

In my example, I am working in 2D, so I have defined Forward as the local +Y axis, as seen in the following screenshot.

Definition of the frontal vector (Forward)

In line 20 of the code screenshot, I included one more field, layersToDetect, which we will use as a filter, as we will see a little later.
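Since the component only appears here as a screenshot, the following is a minimal sketch of what those declarations might look like. The class name and default values are my assumptions; the field names match the ones discussed above:

    using UnityEngine;

    public class VisionConeSensor : MonoBehaviour
    {
        [Tooltip("Maximum distance at which the cone can detect anything.")]
        [SerializeField] private float detectionRange = 5f;

        [Tooltip("Semi-angle of the cone, measured from Forward to one side.")]
        [Range(0, 90)]
        [SerializeField] private float detectionSemiconeAngle = 45f;

        [Tooltip("Only objects in these layers will be considered.")]
        [SerializeField] private LayerMask layersToDetect;

        // Working in 2D, I define Forward as the local +Y axis.
        public Vector2 Forward => transform.up;
    }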

How to Detect if a Position is Within the Vision Cone

With the distance and angle defined, we need to check whether the distance from the cone's vertex to the position being tested is less than the cone's range, and whether the angle between the relative position vector and the cone's Forward vector is less than the cone's semi-angle. In Unity, both are very easy to calculate.

Method to determine if a position is within the vision cone

The easiest way to calculate the distance is to use the Vector2.Distance() method, as I do in line 126 of the screenshot, passing the position of the vision cone (coinciding with its vertex) and the position to be checked as parameters.

For the angle, we can use the Vector2.Angle() method, as seen in line 127. This method returns the absolute angle between two vectors, so I pass Forward (line 128) on one side and the vector of the position to be checked, relative to the origin of the cone (line 129), on the other.

If both the distance and the angle are below the thresholds defined in the cone, then the checked position is within it.
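Assembled into code, the method might look like this sketch, based on the description above and reusing the fields from the previous snippet:

    public bool PositionIsInConeRange(Vector2 positionToCheck)
    {
        // Distance from the cone's vertex to the checked position.
        float distanceToPosition = Vector2.Distance(transform.position, positionToCheck);

        // Absolute angle between Forward and the relative position vector.
        float angleToPosition = Vector2.Angle(
            Forward,
            positionToCheck - (Vector2)transform.position);

        return distanceToPosition <= detectionRange &&
               angleToPosition <= detectionSemiconeAngle;
    }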

Object Filtering

We could leave the article here, and you would have a functional vision cone. You would just need to collect all potentially visible objects in the scene and pass their positions (one by one) to the PositionIsInConeRange() method defined earlier. This check would need to be done periodically, perhaps in the Update() or FixedUpdate() method.

However, this would not be very efficient as the scene could be huge and contain many objects. It would be much better if we could do a preliminary filtering, so we only pass the minimum and essential objects to PositionIsInConeRange().

Filtering by Layers

The first filtering we could apply is by layer. We can distribute the objects in the scene into different layers and configure the vision cone to only consider objects in a specific layer. That was the purpose of the layersToDetect field mentioned earlier. Expanded, this field looks like the following screenshot.

layersToDetect field of type LayerMask

This type of field allows multiple selection, so you can define that your cone analyzes several layers simultaneously.

Once you know which layers you want to analyze, discriminating whether an object is in one of those layers is apparently simple, as seen in the following screenshot.

How to know if an object is in one of the layers of a LayerMask

I say "apparently" simple because, although you can limit yourself to copy-pasting this code into yours, fully understanding it has its intricacies.

To begin with, a LayerMask has a value field that is a 32-bit integer in which each bit represents one of the 32 possible layers in a Unity scene. You can picture it as a string of 32 ones and zeros. If you include two layers in the LayerMask, its value field will have 2 bits set to one, and the rest will be zeros. The final integer value of the field will depend on the position of those ones although, in reality, that value is irrelevant: what matters is which positions hold a one.

On the other hand, all Unity objects have a layer field that contains an integer with values ranging from 0 to 31. This integer indicates the index of the layer to which the object belongs, within the LayerMask of all possible layers in the scene. For example, if an object's layer field has a value of 3, and that layer is included in a LayerMask, then that LayerMask will have a one in its bit at index 3.

To know if an object's layer is within the layers marked in a LayerMask, we need to make a comparison, using the object's layer as a mask. The trick is to generate an integer whose binary value is filled with zeros and put a one in the position corresponding to the layer to be checked. That integer is what we call the mask. We will compare that mask with the LayerMask, doing a binary AND, and see if the resulting value is different from zero. If it were zero, it would mean that the LayerMask did not include the layer we wanted to check.
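In code, the whole check fits in a couple of lines. This is a sketch of the method shown in the screenshot; the exact signature is an assumption:

    private bool ObjectIsInLayerMask(GameObject obj, LayerMask layerMask)
    {
        // A mask with a single 1 at the index of the object's layer...
        int objectLayerMask = 1 << obj.layer;
        // ...ANDed with the LayerMask: non-zero means the layer is included.
        return (layerMask.value & objectLayerMask) != 0;
    }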

It is better seen by representing the example from before. Look at the following screenshot.

Operation to check if a layer is contained within a LayerMask

In it, I have represented a LayerMask with two layers, the one at index 1 and the one at index 3 (they are the positions that have a one). Suppose now we want to check if the LayerMask contains layer 3.

What we have done is generate a mask with all zeros, except for the one at position 3, and we have done AND with the LayerMask. Doing AND with a mask makes the final result depend on the value that the LayerMask digits had in the positions marked by the mask. In this case, the mask points to position 3, so the final result will be zero or different from zero depending on whether position 3 of the LayerMask is zero or different from zero. In this case, it will be different from zero.
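Written compactly, the same example looks like this:

    LayerMask (layers 1 and 3):  ...01010  (decimal 10)
    Mask for layer 3 (1 << 3):   ...01000  (decimal 8)
    Binary AND of both:          ...01000  (non-zero, so layer 3 is included)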

Filtering by Proximity

With layer filtering, we will avoid calling PositionIsInConeRange() for objects that are in layers we are not interested in. That will improve performance, but we can improve it further.

Another preliminary filtering we can do is to discard objects that are too far from the cone to have a chance of being in it.

As seen in the screenshot, every vision cone can be enclosed in a bounding box.

Bounding box of a vision cone

If that box were a volumetric sensor (in Unity terms: a collider in trigger mode), we could pass to PositionIsInConeRange() only the objects that entered the volumetric sensor and were in the layers we were interested in.

Method to process objects that entered the box

In the code screenshot, OnObjectEnteredCone() is the event handler that runs when an object enters the box. In my case, the trigger-mode collider has an associated script that emits a UnityEvent when its OnTriggerEnter2D() fires. What I have done is subscribe OnObjectEnteredCone() to that UnityEvent.

Starting from there, the code in the screenshot is simple. In line 159, we check if the object is in one of the layers we are interested in, using the ObjectIsInLayerMask() method we analyzed earlier. If affirmative, in line 161, we check if the object is within the area covered by the vision cone, using the PositionIsInConeRange() method we saw at the beginning. And finally, if both checks are positive, the object is added to the list of objects detected by the vision cone (line 164), and an event is emitted so that the scripts using the vision cone know that it has made a new detection.
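Reduced to its essentials, that handler might look like the following sketch, where detectedObjects and the objectDetected UnityEvent are assumed fields of the cone component:

    // Requires: using System.Collections.Generic; using UnityEngine.Events;
    // Assumed fields of the cone component.
    private readonly List<GameObject> detectedObjects = new List<GameObject>();
    [SerializeField] private UnityEvent<GameObject> objectDetected;

    public void OnObjectEnteredCone(Collider2D otherCollider)
    {
        GameObject enteredObject = otherCollider.gameObject;
        // Layer filter first, then the precise cone check.
        if (ObjectIsInLayerMask(enteredObject, layersToDetect) &&
            PositionIsInConeRange(enteredObject.transform.position))
        {
            detectedObjects.Add(enteredObject);
            objectDetected.Invoke(enteredObject);
        }
    }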

As you can imagine, you also need a reciprocal method to process objects that leave the detection box, as well as another to handle objects that remain inside the box but have left the area covered by the cone. It is enough to link the corresponding handlers to the OnTriggerExit2D() and OnTriggerStay2D() events of the detection box's trigger collider script. None of these cases is especially complex once the OnObjectEnteredCone() code is understood, but I will show you my implementation of the check for an object that remains in the detection area.

Check for an object that remains in the detection area
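Such a check could be sketched like this: an object still inside the box may have entered or left the cone area since the last physics step. Here objectLost is an assumed counterpart event to objectDetected:

    public void OnObjectStayedInCone(Collider2D otherCollider)
    {
        GameObject stayedObject = otherCollider.gameObject;
        if (!ObjectIsInLayerMask(stayedObject, layersToDetect)) return;

        bool inCone = PositionIsInConeRange(stayedObject.transform.position);
        bool alreadyDetected = detectedObjects.Contains(stayedObject);

        if (inCone && !alreadyDetected)
        {   // It entered the cone area while staying inside the box.
            detectedObjects.Add(stayedObject);
            objectDetected.Invoke(stayedObject);
        }
        else if (!inCone && alreadyDetected)
        {   // It left the cone area but is still inside the box.
            detectedObjects.Remove(stayedObject);
            objectLost.Invoke(stayedObject); // Assumed "lost" event.
        }
    }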

At this point, you are probably wondering how to dimension the box to fit the vision cone.

If you look at the screenshot I showed earlier, with the box enclosing the vision cone, you will see that the height of the box coincides with the parameter I called detectionRange.

What has a bit more intricacy is the width of the box, as we have to resort to basic trigonometry. Look at the screenshot:

Some trigonometry to calculate the width of the box

Starting from the screenshot, to find the width of the detection box, we need to calculate the length of B, which will correspond to half of that width.

B is one of the sides of the rectangle created using detectionRange as the diagonal. Every rectangle is composed of two right triangles whose hypotenuse will be precisely detectionRange. If we look at the upper right triangle (the red area), and review the trigonometry we learned in school, we will agree that the sine of detectionSemiConeAngle is equal to B divided by detectionRange. Therefore, we can calculate B as the product of detectionRange and the sine of detectionSemiConeAngle; with the total width of the detection box being twice B.

Translated into code, the dimensions of the detection box would be calculated as follows:

Calculation of the dimensions of the detection box
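In essence, the calculation reduces to the following sketch (the method name is mine; remember that Mathf.Sin() expects radians, so the angle configured in degrees must be converted first):

    private Vector2 GetDetectionBoxSize()
    {
        float boxHeight = detectionRange;
        float boxWidth = 2f * detectionRange *
                         Mathf.Sin(detectionSemiconeAngle * Mathf.Deg2Rad);
        return new Vector2(boxWidth, boxHeight);
    }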

You can do this calculation every time you change the vision cone parameters and manually modify the dimensions of the trigger collider with the result; but I preferred to do it automatically by linking the collider to a BoxRangeManager component I implemented, which dynamically resizes the collider as you change the Range and Width fields of that BoxRangeManager. The implementation of that component is based on what I explained in my article on "Volumetric sensors with dynamic dimensions in Unity", so I will not repeat it here.

Conclusion

With this, you have everything you need to create a simple, efficient vision cone. My advice is to create a generic component that you reuse in your different projects. It is such a common element that it doesn't make sense to implement it from scratch every time. This should be one of the elements of your personal library of reusable components.

I hope you found this article interesting and that it helps you create exciting game mechanics.

28 May 2025

How to execute methods from the Godot inspector

Button at the inspector
A few days ago, I published an article explaining how to activate method execution from the inspector in Unity. We saw the possibility of using an attribute that generated an entry in an editor menu, and also how to create a custom editor so that the component inspector would show a button to activate the method.

Can the same be done in Godot? Well, until very recently, no. There wasn't an easy way to add a button to the inspector without developing a plugin, and even then the result was debatable. In terms of GUI customization, Godot still has a long way to go to be on par with Unity.

The [ExportToolButton] attribute 

However, Godot recently added a new attribute, @export_tool_button, and its equivalent in C# [ExportToolButton]. This attribute allows exporting a Callable field to the inspector and displaying it as a button. When the button is pressed, the method pointed to by the Callable is activated.

Let's look at an example in Godot C#. Suppose we have a ResetBoxManager method in our script:

The method we want to activate by pressing the button

What the method does doesn't matter; it's just an example. I show a screenshot of its content so you can see that there is nothing special about the method's declaration or implementation. And now the button: to declare it, you just have to decorate a Callable field with an [ExportToolButton] attribute.

Button declaration with [ExportToolButton]

Between the parentheses of the attribute goes the text we want the button to display. The screenshot also shows how to initialize the Callable: I called the field ResetButton (line 107) and initialized it with a new Callable instance that points, via its parameters, to the ResetBoxManager method of that same class (hence the "this"), as can be seen in line 108.
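Line numbers aside, the declaration reduces to something like this sketch. The class name and button text are my own, and note the [Tool] attribute: as far as I know, the script must be a tool script for the method to run inside the editor:

    using Godot;

    [Tool] // Needed so the method can run inside the editor.
    public partial class BoxManager : Node
    {
        [ExportToolButton("Reset box manager")]
        public Callable ResetButton => new Callable(this, MethodName.ResetBoxManager);

        private void ResetBoxManager()
        {
            GD.Print("Box manager reset."); // Whatever the reset needs to do.
        }
    }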

With that, your inspector will show the button in the place that would have corresponded to the field, and when you press it, the linked method will be activated. You have a screenshot of how it looks in the image that opens this article.

Conclusion 

As you can see, the [ExportToolButton] attribute makes it really easy to add buttons to your inspectors. It combines the simplicity of the [ContextMenu] attribute we saw in Unity, with the visual appearance of its custom editors. With this, you can take a step forward in providing your inspectors with functionality that speeds up development and facilitates debugging your projects.

25 May 2025

How to execute methods from the Unity inspector

Cover image of the article
Unity is one of the most popular tools for game development, thanks to its flexibility and intuitive visual environment. One of the most useful features of Unity is its inspector. The inspector is a window in Unity that shows the properties of the components of a selected GameObject and allows us to adjust them. For example, if a GameObject has an associated script, the Inspector will show the public variables defined in that script, allowing you to modify them directly in the editor. However, what many novice developers do not know is that you can also configure the inspector to execute specific methods of your scripts, either to test functionalities or to configure more efficient workflows. In this article, I will explain how to execute methods from the Unity inspector in a simple way, a technique that can save you time and facilitate the configuration and debugging of your project.

The [ContextMenu] attribute 

A direct way to execute methods from the Inspector is by using the [ContextMenu] attribute. This attribute adds an entry at the end of the script's context menu in the inspector, allowing you to invoke methods with a single click. The context menu is the one that appears when you right-click on the component's name bar or the three dots in the upper right corner of the script in the inspector.

Context menu of a script in the inspector

If you decorate the method you want to activate with this attribute: 

Example of using the attribute
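For example, a minimal sketch (the menu text between the parentheses is whatever you want the entry to show):

    [ContextMenu("Reset box manager")]
    private void ResetBoxManager()
    {
        // Anything can go here; the attribute only changes how it is invoked.
        Debug.Log("Box manager reset.");
    }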

The entry will appear at the end of the context menu, with the text you have included in parentheses in the attribute. 

Resulting context menu

Creating a custom editor 

The previous option is the quickest to implement, but it may not be the most convenient as it involves two clicks to execute the method in question.

Another alternative, more visual, but more laborious, is to include a button in the inspector to activate the method. To do this, you have to create your own editor that shows a custom view of the component in the editor.

Let's analyze the simplest possible example. Suppose we have a MonoBehaviour script called BoxRangeManager, with a method ResetBoxManager() (the one from the previous screenshot) that we want to activate. To create an editor that shows a custom view of BoxRangeManager, you have to create an Editor folder inside the main Assets folder. The Editor folder is where you should put all scripts dedicated to customizing the Unity editor. It is very important that these scripts do not end up in the usual Scripts folder; otherwise, you may run into serious compilation problems when building the final executable of the game. Remember: game scripts in the Scripts folder, and editor customization scripts in the Editor folder.

Continuing with our example, once the Editor folder is created, you have to create a script like the one in the following screenshot:

The code of our custom editor

The first notable thing about the previous code is that it imports the UnityEditor namespace (line 2). That is the first sign that your script should be in the Editor folder or you will have problems when building your executable package. Including the code in its own namespace (line 5) is a good practice, but not essential.

Let's get to the point: for a class to be used to implement a custom editor, it must inherit from UnityEditor.Editor (line 8); and for Unity to know which MonoBehaviour to use the custom editor with, you have to identify it in a [CustomEditor] tag that decorates the class (line 7).

From there, to customize how BoxRangeManager is displayed in the inspector, you have to reimplement the OnInspectorGUI() method of UnityEditor.Editor (line 10).

Our example is the simplest possible. We just want to show the default inspector and add a button at the end of it that activates the ResetBoxManager() method when pressed. So the first thing we do is draw the inspector as the default inspector would have done (line 13). Then we add a space so as not to stick the button too close to everything above (line 16). Finally, we add our button on line 22, passing as parameters the text we want to appear on the button and the height we want it to have.

The button returns true when pressed. Thanks to that, in lines 24 and 25 we can define the reaction to a press: first we execute the ResetBoxManager() method we wanted (line 24), and then we call EditorUtility.SetDirty() (line 25) to notify the editor that we have made changes through the component's inspector and force it to redraw.

Note that the custom editor holds a reference to the MonoBehaviour whose inspector it shows in its target field. You just have to cast it to the class you know you are showing (line 19) to get access to its public fields and methods.
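Gathering everything the previous paragraphs describe, the editor script would look roughly like this sketch (the namespace and exact texts are assumptions):

    using UnityEngine;
    using UnityEditor; // First sign this file belongs in the Editor folder.

    namespace Editors
    {
        [CustomEditor(typeof(BoxRangeManager))]
        public class BoxRangeManagerEditor : UnityEditor.Editor
        {
            public override void OnInspectorGUI()
            {
                // Draw the inspector as the default one would.
                DrawDefaultInspector();

                // A little space so the button doesn't stick to the fields.
                EditorGUILayout.Space();

                // "target" references the inspected component; cast it to
                // access its public fields and methods.
                BoxRangeManager boxRangeManager = (BoxRangeManager)target;

                // GUILayout.Button() returns true when pressed.
                if (GUILayout.Button("Reset box manager", GUILayout.Height(30)))
                {
                    boxRangeManager.ResetBoxManager();
                    EditorUtility.SetDirty(boxRangeManager);
                }
            }
        }
    }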

And that's it. You don't have to do anything else. Once Unity reloads the domain, the custom editor with our new and shiny button will be displayed.

Appearance of our custom editor

Conclusion 

Executing methods from the Unity Inspector is a powerful technique to streamline the development and debugging of your projects. Whether using the [ContextMenu] attribute for quick tests or with custom editors, these tools allow you to interact with your code in a more dynamic and visual way.

Experiment with these options and find out which one best suits your workflow. The Unity Inspector is much more than just a property editor!

18 May 2025

Volumetric sensors with dynamic dimensions in Unity

Creating a volumetric sensor in Unity is simple: you add a Collider component to a GameObject, configure it as a trigger, shape the collider to define the sensor's range, and add a script to the GameObject that implements the OnTriggerEnter and OnTriggerExit methods. The collider will call these methods when another collider enters or exits its range, respectively.
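In its most basic 2D form (the one used in the rest of this article), such a sensor script could be as small as this sketch. The class name is mine, and remember that at least one of the two colliding objects also needs a Rigidbody2D for trigger events to fire:

    using UnityEngine;

    public class VolumetricSensor : MonoBehaviour
    {
        private void OnTriggerEnter2D(Collider2D other)
        {
            Debug.Log($"{other.name} entered the sensor range.");
        }

        private void OnTriggerExit2D(Collider2D other)
        {
            Debug.Log($"{other.name} left the sensor range.");
        }
    }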

That's all, and it usually suffices in most cases, but there are others where we need to define the shape of the volumetric sensor in real-time, during the game's execution. Imagine, for example, that you want to equip a mobile agent with a volumetric sensor to detect obstacles that may interfere with its path. Normally, you would place a volumetric sensor in the shape of a box in front of the agent, and this box would extend forward along a distance proportional to the agent's speed. If the box is too short, obstacles may be detected too late for the agent to avoid a collision. If the box is too long, the agent may react to obstacles that are too far away, which would be unrealistic.

A mobile agent with a volumetric sensor to detect obstacles

If the agent always maintains a constant speed, it suffices to set the sensor's size manually when designing the agent. But if the agent varies its speed depending on its behavior, a single sensor size may not suit all possible speeds. In such cases, we could equip the agent with several sensors of different sizes and activate one or another depending on the agent's current speed range. However, that solution would be complex and hard to scale if we ever changed the agent's speed range. It is much better to modify the sensor's size during execution and adapt it to the agent's speed at each moment.

I will explain how I do it with a BoxCollider2D. I'm not saying it's the best way to do it, but it's the best way I've found so far. You will find it easy to adapt my examples to other collider shapes and use them as a starting point to find the mechanism that best suits your case.

A BoxCollider2D allows changing its size through its size property, a Vector2 whose first component defines the width of the box and the second its height. We can change the value of this property at any time, and the collider will adapt its shape accordingly. The problem is that the box will remain centered, growing in both directions of the modified component. In a case like the one in the previous capture, what we want is for the collider to grow forward when we increase the speed, not for it to also grow toward the rear of the agent.

The way to solve it is to modify the dimension we are interested in, in the case of the agent in the capture it will be the height of the box, and at the same time displace the box, manipulating its offset, so that it seems to grow only on one side. In the example at hand, to make the sensor grow, we would increase the height of the box and at the same time displace its offset upwards to keep the lower side of the box in the same position and make it seem to grow only on the upper side.

We want the collider to grow only on one side

Based on the above, I will show you the code I use to generalize the different possible cases for a box. For a BoxCollider, the possible size modifications are those of the enum in the following capture:

Possible growth directions for a box

The options are self-explanatory: "Up" means we want the box to grow only on its upper side, "Down" on the lower side, "Left" on the left side, and "Right" on the right side. "Symmetric" is the default behavior, in which changing the height grows the box on both its upper and lower sides.
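As a sketch, the enum would be declared like this:

    public enum GrowDirection
    {
        Symmetric,
        Up,
        Down,
        Left,
        Right
    }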

The point is that when you increase the size of a box in one of its dimensions, the increase is evenly distributed between the two sides of that dimension. For example, if you increase the height of a box by 2 units, the upper side rises by one unit and the lower side drops by another. Therefore, if you want the box to appear to grow only on the upper side, you must keep the lower side in place by raising the whole box by one unit.

The way to generalize it is that the box must move half the size increase in the growth direction we have marked.

How to calculate the box's movement vector

The box's movement vector can be obtained from a method like the previous one (GetGrowOffsetVector). In the case of our example, where we want the box to appear to grow on the upper side and the lower side to remain in its position, the growDirection would be "Up," so the method would return a Vector2 with the values (0, 0.5). Note that I have defined OffsetBias as a constant value of 0.5. This vector will then be multiplied by the box's growth vector, which will give us the box's displacement.
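A sketch of such a method:

    private const float OffsetBias = 0.5f;

    // Vector that, multiplied by the size increase, displaces the box so
    // that only the chosen side appears to grow.
    private Vector2 GetGrowOffsetVector(GrowDirection growDirection)
    {
        switch (growDirection)
        {
            case GrowDirection.Up: return new Vector2(0f, OffsetBias);
            case GrowDirection.Down: return new Vector2(0f, -OffsetBias);
            case GrowDirection.Left: return new Vector2(-OffsetBias, 0f);
            case GrowDirection.Right: return new Vector2(OffsetBias, 0f);
            default: return Vector2.zero; // Symmetric: no displacement.
        }
    }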

The growth vector is the result of subtracting the initial box size from the new one.

Calculation of the growth vector
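In code, that subtraction is a one-liner (parameter names are mine):

    private Vector2 GetGrowVector(Vector2 newSize, Vector2 initialSize)
    {
        return newSize - initialSize;
    }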

Therefore, every time we want to change the box's size, we will have to calculate the growth vector and multiply it by the box's movement vector to obtain the new box offset so that it appears to have moved only on one side.

Method to change the box's size

The SetBoxSize() method, in which I implement the above, couldn't be simpler. In lines 153 and 154, I reset the box to its initial size (1,1) and its offset to (0,0). Since the new size is set immediately afterward, the reset is instantaneous and not noticeable.

Then, in line 155, I execute the GetGrowOffsetVector() method to obtain the box's movement vector. And in line 156, I obtain the growth vector by calling the GetGrowVector() method. Both vectors are multiplied in line 158 to give the new box offset. Note in that same line, 158, that I use the initialOffset field (of type Vector2) to define a default box offset. This will be the offset the box will have when no movement has been applied.

Also note that I use the boxCollider field to manipulate the collider's properties. This field holds a reference to the collider. You can obtain this reference either by exposing the field in the inspector and dragging the collider onto it, or with a call to GetComponent() in the script's Awake().
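Putting the pieces together, SetBoxSize() could be sketched like this, where boxCollider, initialOffset and growDirection are assumed fields of the component (note that Unity's Vector2 multiplication is component-wise):

    private void SetBoxSize(Vector2 newSize)
    {
        // Reset to the baseline size (1,1) and offset (0,0); the new size is
        // applied immediately afterward, so the reset is not noticeable.
        boxCollider.size = Vector2.one;
        boxCollider.offset = Vector2.zero;

        Vector2 growOffsetVector = GetGrowOffsetVector(growDirection);
        Vector2 growVector = GetGrowVector(newSize, Vector2.one);

        boxCollider.size = newSize;
        boxCollider.offset = initialOffset + growVector * growOffsetVector;
    }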

If you include the previous methods in a script and make it expose public methods that end up calling SetBoxSize(), you will be able to manipulate the collider's size in real-time from other game scripts.

And with this, you have everything. It's quite simple once you have it clear, but if you start from scratch, it can take a while to figure out how to do it. I hope this article has saved you that time and that you found it interesting.

15 May 2025

Course "Beat'em up Godot tutorial"

Some time ago, I completed the "Beat'em up Godot tutorial," but I hadn't found the time to write about it until now.

This is a 20-episode video tutorial available for free on YouTube, taking roughly 10 hours to complete. In it, the author guides you through the process of creating a beat'em up game, similar to classics like Streets of Rage or Double Dragon. As I understand, the game was originally developed for a Game Jam and turned out so polished that it was an ideal candidate for a course.

The content starts from the basics, making it perfectly suitable for those looking to get started with Godot. However, it also dives into more advanced concepts after the initial learning curve, so it will also appeal to intermediate learners or those simply looking for a Godot refresher.

It begins with fundamental concepts like shaping characters, their animations, and movements. Then it covers the damage system, based on hitboxes, which I found quite interesting as it’s a much simpler and more elegant system than the one I had used before. It also implements state machines for characters, built from scratch through code. While this was interesting, I felt it was a bit like reinventing the wheel: I would have preferred if the author had used Godot’s built-in state machine system via its Animation Tree.

After that, the course moves on to implementing weapons, both melee and throwable, as well as other collectible items like food and health kits. I also found the way the author sets up the levels and the various enemy spawn points to be very interesting and original. Everything is highly modular and reusable.

As for the GUI, music, and sound, it covers the basics, but a retro-style game like this doesn’t really demand much more.

Having taken several courses, read books, and worked on some projects, I can now clearly distinguish good practices and clean, organized development—and this course delivers. I did feel that using C# might have made the code even more modular, but the author makes good use of GDScript within its limitations. Godot’s node system inherently enforces high modularity and reusability, and combined with the lightweight editor, it makes game development quite straightforward, with very few moments of waiting or writing repetitive code. Since the game’s resources are publicly available, I plan to implement it in Unity to see how it compares to Godot.

Overall, the course is very engaging, and you get a high return for the time invested. In summary: a highly recommended course.

30 April 2025

2D Navigation in Unity

Someone reading a map in front of a maze
A couple of months ago, I published an article on how to use the 2D navigation system in Godot. It's time to write a similar article for Unity. I'm not going to complicate things, so I'm going to replicate the structure of that article, but adapting the instructions to Unity.

Remember that the starting premise is that we want to have NPC characters that are able to navigate the scene to go from one point to another. Let's start by seeing how to create a scene that can be processed by the navigation system.

Creating a navigable scene 

As a scene, any set of sprites could work, as long as they have associated colliders to define which ones pose an obstacle to the NPC. Since that wouldn't be very interesting, we're going to opt for a somewhat more complex case, but also quite common in a 2D game. Let's assume that our scene is built using tilemaps. The use of tilemaps deserves its own article and you will find plenty of them on the Internet, so here I will assume that you already know how to use them.

In our example, we will assume a scene built with three levels of tilemaps: one for the perimeter walls of the scene (Tilemap Wall), another for the internal obstacles of the scene (Tilemap Obstacles), and another for the ground tilemap (Tilemap Ground).

Hierarchy of tilemaps in the example

Courtyard is the parent GameObject of the tilemaps and the one that contains the Grid component with the tilemap grid. Tilemap Ground only contains the visual components Tilemap and TilemapRenderer to display the ground tiles. The Wall and Obstacles tilemaps also have these components, but they incorporate two additional ones: a Tilemap Collider 2D and a Composite Collider 2D. The Tilemap Collider 2D component takes into account the collider of each tile. This collider is defined in the sprite editor for each of the sprites used as tiles. The problem with Tilemap Collider 2D is that it handles the collider of each tile individually, which is very inefficient given the number of tiles that any tilemap-based scene accumulates. For this reason, it is very common to pair the Tilemap Collider 2D with a Composite Collider 2D. This latter component merges all the tile colliders into a single combined collider that is much lighter for the engine to handle.

When using these components in your tilemaps, I advise you to do two things:

  • Set the "Composite Operation" attribute of the Tilemap Collider 2D to the value Merge. This will tell the Composite Collider 2D component that the operation it needs to perform is to merge all the individual colliders into one. 
  • In the Composite Collider 2D, set the "Geometry Type" attribute to the value Polygons. If you leave it at the default value, Outlines, the generated collider will be hollow, meaning it will only have edges, so some collider detection operations could fail, as I explained in a previous article.

Creating a 2D NavMesh 

A NavMesh is a component that analyzes the scene and generates a graph based on it. This graph is used by the pathfinding algorithm to guide the NPC. Creating a NavMesh in Unity 3D is very simple.

The problem is that in 2D, the components that work in 3D do not work. There are no warnings or error messages: you simply follow the instructions in the documentation and the NavMesh is not generated if you are in 2D. In the end, after much searching on the Internet, I have come to the conclusion that Unity's native NavMesh implementation is broken for 2D; it only seems to work in 3D.

From what I've seen, everyone ends up using an open-source package, external to Unity, called NavMeshPlus. This package implements a series of components very similar to Unity's native ones, but they do work when generating a 2D NavMesh.

The previous link takes you to the GitHub page of the package, where it explains how to install it. There are several ways to do it, the easiest perhaps is to add the URL of the repository using the "Install package from git URL..." option of the "+" icon in Unity's package manager. Once you do this and the package index is refreshed, you will be able to install NavMeshPlus, as well as its subsequent updates. 

Option to add a git repository to Unity's package manager.

Once you have installed NavMeshPlus, you need to follow these steps: 

  1. Create an empty GameObject in the scene. It should not depend on any other GameObject. 
  2. Add a Navigation Surface component to the previous GameObject. Make sure to use the NavMeshPlus component and not Unity's native one. My advice is to pay attention to the identifying icons of the components shown in the screenshots and make sure that the components you use have the same icons. 
  3. You also need to add the Navigation CollectSources2d component. In that same component, you need to press the "Rotate Surface to XY" button; that's why it's important that these components are installed in an empty GameObject that doesn't depend on any other. If you did it correctly, it will seem like nothing happens. In my case, I made a mistake and added the components to the Courtyard GameObject mentioned earlier, and when I pressed the button, the entire scene rotated. So be very careful. 
  4. Then you need to add a Navigation Modifier component to each of the elements in the scene. In my case, I added it to each of the GameObjects of the tilemaps seen in the screenshot with the hierarchy of tilemaps in the example. These components will help us discriminate which tilemaps define areas that can be traversed and which tilemaps define obstacles. 
  5. Finally, in the Navigation Surface component, you can press the "Bake" button to generate the NavMesh. 

Let's examine each of the previous steps in more detail.

The GameObject where I placed the two previous components hangs directly from the root of the hierarchy. I didn't give it much thought and called it NavMesh2D. In the following screenshot, you can see the components it includes and their configuration.

Configuration of the main components of NavMeshPlus

As you can see in the previous figure, the main purpose of the NavigationSurface component is to define which layers will be taken into account to build our NavMesh ("Include Layers"). I suppose that if you have heavily populated layers, you might want to limit the "Include Layers" parameter to only the layers containing scene elements. In my case, the scene was so simple that even including all layers, I noticed no slowdown when creating the NavMesh. Another customization I made was to set the "Use Geometry" parameter to "Physics Colliders". This value performs better when using tilemaps, since simpler geometric shapes are used to represent the scene. The "Render Meshes" option creates a much more detailed NavMesh, but a less optimized one, especially when using tilemaps.

If you're wondering how to model the physical dimensions of the navigation agent (its radius and height, for example), although they are shown at the top of the "Navigation Surface" component, they are not configured there but in the Navigation tab, which is also visible in the previous screenshot. If you don't see it in your editor, you can open it in Window --> AI --> Navigation.

Navigation tab

Finally, the Navigation Modifier components allow us to distinguish tilemaps that contain obstacles from tilemaps that contain walkable areas. To do this, we need to check the "Override Area" box and then define the type of area this tilemap contains. For example, the GameObjects of the Wall and Obstacles tilemaps have the Navigation Modifier component from the following screenshot:

Navigation Modifier applied to tilemaps with obstacles

By marking the area as "Not Walkable," we are saying that what this tilemap paints are obstacles. If it were a walkable area, like the Ground tilemap, we would set it to Walkable.

Once all the Navigation Modifiers are configured, we can create our NavMesh by pressing the "Bake" button on the Navigation Surface component. To see it, you need to click on the compass icon in the lower toolbar (it's the second from the right in the toolbar) of the scene tab. This will open a pop-up panel on the right where you can check the "Show NavMesh" box. If the NavMesh has been generated correctly, it will appear in the scene tab, overlaying the scene. All areas marked in blue will be walkable by our NPC.

NavMesh visualization


Using the 2D NavMesh 

Once the 2D NavMesh is created, our NPCs should be able to read it.

In the case of Godot, this meant including a NavigationAgent2D node in the NPCs. From there, you would tell that node where you wanted to go, and it would calculate the route and return the location of the different waypoints along it. The rest of the agent's nodes were responsible for moving it to each location.

Unity also has a NavMeshAgent component, but the problem is that it is not passive like Godot's; that is, it doesn't just provide the waypoints of the route but also moves the agent to them. This can be very convenient when the movement is simple: with a single component you cover two needs, guiding the movement and executing it. However, it is not a good architecture because it does not respect the principle of separation of responsibilities, which states that each component should focus on a single task. In my project, movement is heavily customized; it is not homogeneous but changes along a route based on multiple factors, a level of customization that exceeds what Unity's NavMeshAgent allows. If Unity had respected the principle of separation of responsibilities, as Godot does in this case, it would have split route generation and agent movement into two separate components. The route generator could then have been used as is, while the movement component could have been wrapped in other components to customize it appropriately.

Fortunately, there is a little-publicized way to query the 2D NavMesh to get routes without needing a NavMeshAgent, which allows replicating Godot's functionality. I will focus this article on that side because it is what I have done in my project. If you are interested in how to use the NavMeshAgent, I recommend consulting Unity's documentation, which explains in great detail how to use it.

Querying the NavMesh to get a route between two points

In the previous screenshot, I have provided an example of how to perform these queries.

The key is in the call to the NavMesh.CalculatePath() method on line 99. This method takes 4 parameters: 

  • Starting point: Generally, it is the NPC's current position, so I passed it directly as transform.position.
  • Destination point: In this case, I passed a global variable of the NPC where the location of its target is stored. 
  • A NavMesh area filter: In complex cases, you can have your NavMesh divided into areas. This bitmask allows you to define which areas you want to restrict the query to. In a simple case like this, it is normal to pass NavMesh.AllAreas to consider all areas. 
  • An output variable of type AI.NavMeshPath: this is the variable where the resulting route to the destination point will be deposited. I passed a private global variable of the NPC. 

The call to CalculatePath() is synchronous, meaning the game will pause for a moment until CalculatePath() calculates the route. For small routes and occasional updates, the interruption will not affect the game's performance; but if you spend a lot of time calculating many long routes, you will find that performance starts to suffer. In those cases, it is best to divide the journeys into several shorter segments that are lighter to calculate. In the case of formations, instead of having each member of the formation calculate their route, it is more efficient for only the "commander" to calculate the route and the rest to follow while maintaining the formation.

The output variable of type AI.NavMeshPath, where CalculatePath() dumps the calculated route, could still be passed to a NavMeshAgent through its SetPath() method. However, I preferred to do without the NavMeshAgent, so I processed the output variable in the UpdatePathToTarget() method on line 107 to make it easier to use. An AI.NavMeshPath variable has the "corners" field where it stores an array with the locations of the different waypoints of the route. These locations are three-dimensional (Vector3), while in my project I work with two-dimensional points (Vector2), which is why in the UpdatePathToTarget() method I go through all the points in the "corners" field (line 111) and convert them to elements of a Vector2 array (line 113). This array is then used to direct my movement components to each of the waypoints of the route.
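Condensed into a single sketch, the query and the conversion described above would look something like this (currentTarget and the other field names are my assumptions):

    // Requires: using UnityEngine.AI;
    private NavMeshPath navMeshPath;
    private Vector2[] pathWaypoints;

    private void UpdatePathToTarget(Vector3 currentTarget)
    {
        if (navMeshPath == null) navMeshPath = new NavMeshPath();

        // Synchronous query: the route is ready as soon as the call returns.
        if (NavMesh.CalculatePath(
                transform.position, // Starting point.
                currentTarget,      // Destination point.
                NavMesh.AllAreas,   // Area filter.
                navMeshPath))       // Output route.
        {
            // Convert the 3D corners to the 2D points my movement code uses.
            pathWaypoints = new Vector2[navMeshPath.corners.Length];
            for (int i = 0; i < navMeshPath.corners.Length; i++)
            {
                pathWaypoints[i] = navMeshPath.corners[i];
            }
        }
    }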

Conclusion 

Done, with this you have everything you need to make your NPCs move intelligently through the scene, navigating to reach the target. At a high level, it is true that the concepts are very similar between Godot and Unity, but the devil is in the details. When you get down to the implementation level, you will find the nuances and differences that we have analyzed in this article, but with the instructions I have given you, the result you obtain in Unity and Godot should be similar.

24 April 2025

How to detect obstacles in Unity 2D

In games it is quite common to need to determine if a point on the stage is free of obstacles in order to place a character or another game element. Think, for example, of an RTS: when building a building, you have to choose a free section of land, but how can your code know if a site already has a building or some other type of object?

In a 3D game, the most common solution is to project a ray from the camera's viewpoint, passing through the point where the mouse cursor is located on the screen plane, until it hits a collider. If the collider is the ground, that point is free, and if not, there is an obstacle.

Of course, if the object we want to place is larger than a point, projecting a simple ray falls short. Imagine we want to place a rectangular building, and the point where its center would go is free, but the corner area is not. Fortunately, for those cases, Unity allows us to project complex shapes beyond a mere point. For example, the SphereCast methods move an invisible sphere along a line, returning the first collider it hits. Another method, BoxCast, would solve the problem of the rectangular building by projecting a box with a rectangular base along a line. We would only have to make that projection along a vertical line down to the ground position we want to check.

In 2D, there are also projection methods, BoxCast and CircleCast, but they only work when the projection takes place in the XY plane (the screen plane). That is, they are equivalent to moving a box or a circle in a straight line along the screen to see if they touch a collider. Of course, that has its utility. Imagine you are making a top-down game and want to check if the character will be able to pass through an opening in a wall. In that case, you would only need to do a CircleCast of a circle, with a diameter equal to the width of our character's shoulders, projected through the opening to see if the circle touches the wall's colliders.

A CircleCast, projecting a circle along a vector.

But what happens when you have to project on the Z-axis in a 2D game? For example, for a 2D case equivalent to the 3D example we mentioned earlier. In that case, neither BoxCast nor CircleCast would work because those methods define the projection vector using a Vector2 parameter, limited to the XY plane. In those cases, a different family of methods is used: the "Overlap" methods.

The Overlap methods place a geometric shape at a specific point in 2D space and, if the shape overlaps with any collider, they return it. Like projections, there are methods specialized in different geometric shapes: OverlapBox, OverlapCapsule, and OverlapCircle, among others.

Let's suppose a case like the following figure. We want to know if a shape the size of the red circle would touch any obstacle (in black) if placed at the point marked in the figure.

Example of using OverlapCircle.

In that case, we would use OverlapCircle to "draw" an invisible circle at that point (the circle seen in the figure is just a gizmo) and check if the method returns any collider. If not, it would mean that the chosen site is free of obstacles.

A method calling OverlapCircle could be as simple as the following:

Call to OverlapCircle

The method in the figure returns true if there is no collider within a radius (MinimumCleanRadius) of the candidateHidingPoint position. If there is any collider, the method returns false. For that, the IsCleanHidingPoint method simply calls OverlapCircle, passing the following parameters:

  • candidateHidingPoint (line 224): A Vector2 with the position of the center of the circle to be drawn. 
  • MinimumCleanRadius (line 225): A float with the circle's radius. 
  • NotEmpyGroundLayers (line 226): A LayerMask with the layers of the colliders we want to detect. It serves to filter out colliders we don't want to detect. OverlapCircle will discard a collider that is not in one of the layers we passed in the LayerMask. 

If the area is free of colliders, OverlapCircle will return null. If there are any, it will return the first collider it finds. If you are interested in getting a list of all the colliders that might be in the area, you could use the OverlapCircleAll variant, which returns a list of all of them.
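The method from the screenshot can be sketched like this (field names as they appear in the text above, including MinimumCleanRadius and NotEmpyGroundLayers, which are assumed members of the component):

    private bool IsCleanHidingPoint(Vector2 candidateHidingPoint)
    {
        Collider2D detectedCollider = Physics2D.OverlapCircle(
            candidateHidingPoint,  // Center of the circle.
            MinimumCleanRadius,    // Radius of the circle.
            NotEmpyGroundLayers);  // Layers whose colliders we want to detect.

        // Null means no collider overlapped the circle: the spot is clear.
        return detectedCollider == null;
    }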

We could end here, but I don't want to do so without warning you about a headache you will undoubtedly encounter in 2D. Fortunately, it can be easily solved if you are warned.

The problem can occur if you use tilemaps. These are very common for shaping 2D scenarios. The issue is that to form the colliders of a tilemap, it is normal to use a "Tilemap Collider 2D" component, and it is also quite common to add a "Composite Collider 2D" component to sum all the individual colliders of each tile into one to improve performance. The problem is that by default, the "Composite Collider 2D" component generates a hollow collider, only defined by its outline. I suppose it does this for performance reasons. This happens when the "Geometry Type" parameter has the value Outlines.

Possible values of the Geometry Type parameter.

Why is it a problem that the collider is hollow? Because in that case, the call to OverlapCircle will only detect the collider if the circle it draws intersects with the collider's edge. If, on the other hand, the circle fits neatly inside the collider without touching any of its edges, then OverlapCircle will not return any collider, and we would mistakenly assume that the area is clear. The solution is simple once it has been explained to you: change the default value of "Geometry Type" to Polygons. This value makes the generated collider "solid," so OverlapCircle will detect it even if the drawn circle fits inside without touching its edges.

It seems like a small thing, and it is, but it took me a couple of hours to find the key. I hope this article helps you avoid the same issue.