07 June 2025

How to implement a vision cone in Unity

A vision cone in video games is a mechanism primarily used in stealth or strategy games to simulate the field of vision of a non-playable character (NPC), such as an enemy or a guard. It is represented as a conical area originating from the NPC's eyes and extending forward at a certain angle, defining the range and direction in which the NPC can detect the player or other objects. If there are no obstacles in the way, any object inside the NPC's vision cone is visible to it.

You can find some famous examples of this concept in games like Commandos or Metal Gear Solid. In Commandos, the enemies' vision cone is visible in the main window to show the surveillance area of enemy soldiers.

Vision cones in Commandos

In Metal Gear Solid, the vision cones are not shown in the main window but in the minimap in the upper right corner, allowing the player to plan their movements to navigate the scene without being detected.

Vision cones in Metal Gear Solid

In general, the vision cone is key to creating stealth mechanics, as it forces the player to plan movements, use the environment (such as cover or distractions), and manage time to avoid detection. It also adds realism, as it simulates how humans or creatures have a limited, non-omnidirectional field of vision.

In this article, we will see how we can implement a vision cone in Unity. The idea is to create a sensor that simulates this detection mode, so we can add it to our NPCs.

Main Characteristics of Vision Cones

  • Angle: The cone usually has the shape of a triangle or circular sector in 2D, or a three-dimensional cone in 3D games. The viewing angle (e.g., 60° or 90°) determines the width of the field that the NPC can "see". 
  • Distance: The cone has a maximum range, beyond which the NPC will not detect the player, even if they are within the viewing angle. 
You can add more embellishments, but a vision cone is defined only by these two factors.

In many games, the vision cone is graphically shown to the player (especially in top-down view or specific interfaces) to indicate where they should avoid being seen. It can change color (e.g., green for "no alert" and red for "alert"). In this article, I will not cover the visual part because it doesn't add much. I want to focus on implementing what the NPC can see and what it cannot, not on the representation of the cone.

In Unity, the component that implements the vision cone usually exposes these two characteristics in the inspector, as seen in the following screenshot:

Basic fields to implement a vision cone

In my case, detectionRange (line 15) implements the distance, while detectionSemiconeAngle (line 18) implements the angle.

In the case of the angle, my code is based on some premises that need to be considered. The first is that I used a [Range] attribute (line 17) to configure this field with a slider and to limit its possible values to the interval between 0 and 90 degrees. Although a person's lateral field of view is greater than 90°, a character with such a wide vision cone would be too difficult to avoid in a game, so it is normal not to exceed 90°, with 45° being the most common value. The second premise is that I treat the angle as a semi-angle: I measure it from the direction I consider frontal (Forward, in my case) to one side, and it is then mirrored to the other side to generate a symmetrical cone.
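In case the screenshot is hard to read, the three fields could be declared roughly like this (a sketch; the class name, tooltips, and default values are my assumptions, not the author's exact code):

```csharp
using UnityEngine;

public class ConeVisionSensor : MonoBehaviour
{
    [Tooltip("Maximum distance at which the cone can detect objects.")]
    [SerializeField] private float detectionRange = 5f;

    [Tooltip("Half-angle of the cone, measured from Forward to one side.")]
    [Range(0f, 90f)]
    [SerializeField] private float detectionSemiconeAngle = 45f;

    [Tooltip("Only objects in these layers are considered.")]
    [SerializeField] private LayerMask layersToDetect;
}
```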

The two parameters that define a vision cone

In my example, I am working in 2D, so I have defined Forward as the local +Y axis, as seen in the following screenshot.

Definition of the frontal vector (Forward)

In line 20 of the code screenshot, I included one more field, layersToDetect, which we will use as a filter, as we will see a little later.

How to Detect if a Position is Within the Vision Cone

With the distance and angle defined, we need to check whether the position being tested is closer than that distance, and whether the angle between its relative position vector and the cone's Forward vector is smaller than the cone's semi-angle. Unity makes both calculations very easy.

Method to determine if a position is within the vision cone

The easiest way to calculate the distance is to use the Vector2.Distance() method, as I do in line 126 of the screenshot, passing the position of the vision cone (coinciding with its vertex) and the position to be checked as parameters.

For the angle, we can use the Vector2.Angle() method, as seen in line 127. This method returns the absolute angle between two vectors, so I pass Forward (line 128) on one side and the vector of the position to be checked, relative to the origin of the cone (line 129), on the other.

If both the distance and the angle are below the thresholds defined in the cone, then the checked position is within it.
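Putting the two checks together, the method could look like this (a sketch assuming the detectionRange and detectionSemiconeAngle fields from earlier, with Forward mapped to the local +Y axis as described above):

```csharp
// The cone's frontal direction: the local +Y axis in my 2D setup.
private Vector2 Forward => transform.up;

public bool PositionIsInConeRange(Vector2 positionToCheck)
{
    // Distance from the cone's vertex to the checked position.
    float distance = Vector2.Distance(transform.position, positionToCheck);

    // Absolute angle between Forward and the relative position vector.
    Vector2 relativePosition =
        positionToCheck - (Vector2)transform.position;
    float angle = Vector2.Angle(Forward, relativePosition);

    return distance <= detectionRange && angle <= detectionSemiconeAngle;
}
```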

Object Filtering

We could leave the article here, and you would have a functional vision cone. You would just need to collect all potentially visible objects in the scene and pass their positions (one by one) to the PositionIsInConeRange() method defined earlier. This check would need to be done periodically, perhaps in the Update() or FixedUpdate() method.

However, this would not be very efficient as the scene could be huge and contain many objects. It would be much better if we could do a preliminary filtering, so we only pass the minimum and essential objects to PositionIsInConeRange().

Filtering by Layers

The first filtering we could apply is by layer. We can distribute the objects in the scene into different layers and configure the vision cone to only consider objects in a specific layer. That was the purpose of the layersToDetect field mentioned earlier. Expanded, this field looks like the following screenshot.

layersToDetect field of type LayerMask

This type of field allows multiple selection, so you can define that your cone analyzes several layers simultaneously.

Once you know which layers you want to analyze, discriminating whether an object is in one of those layers is apparently simple, as seen in the following screenshot.

How to know if an object is in one of the layers of a LayerMask

I say "apparently" simple because, although you can limit yourself to copy-pasting this code into yours, fully understanding it has its intricacies.

To begin with, a LayerMask has a value field that is a 32-bit integer in which each bit represents one of the 32 possible layers a Unity scene can have. You can imagine it as a string of 32 ones and zeros. If you include two layers in the LayerMask field, the value field will have 2 bits set to one, and the rest will be zeros. The final integer value of the field depends on the position of those ones although, in reality, that value is irrelevant: what matters is which positions hold a one.

On the other hand, all Unity objects have a layer field that contains an integer with values ranging from 0 to 31. This integer indicates the index of the layer to which the object belongs, within the LayerMask of all possible layers in the scene. For example, if an object's layer field has a value of 3, and that layer is included in a LayerMask, then that LayerMask will have a one in its bit at index 3.

To know if an object's layer is within the layers marked in a LayerMask, we need to make a comparison, using the object's layer as a mask. The trick is to generate an integer whose binary value is filled with zeros and put a one in the position corresponding to the layer to be checked. That integer is what we call the mask. We will compare that mask with the LayerMask, doing a binary AND, and see if the resulting value is different from zero. If it were zero, it would mean that the LayerMask did not include the layer we wanted to check.

This is easier to see by depicting the earlier example. Look at the following screenshot.

Operation to check if a layer is contained within a LayerMask

In it, I have represented a LayerMask with two layers, the one at index 1 and the one at index 3 (they are the positions that have a one). Suppose now we want to check if the LayerMask contains layer 3.

What we have done is generate a mask of all zeros except for a one at position 3, and AND it with the LayerMask. ANDing with a mask makes the final result depend on the values the LayerMask had in the positions marked by the mask. Here the mask points at position 3, so the result will be nonzero only if position 3 of the LayerMask is nonzero, which in this example it is.
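Stripped of the Unity types, the whole check reduces to one line of bit arithmetic. This framework-free sketch (the helper and its names are mine) reproduces the example above; in Unity you would pass layersToDetect.value and gameObject.layer as the two arguments:

```csharp
public static class LayerMaskUtils
{
    // maskValue: bitmask of enabled layers (LayerMask.value in Unity).
    // layerIndex: index of the object's layer, 0..31 (gameObject.layer).
    public static bool LayerIsInMask(int maskValue, int layerIndex)
    {
        // Build a mask with a single one at layerIndex and AND it
        // with the LayerMask's value: nonzero means the layer is included.
        return (maskValue & (1 << layerIndex)) != 0;
    }
}
```

With the LayerMask from the example (ones at indices 1 and 3, i.e. a value of 0b1010), checking layer 3 yields true, while checking layer 2 yields false.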

Filtering by Proximity

With layer filtering, we will avoid calling PositionIsInConeRange() for objects that are in layers we are not interested in. That will improve performance, but we can improve it further.

Another preliminary filtering we can do is to discard objects that are too far from the cone to have a chance of being in it.

As seen in the screenshot, every vision cone can be enclosed in a bounding box.

Bounding box of a vision cone

If that box were a volumetric sensor (in Unity terms: a collider in trigger mode), we could pass to PositionIsInConeRange() only the objects that entered the volumetric sensor and were in the layers we were interested in.

Method to process objects that entered the box

In the code screenshot, OnObjectEnteredCone() is an event handler that runs when an object enters the box. In my case, the trigger-mode collider has an associated script that emits a UnityEvent when its OnTriggerEnter2D() fires. What I have done is subscribe OnObjectEnteredCone() to that UnityEvent.

Starting from there, the code in the screenshot is simple. In line 159, we check if the object is in one of the layers we are interested in, using the ObjectIsInLayerMask() method we analyzed earlier. If affirmative, in line 161, we check if the object is within the area covered by the vision cone, using the PositionIsInConeRange() method we saw at the beginning. And finally, if both checks are positive, the object is added to the list of objects detected by the vision cone (line 164), and an event is emitted so that the scripts using the vision cone know that it has made a new detection.
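A sketch of that handler, assuming the cone keeps a detectedObjects list and exposes a UnityEvent (here called objectDetected) to its users; both names and an ObjectIsInLayerMask(GameObject) overload are my assumptions:

```csharp
// Called (via the trigger script's UnityEvent) when a collider
// enters the bounding box around the cone.
public void OnObjectEnteredCone(Collider2D otherCollider)
{
    GameObject candidate = otherCollider.gameObject;

    // First filter: is the object in one of the watched layers?
    if (!ObjectIsInLayerMask(candidate)) return;

    // Second filter: is it actually inside the cone's area?
    if (!PositionIsInConeRange(candidate.transform.position)) return;

    detectedObjects.Add(candidate);
    objectDetected.Invoke(candidate); // notify listening scripts
}
```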

As you can imagine, you also need a reciprocal method to process objects that leave the detection box, as well as another to process objects that remain inside the box but have left the area covered by the cone. It is enough to link handlers to the OnTriggerExit2D() and OnTriggerStay2D() methods of the detection box's trigger collider script. Neither case has any special complexity once the OnObjectEnteredCone() code is understood, but I will show you my implementation of the check for an object that remains in the detection area.

Check for an object that remains in the detection area

At this point, you are probably wondering how to dimension the box to fit the vision cone.

If you look at the earlier screenshot with the box enclosing the vision cone, you will see that the height of the box coincides with the parameter I called detectionRange.

The width of the box takes a bit more work, as we have to resort to some basic trigonometry. Look at the screenshot:

Some trigonometry to calculate the width of the box

Starting from the screenshot, to find the width of the detection box, we need to calculate the length of B, which will correspond to half of that width.

B is one of the sides of the rectangle created using detectionRange as the diagonal. Every rectangle is composed of two right triangles whose hypotenuse will be precisely detectionRange. If we look at the upper right triangle (the red area), and review the trigonometry we learned in school, we will agree that the sine of detectionSemiConeAngle is equal to B divided by detectionRange. Therefore, we can calculate B as the product of detectionRange and the sine of detectionSemiConeAngle; with the total width of the detection box being twice B.
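In framework-free C#, that trigonometry looks like this (a sketch; the class and method names are mine, and the angle is taken in degrees as in the inspector):

```csharp
using System;

public static class ConeBounds
{
    // Returns (Width, Height) of a box that tightly encloses the cone.
    // Width = 2 * B = 2 * detectionRange * sin(detectionSemiconeAngle);
    // Height = detectionRange.
    public static (float Width, float Height) GetBoxSize(
        float detectionRange, float detectionSemiconeAngle)
    {
        float halfWidth = detectionRange *
            MathF.Sin(detectionSemiconeAngle * MathF.PI / 180f);
        return (2f * halfWidth, detectionRange);
    }
}
```

For example, a cone with a range of 5 and a semi-angle of 90° needs a box 10 wide and 5 high.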

Translated into code, the dimensions of the detection box would be calculated as follows:

Calculation of the dimensions of the detection box

You can do this calculation every time you change the vision cone parameters and manually modify the dimensions of the trigger collider with the result of the calculation; but I preferred to do it automatically by linking the collider with a BoxRangeManager component that I implemented, and that dynamically modifies the size of the collider as you change the Range and Width fields of that BoxRangeManager. The implementation of that component is based on what I explained in my article on "Volumetric sensors with dynamic dimensions in Unity" so I will not repeat it here.

Conclusion

With this, you have everything you need to create a simple, efficient vision cone. My advice is to create a generic component that you reuse in your different projects. It is such a common element that it doesn't make sense to implement it from scratch every time. This should be one of the elements of your personal library of reusable components.

I hope you found this article interesting and that it helps you create exciting game mechanics.

28 May 2025

How to execute methods from the Godot inspector

Button at the inspector
A few days ago, I published an article explaining how to activate method execution from the inspector in Unity. We saw the possibility of using an attribute that generated an entry in an editor menu, and also how to create a custom editor so that the component inspector would show a button to activate the method.

Can the same be done in Godot? Well, until very recently, no. There was no easy way to add a button to the inspector without developing a plugin, and even then the result was debatable. In terms of GUI customization, Godot still has a long way to go to be on par with Unity.

The [ExportToolButton] attribute 

However, Godot recently added a new attribute, @export_tool_button, and its C# equivalent, [ExportToolButton]. This attribute lets you export a Callable field to the inspector and display it as a button. When the button is pressed, the method the Callable points to is invoked.

Let's look at an example in Godot C#. Suppose we have a ResetBoxManager method in our script:

The method we want to activate by pressing the button

What the method does doesn't matter; it's just an example. I show a screenshot of its content so you can see that there is nothing special about its declaration or implementation. And now the button. To declare it, you just have to decorate a Callable field with the [ExportToolButton] attribute.

Button declaration with [ExportToolButton]

Between the parentheses of the attribute, we put the text we want the button to display. The screenshot also shows how to initialize the Callable. I called the field ResetButton (line 107) and initialized it with a new instance of Callable whose parameters point it to the ResetBoxManager method of that same class (hence the "this"), as can be seen in line 108.

With that, your inspector will show the button in the place that would have corresponded to the field, and when you press it, the linked method will be activated. You have a screenshot of how it looks in the image that opens this article.
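Putting it together, a minimal version of the script could look like this (a sketch: the class name is mine, I mark the class [Tool] so the method can run inside the editor, and I use Callable.From, which is equivalent to constructing the Callable from "this" and the method name as in the screenshot):

```csharp
using Godot;

[Tool]
public partial class BoxManager : Node
{
    // The button shown in the inspector; pressing it invokes the Callable.
    [ExportToolButton("Reset box")]
    public Callable ResetButton => Callable.From(ResetBoxManager);

    private void ResetBoxManager()
    {
        GD.Print("Box manager reset."); // placeholder for the real logic
    }
}
```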

Conclusion 

As you can see, the [ExportToolButton] attribute makes it really easy to add buttons to your inspectors. It combines the simplicity of the [ContextMenu] attribute we saw in Unity, with the visual appearance of its custom editors. With this, you can take a step forward in providing your inspectors with functionality that speeds up development and facilitates debugging your projects.