24 April 2025

How to detect obstacles in Unity 2D

In games it is quite common to need to determine if a point on the stage is free of obstacles in order to place a character or another game element. Think, for example, of an RTS: when building a building, you have to choose a free section of land, but how can your code know if a site already has a building or some other type of object?

In a 3D game, the most common solution is to project a ray from the camera's viewpoint, passing through the point where the mouse cursor is located on the screen plane, until it hits a collider. If the collider is the ground, that point is free, and if not, there is an obstacle.

Of course, if the object we want to place is larger than a point, projecting a simple ray falls short. Imagine we want to place a rectangular building, and the point where its center would go is free, but the corner area is not. Fortunately, for those cases, Unity allows us to project complex shapes beyond a mere point. For example, the SphereCast methods allow an invisible sphere to be moved along a line, returning the first collider it hits. Another method, BoxCast, would solve the problem of the rectangular building by projecting a rectangular base box along a line. We would only have to make that projection along a vertical line to the ground position we want to check.

In 2D, there are also projection methods, BoxCast and CircleCast, but they only work when the projection takes place in the XY plane (the screen plane). That is, they are equivalent to moving a box or a circle in a straight line along the screen to see if they touch a collider. Of course, that has its uses. Imagine you are making a top-down game and want to check whether the character will be able to pass through an opening in a wall. In that case, you would only need to do a CircleCast with a circle whose diameter equals the width of our character's shoulders, projecting it through the opening to see if it touches the wall's colliders.

A CircleCast, projecting a circle along a vector.
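As a reference, a minimal sketch of such a check might look like this (the class, field and parameter names are assumptions, not part of Unity's API):

```csharp
using UnityEngine;

public class OpeningChecker : MonoBehaviour
{
    [SerializeField] private float shoulderWidth = 0.5f;
    [SerializeField] private LayerMask wallLayers;

    // Returns true if a circle as wide as the character's shoulders can be
    // projected through the opening without touching any wall collider.
    public bool CanPassThrough(Vector2 start, Vector2 direction, float distance)
    {
        RaycastHit2D hit = Physics2D.CircleCast(
            start,                // origin of the cast
            shoulderWidth / 2f,   // radius of the projected circle
            direction,            // direction of the projection (XY plane)
            distance,             // how far to project
            wallLayers);          // colliders to test against

        return hit.collider == null;
    }
}
```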

But what happens when you have to project along the Z-axis in a 2D game, as in a 2D case equivalent to the 3D example we mentioned earlier? In that case, neither BoxCast nor CircleCast would work, because those methods define the projection vector using a Vector2 parameter, limited to the XY plane. In those cases, a different family of methods is used: the "Overlap" methods.

The Overlap methods place a geometric shape at a specific point in 2D space and, if the shape overlaps with any collider, they return it. Like projections, there are methods specialized in different geometric shapes: OverlapBox, OverlapCapsule, and OverlapCircle, among others.

Let's suppose a case like the following figure. We want to know if a shape the size of the red circle would touch any obstacle (in black) if placed at the point marked in the figure.

Example of using OverlapCircle.

In that case, we would use OverlapCircle to "draw" an invisible circle at that point (the circle seen in the figure is just a gizmo) and check if the method returns any collider. If not, it would mean that the chosen site is free of obstacles.

A method calling OverlapCircle could be as simple as the following:

Call to OverlapCircle
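A hedged reconstruction of that method follows (the line numbers cited below refer to the original screenshot; the field names are taken from it, everything else is an assumption):

```csharp
using UnityEngine;

public class HidingPointDetector : MonoBehaviour
{
    // Field names taken from the screenshot described below.
    [SerializeField] private float MinimumCleanRadius = 1f;
    [SerializeField] private LayerMask NotEmpyGroundLayers;

    private bool IsCleanHidingPoint(Vector2 candidateHidingPoint)
    {
        // OverlapCircle returns null when no collider in the given layers
        // overlaps the circle, so null means the spot is free of obstacles.
        return Physics2D.OverlapCircle(
            candidateHidingPoint,
            MinimumCleanRadius,
            NotEmpyGroundLayers) == null;
    }
}
```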

The method in the figure returns true if there is no collider within a radius (MinimumCleanRadius) of the candidateHidingPoint position. If there is any collider, the method returns false. For that, the IsCleanHidingPoint method simply calls OverlapCircle, passing the following parameters:

  • candidateHidingPoint (line 224): A Vector2 with the position of the center of the circle to be drawn. 
  • MinimumCleanRadius (line 225): A float with the circle's radius. 
  • NotEmpyGroundLayers (line 226): A LayerMask with the layers of the colliders we want to detect. It serves to filter out colliders we don't want to detect: OverlapCircle will ignore any collider that is not in one of the layers included in the LayerMask. 

If the area is free of colliders, OverlapCircle will return null. If there are any, it will return the first collider it finds. If you are interested in getting all the colliders that might be in the area, you can use the OverlapCircleAll variant, which returns an array with all of them.

We could end here, but I don't want to finish without warning you about a headache you will undoubtedly encounter in 2D. Fortunately, it is easy to solve once you have been warned.

The problem can occur if you use tilemaps. These are very common for shaping 2D scenarios. The issue is that to form the colliders of a tilemap, it is normal to use a "Tilemap Collider 2D" component, and it is also quite common to add a "Composite Collider 2D" component that merges the individual colliders of each tile into one to improve performance. The problem is that, by default, the "Composite Collider 2D" component generates a hollow collider, defined only by its outline; I suppose it does this for performance reasons. This happens when the "Geometry Type" parameter has the value Outlines.

Possible values of the Geometry Type parameter.

Why is it a problem that the collider is hollow? Because in that case, the call to OverlapCircle will only detect the collider if the circle it draws intersects the collider's edge. If, on the other hand, the circle fits neatly inside the collider without touching any of its edges, then OverlapCircle will not return any collider, and we would mistakenly assume that the area is clear. The solution is simple once someone has explained it to you: change the default value of "Geometry Type" to Polygons. This value makes the generated collider "solid", so OverlapCircle will detect it even if the drawn circle fits inside without touching its edges.

It seems trivial, and it is, but it still took me a couple of hours to find the key. I hope this article helps you avoid the same issue.

19 February 2025

Assigning data to tiles in Unity

Tilemaps are very often used to create 2D games. Their simplicity makes them ideal for creating a retro-style setting. 

However, at first glance, Unity's Tilemap implementation seems limited to the aesthetic side. That's why it's quite common to find people on the forums asking how to associate data with the different Tiles used in a Tilemap.

Why might we need to associate data with a tile? For a variety of reasons. For example, let's say we're using a Tilemap to map a top-down game. In that case, we'll probably want to add a "drag" value to the different tiles, so that our character moves slower on tiles that represent a swamp, faster on tiles that represent a path, and can't cross tiles that show impassable stones.

For our examples, let's assume a scenario like the one in the capture:

Scenario of our examples


It represents an enclosed area that includes three obstacles inside, one on the left (with a single tile), one in the center (with four tiles) and one on the right (with three). The origin of the stage's coordinates is at its center; I have indicated it with the crosshair of an empty GameObject.

The problem we want to solve in our example is how to make a script analyze the scenario, identify the black tiles and take note of their positions.

As with many other cases, there is no single solution to this problem. We have an option that is quick to implement and offers more possibilities, but can put too much overhead on the game. On the other hand, we have another option that is more difficult to implement and is more limited, but will put less overhead on the game. Let's analyze both.

Associate a GameObject to a tile

Generally, when we want to identify at once the GameObjects that belong to the same category, the easiest way would be to mark them with a tag and search for them in the scenario with the static method GameObject.FindGameObjectsWithTag(). The problem is that tiles are ScriptableObjects, so they cannot be marked with tags.

ScriptableObjects for tiles are created when we drag sprites onto the Tile Palette tab. At that point, the editor lets us choose the name and location of the asset with the ScriptableObject we want to create, associated with the tile. From that point on, if we click on the asset of that ScriptableObject we can edit its parameters through the inspector. For example, for the tile I used for the perimeter walls, the parameters are:

Setting up a Tile


The fields that can be configured are:

  • Sprite: This is the sprite with the visual appearance of the tile. Once the sprite is set, we can press the "Sprite Editor" button below to configure both the pivot point and the collider associated with the sprite.
  • Color: Allows you to color the sprite with the color you set here. The neutral color is white; if you use it, Unity will understand that you do not want to force the sprite's color.
  • Collider Type: Defines whether we want to associate a Collider to the tile. If we choose "None" it will mean that we do not want the Tile to have an associated Collider; if we set "Sprite", the collider will be the one we have defined through the Sprite Editor; finally, if the chosen value is "Grid", the collider will have the shape of the Tilemap cells.
  • GameObject to Instantiate: This is the parameter we are interested in. We will explain this in a moment.
  • Flags: These are used to modify how a tile behaves when placed on a Tilemap. For our purposes, you can simply leave it at its default value.

As I was saying, the parameter that interests us for our purpose is "GameObject to Instantiate". If we drag a prefab to this field, the Tilemap will take charge of creating an instance of that prefab in each location where that Tile appears.

For example, to be able to easily locate the black tiles, those of the obstacles, I have associated a prefab, which I have called ObstacleTileData, to that parameter of their Tile.

Setting up the Obstacle Tile

Since all I want is to be able to associate a tag with the tiles, in order to locate them with FindGameObjectsWithTag(), it was enough to make ObstacleTileData an empty GameObject (just a Transform) with the tag I was interested in. In the screenshot you can see that I used the InnerObstacle tag.

ObstacleTileData with the InnerObstacle tag

Once this is done, and once the tiles we want to locate are deployed on the stage, we only need the following code to make an inventory of the tiles with the InnerObstacle tag.

Code to locate the tiles that we have marked with the InnerObstacle tag
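A hedged reconstruction of that script (the line numbers cited below refer to the original screenshot, not to this sketch):

```csharp
using UnityEngine;

public class ObstacleInventory : MonoBehaviour
{
    private GameObject[] obstacles;

    private void Start()
    {
        // Each instance of the ObstacleTileData prefab carries the
        // InnerObstacle tag, so this call returns one GameObject per
        // obstacle tile placed on the Tilemap.
        obstacles = GameObject.FindGameObjectsWithTag("InnerObstacle");
        Debug.Log($"Found {obstacles.Length} obstacle tiles.");
    }
}
```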

We just need to place the above script on any GameObject located next to the stage's Tilemap. For example, I have it attached to the same transform as the Grid component of the stage's Tilemaps.

When the level starts, the Tilemap will create an instance of the ObstacleTileData prefab at each position on the stage where a black obstacle tile appears. Since the ObstacleTileData prefab has no visual component, its instances will be invisible to the player, but not to our scripts. Since these instances are marked with the "InnerObstacle" tag, our script can locate them by calling FindGameObjectsWithTag(), on line 16 of the code. 

To demonstrate that the code correctly locates the obstacle tile locations, I've set a breakpoint on line 17, so that we can analyze the contents of the "obstacles" variable after calling FindGameObjectsWithTag(). When running the game in debug mode, the contents of that variable are as follows:

Obstacle tile positions

If we compare the positions of the GameObjects with those of the tiles, we can see that obstacles[7] is the obstacle on the left, with a single tile. The GameObjects obstacles[2], [3], [5] and [6] correspond to the four tiles of the central obstacle. The three remaining GameObjects ([0], [1] and [4]) are the tiles of the obstacle on the right, the elbow-shaped one.

In this way, we have achieved a quick and easy inventory of all the tiles of a certain type.

However, searching by tag isn't the only way to locate the GameObject instances associated with each Tile. Tilemap objects offer the GetInstantiatedObject() method, which takes a position within the Tilemap and returns the GameObject instantiated for the tile at that position. Using this method is less direct than locating objects by tag, since it forces you to examine the Tilemap positions one by one, but there will be situations where you have no other choice.
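For example, a sketch of such a cell-by-cell scan, assuming you have a reference to the Tilemap, might be:

```csharp
using UnityEngine;
using UnityEngine.Tilemaps;

public static class TilemapScanner
{
    // Walks every cell of the Tilemap and logs the GameObject (if any)
    // that was instantiated for the tile placed there.
    public static void LogInstantiatedObjects(Tilemap tilemap)
    {
        foreach (Vector3Int cell in tilemap.cellBounds.allPositionsWithin)
        {
            GameObject instance = tilemap.GetInstantiatedObject(cell);
            if (instance != null)
                Debug.Log($"Tile at {cell} instantiated {instance.name}");
        }
    }
}
```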

Finally, before we leave this section of the article, you should be aware that there may be situations where instantiating a GameObject per tile can weigh down the performance of the game. In the example case, we are talking about a few tiles, but in much larger scenarios we may be talking about hundreds of tiles, so instantiating hundreds of GameObjects may be something to think twice about.

Extending the Tile class

By default, I would use the above strategy; but there may be situations where you don't want to instantiate a large number of GameObjects. In that case, you may want to use the approach I'm going to explain now.

The Tile class inherits from ScriptableObject. We can extend it to add any parameters we want. For example, we could create a specialized Tile with a boolean that defines whether the tile is an obstacle or not.

Tile with a specialized parameter
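A hedged reconstruction of such a specialized tile (using the CourtyardTile name that appears later in this article; the field name is an assumption):

```csharp
using UnityEngine;
using UnityEngine.Tilemaps;

// Specialized Tile with an extra, custom parameter.
[CreateAssetMenu(menuName = "Tiles/CourtyardTile")]
public class CourtyardTile : Tile
{
    // Marks whether this tile should be treated as an obstacle.
    public bool IsObstacle;
}
```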

This tile can be instantiated like any ScriptableObject to create an asset. When we do this, we will see that the specialized parameter will appear and we can configure it through the inspector.

Setting the tile with the specialized parameter

The key is that the assets we create this way can be dragged to the Tile Palette so they can be drawn on the stage.

Once that is done, we could use the Tilemap.GetTile() method to retrieve the tiles for each position, cast them to our custom tile type (in our case CourtyardTile) and then analyze the value of the custom parameter.
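A sketch of that scan, reusing the hypothetical CourtyardTile above, might look like this:

```csharp
using UnityEngine;
using UnityEngine.Tilemaps;

public static class CourtyardScanner
{
    // Walks the Tilemap and logs the positions of obstacle tiles.
    public static void LogObstacleCells(Tilemap tilemap)
    {
        foreach (Vector3Int cell in tilemap.cellBounds.allPositionsWithin)
        {
            // The generic GetTile<T>() returns null if the cell is empty
            // or contains a tile of a different type.
            CourtyardTile tile = tilemap.GetTile<CourtyardTile>(cell);
            if (tile != null && tile.IsObstacle)
                Debug.Log($"Obstacle tile at {cell}");
        }
    }
}
```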

The drawback of this method is that we cannot use tags or layers to search for the data associated with tiles, which forces us to go through the tilemap cell by cell to find them; but it has the advantage of freeing our game from the burden of creating a GameObject per tile.

Conclusion

Whether by creating a GameObject per tile or by extending the Tile class, you now have the resources necessary to associate data with each of the tiles. This will allow you to provide the tiles with essential semantics for a multitude of algorithms, such as pathfinding algorithms.

08 February 2025

2D Navigation in Godot

NPCs, or Non-Player Characters, are all the characters in the game that are not controlled by the player but interact with them. They can range from the player's allies to enemies trying to kill the player. One of the great challenges for game developers is to equip their NPCs with a set of behaviors that convey the appearance of life and intelligence.

One of the clearest signs of life is movement. If something moves on its own initiative, one of the reasons our mind instinctively considers is that it may be alive. On the other hand, one of the signs of a minimum of intelligence is that this movement occurs while avoiding obstacles in its path. If a creature moves when we approach it, but immediately runs into the wall in front of it, we may think that this creature is alive, but not that it is very intelligent.

That's why most game engines, as long as they have an artificial intelligence package, first include some kind of pathfinding tool to allow NPCs to orient themselves around the environment.

Godot is no exception and incorporates pathfinding functionality based on navigation meshes, both in 2D and 3D. For simplicity, we will focus on the former.

Creating a map

To orient ourselves, the best thing for humans is a map. The same goes for an engine. That map is based on our scenario, but to model it we need a node called NavigationRegion2D. We must add it to the scene of any level we want to map. For example, supposing that our level is based on several TileMapLayers (one with the floor tiles, another with the perimeter walls and another with the tiles of the obstacles inside the perimeter), the node structure with the NavigationRegion2D node could be the following:

Node hierarchy with a NavigationRegion2D

Note that the NavigationRegion2D node is the parent of the TileMapLayer since by default it only builds the map with the conclusions it draws from analyzing its child nodes.

When configuring the NavigationRegion2D through the inspector, we will see that it requires the creation of a NavigationPolygon resource; the map information of our level will be saved in this resource.

Setting up a NavigationRegion2D


Once the resource has been created, when clicking on it, we will see that it has quite a few properties to configure.

Setting up a NavigationPolygon


In a simple case like our example, only two parameters need to be configured:
  • Radius: Here we will set the radius of our NPC, with a small additional margin. This way, the pathfinding algorithm will add this distance to the outline of the obstacles to prevent the NPC from rubbing against them.
  • Parsed Collision Mask: All objects of the child nodes that are in the collision layers that we mark here will be considered an obstacle.
Once this is done, we will only need to mark the limits of our map. To do this, note that when you click on the NavigationPolygon resource in the inspector, the following toolbar will appear in the scene tab:

Toolbar for defining the shape of a NavigationPolygon


Thanks to this toolbar we can define the boundaries of our map. Then, NavigationRegion2D will be in charge of identifying obstacles and making "holes" in our map to indicate the areas through which our NPC will not be able to pass.

The first button (green) on the toolbar is used to add new vertices to the shape of our map, the second button (blue) is used to edit a vertex already placed, while the third (red) is used to delete vertices.

In a simple scenario, such as a tilemap, it may be enough to draw the four vertices that limit the maximum extension of the tilemap. In the following screenshot you can see that I have placed a vertex in each corner of the area I want to map.

Vertices of our NavigationPolygon


Once we have defined the boundaries of our map, we have to trigger its generation by pressing the Bake NavigationPolygon button in the toolbar.
The result will be that NavigationRegion2D will mark in blue the areas where an agent can wander, once the limits of the map have been analyzed and the obstacles within them have been detected.

We will have to remember to press the Bake NavigationPolygon button whenever we add new obstacles to the level or move existing ones, otherwise the map of the navigable areas will not update.

NavigationPolygon, after being baked.


For an object to be identified as an obstacle, it has to be configured to belong to one of the collision layers that we have configured in the Parsed Collision Mask field of the NavigationPolygon. In the case of a TileMapLayer this is configured as follows:
  1. In those TileMapLayers that contain obstacles we will have to mark the Physics -> Collision Enabled parameter.
  2. In the Tile Set resource that we are using in the TileMapLayer, we will have to make sure to add a physical layer and place it in one of the layers contemplated in the Parsed Collision Mask of the NavigationPolygon.
  3. You must also add a navigation layer and set it to the same value as the NavigationRegion2D node's navigation layer.
For example, the TileMapLayer containing the interior obstacles in my example has the following settings for the Tile Set:

Setting up the TileMapLayer


Then, inside the TileSet, not all tiles have to be obstacles, only those that have a collider configured. Remember that the tiles' colliders are configured from the TileSet tab. I won't go into more detail about this because that has more to do with the TileMaps configuration than with the navigation itself.

Setting a tile's collider


Using the map

Once the map is created, our NPCs need to be able to read it. To do this, they need a NavigationAgent2D node in their hierarchy.

The NavigationAgent2D node within an NPC's hierarchy


In a simple case, you may not even need to change anything from its default settings. Just make sure its Navigation Layers field is set to the same value as the NavigationRegion2D.

From your script, once you have a reference to the NavigationAgent2D node, you simply have to set the position you want to reach in its TargetPosition property. For example, if we had an NPC that wanted to hide at a point on the map, we could include a property through which we ask the NavigationAgent2D node to find the route to that point, as you can see in the screenshot.


Setting a target position in a NavigationAgent2D
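As a reference, a minimal sketch of such a property could be the following (assuming _navigationAgent holds a reference to the NavigationAgent2D node):

```csharp
// Hypothetical property: assigning it asks the agent to compute a route
// to the given hiding point.
public Vector2 HidingPoint
{
    get => _navigationAgent.TargetPosition;
    set => _navigationAgent.TargetPosition = value;
}
```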

Once we have told the node where we want to go, it will calculate a route and give us the different stops on that route as we reach them.

In order for the node to tell us where the next stop on the path is, we will have to call the GetNextPathPosition() method. It is important to note that this method is responsible for updating quite a few things internal to the pathfinding algorithm, so a requirement is that we call it once in each call to _PhysicsProcess() of our NPC.

In the screenshot you have the _PhysicsProcess() of the agent I'm using as an example. Most of the code in the screenshot refers to topics that are not the subject of this article, but I'm including it to provide some context. In fact, for what we're talking about, you only need to look at lines 221 to 225.

Obtaining the next stop on the route, within the _PhysicsProcess.


On line 225 you can see how we call GetNextPathPosition() to get the next point we need to go to in order to follow the path drawn to the target set in TargetPosition. How we get to that point is up to us. The NavigationAgent2D simply guarantees two things: 
  • That there is no obstacle between the NPC and the next point on the route that it marks for you.
  • That if you follow the route points that it gives you, you will end up reaching the objective... if there is a route that leads to it.
I want to emphasize this because, unlike Unity's NavMeshAgent, Godot's agent does not move the NPC in which it is embedded. It simply gives directions on where to go.
Beyond the general case, there are certain caveats in the screenshot's code that need to be made clear.

For starters, the Godot documentation says not to keep calling GetNextPathPosition() once you've finished traversing the path; otherwise the NPC may "shake" by forcing further updates to the pathfinding algorithm after already reaching the goal. That's why, on line 224, I check that we haven't reached the end of the path yet, before calling GetNextPathPosition(). So don't forget to check that IsNavigationFinished() returns false before calling GetNextPathPosition().
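Putting those pieces together, the core of that loop might look like the following minimal sketch (for a CharacterBody2D, with _navigationAgent and Speed as assumed members):

```csharp
public override void _PhysicsProcess(double delta)
{
    // Stop querying the agent once the path is done, or the NPC may jitter.
    if (_navigationAgent.IsNavigationFinished())
        return;

    // GetNextPathPosition() must be called once per physics frame, since it
    // also updates the pathfinding algorithm's internal state.
    Vector2 nextPathPosition = _navigationAgent.GetNextPathPosition();
    Vector2 direction = (nextPathPosition - GlobalPosition).Normalized();
    Velocity = direction * Speed;
    MoveAndSlide();
}
```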

On the other hand, the pathfinding algorithm takes a while to converge, especially at the beginning; if we query it too early, it will throw an exception. Typically it takes one or two physics frames (i.e. one or two calls to _PhysicsProcess). That is why line 222 checks whether the IsReady property is true before continuing. The catch is that NavigationAgent2D does not have an IsReady property (although it should have one); it is just a property I created myself to wrap a non-intuitive query:

How to check that the pathfinding algorithm is ready to answer queries


Basically, what the property does is ensure that the pathfinding algorithm has managed to generate at least one version of the path. 
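Based on that description, the property might boil down to something like the following sketch (a hedged guess; the exact query in the screenshot may differ):

```csharp
// Hedged guess: consider the agent "ready" once the pathfinding algorithm
// has generated at least one version of the path.
private bool IsReady => _navigationAgent.GetCurrentNavigationPath().Length > 0;
```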

Conclusion

And that's it: by setting the TargetPosition of the NavigationAgent2D, and heading to the successive points returned by the calls to GetNextPathPosition(), you will be able to reach any reachable point of the NavigationRegion2D that you have defined. 

With that you already have the basics, but if at any time you need to analyze the complete route that the algorithm has calculated, you can ask for it by calling the GetCurrentNavigationPath() method. This method will return an array with the positions of the different stops on the route.
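For example (again assuming a _navigationAgent reference):

```csharp
// The complete route computed by the agent, as an array of stop positions.
Vector2[] route = _navigationAgent.GetCurrentNavigationPath();
GD.Print($"The route has {route.Length} stops.");
```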

Folder structure of a video game development project

(Photo by Niklas Ohlrogge)
A game uses a massive amount of assets: images, sounds, code files, etc. You need a way to organize them or you'll have a hard time finding things after a while. So a typical question on the forums is: "What folder structure should I give my project?"

Well, the truth is that this question does not have a single answer. It often depends on the project, your own particularities as a developer and the requirements that the different engines sometimes impose. In any case, there is a certain consensus on some good practices that I will list in this article so that you can choose the one that best suits you.

Engine requirements

To start, let's list the folders that you will always have as a requirement for the engine you are using.

In the case of Unity, the editor expects three folders:

  • Assets: Here you will place all the resources you create for the game.
  • Packages: This is where the packages you download from the package manager are stored.
  • ProjectSettings: This is where the editor stores your project settings.

The three folders above are the minimum you should keep in your version control repository, if you use one (which is highly recommended).

Normally you'll be working inside the Assets folder, since the editor already takes care of the content of the other folders for you. The folder structure inside Assets is up to you, but two folders deserve special mention:

  • Scripts: By convention, this is where you store the scripts that will make your game work. Strictly speaking, Unity compiles any script it finds under Assets, but concentrating your runtime code in this folder is the usual practice.
  • Editor: If you develop extensions for the editor or custom inspectors, you will need to create this folder and save the extension code there. Do not save it in the Scripts folder: Unity excludes the contents of Editor folders when building the final package, so editor-only code left outside an Editor folder will make the compilation fail when you want to run the game outside of the editor.

In the case of Godot, the requirements are more lax. The only fixed folder is the addons folder, since that is where both the plugins you have developed and those you have downloaded from AssetLib are collected.

Once the above requirements have been met, the sky is the limit when it comes to organizing our assets. Let's now look at the most common layouts that people use.

Distribution by type

This is the most classic layout. It is the one I used to organize the resources of Prince of Unity.

You create a folder to house each type of asset: one for images, one for sounds, one for prefabs/scenes, one for levels, one for shaders, etc.

This is the structure that Unity documentation usually recommends, so it's probably the first one you'll use. 

Example of organization by type, according to Unity documentation

Functional distribution

However, there are many who reject the previous structure. They argue that classifying assets by type at the folder level is pointless, because the editors (both Unity and Godot) already let you search for assets filtered by type.

That's why Godot's documentation advocates placing assets as close as possible to the scenes that use them. There are style guides for Unreal Engine that recommend the same.

In practice, this means creating a folder per scene and grouping the assets you use there, regardless of their type. In Unity, this means creating a folder per prefab.

Generally, the exception to the above is code, which is usually kept in a separate folder that concentrates all the source code files (in the case of Unity, the Scripts folder we already mentioned).

But normally our scenes/prefabs are not autonomous islands; they share resources with each other. How is this handled? Is the resource copied to the folder of each scene/prefab? No: in these cases a folder is usually created for the shared resources, either at the same level as the scenes/prefabs or at a higher level than them.

For example, the Unreal Engine style guide I mentioned earlier provides this example structure:

Functional organization example

Proponents of this practice argue that it is better to have a deep folder structure, with each folder containing a few files (even if they are of different types), than to have a few directories full of files. It is easier to know what resources a scene/prefab uses if you see everything in the same folder.

Conclusion

Organizing your project is a very personal decision. There are no big rules set in stone. In the end, it all comes down to organizing your assets in the way that is most comfortable for you and makes you most productive. Perhaps trying out some of the organization systems I've talked about here will help you find your own.

30 January 2025

"Game Development Patterns with Godot 4" by Henrique Campos

Apps and video games have been around for so long that it's hard to find yourself in the middle of a problem that hasn't been solved by another developer before. Over time, this has given rise to a set of general solutions that are considered best practices for solving common problems. These are known as design patterns; a set of "recipes" or "templates" that developers can follow to structure their code when implementing solutions to certain problems.

It is a subject that is studied in any degree related to programming, with the book "Design Patterns: Elements of Reusable Object-Oriented Software" by Gamma, Helm, Johnson and Vlissides (the famous Gang of Four) being the classic book that started this branch of study. 

However, when told in an academic way, design patterns can be difficult to understand and their application is not always clear. This is precisely where Henrique Campos' " Game Development Patterns with Godot 4 " shines.

Campos quite successfully selects 9 of the 23 design patterns enunciated by the Gang of Four and explains them in a clear, simple way, rich in examples applied to game development. The patterns chosen by Campos seem to me to be the most useful and the ones with the most immediate application in any game. I have not missed any of the remaining design patterns, because they always seemed too abstract and esoteric to me.

In a very original way, Campos presents his examples based on dialogues or communications between the designer of a fictional game and his programmer. In these communications, the designer demands functionalities focused on enriching specific aspects of a game, while the programmer receives these demands and has to fit them with his objective of creating a scalable and easy-to-maintain code. In each of these examples, a specific design pattern is explained as the best solution to fit the designer's demand with the programmer's objective.

Following this scheme, the author explains the singleton pattern, the observer pattern, the factory, state machines, the command pattern, the strategy pattern, decorators, service locators and event queues. All of this in order of increasing complexity, with each pattern building on the previous ones. Along the way, he also explains, in a simple but effective way, basic principles of development such as object-oriented programming and the SOLID principles.

The examples are implemented in GDScript, using Godot 4. I must admit that I was initially a bit wary of the book because I didn't think GDScript was a rich enough language to illustrate these patterns (I admit that I develop in Godot C#). However, the example code is very expressive, and GDScript is so concise that the examples read as if they were pseudocode. In the end I didn't miss C#, because the GDScript code managed to convey the idea of the examples in just a few lines, which made them very easy to read and understand.

Therefore, I think it is a highly recommended book, one that makes enjoyable and fun a subject often made off-putting by the excessively academic treatment it has received in earlier works. If you give it a chance, I think you will enjoy it, and it will help you considerably improve the quality of your code. 

26 January 2025

Creating interactive Gizmos in Unity

In a previous article I explained how to use the OnDrawGizmos() and OnDrawGizmosSelected() callbacks, present in all MonoBehaviours, to draw Gizmos that visually represent the magnitudes of the fields of a GameObject. However, I already explained then that Gizmos implemented in this way were passive in the sense that they were limited to representing the values that we entered in the inspector. But the Unity editor is full of Gizmos that can be interacted with to change the values of the GameObject, simply by clicking on the Gizmo and dragging. An example is the Gizmo that is used to define the dimensions of the Collider. 

How are these interactive Gizmos implemented? The key is in the Handles. These are visual "handles" that we can place on our GameObject and that allow us to click and drag them with the mouse, returning the values of their change in position, so that we can use them to calculate the resulting changes in the magnitudes represented. This will be better understood when we see the examples.

The first thing to note is that Handles belong to the UnityEditor namespace and can therefore only be used within the editor. This means that we cannot use Handles in final game builds that we run independently of the editor. For this reason, all code that uses Handles has to be placed either in the Editor folder or behind #if UNITY_EDITOR ... #endif guards. In this article I will explain the first approach, as it is cleaner.

With the above in mind, Gizmo code that uses Handles should be placed in the Editor folder and not in the Scripts folder: Unity excludes the contents of Editor folders when compiling the C# code for the final executable, while everything else under Assets goes into the build. It is important to be rigorous with this because if you mix Handles code with MonoBehaviour code, everything will seem to work as long as you start the game from the editor, but when you try to compile it with File --> Build Profiles... --> Windows --> Build, you will get errors that will prevent you from compiling. At this point, you will have two options: either you move the code that uses Handles into the structure that I am going to explain here, or you fill your code with #if UNITY_EDITOR ... #endif guards around all the calls to objects in the Handles library. In any case, I think the structure I am going to explain here is generic enough that you will not need to mix your Handles code with MonoBehaviour code.

To start with the examples, let's assume we have a MonoBehaviour (in our Scripts folder) called EvadeSteeringBehavior that is responsible for initiating the escape of its GameObject if it detects that a threat is approaching below a certain threshold. That threshold is a radius, a distance around the fleeing GameObject. If the distance to the threat is less than that radius, the EvadeSteeringBehavior will start executing the escape logic. Let's assume that the property that stores that radius is EvadeSteeringBehavior.PanicDistance.

To represent PanicDistance as a circle around the GameObject, we could use calls to Gizmos.DrawWireSphere() from MonoBehaviour's OnDrawGizmos() method, but with that approach we could only change the PanicDistance value by modifying it in the inspector. We want to go further and be able to alter the PanicDistance value by clicking on the circle and dragging in the Scene window. To achieve this, we're going to use Handles via a custom editor.

For that, we can create a class in the Editor folder. The name doesn't matter too much. In my case, I've named the file DrawEvadePanicDistance.cs. Its content is as follows:

DrawEvadePanicDistance Custom Editor Code
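A hedged reconstruction of that file follows (the line numbers cited in the next paragraphs refer to the original screenshot, not to this sketch):

```csharp
using UnityEditor;
using UnityEngine;

// Custom editor that draws an interactive RadiusHandle for the
// PanicDistance property of EvadeSteeringBehavior.
[CustomEditor(typeof(EvadeSteeringBehavior))]
public class DrawEvadePanicDistance : Editor
{
    private void OnSceneGUI()
    {
        var evadeBehavior = (EvadeSteeringBehavior)target;

        EditorGUI.BeginChangeCheck();
        Handles.color = Color.red;
        float newPanicDistance = Handles.RadiusHandle(
            Quaternion.identity,                  // initial rotation
            evadeBehavior.transform.position,     // center of the handle
            evadeBehavior.PanicDistance);         // initial radius
        if (EditorGUI.EndChangeCheck())
        {
            // Register the change so it can be undone with Ctrl+Z.
            Undo.RecordObject(evadeBehavior, "Changed panic distance");
            evadeBehavior.PanicDistance = newPanicDistance;
        }
    }
}
```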

Look at line 1: all your alarms should go off if you find yourself importing this namespace in a class intended to be part of the final build of your game, since in that case you will get the errors I mentioned before. However, by placing this file in the Editor folder we no longer have to worry about this problem.

Line 6 is key as it allows us to associate this custom editor with a specific MonoBehaviour. With this line we are telling the editor that we want to execute the logic collected in OnSceneGUI() whenever it is going to render an instance of EvadeSteeringBehavior in the Scene tab.

As a custom editor, our class must inherit from UnityEditor.Editor, as seen on line 7.

So far, whenever I've needed to use Handles, I've ended up structuring the contents of OnSceneGUI() in the same way you see between lines 10 and 24. So in the end it's almost a template.

If we want to access the data of the MonoBehaviour whose values we want to modify, it is available in the target field that all custom editors have access to, although you will have to cast it to the type of the MonoBehaviour being represented, as can be seen on line 11.

We must place the code for our Handles between a call to EditorGUI.BeginChangeCheck() (line 13) and a call to EditorGUI.EndChangeCheck() (line 19) so that the editor will monitor whether any interaction with the Handles occurs. The call to EditorGUI.EndChangeCheck() will return true if the user has interacted with any of the Handles created from the call to EditorGUI.BeginChangeCheck().

To define the color with which the Handles will be drawn, we do it in a very similar way to how we did with the Gizmos, in this case loading a value in Handles.color (line 14).

We have multiple Handles to choose from, the main ones are:

  • PositionHandle : Draws a coordinate origin in the Scene tab, identical to that of Transforms. Returns the position of the Handle.
  • RotationHandle : Draws rotation circles similar to those that appear when you want to rotate an object in the editor. If the user interacts with the Handle, it will return a Quaternion with the new rotation value, while if the user does not touch it, it will return the same initial rotation value with which the Handle was created.
  • ScaleHandle : It works similarly to RotationHandle, but in this case it uses the usual scaling axes and cubes that appear when you modify the scale of an object in Unity. It returns a Vector3 with the new scale if the user has touched the Handle, or the initial one otherwise.
  • RadiusHandle : Draws a sphere (or a circle if we are in 2D) with handles to modify its radius. In this case, what is returned is a float with said radius.

In the example at hand, the natural choice was to choose the RadiusHandle (line 15), since what we are looking for is to define the radius that we have called PanicDistance. Each Handle has its creation parameters, to configure how they are displayed on the screen. In this case, RadiusHandle requires an initial rotation (line 16), the position of its center (line 17) and the initial radius (line 18).

If the user interacts with the Handle, its new value is returned by the Handle creation method. In our example, we save it in the variable newPanicDistance (line 15). In such cases, the EditorGUI.EndChangeCheck() method returns true, so we can save the new value in the property of the MonoBehaviour whose value we are defining (line 22).

To ensure that we can undo changes to the MonoBehaviour with Ctrl+Z, it is convenient to precede them with a call to Undo.RecordObject() (line 21), indicating the object that we are going to change and providing an explanatory message of the change that is going to be made.

The result of all the above code will be that, whenever you click on a GameObject that has the EvadeSteeringBehavior script, a circle will be drawn around it, with points that you can click and drag. Interacting with those points will change the size of the circle, but also the value displayed in the inspector for the EvadeSteeringBehavior's PanicDistance property.

The RadiusHandle displayed thanks to the code above

What we can achieve with Handles doesn't end here. The logical design would have been to offer Handles for the interactive modification of script properties and leave Gizmos for the visual representation of those values from calls to OnDrawGizmos(). However, Unity did not draw such a clear separation of functions and, instead, provided the Handles library with drawing primitives very similar to those offered by Gizmos. This means that there is some overlap between the functionalities of Handles and Gizmos, especially when it comes to drawing.

It is important to know the drawing primitives that Handles can offer. In many cases it is faster and more direct to draw with Gizmos primitives, from OnDrawGizmos(), but there are things that cannot be achieved with Gizmos that can be drawn with Handles methods. For example, with Gizmos you cannot define the thickness of the lines (they will always be one pixel wide), while Handles primitives do have a parameter to define the thickness. Handles also allows you to paint dashed lines, as well as arcs or rectangles with translucent fills.

Many people take advantage of the benefits of Handles and use them not only for entering new values but also for drawing, completely replacing Gizmos. The problem is that this forces the entire drawing logic to be extracted to a custom editor like the one we saw before, which implies creating one more file and saving it in the Editor folder.

In any case, it is not about following any dogma, but about knowing what Handles and Gizmos offer in order to choose what best suits us on each occasion.

15 January 2025

Node Configuration Alerts in Godot

Godot emphasizes composition. Each atomic functionality is concentrated in a specific node, and complex objects (what Godot calls scenes) are formed by grouping and configuring nodes to achieve the desired functionality. This creates node hierarchies in which nodes complement each other.

Because of this, some nodes cannot provide complete functionality unless complemented by other nodes attached to them. A classic example is the RigidBody3D node, which cannot function without being complemented by a CollisionShape3D that defines its physical shape.

Godot offers an alert system to notify you when a node depends on another to function properly. You’ve probably seen it many times: a yellow warning triangle that displays an explanation when you hover your mouse over it.

Warning message indicating a missing child node

When developing scenes in Godot, they become nodes within others. If you take good design principles seriously and separate responsibilities, sooner or later, you’ll find yourself designing nodes that depend on other nodes for customization.

At that point, you might wonder if you, too, can emit alerts if one of your nodes lacks a complementary node. The answer is yes, you can, and I’m going to show you how.

For example, let’s assume we have a node named MovingAgent. Its implementation doesn’t matter, but for illustration, let’s suppose this node defines the movement characteristics (speed, acceleration, braking, etc.) of an agent. To define how we want the agent to move, we aim to implement nodes with different movement algorithms (e.g., straight line, zigzag, reverse). These nodes have diverse implementations but adhere to the ISteeringBehavior interface, offering a set of common methods that can be called from MovingAgent. Thus, the agent’s movement will depend on the ISteeringBehavior-compliant node attached to MovingAgent.

In this case, we’d want to alert the user of the MovingAgent node if it’s used without an ISteeringBehavior node attached to it.

To trigger this dependency alert, all base nodes in Godot provide the _GetConfigurationWarnings() method. To have our node issue warnings, we simply need to implement this method. For MovingAgent, this could look like the following implementation:

Implementation of _GetConfigurationWarnings()
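A hedged reconstruction of that implementation follows (the line numbers cited below refer to the original screenshot; FindChild<T>() is the extension method explained later in this article, and the MovingAgent base class is an assumption):

```csharp
using System.Collections.Generic;
using Godot;

public partial class MovingAgent : Node2D
{
    public override string[] _GetConfigurationWarnings()
    {
        var warnings = new List<string>();

        // Look for a child node that implements ISteeringBehavior.
        ISteeringBehavior steeringBehavior = this.FindChild<ISteeringBehavior>();
        if (steeringBehavior == null)
        {
            warnings.Add("This node needs a child node implementing " +
                         "ISteeringBehavior to work properly.");
        }

        // An empty array means everything is fine and no icon is shown.
        return warnings.ToArray();
    }
}
```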

The method is expected to return an array with all detected error messages (line 148). If Godot detects that the method returns an empty array, it interprets that everything is correct and won’t display the warning icon.

As you can see, the first thing the method does is check whether the MovingAgent node has a child node implementing ISteeringBehavior (line 149). If no such child is found (line 154), an error message is generated (line 156).

There’s no reason to limit ourselves to checking for just one type of node. We can search for multiple nodes, check their configurations, and generate multiple error messages, as long as we store the generated error messages in an array to return as the method’s result.

In this example, I store the error messages in a list (line 152) and convert it into an array before the method ends (line 159).

_GetConfigurationWarnings() runs in the following situations:

  • When a new script is attached to the node.
  • When a scene containing the node is opened.
  • When a property is changed in the node’s inspector.
  • When the script attached to the node is updated.
  • When a new node is attached to the one containing the script.

Therefore, you can expect the script to refresh the warning, displaying or removing the alert, in any of these scenarios.

And that’s it—there’s no more mystery to it... or maybe there is. Observant readers may have noticed that in line 150, I searched for a child node solely by type. Developers familiar with Unity are accustomed to searching for components by type because this engine provides a native method for it (GetComponent<>). However, Godot doesn’t offer a native method to search by type. The native FindChild implementations search for nodes by name, not by type. This was inconvenient for me because I wanted to attach nodes with different names (indicative of functionality) to MovingAgent, as long as they adhered to the ISteeringBehavior interface. So, lacking a native method, I implemented one via an extension method:

Extension method for searching child nodes by type
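A hedged reconstruction of that extension method (again, the line numbers cited below refer to the original screenshot):

```csharp
using Godot;

public static class NodeExtensions
{
    // Returns the first child assignable to T, optionally searching
    // recursively through the children's children.
    public static T FindChild<T>(this Node node, bool recursive = true)
        where T : class
    {
        foreach (Node child in node.GetChildren())
        {
            // Check whether this child is of the requested type
            // (a class, or an interface like ISteeringBehavior).
            if (child is T typedChild)
                return typedChild;

            if (recursive)
            {
                T found = child.FindChild<T>(recursive);
                if (found != null)
                    return found;
            }
        }

        return null;
    }
}
```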

The extension method iterates through all child nodes (line 20) and checks if they are of the type passed as a parameter (line 22).

If it locates a node of the desired type, it returns it and ends the method (line 24). Otherwise, it can continue searching recursively in the children’s children (line 28) if requested via parameters (line 26).

Thanks to this extension method, any instance of a Godot class inheriting from Node (is there any that doesn’t?) will offer the ability to search by type, as seen in line 150 of _GetConfigurationWarnings().

In your case, it might suffice to search child nodes by name. If not, the extension method solution for type-based searches might suit you better. This is the only complexity I see in this alert system, which is otherwise extremely easy to use.

Resulting alert from our implementation

11 January 2025

Creating custom Gizmos in Unity

I've written several articles explaining how to implement Gizmos in Godot, but I realized I haven’t explained how to do the equivalent in Unity. Godot has several advantages over Unity, mainly its lightweight nature and the speed it allows during development iterations, but it’s worth recognizing that Unity is very mature, and this maturity shows when it comes to Gizmos.

As you may recall, Gizmos are visual aids used to represent magnitudes related to an object’s fields and properties. They are primarily used in the editor to assist during the development phase. For example, it’s always easier to visualize the shooting range of a tank in a game if that range is represented as a circle around the tank. Another example is when you edit the shape of a Collider; what is displayed on the screen is its Gizmo.

In Unity, any MonoBehaviour can implement two methods for drawing Gizmos: OnDrawGizmos() and OnDrawGizmosSelected(). The first one draws Gizmos at all times, while the second only does so when the GameObject containing the script is selected. It's important to note a significant caveat about OnDrawGizmos(): it is not called if its script is minimized in the inspector.

The script SeekSteeringBehavior is minimized.

In theory, you can interact with the Gizmos drawn in OnDrawGizmos(), but not with those drawn in OnDrawGizmosSelected(). In practice, the knowledge of how to interact with Gizmos was lost a long time ago: it was once possible to click on them, but this functionality seems to have disappeared around Unity version 2.5. The mention in Unity's OnDrawGizmos() documentation about being able to click on Gizmos seems more like a sign that the documentation hasn't been fully updated.

In any case, Unity’s editor is full of Gizmos you can interact with, but that’s because they include an additional element: Handles. In this article, we’ll focus on Gizmos as a means of passive visual representation, leaving the explanation of interactive Gizmos via Handles for a future article. To simplify further, I’ll refer only to OnDrawGizmos(); the other method is identical, but is only called when its GameObject is selected in the hierarchy.

The OnDrawGizmos() method is only called in the editor during an update or when the focus is on the Scene View. We should avoid overloading this method with complex calculations, as we could degrade the editor’s performance. Although it’s only called from the editor, we could implement it as-is, knowing that Gizmos won’t appear in the final compiled game. However, I prefer to wrap the method’s implementation in #if UNITY_EDITOR ... #endif. It’s an old habit. While redundant when using only Gizmos, this guard becomes necessary if you include Handles in the method, as we’ll see in a later article.

Let’s assume we’re designing an agent that interposes itself between two others (Agent-A and Agent-B). The movement algorithm for the interposing agent isn’t relevant here, but its effect will be to measure a vector between Agents A and B and position the interposing agent at the midpoint. In such a case, we’d want to draw this midpoint on the screen to verify that the interposing agent is actually heading toward it. This is an ideal use case for Gizmos.

The MonoBehaviour responsible for calculating this midpoint also implements the OnDrawGizmos() method with the following code:

Example of OnDrawGizmos() implementation.
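A hedged reconstruction of that method follows (the line numbers analyzed below refer to the original screenshot, and the agentA/agentB references are assumptions):

```csharp
private void OnDrawGizmos()
{
#if UNITY_EDITOR
    if (!predictedPositionMarkerVisible || _predictedPositionMarker == null)
        return;

    // Line from Agent A to the midpoint marker.
    Gizmos.color = Color.blue;
    Gizmos.DrawLine(
        agentA.transform.position,
        _predictedPositionMarker.transform.position);

    // Filled circle at the marker's position.
    Gizmos.color = Color.magenta;
    Gizmos.DrawSphere(_predictedPositionMarker.transform.position, 0.2f);

    // Line from Agent B to the midpoint marker.
    Gizmos.color = Color.red;
    Gizmos.DrawLine(
        agentB.transform.position,
        _predictedPositionMarker.transform.position);
#endif
}
```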

Let’s analyze it line by line to understand how to draw any figure within this method.

Lines 127–128: These lines prevent the method from executing if we’ve decided to make the Gizmos invisible by setting the predictedPositionMarkerVisible variable to false, or if _predictedPositionMarker is null. This variable refers to the implementation of the interposing agent. For reasons not covered here, when the MonoBehaviour starts, I create a GameObject linked to _predictedPositionMarker. As the script calculates the midpoints between Agents A and B, it positions this GameObject at those midpoints. For our purposes, _predictedPositionMarker is a GameObject that acts as a marker for the position where the interposing agent should be. If it’s null, there’s nothing to draw.

Line 130: This line sets the color used for drawing all Gizmos until a different value is assigned to Gizmos.color.

Lines 131–133: Here, we use the Gizmos.DrawLine() call to draw a line between Agent A’s position and the marker.

Line 134: This line changes the drawing color to magenta (purple) to draw a circle at the marker’s position using Gizmos.DrawSphere(). This method draws a filled circle. If we only wanted an outline, we could use Gizmos.DrawWireSphere().

Lines 137–139: These lines use Gizmos.DrawLine() to draw another line (with its own color) between Agent B’s position and the marker.

The result can be seen when running the game from the editor:

Gizmos drawn when running the game from the editor.

Agents A and B are colored blue and red, respectively, while the interposing agent is green. The Gizmos are the blue and red lines and the purple circle.

And that’s it! Using these primitives, along with the rest of the module’s Gizmos offerings, we can draw any shape we want. These shapes will update in the editor when we change the fields they depend on in the inspector or, if tied to variables, as those variables change while running the game in the editor.

One last note: I often use booleans like predictedPositionMarkerVisible to decide which specific scripts can draw their Gizmos. However, Unity’s editor allows you to disable the drawing of all Gizmos. To do this, just click the button on the far right of the toolbar at the top of the Scene tab.

Button to toggle Gizmo visibility.

I recommend ensuring this button is enabled. The internet is full of posts from people asking why their Gizmos aren’t being drawn... only to realize they had inadvertently disabled this button.