30 April 2025

2D Navigation in Unity

Someone reading a map in front of a maze
A couple of months ago, I published an article on how to use the 2D navigation system in Godot. It's time to write a similar article for Unity. I'm not going to complicate things, so I'm going to replicate the structure of that article, but adapting the instructions to Unity.

Remember that the starting premise is that we want to have NPC characters that are able to navigate the scene to go from one point to another. Let's start by seeing how to create a scene that can be processed by the navigation system.

Creating a navigable scene 

As a scene, any set of sprites could work, as long as they have associated colliders to define which ones pose an obstacle to the NPC. Since that wouldn't be very interesting, we're going to opt for a somewhat more complex case, but also quite common in a 2D game. Let's assume that our scene is built using tilemaps. The use of tilemaps deserves its own article and you will find plenty of them on the Internet, so here I will assume that you already know how to use them.

In our example, we will assume a scene built with three tilemap layers: one for the perimeter walls of the scene (Tilemap Wall), another for the internal obstacles (Tilemap Obstacles), and another for the ground (Tilemap Ground).

Hierarchy of tilemaps in the example

Courtyard is the parent node of the tilemaps and the one that contains the Grid component with the tilemap grid. Tilemap Ground only contains the visual components Tilemap and TilemapRenderer to display the ground tiles. The Wall and Obstacles tilemaps also have these components, but they incorporate two additional ones: a Tilemap Collider 2D and a Composite Collider 2D. The Tilemap Collider 2D component registers the collider of each tile. This collider is defined in the sprite editor for each of the sprites used as tiles. The problem with Tilemap Collider 2D is that it counts the collider of each tile individually, which is very inefficient given the number of tiles that any tilemap-based scene accumulates. For this reason, it is very common to pair the Tilemap Collider 2D component with another component called Composite Collider 2D. This latter component merges all the tile colliders, generating a single combined collider that is much lighter for the engine to manipulate.

When using these components in your tilemaps, I advise you to do two things:

  • Set the "Composite Operation" attribute of the Tilemap Collider 2D to the value Merge. This will tell the Composite Collider 2D component that the operation it needs to perform is to merge all the individual colliders into one. 
  • In the Composite Collider 2D, I would set the "Geometry Type" attribute to the value Polygons. If you leave it at the default value, Outlines, the generated collider will be hollow, meaning it will only have edges, so some collider detection operations could fail, as I explained in a previous article.

Creating a 2D NavMesh 

A NavMesh is a component that analyzes the scene and generates a graph based on it. This graph is used by the pathfinding algorithm to guide the NPC. Creating a NavMesh in Unity 3D is very simple.

The problem is that in 2D, the components that work in 3D do not work. There are no warnings or error messages. You simply follow the instructions in the documentation and the NavMesh is not generated if you are in 2D. In the end, after much searching on the Internet, I have come to the conclusion that Unity's native implementation for 2D NavMeshes is broken. It only seems to work for 3D.

From what I've seen, everyone ends up using an open-source package, external to Unity, called NavMeshPlus. This package implements a series of components very similar to Unity's native ones, but they do work when generating a 2D NavMesh.

The previous link takes you to the GitHub page of the package, where it explains how to install it. There are several ways to do it, the easiest perhaps is to add the URL of the repository using the "Install package from git URL..." option of the "+" icon in Unity's package manager. Once you do this and the package index is refreshed, you will be able to install NavMeshPlus, as well as its subsequent updates. 

Option to add a git repository to Unity's package manager.

Once you have installed NavMeshPlus, you need to follow these steps: 

  1. Create an empty GameObject in the scene. It should not depend on any other GameObject. 
  2. Add a Navigation Surface component to the previous GameObject. Make sure to use the NavMeshPlus component and not Unity's native one. My advice is to pay attention to the identifying icons of the components shown in the screenshots and make sure that the components you use have the same icons. 
  3. You also need to add the Navigation CollectSources2d component. In that same component, you need to press the "Rotate Surface to XY" button; that's why it's important that these components are installed in an empty GameObject that doesn't depend on any other. If you did it correctly, it will seem like nothing happens. In my case, I made a mistake and added the components to the Courtyard GameObject mentioned earlier, and when I pressed the button, the entire scene rotated. So be very careful. 
  4. Then you need to add a Navigation Modifier component to each of the elements in the scene. In my case, I added it to each of the GameObjects of the tilemaps seen in the screenshot with the hierarchy of tilemaps in the example. These components will help us discriminate which tilemaps define areas that can be traversed and which tilemaps define obstacles. 
  5. Finally, in the Navigation Surface component, you can press the "Bake" button to generate the NavMesh. 

Let's examine each of the previous steps in more detail.

The GameObject where I placed the two previous components hangs directly from the root of the hierarchy. I didn't give it much thought and called it NavMesh2D. In the following screenshot, you can see the components it includes and their configuration.

Configuration of the main components of NavMeshPlus

As you can see in the previous figure, the main utility of the Navigation Surface component is to define which layers will be taken into account to build our NavMesh ("Include Layers"). I suppose that if you have heavily populated layers, you might be interested in limiting the "Include Layers" parameter to only the layers that contain scene elements. In my case, the scene was so simple that even including all the layers, I didn't notice any slowdown when creating the NavMesh. Another customization I made was to set the "Use Geometry" parameter to "Physics Colliders". This value offers better performance when using tilemaps, since simpler geometric shapes are used to represent the scene. The "Render Meshes" option creates a much more detailed NavMesh, but a less optimized one, especially when using tilemaps.

If you're wondering how to model the physical dimensions of the navigation agent (its radius and height, for example), although they are shown at the top of the "Navigation Surface" component, they are not configured there but in the Navigation tab, which is also visible in the previous screenshot. If you don't see it in your editor, you can open it in Window --> AI --> Navigation.

Navigation tab

Finally, the Navigation Modifier components allow us to distinguish tilemaps that contain obstacles from tilemaps that contain walkable areas. To do this, we need to check the "Override Area" box and then define the type of area this tilemap contains. For example, the GameObjects of the Wall and Obstacles tilemaps have the Navigation Modifier component from the following screenshot:

Navigation Modifier applied to tilemaps with obstacles

By marking the area as "Not Walkable," we are saying that what this tilemap paints are obstacles. If it were a walkable area, like the Ground tilemap, we would set it to Walkable.

Once all the Navigation Modifiers are configured, we can create our NavMesh by pressing the "Bake" button on the Navigation Surface component. To see it, you need to click on the compass icon in the lower toolbar (it's the second from the right in the toolbar) of the scene tab. This will open a pop-up panel on the right where you can check the "Show NavMesh" box. If the NavMesh has been generated correctly, it will appear in the scene tab, overlaying the scene. All areas marked in blue will be walkable by our NPC.

NavMesh visualization


Using the 2D NavMesh 

Once the 2D NavMesh is created, our NPCs should be able to read it.

In the case of Godot, this meant including a NavigationAgent2D node in the NPCs. From there, you would tell that node where you wanted to go, and it would calculate the route and return the locations of the route's successive waypoints. The rest of the agent's nodes would be responsible for moving it to each of those locations.

Unity also has a NavMeshAgent component, but the problem is that it is not passive like Godot's; that is, it doesn't just give you the route's waypoints but also moves the agent to them. This can be very convenient when the movement is simple: with a single component you meet two needs, guiding the movement and executing it. However, thinking about it carefully, it is not a good architecture, because it does not respect the principle of separation of responsibilities, which states that each component should focus on performing a single task. My project heavily customizes movement: it is not uniform, but changes along the route based on multiple factors. That is a level of customization that exceeds what Unity's NavMeshAgent allows. If Unity had respected the separation of responsibilities, as Godot has done in this case, it would have split route generation and agent movement into two separate components. That way, the route generator could have been used as is, while the movement component could have been wrapped in other components to customize it appropriately.

Fortunately, there is a little-publicized way to query the 2D NavMesh to get routes without needing a NavMeshAgent, which allows replicating Godot's functionality. I will focus this article on that side because it is what I have done in my project. If you are interested in how to use the NavMeshAgent, I recommend consulting Unity's documentation, which explains in great detail how to use it.

Querying the NavMesh to get a route between two points

In the previous screenshot, I have provided an example of how to perform these queries.

The key is in the call to the NavMesh.CalculatePath() method on line 99. This method takes 4 parameters: 

  • Starting point: Generally, it is the NPC's current position, so I passed it directly as transform.position.
  • Destination point: In this case, I passed a global variable of the NPC where the location of its target is stored. 
  • A NavMesh area filter: In complex cases, you can have your NavMesh divided into areas. This bitmask allows you to define which areas you want to restrict the query to. In a simple case like this, it is normal to pass NavMesh.AllAreas to consider all areas. 
  • An output variable of type AI.NavMeshPath: this is the variable where the resulting route to the destination point will be deposited. I passed a private global variable of the NPC. 

The call to CalculatePath() is synchronous, meaning the game will pause for a moment until CalculatePath() calculates the route. For small routes and occasional updates, the interruption will not affect the game's performance; but if you spend a lot of time calculating many long routes, you will find that performance starts to suffer. In those cases, it is best to divide the journeys into several shorter segments that are lighter to calculate. In the case of formations, instead of having each member of the formation calculate their route, it is more efficient for only the "commander" to calculate the route and the rest to follow while maintaining the formation.

The output variable of type AI.NavMeshPath, where CalculatePath() dumps the calculated route, could still be passed to a NavMeshAgent through its SetPath() method. However, I preferred to do without the NavMeshAgent, so I processed the output variable in the UpdatePathToTarget() method on line 107 to make it easier to use. An AI.NavMeshPath variable has the "corners" field where it stores an array with the locations of the different waypoints of the route. These locations are three-dimensional (Vector3), while in my project I work with two-dimensional points (Vector2), which is why in the UpdatePathToTarget() method I go through all the points in the "corners" field (line 111) and convert them to elements of a Vector2 array (line 113). This array is then used to direct my movement components to each of the waypoints of the route.
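If the screenshot is hard to read, the code is roughly equivalent to the following sketch (CurrentTarget and the class name are my own placeholders for the NPC's variables mentioned above; the line numbers cited in the text refer to the screenshot, not to this sketch):

    using UnityEngine;
    using UnityEngine.AI;

    public class NpcNavigator : MonoBehaviour
    {
        // Hypothetical field: where the NPC's target location is stored.
        public Vector2 CurrentTarget;

        private NavMeshPath _path;     // Output variable for CalculatePath().
        private Vector2[] _waypoints;  // 2D waypoints consumed by the movement components.

        private void Awake()
        {
            _path = new NavMeshPath();
        }

        private void UpdatePathToTarget()
        {
            // Synchronous query against the baked NavMesh.
            bool pathFound = NavMesh.CalculatePath(
                transform.position,  // Starting point: the NPC's current position.
                CurrentTarget,       // Destination point.
                NavMesh.AllAreas,    // Area filter: consider all areas.
                _path);              // The resulting route is deposited here.

            if (!pathFound) return;

            // The route's corners come back as Vector3; convert them to Vector2.
            _waypoints = new Vector2[_path.corners.Length];
            for (int i = 0; i < _path.corners.Length; i++)
            {
                _waypoints[i] = _path.corners[i]; // Implicit Vector3 -> Vector2 drops Z.
            }
        }
    }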

Conclusion 

Done, with this you have everything you need to make your NPCs move intelligently through the scene, navigating to reach the target. At a high level, it is true that the concepts are very similar between Godot and Unity, but the devil is in the details. When you get down to the implementation level, you will find the nuances and differences that we have analyzed in this article, but with the instructions I have given you, the result you obtain in Unity and Godot should be similar.

24 April 2025

How to detect obstacles in Unity 2D

In games it is quite common to need to determine whether a point in the scene is free of obstacles, in order to place a character or some other game element. Think, for example, of an RTS: when constructing a building, you have to choose a free patch of land, but how can your code know if a site already has a building or some other type of object?

In a 3D game, the most common solution is to project a ray from the camera's viewpoint, passing through the point where the mouse cursor is located on the screen plane, until it hits a collider. If the collider is the ground, that point is free, and if not, there is an obstacle.

Of course, if the object we want to place is larger than a point, projecting a simple ray falls short. Imagine we want to place a rectangular building, and the point where its center would go is free, but the corner area is not. Fortunately, for those cases, Unity allows us to project complex shapes beyond a mere point. For example, the SphereCast methods move an invisible sphere along a line, returning the first collider it hits. Another method, BoxCast, would solve the problem of the rectangular building by projecting a box with a rectangular base along a line. We would only have to make that projection along a vertical line toward the ground position we want to check.

In 2D, there are also projection methods, BoxCast and CircleCast, but they only work when the projection takes place in the XY plane (the screen plane). That is, they are equivalent to moving a box or a circle in a straight line across the screen to see if they touch a collider. Of course, that has its uses. Imagine you are making a top-down game and want to check whether the character will be able to pass through an opening in a wall. In that case, you would only need to do a CircleCast of a circle, with a diameter equal to the width of our character's shoulders, projected through the opening to see if the circle touches the wall's colliders.

A CircleCast, projecting a circle along a vector.
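As a quick sketch of that idea (the field and parameter names are hypothetical):

    using UnityEngine;

    public class OpeningChecker : MonoBehaviour
    {
        [SerializeField] private float shoulderRadius = 0.5f;  // Half the character's shoulder width.
        [SerializeField] private LayerMask wallLayers;         // Layers containing the wall colliders.

        // Returns true if a circle the size of the character's shoulders can travel
        // from 'start' along 'direction' without touching any wall collider.
        public bool CanPassThrough(Vector2 start, Vector2 direction, float distance)
        {
            RaycastHit2D hit = Physics2D.CircleCast(
                start, shoulderRadius, direction, distance, wallLayers);
            return hit.collider == null;
        }
    }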

But what happens when you have to project on the Z-axis in a 2D game? For example, for a 2D case equivalent to the 3D example we mentioned earlier. In that case, neither BoxCast nor CircleCast would work because those methods define the projection vector using a Vector2 parameter, limited to the XY plane. In those cases, a different family of methods is used: the "Overlap" methods.

The Overlap methods place a geometric shape at a specific point in 2D space and, if the shape overlaps with any collider, they return it. Like projections, there are methods specialized in different geometric shapes: OverlapBox, OverlapCapsule, and OverlapCircle, among others.

Let's suppose a case like the following figure. We want to know if a shape the size of the red circle would touch any obstacle (in black) if placed at the point marked in the figure.

Example of using OverlapCircle.

In that case, we would use OverlapCircle to "draw" an invisible circle at that point (the circle seen in the figure is just a gizmo) and check if the method returns any collider. If not, it would mean that the chosen site is free of obstacles.

A method calling OverlapCircle could be as simple as the following:

Call to OverlapCircle

The method in the figure returns true if there is no collider within a radius (MinimumCleanRadius) of the candidateHidingPoint position. If there is any collider, the method returns false. For that, the IsCleanHidingPoint method simply calls OverlapCircle, passing the following parameters:

  • candidateHidingPoint (line 224): A Vector2 with the position of the center of the circle to be drawn. 
  • MinimumCleanRadius (line 225): A float with the circle's radius. 
  • NotEmpyGroundLayers (line 226): A LayerMask with the layers of the colliders we want to detect. It serves to filter out colliders we don't want to detect. OverlapCircle will discard a collider that is not in one of the layers we passed in the LayerMask. 

If the area is free of colliders, OverlapCircle will return null. If there are any, it will return the first collider it finds. If you are interested in getting all the colliders that might be in the area, you can use the OverlapCircleAll variant, which returns an array with all of them.
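A minimal sketch of such a method, reusing the names from the article (including the NotEmpyGroundLayers field exactly as it is spelled in the screenshot):

    using UnityEngine;

    public class HidingPointChecker : MonoBehaviour
    {
        [SerializeField] private float MinimumCleanRadius = 1.0f;  // Circle's radius.
        [SerializeField] private LayerMask NotEmpyGroundLayers;    // Layers to detect.

        // Returns true if no collider in the given layers overlaps a circle of
        // MinimumCleanRadius centered on candidateHidingPoint.
        public bool IsCleanHidingPoint(Vector2 candidateHidingPoint)
        {
            Collider2D overlappingCollider = Physics2D.OverlapCircle(
                candidateHidingPoint,   // Center of the invisible circle.
                MinimumCleanRadius,     // Radius of the circle.
                NotEmpyGroundLayers);   // Only colliders in these layers are detected.

            // OverlapCircle returns null when the area is free of colliders.
            return overlappingCollider == null;
        }
    }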

We could end here, but I don't want to do so without warning you about a headache you will undoubtedly encounter in 2D. Fortunately, it can be easily solved if you are warned.

The problem can occur if you use tilemaps, which are very common for shaping 2D scenarios. To form the colliders of a tilemap, it is normal to use a "Tilemap Collider 2D" component, and it is also quite common to add a "Composite Collider 2D" component to merge all the individual tile colliders into one and improve performance. The catch is that by default the "Composite Collider 2D" component generates a hollow collider, defined only by its outline; I suppose it does this for performance reasons. This happens when the "Geometry Type" parameter has the value Outlines.

Possible values of the Geometry Type parameter.

Why is it a problem that the collider is hollow? Because in that case, the call to OverlapCircle will only detect the collider if the circle it draws intersects the collider's edge. If, on the other hand, the circle fits neatly inside the collider without touching any of its edges, then OverlapCircle will not return any collider, and we would mistakenly assume that the area is clear. The solution is simple once it has been explained to you: change the default value of "Geometry Type" to Polygons. This value makes the generated collider "solid," so OverlapCircle will detect it even if the drawn circle fits inside without touching its edges.

It seems like a small thing, and it is, but it took me a couple of hours to solve until I managed to find the key. I hope this article helps you avoid the same issue.

19 February 2025

Assigning data to tiles in Unity

Tilemaps are very often used to create 2D games. Their simplicity makes them ideal for creating a retro-style setting. 

However, at first glance, Unity's entire implementation of Tilemaps seems to be limited to its aesthetic aspect. That's why it's quite common to find people on the forums asking how to associate data with the different Tiles used in a Tilemap.

Why might we need to associate data with a tile? For a variety of reasons. For example, let's say we're using a Tilemap to lay out the map of a top-down game. In that case, we'll probably want to add a "drag" value to the different tiles, so that our character moves slower on tiles that represent a swamp, faster on tiles that represent a path, and can't cross tiles that represent impassable rocks.

For our examples, let's assume a scenario like the one in the capture:

Scenario of our examples


It represents an enclosed area that includes three obstacles inside, one on the left (with a single tile), one in the center (with four tiles) and one on the right (with three). The origin of the scene's coordinates is at its center; I have indicated it with the crosshair of an empty GameObject.

The problem we want to solve in our example is how to make a script analyze the scenario, identify the black tiles and take note of their positions.

As with many other cases, there is no single solution to this problem. We have an option that is quick to implement and offers more possibilities, but can put too much overhead on the game. On the other hand, we have another option that is more difficult to implement and is more limited, but will put less overhead on the game. Let's analyze both.

Associate a GameObject to a tile

Generally, when we want to identify at once the GameObjects that belong to the same category, the easiest way would be to mark them with a tag and search for them in the scene with the static method GameObject.FindGameObjectsWithTag(). The problem is that tiles are ScriptableObjects, so they cannot be marked with tags.

ScriptableObjects for tiles are created when we drag sprites onto the Tile Palette tab. At that point, the editor lets us choose the name and location of the asset with the ScriptableObject we want to create, associated with the tile. From that point on, if we click on the asset of that ScriptableObject we can edit its parameters through the inspector. For example, for the tile I used for the perimeter walls, the parameters are:

Setting up a Tile


The fields that can be configured are:

  • Sprite: This is the sprite with the visual appearance of the tile. Once the sprite is set, we can press the "Sprite Editor" button below to configure both the pivot point and the collider associated with the sprite.
  • Color: Allows you to color the sprite with the color you set here. The neutral color is white; if you use it, Unity will understand that you do not want to force the sprite's color.
  • Collider Type: Defines whether we want to associate a Collider to the tile. If we choose "None" it will mean that we do not want the Tile to have an associated Collider; if we set "Sprite", the collider will be the one we have defined through the Sprite Editor; finally, if the chosen value is "Grid", the collider will have the shape of the Tilemap cells.
  • GameObject to Instantiate: This is the parameter we are interested in. We will explain this in a moment.
  • Flags: These are used to modify how a tile behaves when placed on a Tilemap. For our purposes, you can simply leave it at its default value.

As I was saying, the parameter that interests us for our purpose is "GameObject to Instantiate". If we drag a prefab to this field, the Tilemap will be in charge of creating an instance of that prefab in each location where that Tile appears.

For example, to be able to easily locate the black tiles, those of the obstacles, I have associated a prefab, which I have called ObstacleTileData, to that parameter of their Tile.

Setting up the Obstacle Tile

Since all I want is to be able to associate a tag with the tiles, in order to locate them with FindGameObjectsWithTag(), it was enough to make ObstacleTileData a simple transform with the tag I was interested in. In the screenshot you can see that I used the InnerObstacle tag.

ObstacleTileData with the InnerObstacle tag

Once this is done, and once the tiles we want to locate are deployed in the scene, we only need the following code to make an inventory of the tiles with the InnerObstacle tag.

Code to locate the tiles that we have marked with the InnerObstacle tag

We just need to place the above script on any GameObject located next to the scene's Tilemaps. For example, I have it hanging from the same transform as the Grid component of the scene's Tilemaps.

When the level starts, the Tilemap will create an instance of the ObstacleTileData prefab at each position in the scene where a black obstacle tile appears. Since the ObstacleTileData prefab has no visual component, its instances will be invisible to the player, but not to our scripts. Since these instances are marked with the "InnerObstacle" tag, our script can locate them by calling FindGameObjectsWithTag(), on line 16 of the code.
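Since the code is shown as a screenshot, here is a minimal sketch of what such a script might look like (only the InnerObstacle tag and the FindGameObjectsWithTag() call come from the article; the rest of the names are hypothetical):

    using UnityEngine;

    public class ObstacleInventory : MonoBehaviour
    {
        // Filled at startup with the positions of every obstacle tile.
        private Vector2[] _obstaclePositions;

        private void Start()
        {
            // Find every invisible instance that the Tilemap created from the
            // ObstacleTileData prefab, thanks to its InnerObstacle tag.
            GameObject[] obstacles = GameObject.FindGameObjectsWithTag("InnerObstacle");

            _obstaclePositions = new Vector2[obstacles.Length];
            for (int i = 0; i < obstacles.Length; i++)
            {
                _obstaclePositions[i] = obstacles[i].transform.position;
            }
        }
    }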

To demonstrate that the code correctly locates the obstacle tile locations, I've set a breakpoint on line 17, so that we can analyze the contents of the "obstacles" variable after calling FindGameObjectsWithTag(). When running the game in debug mode, the contents of that variable are as follows:

Obstacle tile positions

If we compare the positions of the GameObjects with those of the tiles, we can see that obstacles[7] is the obstacle on the left, with a single tile. The GameObjects obstacles[2], [3], [5] and [6] correspond to the four tiles of the central obstacle. The three remaining GameObjects ([0], [1] and [4]) are the tiles of the obstacle on the right, the elbow-shaped one.

In this way, we have achieved a quick and easy inventory of all the tiles of a certain type.

However, tags aren't the only way to locate the instances of the GameObjects associated with each Tile. Tilemap objects offer the GetInstantiatedObject() method, which is passed a position within the Tilemap and returns the GameObject instantiated for the tile at that position. Using this method is less direct than locating objects by tag, since it forces you to examine the Tilemap positions one by one, but there will be situations where you have no other choice.
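For illustration, a sketch of how GetInstantiatedObject() might be used to walk the Tilemap (the tilemap reference is hypothetical):

    using UnityEngine;
    using UnityEngine.Tilemaps;

    public class TileObjectScanner : MonoBehaviour
    {
        [SerializeField] private Tilemap obstaclesTilemap; // Hypothetical reference.

        private void Start()
        {
            // GetInstantiatedObject() forces us to examine the Tilemap cell by cell.
            foreach (Vector3Int cell in obstaclesTilemap.cellBounds.allPositionsWithin)
            {
                GameObject instance = obstaclesTilemap.GetInstantiatedObject(cell);
                if (instance != null)
                {
                    Debug.Log($"Tile at {cell} instantiated {instance.name}");
                }
            }
        }
    }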

Finally, before we leave this section of the article, you should be aware that there may be situations where instantiating a GameObject per tile can weigh down the performance of the game. In the example case, we are talking about a few tiles, but in much larger scenarios we may be talking about hundreds of tiles, so instantiating hundreds of GameObjects may be something to think twice about.

Extending the Tile class

By default, I would use the above strategy; but there may be situations where you don't want to instantiate a large number of GameObjects. In that case, you may want to use the approach I'm going to explain now.

Tile is a class that inherits from ScriptableObject. We can extend the Tile class to add any parameters we want. For example, we could create a specialized Tile with a boolean to define whether the tile is an obstacle or not.

Tile with a specialized parameter
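A minimal sketch of such an extended tile (the CreateAssetMenu attribute is my assumption; it is one convenient way to make the asset creatable from the editor):

    using UnityEngine;
    using UnityEngine.Tilemaps;

    // Extended tile with a specialized parameter.
    [CreateAssetMenu(fileName = "CourtyardTile", menuName = "Tiles/CourtyardTile")]
    public class CourtyardTile : Tile
    {
        // Custom parameter: marks whether this tile is an obstacle.
        public bool IsObstacle;
    }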

This tile can be instantiated like any ScriptableObject to create an asset. When we do this, we will see that the specialized parameter will appear and we can configure it through the inspector.

Setting the tile with the specialized parameter

The key is that the assets we create this way can be dragged to the Tile Palette so they can be drawn in the scene.

Once that is done, we could use the Tilemap.GetTile() method to retrieve the tiles for each position, cast them to our custom tile type (in our case CourtyardTile) and then analyze the value of the custom parameter.
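A sketch of that approach, assuming the CourtyardTile class above and a hypothetical reference to the tilemap:

    using UnityEngine;
    using UnityEngine.Tilemaps;

    public class CourtyardScanner : MonoBehaviour
    {
        [SerializeField] private Tilemap courtyardTilemap; // Hypothetical reference.

        private void Start()
        {
            // Walk the tilemap cell by cell, looking for our custom tiles.
            foreach (Vector3Int cell in courtyardTilemap.cellBounds.allPositionsWithin)
            {
                // GetTile<T>() returns null if the tile at this cell is not a CourtyardTile.
                CourtyardTile tile = courtyardTilemap.GetTile<CourtyardTile>(cell);
                if (tile != null && tile.IsObstacle)
                {
                    Debug.Log($"Obstacle tile at {cell}");
                }
            }
        }
    }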

The drawback of this method is that we cannot use tags or layers to search for data associated with tiles, which forces us to go through the tilemap cell by cell to find them, but it has the advantage of freeing our game from the burden of creating a GameObject per tile.

Conclusion

Whether by creating a GameObject per tile or by extending the Tile class, you now have the resources necessary to associate data with each of the tiles. This will allow you to provide the tiles with essential semantics for a multitude of algorithms, such as pathfinding algorithms.

08 February 2025

2D Navigation in Godot

NPCs or Non-Playable Characters are all the characters in the game that are not controlled by the player, but that interact with the player. They can range from the player's allies to enemies that try to kill them. One of the great challenges for game developers is to equip their NPCs with a series of behaviors that convey the appearance of life and intelligence.

One of the clearest signs of life is movement. If something moves on its own initiative, one of the reasons our mind instinctively considers is that it may be alive. On the other hand, one of the signs of a minimum of intelligence is that this movement occurs while avoiding obstacles in its path. If a creature moves when we approach it, but immediately runs into the wall in front of it, we may think that this creature is alive, but not that it is very intelligent.

That's why most game engines, as long as they have an artificial intelligence package, first include some kind of pathfinding tool to allow NPCs to orient themselves around the environment.

Godot is no exception and therefore incorporates pathfinding functionality using meshes, both in 2D and 3D. For simplicity, we will focus on 2D.

Creating a map

To orient ourselves, humans do best with a map, and the same goes for an engine. That map is based on our scenario, but to model it we need a node called NavigationRegion2D. We must add it to the scene of any level we want to map. For example, supposing that our level is based on several TileMapLayers (one with the floor tiles, another with the perimeter walls, and another with the tiles of the obstacles inside the perimeter), the node structure with the NavigationRegion2D node could be the following:

Node hierarchy with a NavigationRegion2D

Note that the NavigationRegion2D node is the parent of the TileMapLayers, since by default it builds the map only from the conclusions it draws by analyzing its child nodes.

When configuring the NavigationRegion2D through the inspector, we will see that it requires the creation of a NavigationPolygon resource; this resource stores the map information of our level.

Setting up a NavigationRegion2D


Once the resource has been created, when clicking on it, we will see that it has quite a few properties to configure.

Setting up a NavigationPolygon


In a simple case like our example, only two parameters need to be configured:
  • Radius: Here we will set the radius of our NPC, with a small additional margin. This way, the pathfinding algorithm will add this distance to the outline of the obstacles to prevent the NPC from rubbing against them.
  • Parsed Collision Mask: All objects of the child nodes that are in the collision layers we mark here will be considered obstacles.
Once this is done, we will only need to mark the limits of our map. To do this, note that when you click on the NavigationPolygon resource in the inspector, the following toolbar will appear in the scene tab:

Toolbar for defining the shape of a NavigationPolygon


Thanks to this toolbar we can define the boundaries of our map. Then, NavigationRegion2D will be in charge of identifying obstacles and making "holes" in our map to indicate the areas through which our NPC will not be able to pass.

The first button (green) on the toolbar is used to add new vertices to the shape of our map, the second button (blue) is used to edit a vertex already placed, while the third (red) is used to delete vertices.

In a simple scenario, such as a tilemap, it may be enough to draw the four vertices that limit the maximum extension of the tilemap. In the following screenshot you can see that I have placed a vertex in each corner of the area I want to map.

Vertices of our NavigationPolygon


Once we have defined the boundaries of our map, we will have to start its generation. To do this, we will press the Bake NavigationPolygon button in the toolbar. 
The result will be that NavigationRegion2D will mark in blue the areas where you can wander, once the limits of the map have been analyzed and the obstacles within them have been detected.

We will have to remember to press the Bake NavigationPolygon button whenever we add new obstacles to the level or move existing ones, otherwise the map of the navigable areas will not update.

NavigationPolygon, after being baked.


For an object to be identified as an obstacle, it has to be configured to belong to one of the collision layers that we have configured in the Parsed Collision Mask field of the NavigationPolygon. In the case of a TileMapLayer this is configured as follows:
  1. In those TileMapLayers that contain obstacles we will have to mark the Physics -> Collision Enabled parameter.
  2. In the Tile Set resource that we are using in the TileMapLayer, we will have to make sure to add a physical layer and place it in one of the layers contemplated in the Parsed Collision Mask of the NavigationPolygon.
  3. You must also add a navigation layer and set it to the same value as the NavigationRegion2D node's navigation layer.
For example, the TileMapLayer containing the interior obstacles in my example has the following settings for its Tile Set:

Setting up the TileMapLayer


Then, inside the TileSet, not all tiles have to be obstacles, only those that have a collider configured. Remember that the tiles' colliders are configured from the TileSet tab. I won't go into more detail about this because that has more to do with the TileMaps configuration than with the navigation itself.

Setting a tile's collider


Using the map

Once the map is created, our NPCs need to be able to read it. To do this, they need a NavigationAgent2D node in their hierarchy.

The NavigationAgent2D node within an NPC's hierarchy


In a simple case, you may not even need to change anything from its default settings. Just make sure its Navigation Layers field is set to the same value as the NavigationRegion2D.

From your script, once you have a reference to the NavigationAgent2D node, you simply set the position you want to reach in its TargetPosition property. For example, if we had an NPC who wanted to hide at a point on the map, we could include a property that asks the NavigationAgent2D node to find the route to reach that point, as you can see in the screenshot.


Setting a target position in a NavigationAgent2D
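In Godot C#, such a property could be sketched like this (the class name, node path and HidingPoint property are hypothetical; TargetPosition is the agent's real property):

    using Godot;

    public partial class HidingNpc : CharacterBody2D
    {
        private NavigationAgent2D _navigationAgent;

        public override void _Ready()
        {
            _navigationAgent = GetNode<NavigationAgent2D>("NavigationAgent2D");
        }

        // Hypothetical property: assigning a hiding point asks the agent
        // to calculate a route to reach it.
        public Vector2 HidingPoint
        {
            get => _navigationAgent.TargetPosition;
            set => _navigationAgent.TargetPosition = value;
        }
    }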

Once we have told the node where we want to go, it will calculate a route and give us the different stops on that route as we reach them.

In order for the node to tell us where the next stop on the path is, we will have to call the GetNextPathPosition() method. It is important to note that this method is responsible for updating quite a few things internal to the pathfinding algorithm, so a requirement is that we call it once in each call to _PhysicsProcess() of our NPC.

In the screenshot you have the _PhysicsProcess() of the agent I'm using as an example. Most of the code in the screenshot refers to topics that are not the subject of this article, but I'm including it to provide some context. In fact, for what we're talking about, you only need to look at lines 221 to 225.

Obtaining the next stop on the route, within the _PhysicsProcess.


On line 225 you can see how we call GetNextPathPosition() to get the next point we need to go to in order to follow the path drawn to the target set in TargetPosition. How we get to that point is up to us. The NavigationAgent2D simply guarantees two things:
  • That there is no obstacle between the NPC and the next point on the route that he marks for you.
  • That if you follow the route points that it gives you, you will end up reaching the objective... if there is a route that leads to it.
I want to emphasize this because, unlike Unity's NavMeshAgent, Godot's agent does not move the NPC in which it is embedded. It simply gives directions on where to go.
Apart from the general case, there are certain caveats in the code from the screenshot that need to be made clear.

For starters, the Godot documentation says not to keep calling GetNextPathPosition() once you've finished traversing the path; otherwise the NPC may "shake" by forcing further updates of the pathfinding algorithm after it has already reached the goal. That's why, on line 224, I check that we haven't reached the end of the path yet before calling GetNextPathPosition(). So don't forget to check that IsNavigationFinished() returns false before calling GetNextPathPosition().

On the other hand, the pathfinding algorithm takes a while to converge, especially at the beginning. If we query it too early, it will throw an exception. Typically it takes one or two physics frames (i.e. one or two calls to _PhysicsProcess()). That is why line 222 checks whether the IsReady property is true before continuing. The catch is that NavigationAgent2D does not have an IsReady property (although it should); it is just a property I created myself to wrap a non-intuitive query:

How to check that the pathfinding algorithm is ready to answer queries


Basically, what the property does is ensure that the pathfinding algorithm has managed to generate at least one version of the path. 
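Putting these caveats together, a minimal Godot C# sketch of the whole pattern could look like this (the class name and node path are hypothetical; the IsReady property wraps the non-intuitive query just described):

    using Godot;

    public partial class NpcMover : CharacterBody2D
    {
        private NavigationAgent2D _navigationAgent;

        // The pathfinding algorithm is considered ready once it has generated
        // at least one version of the path.
        private bool IsReady => _navigationAgent.GetCurrentNavigationPath().Length > 0;

        public override void _Ready()
        {
            _navigationAgent = GetNode<NavigationAgent2D>("NavigationAgent2D");
        }

        public override void _PhysicsProcess(double delta)
        {
            // Don't query the agent before it has converged, nor after the
            // route is finished (to avoid the "shaking" mentioned above).
            if (!IsReady || _navigationAgent.IsNavigationFinished()) return;

            Vector2 nextStop = _navigationAgent.GetNextPathPosition();
            // Moving toward nextStop is up to our own movement code.
        }
    }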

Conclusion

And that's it: by setting the TargetPosition of the NavigationAgent2D, and going to the successive points returned by the calls to GetNextPathPosition(), you will be able to reach any reachable point of the NavigationRegion2D that you have defined.

With that you already have the basics, but if at any time you need to analyze the complete route that the algorithm has calculated, you can ask for it by calling the GetCurrentNavigationPath() method. This method will return an array with the positions of the different stops on the route.

Folder structure of a video game development project

(Photo by Niklas Ohlrogge)
A game uses a massive amount of assets: images, sounds, code files, etc. You need a way to organize them, or you'll have a hard time finding things after a while. So a typical question on the forums is: what folder structure should I give my project?

Well, the truth is that this question does not have a single answer. It often depends on the project, your own particularities as a developer and the requirements that the different engines sometimes impose. In any case, there is a certain consensus on some good practices that I will list in this article so that you can choose the one that best suits you.

Engine requirements

To start, let's list the folders that you will always have as a requirement for the engine you are using.

In the case of Unity, it requires you to have 3 folders:

  • Assets: Here you will place all the resources you create for the game.
  • Packages: This is where the packages you download from the package manager are stored.
  • ProjectSettings: This is where the editor stores your project settings.

The three folders above are the minimum you should keep in your version control repository, if you use one (which is highly recommended).

Normally you'll be working inside the Assets folder, since the editor already takes care of the content of the other folders for you. The folder structure you have inside the Assets folder is up to you, but there are at least two folders worth singling out:

  • Scripts: By convention, this is where you store the scripts that will make your game work. Strictly speaking, Unity compiles any script it finds under Assets, but concentrating your code here is the usual practice.
  • Editor : If you develop extensions for the editor or custom inspectors, you will need to create this folder and save the extensions code there. Do not save them in the Scripts folder, because the compilation will fail when you want to build the final package, to run the game outside of the editor.
In the case of Godot, the requirements are more lax. The only fixed folder is the addons folder, since that is where both the plugins you have developed and those you have downloaded from AssetLib are collected.

Once the above requirements have been met, the sky is the limit when it comes to organizing our assets. Let's now look at the most common layouts that people use.

Distribution by type

This is the most classic layout. It is the one I based myself on to organize the resources of Prince of Unity.

You create a folder to house each type of asset: one for images, one for sounds, one for prefabs/scenes, one for levels, one for shaders, etc.

This is the structure that Unity's documentation usually recommends, so it's probably the first one you'll use.

Example of organization by type, according to Unity documentation

Functional distribution

However, there are many who reject the previous structure. They argue that classifying assets by type at the folder level is pointless, because the editors (both Unity and Godot) already let you search for assets filtered by their type.

That's why Godot's documentation advocates placing assets as close as possible to the scenes that use them. There are style guides for Unreal Engine that recommend the same.

In practice, this means creating a folder per scene and grouping the assets you use there, regardless of their type. In Unity, this means creating a folder per prefab.

Generally, the exception to the above is code, which is usually preferred to be placed in a separate folder where all the source code files are concentrated (in the case of Unity, we have already mentioned the convention of keeping them in the Scripts folder).

But normally our scenes/prefabs are not autonomous islands; they share resources with each other. What happens in these cases? Is the resource copied into the folder of each scene/prefab? No: in these cases a folder is usually created for the shared resources; it can be a folder at the same level as the scenes/prefabs or one located at a higher level.

For example, the Unreal Engine style guide I mentioned earlier provides this example structure:

Functional organization example

Proponents of this practice argue that it is better to have a deep folder structure, each containing a few files (even if they are of different types), than to have a few directories but full of files. It is easier to know what resources a scene/prefab uses if you see everything in the same folder.

Conclusion

Organizing your project is a very personal decision. There are no big rules set in stone. In the end, it all comes down to organizing your assets in the way that is most comfortable for you and makes you most productive. Perhaps trying out some of the organization systems I've talked about here will help you find your own.

30 January 2025

"Game Development Patterns with Godot 4" by Henrique Campos

Apps and video games have been around for so long that it's hard to find yourself in the middle of a problem that hasn't been solved by another developer before. Over time, this has given rise to a set of general solutions that are considered best practices for solving common problems. These are known as design patterns; a set of "recipes" or "templates" that developers can follow to structure their code when implementing solutions to certain problems.

It is a subject that is studied in any degree related to programming, with the book "Design Patterns: Elements of Reusable Object-Oriented Software" by Gamma, Helm, Johnson and Vlissides (the famous Gang of Four) being the classic book that started this branch of study.

However, when told in an academic way, design patterns can be difficult to understand and their application is not always clear. This is precisely where Henrique Campos' "Game Development Patterns with Godot 4" shines.

Campos quite successfully selects 9 of the 23 design patterns enunciated by the Gang of Four and explains them in a clear, simple way, rich in examples applied to game development. The patterns chosen by Campos seem to me to be the most useful and the ones with the most immediate application in any game. I have not missed any of the remaining design patterns, because they always seemed too abstract and esoteric to me.

In a very original way, Campos presents his examples based on dialogues or communications between the designer of a fictional game and his programmer. In these communications, the designer demands functionalities focused on enriching specific aspects of a game, while the programmer receives these demands and has to fit them with his objective of creating a scalable and easy-to-maintain code. In each of these examples, a specific design pattern is explained as the best solution to fit the designer's demand with the programmer's objective.

Following this scheme, the author explains the singleton pattern, the observer pattern, the factory, state machines, the command pattern, the strategy pattern, decorators, service locators and event queues; all in order of increasing complexity, with each pattern building on the previous ones. Along the way, he also explains, in a simple but effective way, basic principles of development such as object-oriented programming and the SOLID principles.

The examples are implemented in GDScript, using Godot 4. I must admit that I was initially a bit wary of the book because I didn't think GDScript was a rich enough language to illustrate these patterns (I admit that I develop in Godot with C#). However, the example code is very expressive, and GDScript is so concise that the examples read as if they were pseudocode. In the end I didn't miss C#, because the GDScript code managed to convey the idea of the examples in just a few lines, which made them very easy to read and understand.

Therefore, I think it is a highly recommended book that makes enjoyable and fun a subject that is often off-putting due to the excessively academic treatment it has received in previous works. If you give it a chance, I think you will enjoy it and it will help you considerably to improve the quality of your code.

26 January 2025

Creating interactive Gizmos in Unity

In a previous article I explained how to use the OnDrawGizmos() and OnDrawGizmosSelected() callbacks, present in all MonoBehaviours, to draw Gizmos that visually represent the magnitudes of the fields of a GameObject. However, I already explained then that Gizmos implemented in this way were passive in the sense that they were limited to representing the values that we entered in the inspector. But the Unity editor is full of Gizmos that can be interacted with to change the values of the GameObject, simply by clicking on the Gizmo and dragging. An example is the Gizmo that is used to define the dimensions of a Collider.

How are these interactive Gizmos implemented? The key is in the Handles. These are visual "handles" that we can place on our GameObject and that allow us to click and drag them with the mouse, returning the values of their change in position, so that we can use them to calculate the resulting changes in the magnitudes represented. This will be better understood when we see the examples.

The first thing to note is that Handles belong to the UnityEditor namespace and can therefore only be used within the editor. This means that we cannot use Handles in final game builds that run independently of the editor. For this reason, all code that uses Handles has to be placed either in the Editor folder or behind #if UNITY_EDITOR ... #endif guards. In this article I will explain the first approach, as it is cleaner.

With the above in mind, Gizmo code that uses Handles should be placed in the Editor folder and not in the Scripts folder. Unity excludes the contents of Editor folders when compiling the C# code for the final executable. It is important to be rigorous with this, because if you mix Handles code with MonoBehaviour code, everything will seem to work as long as you start the game from the editor; but when you try to compile it with File --> Build Profiles... --> Windows --> Build, you will get errors that will prevent you from building. At that point, you will have two options: either you move the code that uses Handles into the structure I am going to explain here, or you fill your code with #if UNITY_EDITOR ... #endif guards around every call to the Handles library. In any case, I think the structure I am going to explain is generic enough that you will not need to mix your Handles code with MonoBehaviour code.

To start with the examples, let's assume we have a MonoBehaviour (in our Scripts folder) called EvadeSteeringBehavior that is responsible for initiating the escape of its GameObject if it detects that a threat has come closer than a certain threshold. That threshold is a radius, a distance around the fleeing GameObject. If the distance to the threat is less than that radius, the EvadeSteeringBehavior will start executing the escape logic. Let's assume that the property storing that radius is EvadeSteeringBehavior.PanicDistance.

To represent PanicDistance as a circle around the GameObject, we could use calls to Gizmos.DrawWireSphere() from MonoBehaviour's OnDrawGizmos() method, but with that approach we could only change the PanicDistance value by modifying it in the inspector. We want to go further and be able to alter the PanicDistance value by clicking on the circle and dragging in the Scene window. To achieve this, we're going to use Handles via a custom editor.

For that, we can create a class in the Editor folder. The name doesn't matter too much. In my case, I've named the file DrawEvadePanicDistance.cs. Its content is as follows:

DrawEvadePanicDistance Custom Editor Code

Look at line 1: all your alarms should go off if you find yourself importing this library in a class intended to be part of the final compilation of your game, since in that case you will get the errors I mentioned before. However, by placing this file in the Editor folder, we no longer have to worry about this problem.

Line 6 is key as it allows us to associate this custom editor with a specific MonoBehaviour. With this line we are telling the editor that we want to execute the logic collected in OnSceneGUI() whenever it is going to render an instance of EvadeSteeringBehavior in the Scene tab.

As a custom editor, our class must inherit from UnityEditor.Editor, as seen on line 7.

So far, whenever I've needed to use Handles, I've ended up structuring the contents of OnSceneGUI() in the same way you see between lines 10 and 24. So in the end it's almost a template.

If we want to access the data of the MonoBehaviour whose values we want to modify, we have it in the target field, to which all custom editors have access; although you will have to cast it to the actual type of the MonoBehaviour we are representing, as seen on line 11.

We must place the code for our Handles between a call to EditorGUI.BeginChangeCheck() (line 13) and a call to EditorGUI.EndChangeCheck() (line 19) so that the editor will monitor whether any interaction with the Handles occurs. The call to EditorGUI.EndChangeCheck() will return true if the user has interacted with any of the Handles created from the call to EditorGUI.BeginChangeCheck().

To define the color with which the Handles will be drawn, we do it in a very similar way to how we did with the Gizmos, in this case loading a value in Handles.color (line 14).

We have multiple Handles to choose from, the main ones are:

  • PositionHandle: Draws a coordinate origin in the Scene tab, identical to that of Transforms. Returns the position of the Handle.
  • RotationHandle: Draws rotation circles similar to those that appear when you rotate an object in the editor. If the user interacts with the Handle, it returns a Quaternion with the new rotation value; if the user does not touch it, it returns the same initial rotation value with which the Handle was created.
  • ScaleHandle: It works like RotationHandle, but uses the usual scaling axes and cubes that appear when you modify the scale of an object in Unity. It returns a Vector3 with the new scale if the user has touched the Handle, or the initial one otherwise.
  • RadiusHandle: Draws a sphere (or a circle if we are in 2D) with handles to modify its radius. In this case, what is returned is a float with said radius.

In the example at hand, the natural choice was to choose the RadiusHandle (line 15), since what we are looking for is to define the radius that we have called PanicDistance. Each Handle has its creation parameters, to configure how they are displayed on the screen. In this case, RadiusHandle requires an initial rotation (line 16), the position of its center (line 17) and the initial radius (line 18).

If the user interacts with the Handle, its new value is returned by the Handle creation method. In our example, we save it in the variable newPanicDistance (line 15). In such cases, the EditorGUI.EndChangeCheck() method returns true, so we can save the new value in the property of the MonoBehaviour whose value we are defining (line 22).

To ensure that we can undo changes to the MonoBehaviour with Ctrl+Z, it is convenient to precede them with a call to Undo.RecordObject() (line 21), indicating the object that we are going to change and providing an explanatory message of the change that is going to be made.
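Putting all the pieces together, the whole custom editor looks roughly like this sketch (an approximation of the screenshot's code, without its exact line numbers):

    using UnityEditor;
    using UnityEngine;

    [CustomEditor(typeof(EvadeSteeringBehavior))]
    public class DrawEvadePanicDistance : Editor
    {
        private void OnSceneGUI()
        {
            // The MonoBehaviour being inspected, cast to its actual type.
            var evadeBehavior = (EvadeSteeringBehavior)target;

            EditorGUI.BeginChangeCheck();
            Handles.color = Color.red;
            float newPanicDistance = Handles.RadiusHandle(
                Quaternion.identity,               // Initial rotation of the handle.
                evadeBehavior.transform.position,  // Position of its center.
                evadeBehavior.PanicDistance);      // Initial radius.

            if (EditorGUI.EndChangeCheck())
            {
                // Register the change so it can be undone with Ctrl+Z.
                Undo.RecordObject(evadeBehavior, "Changed panic distance");
                evadeBehavior.PanicDistance = newPanicDistance;
            }
        }
    }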

The result of all the above code will be that, whenever you click on a GameObject that has the EvadeSteeringBehavior script, a circle will be drawn around it, with points that you can click and drag. Interacting with those points will change the size of the circle, but also the value displayed in the inspector for the EvadeSteeringBehavior's PanicDistance property.

The RadiusHandle displayed thanks to the code above

What we can achieve with Handles doesn't end here. If I were Unity, the logical thing would have been to offer Handles for the interactive modification of script properties, and leave Gizmos for the visual representation of those values from calls to OnDrawGizmos(). However, Unity did not draw such a clear separation of functions and instead provided the Handles library with drawing primitives very similar to those offered by Gizmos. This means that there is some overlap between the functionalities of Handles and Gizmos, especially when it comes to drawing.

It is important to know the drawing primitives that Handles can offer. In many cases it is faster and more direct to draw with Gizmos primitives, from OnDrawGizmos(), but there are things that cannot be achieved with Gizmos that can be drawn with Handles methods. For example, with Gizmos you cannot define the thickness of the lines (they will always be one pixel wide), while Handles primitives do have a parameter to define the thickness. Handles also allows you to paint dashed lines, as well as arcs or rectangles with translucent fills.
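For example, the following fragment (with hypothetical lineStart and lineEnd points) shows two things Gizmos cannot do; note that the thickness overload of DrawLine exists only in recent Unity versions:

    // Inside OnSceneGUI() of a custom editor, like the one shown earlier:
    Handles.color = Color.green;
    Handles.DrawLine(lineStart, lineEnd, 4f);        // A 4-pixel-thick line.
    Handles.DrawDottedLine(lineStart, lineEnd, 5f);  // A dashed line with 5-pixel segments.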

Many people take advantage of the benefits of Handles and use them not only for entering new values but also for drawing, completely replacing Gizmos. The problem is that this forces the entire drawing logic to be extracted to a custom editor like the one we saw before, which implies creating one more file and saving it in the Editor folder.

In any case, it is not about following any dogma, but about knowing what Handles and Gizmos offer in order to choose what best suits us on each occasion.