25 June 2025

How to implement Gizmos and Handles in Godot

A cone view gizmo

Gizmos allow you to draw lines and shapes in the editor window to visually represent an object’s values.

For example, imagine you want to create a component that implements a vision cone. A vision cone is modeled with two parameters:

  • Range: This is the distance between the observer and the observed object beyond which the latter becomes invisible.
  • Aperture: This is the angle of the cone. It is generally calculated as an angle from the vector marking the observer’s gaze direction. If the object is at an angle relative to the observer greater than the aperture angle, then the object is not visible.

Thus, your component can have these two parameters as fields, and you can edit them from the component’s inspector. The problem is that it’s very difficult to set the ideal values for these parameters without a visual reference of the result. It’s easier to understand what falls within the vision cone if, when setting the aperture in the inspector, a triangle representing the vision cone with that angle appears in the scene editor.

Representing these geometric shapes, which help us visualize the values of our component’s fields, is exactly what Gizmos are for.

Of course, Gizmos will only appear in the editor. They are aids for the developer and level designer but will be invisible when the final game starts.

In fact, if you’ve been practicing with Godot, you’ve likely already used Gizmos, for example, when setting the shape of a CollisionShape. The boxes, circles, and other geometric shapes that appear in the editor when you configure a CollisionShape are precisely Gizmos, drawn as we will see here.

If you look more closely at CollisionShapes, you’ll notice that, in addition to the Gizmo, there are points you can click and drag to change the component’s shape. These points are called Handles and are the “grabbers” for manipulating what we represent in the editor. In this article, we’ll also see how to implement our own Handles.

Side view of a CollisionShape with a box shape. The blue edges are the Gizmo, representing the shape, and the red points are the Handles, for changing the shape.

Gizmos are associated with the nodes they complement, so Godot knows which Gizmos to activate when a specific node is included in the scene. As we’ve seen, many of Godot’s nodes already have associated Gizmos (as seen with CollisionShape nodes).

Creating a Custom Node

Since I don’t want to mess with the default Gizmos of Godot’s nodes, we’ll start by adding a custom node to Godot’s node list. We’ll associate a Gizmo with its respective Handles to this custom node to serve as an example.

In our example, to create this custom node, I’ve created a folder called nodes/CustomNode3D inside the project folder. In that folder, we can create the script for our custom node by right-clicking the folder and selecting Create New > Script.... A pop-up window like the one below will appear, where I’ve filled in the values for this example:

The script creation window

Once the script is generated, we only need it to implement two public exported Vector3 properties. I’ve called them NodeMainPoint and NodeSecondaryPoint:


[Export] public Vector3 NodeMainPoint { get; set; }

[Export] public Vector3 NodeSecondaryPoint { get; set; }

 

I’m not including a screenshot because we’ll add code to the setter part later.

The idea is that dragging the Handles in the editor updates the values of the two properties above. The reverse should also work: if we change the property values in the inspector, the Handles should reposition to the locations indicated by the properties. Additionally, we’ll draw a Gizmo in the form of a line, from the node’s origin to the positions of the properties.

This should be enough to illustrate the main mechanics: representing a node’s properties with Gizmos and modifying those properties using Handles.

The next step will be to create an addons folder inside the Godot project. Custom Gizmos and Handles are considered plugins, so the consensus is to place them in an addons folder within the project.

Once that’s done, go to Project > Project Settings ... > Plugins and click the Create New Plugin button. A window like the one below will appear, where I’ve already filled in the values for the example:

The plugin creation window

Note that the folder specified in the Subfolder field of the previous window will be created inside the addons folder we mentioned earlier. The same applies to the plugin script, defined in the Script Name field.

Also, notice that I’ve unchecked the Activate now? checkbox. GDScript plugins can be activated immediately with the generated template code, but C# plugins require some prior configuration, so they will throw an error if activated with the default template code. The error won’t break anything, but it displays an error window that needs to be closed, which looks messy. So, it’s best to uncheck that box and leave the activation for a later step, as we’ll see below.

After doing this, a folder will be generated inside addons, containing a plugin.cfg file and the C# script from the previous window. The purpose of this script is to register the type represented by CustomNode3D in Godot so that we can select it from the engine’s node list. Remember that CustomNode3D inherits from Node3D, so it makes sense to include it alongside other nodes.

As with any other plugin, CustomNode3DRegister will need to inherit from the EditorPlugin class and implement the _EnterTree() and _ExitTree() methods. In the first method, we’ll register CustomNode3D as an eligible node in the node list, and in the second, we’ll deregister it so it no longer appears in the list. The implementation is straightforward:

addons/custom_node3D_register/CustomNode3DRegister.cs

As you can see, in the _EnterTree() method, we load two things: the script associated with the custom node and the icon we want to use to represent the node in Godot’s node list. For the icon, I’ve used the one included in all Godot projects, copying it from the root into the custom node’s folder.

Then, we associate these elements with a base node using the AddCustomType() method, which registers the custom node in the node list. Since the custom node’s script inherits from Node3D, we’ve used that as the base class in the AddCustomType() call. With this call, when we select CustomNode3D from the node list, a Node3D will be created in the scene, and the script we defined will be associated with it.

The implementation of _ExitTree() is the opposite: we use the RemoveCustomType() method to remove the custom node from the node list.
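
In outline, that registration script can look something like this (the resource paths are assumptions based on the folders used in this example):

#if TOOLS
using Godot;

[Tool]
public partial class CustomNode3DRegister : EditorPlugin
{
    public override void _EnterTree()
    {
        // Load the custom node's script and the icon that will represent it
        // in the node list. The paths are just examples.
        var script = GD.Load<Script>("res://nodes/CustomNode3D/CustomNode3D.cs");
        var icon = GD.Load<Texture2D>("res://nodes/CustomNode3D/icon.svg");

        // Register CustomNode3D as a new node type derived from Node3D.
        AddCustomType("CustomNode3D", "Node3D", script, icon);
    }

    public override void _ExitTree()
    {
        // Remove the custom node from the node list when the plugin is disabled.
        RemoveCustomType("CustomNode3D");
    }
}
#endif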

To execute the registration, we’ll compile the game to apply the changes to CustomNode3DRegister.cs. After that, go to Project > Project Settings ... > Plugins and ensure the Enable checkbox for the CustomNode3DRegister plugin is checked. This will trigger its logic and register our custom node in the node list. From there, we can locate our node in the list and add it to the scene:

The node list with our custom node

Add it to the scene before proceeding.

Creation of a Gizmo for the Custom Node

Now that we have our custom node, let’s create a Gizmo to visually represent its properties.

Gizmos are considered addons, so it makes sense to create a folder addons/CustomNode3DGizmo to house their files. These files will be two: a script to define the Gizmo, which will be a class inheriting from EditorNode3DGizmoPlugin, and another script to register the Gizmo, which will inherit from EditorPlugin and will be quite similar to the one we used to register the custom node.

The Gizmo script is where the real substance lies. I’ve called it CustomNode3DGizmo.cs. As I mentioned, it must inherit from EditorNode3DGizmoPlugin and implement some of its methods.

The first of these methods is _GetGizmoName(). This method simply returns a string with the Gizmo’s name:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

Somewhat more intriguing is the _HasGizmo() method, which is passed all the nodes in the scene until the method returns true for one of them, indicating that the Gizmo should be applied to that node. Therefore, in our case, the method should return true when a node of type CustomNode3D is passed:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs
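
Both methods are short. Inside our CustomNode3DGizmo class, they can be sketched like this:

public override string _GetGizmoName()
{
    // Name the editor will show for this kind of gizmo.
    return "CustomNode3DGizmo";
}

public override bool _HasGizmo(Node3D forNode3D)
{
    // Only nodes of our custom type get this gizmo.
    return forNode3D is CustomNode3D;
}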

Here, we need to consider a specific issue that occurs in C# but not in GDScript. Although the comparison with is is syntactically correct, in practice, it doesn’t work in Godot C# unless the class we’re comparing against is marked with the [Tool] attribute. So, this is a good time to add that attribute to the header of the CustomNode3D class:

nodes/CustomNode3D/CustomNode3D.cs

In reality, this is an anomaly. We shouldn’t need the [Tool] attribute to make that comparison work. In fact, the equivalent GDScript code (which appears in the official documentation) doesn’t require it. This is a bug reported multiple times in Godot’s forums and is still pending resolution. Until it’s fixed, the workaround in C# is to use the [Tool] attribute.

The next method to implement is the constructor of our Gizmo class. In GDScript, we would use the _init() method, but in C#, we’ll use the class constructor:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

In this constructor, we create the materials that will be applied to our Gizmo and its Handles. These aren’t full materials like those created with a shader, but rather a set of styles applied to the lines we draw for our Gizmo. They are created using the CreateMaterial() method for the Gizmo and CreateHandleMaterial() for the Handles. Both accept a string as their first parameter: the name we want to give the material. That name is used with the GetMaterial() method to obtain a reference to the material, which can be useful, for example, to assign it to a StandardMaterial3D variable for in-depth customization by setting its properties. It’s common, though, not to need that level of customization and to simply set the line color using the second parameter of CreateMaterial(). The CreateHandleMaterial() method, however, doesn’t accept this second parameter, so we have no choice but to use GetMaterial() (lines 19 and 20 of the previous screenshot) to obtain references to the handle materials and set the value of their AlbedoColor property (lines 21 and 22).

In the example constructor, I’ve configured the lines drawn from the coordinate origin to the position marked by the NodeMainPoint property to use the color red. The lines going to the position of the NodeSecondaryPoint property will use green. I’ve configured the materials for the respective Handles to use the same color.
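
As a sketch (the material names and the _undoRedo field, which is explained later when we get to _CommitHandle(), are choices of this example), the constructor can look something like this:

private const string MainLineMaterial = "main_line";
private const string SecondaryLineMaterial = "secondary_line";
private const string MainHandleMaterial = "main_handle";
private const string SecondaryHandleMaterial = "secondary_handle";

private readonly EditorUndoRedoManager _undoRedo;

public CustomNode3DGizmo(EditorUndoRedoManager undoRedo)
{
    // Injected by the register plugin; used later by _CommitHandle().
    _undoRedo = undoRedo;

    // Line materials: red for NodeMainPoint, green for NodeSecondaryPoint.
    CreateMaterial(MainLineMaterial, Colors.Red);
    CreateMaterial(SecondaryLineMaterial, Colors.Green);

    // Handle materials don't accept a color parameter, so we retrieve them
    // afterwards and set their albedo color by hand.
    CreateHandleMaterial(MainHandleMaterial);
    CreateHandleMaterial(SecondaryHandleMaterial);
    GetMaterial(MainHandleMaterial).AlbedoColor = Colors.Red;
    GetMaterial(SecondaryHandleMaterial).AlbedoColor = Colors.Green;
}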

Finally, we have the _Redraw() method. This is responsible for drawing the Gizmos every time the UpdateGizmos() method, available to all Node3D nodes, is called:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

The _Redraw() method is like our canvas, and it’s common to clear a canvas at the start before drawing on it. That’s why the Clear() method is typically called at the beginning of the method (line 29 of the previous screenshot).

Then, we collect the positions of the lines we want to draw in a Vector3 array. In this case, we want to draw a line from the coordinate origin to the position marked by the NodeMainPoint property, so we store both points in the array (lines 33 to 37 of the previous screenshot).

For the Handles, we do the same, storing the points where we want a Handle to appear in another array. In this case, since we want a Handle to appear at the end of the line, marked by the NodeMainPoint position, we only add that position to the Handles array (lines 38 to 41 of the previous screenshot).

Finally, we use the AddLines() method to draw the lines along the positions collected in the array (line 42) and the AddHandles() method to position Handles at the positions collected in its array (line 43). Note that, in both cases, we pass the material defining the style with which we want the elements to be drawn.

I didn’t include it in the previous screenshot, but the process for drawing the line and Handle for a second point (in this case, NodeSecondaryPoint) would be the same: we’d build their position arrays and pass them to the AddLines() and AddHandles() methods.
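
Putting the above together, a sketch of _Redraw() along these lines is possible (the handle ID constants are choices of this example and reappear in the next section):

private const int MainHandleId = 0;
private const int SecondaryHandleId = 1;

public override void _Redraw(EditorNode3DGizmo gizmo)
{
    // Start from a clean canvas on every redraw.
    gizmo.Clear();

    if (gizmo.GetNode3D() is not CustomNode3D node) return;

    // Line from the node's origin to NodeMainPoint, with a handle at its end.
    Vector3[] mainLine = { Vector3.Zero, node.NodeMainPoint };
    Vector3[] mainHandles = { node.NodeMainPoint };
    gizmo.AddLines(mainLine, GetMaterial(MainLineMaterial, gizmo));
    gizmo.AddHandles(mainHandles, GetMaterial(MainHandleMaterial, gizmo), new[] { MainHandleId });

    // Same treatment for NodeSecondaryPoint.
    Vector3[] secondaryLine = { Vector3.Zero, node.NodeSecondaryPoint };
    Vector3[] secondaryHandles = { node.NodeSecondaryPoint };
    gizmo.AddLines(secondaryLine, GetMaterial(SecondaryLineMaterial, gizmo));
    gizmo.AddHandles(secondaryHandles, GetMaterial(SecondaryHandleMaterial, gizmo), new[] { SecondaryHandleId });
}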

Manipulating a Gizmo Using Handles

At this point, our Gizmo will draw lines and Handles based on the values stored in the properties of the node it’s associated with (in this example, CustomNode3D). However, if we click on the Handles, nothing will happen. They will remain static.

To interact with the Handles, we need to implement a few more methods from EditorNode3DGizmoPlugin in our CustomNode3DGizmo class. However, Godot’s official documentation doesn’t cover these implementations: if you follow the official tutorial, you’ll stop at the previous section of this article. It’s odd, but there is nothing in the official documentation explaining how to manipulate Handles. Everything that follows is deduced from trial and error and from interpreting the comments of the methods to be implemented. Perhaps, based on this article, I’ll contribute to Godot’s documentation to address this gap.

Let’s see which methods need to be implemented in CustomNode3DGizmo to manipulate the Handles we placed in the _Redraw() method.

The first is _GetHandleName(). This method must return a string with the identifying name of the Handle. It’s common to return the name of the property modified by the Handle:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

Two things stand out in the previous screenshot.

First, we could have returned the property name as a hardcoded string, but using the nameof() expression ensures that if we refactor the property name in our IDE, this part of the code will be updated as well.

Second, each Handle is identified by an integer, so we can know which Handle’s name is being requested based on the handleId parameter passed to _GetHandleName(). The integer for each Handle depends on the order in which we added the Handles when calling AddHandles() in _Redraw(). By default, if you leave the ids parameter of AddHandles() empty, the first Handle you pass will be assigned ID 0, the second ID 1, and so on. However, if you look at the _Redraw() screenshot earlier, I didn’t leave the ids parameter empty. Instead, I passed an array with a single element, an integer defined as a constant, to force that Handle to be assigned that integer as its ID, allowing me to use that constant as an identifier throughout the code:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs
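
With those ID constants, a sketch of _GetHandleName() looks like this:

public override string _GetHandleName(EditorNode3DGizmo gizmo, int handleId, bool secondary)
{
    // Return the name of the property each handle manipulates, using the same
    // ID constants that were passed to AddHandles() in _Redraw().
    return handleId switch
    {
        MainHandleId => nameof(CustomNode3D.NodeMainPoint),
        SecondaryHandleId => nameof(CustomNode3D.NodeSecondaryPoint),
        _ => string.Empty
    };
}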

Once we’ve implemented how to identify each Handle, the next step is to define what value the Handle returns when clicked. This is done by implementing the _GetHandleValue() method:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

Like _GetHandleName(), _GetHandleValue() is passed the Handle’s identifier for which the value is being requested. With the gizmo parameter, we can obtain the node associated with the Gizmo using the GetNode3D() method (line 130 of the previous screenshot). Once we have a reference to the node, we can return the value of the property associated with each Handle (lines 133 to 136).
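
A sketch of _GetHandleValue(), with the same constants:

public override Variant _GetHandleValue(EditorNode3DGizmo gizmo, int handleId, bool secondary)
{
    // The gizmo gives us access to the node whose properties we expose.
    var node = (CustomNode3D)gizmo.GetNode3D();

    if (handleId == MainHandleId) return node.NodeMainPoint;
    if (handleId == SecondaryHandleId) return node.NodeSecondaryPoint;
    return default;
}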

When you click on a Handle, look at the bottom-left corner of the scene view in the editor; a string will appear, formed by what _GetHandleName() and _GetHandleValue() return for that Handle.

Now comes what may be the most challenging part of this tutorial: using the Handle to assign a value to the associated node’s property. This is done by implementing the _SetHandle() method:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

This method is passed a gizmo parameter, which allows access to the associated node using GetNode3D(), as we did in _GetHandleValue(). It’s also passed the Handle’s identifier for which the value is being set. Most importantly, it’s passed the camera viewing the scene and the Handle’s screen position.

In this method, we need to interpret the Handle’s screen position to set the node’s property value associated with the Handle based on that position. In this case, it seems simple: the Handle’s position should be the value stored in the associated property, since both NodeMainPoint and NodeSecondaryPoint are positions. The problem is that Handles are dragged on the two-dimensional surface of the screen, which is why the screenPos parameter is a Vector2, so it’s not immediately clear which three-dimensional scene coordinate corresponds to that screen point.

When we add a Camera3D node to a scene, that node is represented with the following Gizmo:

Camera3D Gizmo

I find it helpful to imagine our head at the tip of the Gizmo’s pyramid, looking at a screen located at its base. The scene in front of the camera is back-projected onto that screen.

Suppose we have an object A in the scene. The screen position where A is drawn (let’s call this position Ap) is the result of drawing a straight line from the object to the camera’s focus and seeing where it intersects the camera’s back-projection plane:

Back-projection diagram
3D object projection onto the flat surface of the screen


So far, this is straightforward. In fact, the Camera3D class has the UnprojectPosition() method, which takes a three-dimensional position (Vector3) of an object in the scene and returns its two-dimensional position (Vector2) on the screen. In our case, if we passed A’s position to UnprojectPosition(), the method would return Ap (understood as a two-dimensional screen position).

Now suppose we have a Handle at the screen position Ap representing A’s position, and we drag the Handle to the screen position Bp. How would we calculate the object’s new position in three-dimensional space? (Let’s call this new position B.) The logical approach is to apply the inverse process of back-projecting the object onto the camera’s plane. To do this, we’d draw a line from the camera’s focus through Bp. Following this reasoning, the object’s new position would lie along that line—but where? At what point on the line?

The key is to realize that the object moves in a plane (Pm) parallel to the camera’s plane. The intersection of that plane with the line from the camera’s focus passing through Bp will be the object’s new position (B):

Object moved across the screen

The Camera3D node has a ProjectPosition() method, which is used to convert two-dimensional screen coordinates into three-dimensional scene coordinates. The method accepts two parameters. The first is a two-dimensional screen position (in our example, Bp). With this parameter, the method draws a line from the camera’s focus through the two-dimensional camera coordinate (Bp). The second parameter, called zDepth, is a float indicating the distance from the camera’s focus at which the plane Pm should intersect the line.

zDepth

This distance is the length of the line from the camera’s focus that intersects perpendicularly with the plane Pm. In the previous diagram, it’s the distance between the focus (F) and point D.

But how do we calculate this distance? Using trigonometry. If we recall our high school lessons, the cosine of the angle between segments FA and FD equals the ratio of FD divided by FA. So, FA multiplied by the cosine gives us FD.

FD distance calculation

This calculation is so common that game engine vector math libraries include it under the name Dot Product. With this operator, we can transform the previous formula into:

Dot product

This formula means that if we compute the Dot Product of vector FA with the normalized vector of FD, we get the length of FD.

It’s common to visualize the Dot Product as a projection of one vector onto another. If you placed a powerful light behind FA, shining perpendicularly onto the normalized vector FD, the shadow FA would cast onto FD would be exactly FD.

Therefore, to obtain the distance FD to use as the zDepth parameter, we only need to compute the Dot Product of FA onto the normalized FD, which is the Forward vector of the Camera3D node (by default, the inverse of its local Z-axis).

All this reasoning boils down to a few lines in the GetZDepth() method:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

In this method, the variable vectorToPosition corresponds to FA, and cameraForwardVector to FD. The zDepth result returned by the method is FD and is used in the calls to ProjectPosition() in _SetHandle() to set the new positions of NodeMainPoint and NodeSecondaryPoint.
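
Translated into code, and assuming (as in this example) that NodeMainPoint and NodeSecondaryPoint are expressed in the node's local space, a sketch of GetZDepth() and _SetHandle() looks like this:

private float GetZDepth(Camera3D camera, Vector3 globalPosition)
{
    // FA: vector from the camera focus to the point being dragged.
    Vector3 vectorToPosition = globalPosition - camera.GlobalPosition;

    // Normalized direction of FD: the camera's forward vector (-Z in Godot).
    Vector3 cameraForward = -camera.GlobalTransform.Basis.Z;

    // Dot product of FA with that direction gives the length of FD, our zDepth.
    return vectorToPosition.Dot(cameraForward);
}

public override void _SetHandle(EditorNode3DGizmo gizmo, int handleId, bool secondary,
    Camera3D camera, Vector2 screenPos)
{
    var node = (CustomNode3D)gizmo.GetNode3D();

    // Current global position of the dragged point.
    Vector3 currentLocal = handleId == MainHandleId ? node.NodeMainPoint : node.NodeSecondaryPoint;
    Vector3 currentGlobal = node.ToGlobal(currentLocal);

    // Project the 2D screen position back into the scene, onto the plane parallel
    // to the camera that passes through the point's current position.
    float zDepth = GetZDepth(camera, currentGlobal);
    Vector3 newGlobal = camera.ProjectPosition(screenPos, zDepth);
    Vector3 newLocal = node.ToLocal(newGlobal);

    if (handleId == MainHandleId)
        node.NodeMainPoint = newLocal;
    else
        node.NodeSecondaryPoint = newLocal;
}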

Having resolved _SetHandle(), the only method left to implement is _CommitHandle(). This method is responsible for building the history of modifications we make to our Handles, so we can navigate through it when performing undo/redo (Ctrl+Z or Ctrl+Shift+Z):

_CommitHandle() method 1 of 2

_CommitHandle() method 2 of 2
addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

The history is built on an object of type EditorUndoRedoManager (in this case, _undoRedo). This object is obtained from the EditorPlugin that registers the Gizmo (in this example, CustomNode3DGizmoRegister, which we’ll discuss shortly), which passes an EditorUndoRedoManager instance to the Gizmo through its constructor.

With the EditorUndoRedoManager, each history entry is created with the CreateAction() method (line 78 of the previous screenshot). Each entry must include actions to execute for Undo and Do (called during Redo). These actions can involve setting a property with the AddDoProperty() and AddUndoProperty() methods or executing a method with AddDoMethod() and AddUndoMethod(). If the direct action only involved changing a property’s value, it’s usually sufficient to set the property back to its previous value to undo it. However, if the direct action triggered a method, in addition to changing the property, you’ll likely need to call another method to undo what the first did.

In this example, I only change the values of customNode3D’s properties, so for the action history, it’s enough to use the Add...Property() methods. These methods require as parameters the instance owning the properties to modify, the string with the property’s name to manipulate, and the value to set the property to. Each action captures the value we pass to the Add...Property() method. For AddDoProperty(), we pass the property’s current value (lines 84 and 91); for AddUndoProperty(), we pass the restore parameter’s value, which contains the value retrieved from the history when performing an Undo.

When _CommitHandle() is called with the cancel parameter set to true, it’s equivalent to an Undo on the Handle, so we restore the restore value to the property (lines 101 to 106).

Finally, but no less important, once we’ve shaped the property changes and method calls that make up the history entry, we register it with CommitAction() (line 109).
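
A sketch of _CommitHandle(), using the _undoRedo field received through the constructor shown earlier:

public override void _CommitHandle(EditorNode3DGizmo gizmo, int handleId, bool secondary,
    Variant restore, bool cancel)
{
    var node = (CustomNode3D)gizmo.GetNode3D();
    string propertyName = handleId == MainHandleId
        ? nameof(CustomNode3D.NodeMainPoint)
        : nameof(CustomNode3D.NodeSecondaryPoint);

    if (cancel)
    {
        // The drag was cancelled: put the value stored in restore back into the property.
        node.Set(propertyName, restore);
        return;
    }

    // Register a history entry: Do re-applies the value the drag left in the
    // property, Undo restores the value it had before the drag started.
    _undoRedo.CreateAction($"Move {propertyName}");
    _undoRedo.AddDoProperty(node, propertyName, node.Get(propertyName));
    _undoRedo.AddUndoProperty(node, propertyName, restore);
    _undoRedo.CommitAction();
}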

Updating the Gizmo After Changes

The visual representation of a Gizmo may need to be updated for two reasons:

  1. Because we have modified the fields of the represented node from the inspector.
  2. Because we have manipulated the Gizmo’s Handles.

The Gizmo is updated by calling the UpdateGizmos() method, which is available to all Node3D nodes.

The question is where to call this method to ensure that both types of changes mentioned above are updated. In the previous code screenshots, you’ll notice several commented-out calls to UpdateGizmos(). These were tests of possible places to execute that method. All the commented-out calls had issues: either they didn’t trigger for one of the two cases above, or they updated in a choppy manner, as if there were some performance problem.

In the end, my tests led me to conclude that, in my case, the best place to call UpdateGizmos() is from the properties of CustomNode3D that we’re modifying. For example, in the case of NodeMainPoint:

nodes/CustomNode3D/CustomNode3D.cs

By calling UpdateGizmos() from the setter of the property, which is exported, we ensure that the method is called both when the property is modified from the inspector and from the _SetHandle() method of the Gizmo.
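
In sketch form, that property ends up looking something like this:

private Vector3 _nodeMainPoint;

[Export] public Vector3 NodeMainPoint
{
    get => _nodeMainPoint;
    set
    {
        _nodeMainPoint = value;
        // Refresh the gizmo whether the change comes from the inspector
        // or from the gizmo's _SetHandle() method.
        UpdateGizmos();
    }
}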

Registering the Gizmo

Just like with the custom node, our Gizmo also needs to be registered in the editor so that it knows to account for it. For this, we’ll again use a plugin to handle the registration:

addons/custom_node3D_gizmo/CustomNode3DGizmoRegister.cs

We’ll create the plugin using the same method we used to create the CustomNode3DRegister plugin, but this time, the plugin will be called CustomNode3DGizmoRegister and will be based on a C# script of the same name.

In this case, we load the C# script where we configured the Gizmo and instantiate it, passing an instance of EditorUndoRedoManager to its constructor by calling the GetUndoRedo() method (lines 13 and 14).

Once that’s done, we register the plugin instance by passing it to the AddNode3DGizmoPlugin() method (line 15).

Similarly to how we handled the registration of the custom node, we also use the _ExitTree() method here to deregister the Gizmo using the RemoveNode3DGizmoPlugin() method (line 21).
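
In outline, the register plugin can look something like this (here the gizmo class is instantiated directly with new instead of loading its script resource first, which amounts to the same thing in C#):

#if TOOLS
using Godot;

[Tool]
public partial class CustomNode3DGizmoRegister : EditorPlugin
{
    private CustomNode3DGizmo _gizmoPlugin;

    public override void _EnterTree()
    {
        // Create the gizmo plugin, handing it the editor's undo/redo manager,
        // and register it so the editor starts using it.
        _gizmoPlugin = new CustomNode3DGizmo(GetUndoRedo());
        AddNode3DGizmoPlugin(_gizmoPlugin);
    }

    public override void _ExitTree()
    {
        // Deregister the gizmo when the plugin is disabled.
        RemoveNode3DGizmoPlugin(_gizmoPlugin);
    }
}
#endif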

Once the script is complete, we can activate the plugin from Project > Project Settings... > Plugins, and the next time we add a CustomNode3D to the scene, we’ll be able to use the Gizmo’s Handles.

Initially, the Handles might not be clearly distinguishable because both will coincide at the origin of the coordinates:

Handles at coordinate origin

However, they are there. They’re the small points visible at the origin of the coordinate axis. If we click and drag these points, we’ll see them move, altering the values of the CustomNode3D properties:

Dragging Handles

Conclusions

This article has been extremely long, but I wanted to address a glaring gap in the official documentation on this topic.

My previous experience was with Unity, which also has its own Gizmos and Handles API. Compared to Unity, Godot’s approach seems more compact and straightforward, as it centralizes both Gizmo and Handle configuration in a single class inheriting from EditorNode3DGizmoPlugin. In contrast, to achieve the same in Unity, you need to spread the Gizmo and Handle code across different classes.

That said, Unity’s documentation on this topic seems much more comprehensive.

It’s also worth noting that Unity’s Gizmos and Handles API covers both 2D and 3D games, whereas in Godot, everything we’ve covered in this article applies only to 3D games. There is no EditorNode2DGizmoPlugin class, or at least I’m not clear on what the equivalent of this code would be for a 2D Godot game. I’ll investigate this, and when I figure it out, I’ll likely write another article. However, at first glance, the official documentation doesn’t make it clear how to do this in 2D.

Code for This Tutorial

The code for the project used as an example is available for download in my GitHub repository GodotCustomGizmoAndHandleExample. Feel free to download it to examine the code in detail and try it out for yourself.

23 June 2025

"Agile Game Development: Build, Play, Repeat" by Clinton Keith

Agile development techniques are revolutionizing the way software is created, enhancing collaborative work and enabling development teams to better adapt to constant market changes and initiate virtuous cycles of continuous improvement.

Instead of planning the entire project from the beginning (as in traditional waterfall development methods), agile techniques divide the work into small parts called "iterations" or "sprints" (usually lasting 1 to 4 weeks). At the end of each sprint, a functional part of the product is delivered. This iterative nature of the work aims to establish constant communication channels between the development team and its clients, empower the team to respond quickly to changing client needs, and deliver software frequently so that clients can start gaining value early and raise alerts as soon as the software does not meet their expectations. All of this is achieved by promoting the self-organization of the development team and improving their motivation.

Based on these general principles of agile development, there are methodologies that translate them into concrete, everyday work mechanisms. One example is Scrum, which proposes the aforementioned short cycles and a range of roles and regular meetings so that everyone involved in the project (not just developers) can understand its status and influence its evolution. It is also common to combine this with Kanban techniques, using boards with columns to help visualize the workflow and avoid bottlenecks; or to include Extreme Programming (XP) techniques to improve code quality, using practices such as pair programming and automated testing.

The world of video game development is no stranger to these techniques. In fact, the highly creative nature of video games and the ever-changing tastes of consumers make it highly advisable to avoid rigid waterfall development and instead adopt the aforementioned agile techniques.

The book "Agile Game Development: Build, Play, Repeat" by Clinton Keith struck me as the ideal work for both beginners and those looking to deepen their understanding of this topic.

It is a very mature work, written by a Scrum professional who previously worked in traditional development (for U.S. fighter jet control systems) and later in top-tier video game studios. This prior experience constantly surfaces through real, enriching, and entertaining examples that help contextualize the theory, unlike many similar works. Other books merely explain what Scrum is, but this one starts with the problems that waterfall development posed for the video game industry, using examples and anecdotes that are truly entertaining, enlightening, and deeply human (it's comforting to know that "everyone has their struggles").

From these challenges, the author explains in great detail the principles of agile development and how the Scrum methodology brings them into daily practice. In the first chapters (approximately the first third of the book), the description of Scrum tasks is similar to those performed in more generic application development. In fact, since I had already read other books on Scrum, my first impression during those chapters was "Scrum seems to apply exactly the same in video games as in general applications." This is not a criticism—on the contrary, I believe this first third is valuable for those new to Scrum, as the examples are from the video game world, very visual, easy to understand, and fun, yet perfectly applicable to general development.

The remaining two-thirds are where, fortunately, the book diverges from more generic Scrum works and delves into the peculiarities required by video game development. It details cases where the literal application of Scrum would not work in a video game development team and the approaches the author has seen to solve these issues. The result is not only a practical reference manual on how to lead a video game development team but also a testimony to the highs and lows of this market segment.

Therefore, I believe this work is essential for anyone looking to mature, making the leap from solo hobbyist development to team-based commercial development. Developing is not just about programming, but about delivering something functional to our clients that they can enjoy. This book will teach you how to achieve that.

15 June 2025

What files should we include in Git for a Unity project?

Cover
If you still haven’t secured your Unity project in a Git repository like GitHub, you should do so as soon as possible.

The main reasons to add version control to your workflow are:

  • Having your project in a remote repository (like GitHub, GitLab, or Bitbucket) ensures you don’t lose your work if something happens to your computer.  
  • Git allows you to track changes in your project, revert to previous versions if something goes wrong, and understand what changes were made and when.  
  • If you work in a team, Git makes it easier for multiple people to work on the same project without conflicts, using branches and merges.  
  • You can use branches to experiment with new features or fixes without affecting the main version, keeping the project organized.  
  • Many CI/CD platforms and development tools integrate with Git, streamlining workflows like automated testing or deployments.  
  • Using Git is an industry standard, and mastering it prepares you for professional environments.

However, you shouldn’t upload all your files to a Git repository. For example, it makes no sense to upload compiled files because anyone can generate them from the source code. Uploading compiled files only unnecessarily increases the repository’s size, making it more expensive to maintain and slowing down synchronization for the rest of the development team.

For Unity projects, ensure you include an appropriate .gitignore file to avoid uploading unnecessary files. Git expects this file at the project’s root (at the same level as the Assets folder). Its content lists the file and folder names that Git should ignore to keep the repository clean. On the internet, you can find plenty of examples of .gitignore files suitable for Unity. If you use Rider, JetBrains has an official plugin (called .ignore) that provides a wizard to generate a .gitignore tailored to your project (including Unity). Another source is GitHub, which has an official repository with .gitignore files for the most popular development frameworks, including Unity.

What to Leave Out of the Repository

If you choose to create a .gitignore file from scratch, you should exclude the following (a sample sketch follows the list):

  • Unity-related folders: Exclude Library, Temp, Obj, Build, Logs, and UserSettings, as they are automatically generated by Unity and should not be versioned.  
  • Build files: Files like .csproj, .sln, and others generated by Visual Studio or Rider are not needed in the repository.  
  • Cache and assets: Exclude Addressables cache, StreamingAssets, and packages to reduce the repository size.  
  • Operating system files: Ignore .DS_Store (macOS) and Thumbs.db (Windows).  
  • Editors and tools: Exclude editor-specific configurations like VS Code or Rider, but allow shareable configurations (e.g., .vscode/settings.json).  
  • Large files: If you don’t use Git LFS, exclude .unitypackage, compressed or heavy files like .zip.
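
As a reference (not exhaustive, and to be adapted to each project), a Unity .gitignore along those lines can start like this:

# Unity-generated folders
[Ll]ibrary/
[Tt]emp/
[Oo]bj/
[Bb]uild/
[Bb]uilds/
[Ll]ogs/
[Uu]serSettings/

# IDE project files
*.csproj
*.sln
.vs/
.idea/

# Operating system files
.DS_Store
Thumbs.db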

What to Include in the Repository Files

With the above excluded, your repository should only contain the following folders:

  • Assets  
  • Packages  
  • ProjectSettings

The most critical folder is Assets, as it contains all the project’s code, as well as models, textures, music, sounds, and all the elements that make up your project.  

The Eternal Question: What Do I Do with .meta Files?

When you browse the Assets folder using a file explorer, you’ll notice that each item has an associated .meta file. Since Unity generates these .meta files automatically, many people wonder whether they should be included in the repository.

Unity associates a .meta file with each asset included in the project. These .meta files store the import parameters for each asset. This is especially important for assets like textures or sounds, as failing to include .meta files in version control could lead to other developers importing the same assets with different settings, which can cause issues.  

Since Unity generates them automatically if they’re missing, it’s critical to emphasize to the entire development team that when creating a new asset, they must include both the asset and its corresponding .meta file in the repository. If someone forgets to include the .meta file for a newly created asset, the next developer who clones the repository will have Unity automatically generate a .meta file for it (when Unity notices it’s missing), which can lead to merge conflicts, as different .meta files (one per developer) might exist for the same asset. This is an ugly situation that often sparks recurring questions on Reddit.  

Moral of the story: Never forget to include .meta files in your version control repository.

Conclusion

With the above, you now have the basics to check if your repository is missing anything critical. Operating a Git repository is another matter entirely and has plenty of nuance. Using Git is easy, but using it effectively without messing things up takes a lot more effort.  

If you want to dive deeper into this topic, I dedicate Chapter 15 of my book, "Reconstruction of a Legendary Game with Unity", to it.

12 June 2025

Blender Game Engines

Yo Frankie! game cover
Blender has rightfully established itself as one of the leading tools for modeling, animation, and rendering in the market. Its quality, combined with its open-source and free nature, has skyrocketed its popularity, allowing it to stand toe-to-toe with major commercial "giants" like Maya or 3D Studio.

What many people don't know is that Blender also had its own game engine. It was called the Blender Game Engine (BGE), and it was integrated into Blender from the early 2000s. It allowed the creation of interactive applications directly from Blender, without the need for external tools. For game logic, it offered not only Python programming but also a visual language, very similar to Unreal Engine's current Blueprints.

The engine allowed the creation of both 3D and 2D games, with real-time physics (using the Bullet Physics library). It also allowed the creation of basic shaders and animations. Since it was integrated into Blender, there was no need to export models to other engines.

To demonstrate the engine's capabilities, the Blender Foundation developed the game "Yo Frankie!" in 2008. The game was visually appealing, but it soon became clear that it lagged behind what other engines like Unity or Unreal could offer. Compared to these, the BGE could only offer limited performance in complex projects, lacked support for modern mobile platforms, and its development interface was less polished.

Eventually, the Blender Foundation had to make a decision. To keep up with other engines, the BGE needed a dedicated team of developers that the foundation could not afford. In addition, the community of BGE developers outside the foundation was very small, which made the pace of updates very slow. The BGE was visibly falling behind and dragging down the rest of the Blender project. In the end, it was decided to abandon the BGE and concentrate development resources on what Blender did best: modeling, animation, and rendering tools (like Cycles).

The formal elimination of the BGE took place in 2019, with Blender version 2.80. From then on, the foundation recommended using Blender as a modeling tool and exporting assets from there to more advanced engines.

Fortunately, as often happens in the open-source world, the closure of a project is rarely a true ending; it is more of a handover. Other developers interested in continuing the project picked up the BGE source code and evolved it from there. Thanks to this, where we once had one engine (BGE), we now have two descendants of it and one engine strongly inspired by it. Let's look at them:

UPBGE (Uchronia Project Blender Game Engine)

UPBGE logo
This engine was created by Tristan Porteries, one of the original BGE developers, together with other collaborators. Initially, UPBGE aimed to be a fork that improved the BGE code, cleaning up its base and experimenting with new features, with a view to eventually merging back into the main BGE code. However, the removal of BGE in Blender 2.80 led UPBGE to take on an identity of its own. Since then, UPBGE has continued its development, taking care to keep its code synchronized with Blender's to maintain compatibility.

UPBGE is fully integrated with Blender, allowing modeling, animation, and game development in a single environment without the need to export assets.

It uses Blender's EEVEE engine for real-time rendering, with support for PBR shading, SSR reflections, GTAO ambient occlusion, bloom, soft shadows, and volumetrics.

It uses the Bullet Physics library for real-time simulations, including rigid bodies, obstacle simulation, and pathfinding.

Additionally, it includes an audio engine based on OpenAL and Audaspace, with support for 3D sound.

For game logic, it maintains the original BGE "logic bricks" system and adds Logic Nodes, making it easier to create interactive behaviors without programming. It also supports Python scripting for those who prefer to write code.

When importing and exporting elements, it allows the same formats as Blender, mainly FBX, Collada, glTF, OBJ, and STL.

It is a very active project, with a small but very involved team of developers, who release regular updates every few months and keep the engine synchronized with Blender's evolutions.

In terms of functionality, developers plan to add modern features like SSAO, depth of field (DoF), online mode tools, or area lights.

Therefore, it is a very interesting and promising project, with only one drawback: the performance that can be expected in complex scenes, with many polygons or rich effects, may be more limited than what can be achieved with engines like Unity or Unreal.

It should also be noted that it inherits the GPL license from BGE. Games developed with UPBGE are not required to be open-source if they only use project data (models, textures, scripts, etc.) and do not distribute the UPBGE executable (blenderplayer). However, if a game includes the blenderplayer or parts of the UPBGE code, it must comply with the GPL and be distributed with the source code.

Range Engine

Range Engine logo
This engine is not a fork of the original BGE, but of UPBGE. It originated in 2022, as a result of certain design decisions made for UPBGE 3.0. Some developers felt that these decisions moved UPBGE away from the spirit of the original BGE, so they decided to split through a fork. For this reason, Range Engine retains an interface much closer to what was in the original BGE. Compared to UPBGE, Range Engine prioritizes ease of use.

Range Engine prioritizes compatibility with old BGE projects and simplicity in the developers' workflow.

Its youth and relatively smaller developer base explain why its features and update pace are lower than UPBGE's.

Even so, it seems they have managed to optimize the animation section compared to UPBGE. On the other hand, performance limitations have also been observed when using this engine with large open scenes.

Like UPBGE, it is licensed under GPL. However, it includes Range Armor, a proxy format that allows game data to be packaged separately from the blenderplayer executable, enabling the creation of games under more flexible licenses like MIT, facilitating commercial distribution without strict restrictions.

Armory 3D

Armory 3D
Unlike the previous two, it is not a direct fork of BGE but an independent game engine that uses Blender as its front end for modeling, texturing, animation, and scene composition. It complements this with related tools such as ArmorPaint (for PBR texture painting) and ArmorLab (for AI-assisted texture creation), both integrated with the Armory engine. Its logic editor is based on scripts written in the Haxe language, which some consider more flexible than the Python used in BGE. It also offers a node-based visual editor, which has recently gained new nodes for value, vector, and rotation interpolation (tweening), camera drawing nodes, pulse nodes to control firing rates, and improvements to the logic node hierarchy.

At the multimedia level, this engine relies on the Kha framework. It uses a physically-based renderer, compatible with HDR pipelines, PBR shading, and post-processing effects, all highly customizable.

Like BGE, it uses Bullet Physics as its physics engine, supporting rigid bodies, collisions, and soft physics. It also allows configuring the Oimo physics engine as an alternative. Additionally, it offers support for particle systems (emitter and hair) with GPU instancing to improve performance.

Compared with the previous ones, Armory 3D offers broader support for exporting your game to different platforms, along with optimized animation performance. It currently supports Windows, macOS, Linux, Web, Android, iOS, and consoles (with specific configurations).

Unlike the previous ones, Armory 3D is mainly driven by a single developer (Lubos Lenco), and this is reflected in the fact that updates arrive sparingly. Even so, collaborations and external contributions are gradually increasing, forming a nascent core of developers.

In terms of licensing, an advantage over the direct heirs of BGE is that Armory 3D uses the Zlib license, which is more permissive than Blender's GPL and allows greater flexibility for commercial projects.

Conclusions 

As you can see, game development with Blender is very much alive, not only using it as a complementary modeling and animation tool but as an engine in itself.

Today it does not offer performance comparable to what can be obtained with dedicated engines like Unity, Unreal, or Godot Engine, but the possibility of concentrating both programming and modeling, texturing, and rigging of assets in a single tool can greatly simplify the lives of those who want to get started in the world of video games. Not to mention that languages like Python or Haxe are really easy to learn to implement typical game algorithms.

My advice, if you want to get started with these types of engines, is to start with UPBGE and then try the other two to see what differential value they can offer you.

07 June 2025

How to implement a vision cone in Unity

A vision cone
A vision cone in video games is a mechanism primarily used in stealth or strategy games to simulate the field of vision of a non-playable character (NPC), such as an enemy or a guard. It is represented as a conical area originating from the NPC's eyes and extending forward at a certain angle, defining the range and direction in which the NPC can detect the player or other objects. If there are no obstacles in the way, every object inside the cone is visible to its owner.

You can find some famous examples of this concept in games like Commandos or Metal Gear Solid. In Commandos, the enemies' vision cone is visible in the main window to show the surveillance area of enemy soldiers.

Vision cones in Commandos

In Metal Gear Solid, the vision cones are not shown in the main window but in the minimap in the upper right corner, allowing the player to plan their movements to navigate the scene without being detected.

Vision cones in Metal Gear Solid

In general, the vision cone is key to creating stealth mechanics, as it forces the player to plan movements, use the environment (such as cover or distractions), and manage time to avoid detection. It also adds realism, as it simulates how humans or creatures have a limited, non-omnidirectional field of vision.

In this article, we will see how we can implement a vision cone in Unity. The idea is to create a sensor that simulates this detection mode, so we can add it to our NPCs.

Main Characteristics of Vision Cones

  • Angle: The cone usually has the shape of a triangle or circular sector in 2D, or a three-dimensional cone in 3D games. The viewing angle (e.g., 60° or 90°) determines the width of the field that the NPC can "see". 
  • Distance: The cone has a maximum range, beyond which the NPC will not detect the player, even if they are within the viewing angle.

You can add more embellishments, but a vision cone is defined by these two factors alone.

In many games, the vision cone is graphically shown to the player (especially in top-down view or specific interfaces) to indicate where they should avoid being seen. It can change color (e.g., green for "no alert" and red for "alert"). In this article, I will not cover the visual part because it doesn't add much. I want to focus on implementing what the NPC can see and what it cannot, not on the representation of the cone.

In Unity, the component that implements the vision cone usually exposes these two characteristics in the inspector, as seen in the following screenshot:

Basic fields to implement a vision cone

In my case, detectionRange (line 15) implements the distance, while detectionSemiconeAngle (line 18) implements the angle.

In the case of the angle, my code is based on a couple of premises that need to be considered. The first is that I used a [Range] attribute (line 17) to configure this field with a slider and to limit its possible values to the interval between 0 and 90 degrees. Although a person's lateral field of view is wider than 90°, in a game it would be too difficult to avoid a character with such a wide vision cone, so it is normal not to exceed 90°, with 45° being the most common value. The second premise is that I treat the angle as a semi-angle. That is, I measure it from the direction I consider frontal (Forward, in my case) to one side, and it is then mirrored to the other side to generate a symmetrical cone.

The two parameters that define a vision cone

In my example, I am working in 2D, so I have defined Forward as the local +Y axis, as seen in the following screenshot.

Definition of the frontal vector (Forward)

In line 20 of the code screenshot, I included one more field, layersToDetect, which we will use as a filter, as we will see a little later.

How to Detect if a Position is Within the Vision Cone

With the distance and angle defined, we need to check whether the distance to the position being tested is smaller than the cone's range, and whether the angle between the relative position vector and the cone's Forward vector is smaller than the cone's semi-angle. In Unity, both are very easy to calculate.

Method to determine if a position is within the vision cone

The easiest way to calculate the distance is to use the Vector2.Distance() method, as I do in line 126 of the screenshot, passing the position of the vision cone (coinciding with its vertex) and the position to be checked as parameters.

For the angle, we can use the Vector2.Angle() method, as seen in line 127. This method returns the absolute angle between two vectors, so I pass Forward (line 128) on one side and the vector of the position to be checked, relative to the origin of the cone (line 129), on the other.

If both the distance and the angle are below the thresholds defined in the cone, then the checked position is within it.
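
Based on that description, a sketch of the check looks like this (detectionRange and detectionSemiconeAngle are the fields shown earlier, and the cone's Forward is transform.up, as explained above):

public bool PositionIsInConeRange(Vector2 positionToCheck)
{
    // Distance from the cone's vertex (this object's position) to the target.
    float distance = Vector2.Distance(transform.position, positionToCheck);

    // Absolute angle between the cone's forward vector (local +Y in this 2D
    // example) and the vector from the vertex to the target.
    float angle = Vector2.Angle(
        transform.up,
        positionToCheck - (Vector2)transform.position);

    return distance <= detectionRange && angle <= detectionSemiconeAngle;
}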

Object Filtering

We could leave the article here, and you would have a functional vision cone. You would just need to collect all potentially visible objects in the scene and pass their positions (one by one) to the PositionIsInConeRange() method defined earlier. This check would need to be done periodically, perhaps in the Update() or FixedUpdate() method.

However, this would not be very efficient as the scene could be huge and contain many objects. It would be much better if we could do a preliminary filtering, so we only pass the minimum and essential objects to PositionIsInConeRange().

Filtering by Layers

The first filtering we could apply is by layer. We can distribute the objects in the scene across different layers and configure the vision cone to only consider objects in specific layers. That was the purpose of the layersToDetect field mentioned earlier. Expanded in the inspector, this field looks like the following screenshot.

layersToDetect field of type LayerMask

This type of field allows multiple selection, so you can define that your cone analyzes several layers simultaneously.

Once you know which layers you want to analyze, discriminating whether an object is in one of those layers is apparently simple, as seen in the following screenshot.

How to know if an object is in one of the layers of a LayerMask

I say "apparently" simple because, although you can limit yourself to copy-pasting this code into yours, fully understanding it has its intricacies.

To begin with, a LayerMask has a value field, a 32-bit integer in which each bit represents one of the 32 possible layers of a Unity scene. You can picture it as a string of 32 ones and zeros. If you include two layers in a LayerMask field, its value will have 2 bits set to one and the rest set to zero. The final integer value of the field depends on the position of those ones, although, in reality, that value is irrelevant: what matters is which positions contain a one.

On the other hand, all Unity objects have a layer field that contains an integer with values ranging from 0 to 31. This integer indicates the index of the layer to which the object belongs, within the LayerMask of all possible layers in the scene. For example, if an object's layer field has a value of 3, and that layer is included in a LayerMask, then that LayerMask will have a one in its bit at index 3.

To know if an object's layer is within the layers marked in a LayerMask, we need to make a comparison, using the object's layer as a mask. The trick is to generate an integer whose binary value is filled with zeros and put a one in the position corresponding to the layer to be checked. That integer is what we call the mask. We will compare that mask with the LayerMask, doing a binary AND, and see if the resulting value is different from zero. If it were zero, it would mean that the LayerMask did not include the layer we wanted to check.

This is easier to see with a graphical representation of the earlier example. Look at the following screenshot.

Operation to check if a layer is contained within a LayerMask

In it, I have represented a LayerMask with two layers, the one at index 1 and the one at index 3 (they are the positions that have a one). Suppose now we want to check if the LayerMask contains layer 3.

What we have done is generate a mask with all zeros, except for the one at position 3, and we have done AND with the LayerMask. Doing AND with a mask makes the final result depend on the value that the LayerMask digits had in the positions marked by the mask. In this case, the mask points to position 3, so the final result will be zero or different from zero depending on whether position 3 of the LayerMask is zero or different from zero. In this case, it will be different from zero.
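
In code, the whole check boils down to a single expression, something like:

private bool ObjectIsInLayerMask(GameObject obj, LayerMask layerMask)
{
    // Build a mask with a single 1 at the index of the object's layer and AND it
    // with the LayerMask: a non-zero result means the layer is included.
    return (layerMask.value & (1 << obj.layer)) != 0;
}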

Filtering by Proximity

With layer filtering, we will avoid calling PositionIsInConeRange() for objects that are in layers we are not interested in. That will improve performance, but we can improve it further.

Another preliminary filtering we can do is to discard objects that are too far from the cone to have a chance of being in it.

As seen in the screenshot, every vision cone can be enclosed in a bounding box.

Bounding box of a vision cone

If that box were a volumetric sensor (in Unity terms: a collider in trigger mode), we could pass to PositionIsInConeRange() only the objects that entered the volumetric sensor and were in the layers we were interested in.

Method to process objects that entered the box

In the code screenshot, OnObjectEnteredCone() would be an event handler that would apply if an object entered the box. In my case, the trigger mode collider has an associated script that emits a UnityEvent when the trigger triggers its OnTriggerEnter2D. What I have done is associate OnObjectEnteredCone() with that UnityEvent.

Starting from there, the code in the screenshot is simple. In line 159, we check if the object is in one of the layers we are interested in, using the ObjectIsInLayerMask() method we analyzed earlier. If affirmative, in line 161, we check if the object is within the area covered by the vision cone, using the PositionIsInConeRange() method we saw at the beginning. And finally, if both checks are positive, the object is added to the list of objects detected by the vision cone (line 164), and an event is emitted so that the scripts using the vision cone know that it has made a new detection.

As you can imagine, we also need a reciprocal method to process objects that leave the detection box, as well as another to process objects that remain inside the box but have left the area covered by the cone. It is enough to link the corresponding handlers to the UnityEvents emitted from the OnTriggerExit2D() and OnTriggerStay2D() methods of the detection box's trigger collider script. Neither case is especially complex once the OnObjectEnteredCone() code is understood, but I will show you my implementation of the check for an object that remains in the detection area.

Check for an object that remains in the detection area
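
A sketch of both handlers can look like this (the list, the UnityEvents, and the handler signatures are choices of this example, and require using System.Collections.Generic, UnityEngine, and UnityEngine.Events):

private readonly List<GameObject> _detectedObjects = new List<GameObject>();

// Events other scripts can subscribe to in order to react to detections.
public UnityEvent<GameObject> objectDetected;
public UnityEvent<GameObject> objectLost;

// Linked to the UnityEvent emitted from the trigger collider's OnTriggerEnter2D.
public void OnObjectEnteredCone(GameObject enteredObject)
{
    if (ObjectIsInLayerMask(enteredObject, layersToDetect) &&
        PositionIsInConeRange(enteredObject.transform.position) &&
        !_detectedObjects.Contains(enteredObject))
    {
        _detectedObjects.Add(enteredObject);
        objectDetected?.Invoke(enteredObject);
    }
}

// Linked to OnTriggerStay2D: an object may stay inside the box but leave the cone.
public void OnObjectStayedInCone(GameObject stayedObject)
{
    bool inCone = ObjectIsInLayerMask(stayedObject, layersToDetect) &&
                  PositionIsInConeRange(stayedObject.transform.position);

    if (inCone && !_detectedObjects.Contains(stayedObject))
    {
        _detectedObjects.Add(stayedObject);
        objectDetected?.Invoke(stayedObject);
    }
    else if (!inCone && _detectedObjects.Contains(stayedObject))
    {
        _detectedObjects.Remove(stayedObject);
        objectLost?.Invoke(stayedObject);
    }
}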

At this point, you are probably wondering how to dimension the box to fit the vision cone.

If you look at the screenshot I put before, with the box enclosing the vision cone, you will see that the height of the box coincides with the parameter I called detectionRange.

What has a bit more intricacy is the width of the box, as we have to resort to basic trigonometry. Look at the screenshot:

Some trigonometry to calculate the width of the box

Starting from the screenshot, to find the width of the detection box, we need to calculate the length of B, which will correspond to half of that width.

B is one of the sides of the rectangle created using detectionRange as the diagonal. Every rectangle is composed of two right triangles whose hypotenuse will be precisely detectionRange. If we look at the upper right triangle (the red area), and review the trigonometry we learned in school, we will agree that the sine of detectionSemiConeAngle is equal to B divided by detectionRange. Therefore, we can calculate B as the product of detectionRange and the sine of detectionSemiConeAngle; with the total width of the detection box being twice B.

Translated into code, the dimensions of the detection box would be calculated as follows:

Calculation of the dimensions of the detection box
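
In sketch form, and assuming a component with Range and Width fields takes care of resizing the trigger collider (the BoxRangeManager described just below), the calculation looks like this:

private void ResizeDetectionBox()
{
    // The box is as tall as the cone's range...
    float height = detectionRange;

    // ...and as wide as twice B, the side opposite the semi-angle in the right
    // triangle whose hypotenuse is detectionRange.
    float width = 2f * detectionRange * Mathf.Sin(detectionSemiconeAngle * Mathf.Deg2Rad);

    boxRangeManager.Range = height;
    boxRangeManager.Width = width;
}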

You can do this calculation every time you change the vision cone parameters and manually modify the dimensions of the trigger collider with the result of the calculation; but I preferred to do it automatically by linking the collider with a BoxRangeManager component that I implemented, and that dynamically modifies the size of the collider as you change the Range and Width fields of that BoxRangeManager. The implementation of that component is based on what I explained in my article on "Volumetric sensors with dynamic dimensions in Unity" so I will not repeat it here.

Conclusion

With this, you have everything you need to create a simple, efficient vision cone. My advice is to create a generic component that you reuse in your different projects. It is such a common element that it doesn't make sense to implement it from scratch every time. This should be one of the elements of your personal library of reusable components.

I hope you found this article interesting and that it helps you create exciting game mechanics.