30 January 2025

"Game Development Patterns with Godot 4" by Henrique Campos

Apps and video games have been around for so long that it is hard to run into a problem that hasn't already been solved by another developer. Over time, this has given rise to a set of general solutions that are considered best practices for common problems. These are known as design patterns: "recipes" or "templates" that developers can follow to structure their code when solving certain problems.

It is a subject studied in any programming-related degree, with the book "Design Patterns: Elements of Reusable Object-Oriented Software" by Gamma, Helm, Johnson and Vlissides (the famous Gang of Four) being the classic work that started this branch of study.

However, when presented in an academic way, design patterns can be difficult to understand, and their application is not always clear. This is precisely where Henrique Campos' "Game Development Patterns with Godot 4" shines.

Campos quite successfully selects 9 of the 23 design patterns described by the Gang of Four and explains them in a clear, simple way, rich in examples applied to game development. The patterns he chose seem to me the most useful and the ones with the most immediate application in any game. I did not miss the remaining patterns, which have always seemed too abstract and esoteric to me.

In a very original way, Campos presents his examples as dialogues between the designer of a fictional game and its programmer. In these exchanges, the designer requests features focused on enriching specific aspects of the game, while the programmer has to reconcile those requests with his goal of writing scalable, easy-to-maintain code. In each example, a specific design pattern is presented as the best way to satisfy the designer's request while meeting the programmer's goal.

Following this scheme, the author explains the singleton pattern, the observer pattern, the factory, state machines, the command pattern, the strategy pattern, decorators, service locators and event queues, in order of increasing complexity, with each pattern building on the previous ones. Along the way, he also explains, in a simple but effective way, basic development principles such as object-oriented programming and the SOLID principles.

The examples are implemented in GDScript, using Godot 4. I must admit that I was initially a bit wary of the book because I didn't think GDScript was a rich enough language to illustrate these patterns (I admit that I develop in Godot with C#). However, the sample code is very expressive, and GDScript is so concise that the examples read like pseudocode. In the end I didn't miss C#, because the GDScript code managed to convey the idea behind each example in just a few lines, which made them very easy to read and understand.

Therefore, I think it is a highly recommended book, one that turns a subject often made off-putting by the excessively academic treatment of previous works into something enjoyable and fun. If you give it a chance, I think you will enjoy it, and it will help you considerably improve the quality of your code.

26 January 2025

Creating interactive Gizmos in Unity

In a previous article I explained how to use the OnDrawGizmos() and OnDrawGizmosSelected() callbacks, present in all MonoBehaviours, to draw Gizmos that visually represent the magnitudes of a GameObject's fields. However, I also explained then that Gizmos implemented this way are passive, in the sense that they are limited to representing the values we enter in the inspector. But the Unity editor is full of Gizmos that can be interacted with to change a GameObject's values, simply by clicking on the Gizmo and dragging. An example is the Gizmo used to define the dimensions of a Collider.

How are these interactive Gizmos implemented? The key is in Handles. These are visual "grips" that we can place on our GameObject and then click and drag with the mouse; they return the change in their position, so we can use it to calculate the resulting changes in the magnitudes they represent. This will become clearer with the examples.

The first thing to note is that Handles belong to the UnityEditor namespace and can therefore only be used within the editor. This means we cannot use Handles in final game builds that run independently of the editor. For this reason, all code that uses Handles has to be placed either in an Editor folder or inside #if UNITY_EDITOR ... #endif guards. In this article I will explain the first approach, as it is cleaner.

With the above in mind, Gizmo code that uses Handles should be placed in the Editor folder and not in the Scripts folder, since Unity excludes the contents of any Editor folder when compiling the C# code for the final executable. It is important to be rigorous about this: if you mix Handles code with MonoBehaviour code, everything will seem to work as long as you start the game from the editor, but when you try to build it with File --> Build Profiles... --> Windows --> Build, you will get errors that prevent compilation. At that point you have two options: either move the code that uses Handles into the structure I am going to explain here, or fill your code with #if UNITY_EDITOR ... #endif guards around every call to the Handles library. In any case, I think the structure I am going to explain is generic enough that you should not need to mix Handles code with MonoBehaviour code.

To start with the examples, let's assume we have a MonoBehaviour (in our Scripts folder) called EvadeSteeringBehavior that is responsible for initiating the escape of its GameObject if it detects that a threat is approaching below a certain threshold. That threshold is a radius, a distance around the fleeing GameObject. If the distance to the threat is less than that radius, the EvadeSteeringBehavior will start executing the escape logic. Let's assume that the property that stores that radius is EvadeSteeringBehavior.PanicDistance.

To represent PanicDistance as a circle around the GameObject, we could use calls to Gizmos.DrawWireSphere() from MonoBehaviour's OnDrawGizmos() method, but with that approach we could only change the PanicDistance value by modifying it in the inspector. We want to go further and be able to alter the PanicDistance value by clicking on the circle and dragging in the Scene window. To achieve this, we're going to use Handles via a custom editor.

For that, we can create a class in the Editor folder. The name doesn't matter too much. In my case, I've named the file DrawEvadePanicDistance.cs. Its content is as follows:

DrawEvadePanicDistance Custom Editor Code
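The screenshot is not reproduced here, but a sketch of such a custom editor could look like the following. The class and property names (EvadeSteeringBehavior, PanicDistance) come from the article; the exact layout, line numbering and handle color of the original listing are assumptions:

```csharp
using UnityEditor;
using UnityEngine;

// Hedged reconstruction of the custom editor described in the article.
// EvadeSteeringBehavior is the MonoBehaviour defined in the Scripts folder.
[CustomEditor(typeof(EvadeSteeringBehavior))]
public class DrawEvadePanicDistance : Editor
{
    private void OnSceneGUI()
    {
        // "target" gives us the inspected MonoBehaviour; we cast it to its type.
        var evadeBehavior = (EvadeSteeringBehavior)target;

        EditorGUI.BeginChangeCheck();

        Handles.color = Color.red; // assumed color
        float newPanicDistance = Handles.RadiusHandle(
            Quaternion.identity,                    // initial rotation
            evadeBehavior.transform.position,       // center of the circle
            evadeBehavior.PanicDistance);           // initial radius

        if (EditorGUI.EndChangeCheck())
        {
            // Register the change so Ctrl+Z can undo it.
            Undo.RecordObject(evadeBehavior, "Changed panic distance");
            evadeBehavior.PanicDistance = newPanicDistance;
        }
    }
}
```

This file would live in the Editor folder, so it is stripped from final builds automatically.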

Look at line 1: all your alarms should go off if you find yourself importing this namespace in a class intended to be part of your game's final build, since in that case you will get the errors I mentioned before. However, by placing this file in the Editor folder, we no longer have to worry about this problem.

Line 6 is key as it allows us to associate this custom editor with a specific MonoBehaviour. With this line we are telling the editor that we want to execute the logic collected in OnSceneGUI() whenever it is going to render an instance of EvadeSteeringBehavior in the Scene tab.

As a custom editor, our class must inherit from UnityEditor.Editor, as seen on line 7.

So far, whenever I've needed to use Handles, I've ended up structuring the contents of OnSceneGUI() in the same way you see between lines 10 and 24. So in the end it's almost a template.

If we want to access the data of the MonoBehaviour whose values we want to modify, we have it in the target field, to which all custom editors have access. Note that you will have to cast it to the type of the MonoBehaviour being represented, as seen in line 11.

We must place the code for our Handles between a call to EditorGUI.BeginChangeCheck() (line 13) and a call to EditorGUI.EndChangeCheck() (line 19) so that the editor will monitor whether any interaction with the Handles occurs. The call to EditorGUI.EndChangeCheck() will return true if the user has interacted with any of the Handles created from the call to EditorGUI.BeginChangeCheck().

To define the color with which the Handles will be drawn, we do it in a very similar way to how we did with the Gizmos, in this case loading a value in Handles.color (line 14).

We have multiple Handles to choose from, the main ones are:

  • PositionHandle: Draws a coordinate origin in the Scene tab, identical to that of a Transform. Returns the position of the Handle.
  • RotationHandle: Draws rotation circles similar to those that appear when rotating an object in the editor. If the user interacts with the Handle, it returns a Quaternion with the new rotation value; otherwise, it returns the initial rotation value the Handle was created with.
  • ScaleHandle: Works similarly to RotationHandle, but uses the usual scaling axes and cubes shown when modifying an object's scale in Unity. It returns a Vector3 with the new scale if the user touched the Handle, or the initial scale otherwise.
  • RadiusHandle: Draws a sphere (or a circle if we are in 2D) with grips to modify its radius. In this case, it returns a float with that radius.

In the example at hand, the natural choice was RadiusHandle (line 15), since what we want is to define the radius we have called PanicDistance. Each Handle has its own creation parameters to configure how it is displayed on screen. In this case, RadiusHandle requires an initial rotation (line 16), the position of its center (line 17) and the initial radius (line 18).

If the user interacts with the Handle, its new value is returned by the Handle creation method. In our example, we save it in the variable newPanicDistance (line 15). In that case, EditorGUI.EndChangeCheck() returns true, so we can save the new value in the property of the MonoBehaviour we are editing (line 22).

To ensure that changes to the MonoBehaviour can be undone with Ctrl+Z, it is convenient to precede them with a call to Undo.RecordObject() (line 21), indicating the object we are going to change and providing a message describing the change.

The result of all the above code will be that, whenever you click on a GameObject that has the EvadeSteeringBehavior script, a circle will be drawn around it, with points that you can click and drag. Interacting with those points will change the size of the circle, but also the value displayed in the inspector for the EvadeSteeringBehavior's PanicDistance property.

The RadiusHandle displayed thanks to the code above

What we can achieve with Handles doesn't end here. If we were Unity, the logical thing would have been to offer Handles to users for interactive modification of script properties, and leave Gizmos for the visual representation of those values from calls to OnDrawGizmos(). However, Unity did not leave such a clear separation of functions and, instead, provided the Handles library with drawing primitives very similar to those offered by Gizmos. This means that there is some overlap between the functionalities of Handles and Gizmos, especially when it comes to drawing.

It is worth knowing the drawing primitives that Handles offers. In many cases it is faster and more direct to draw with Gizmos primitives from OnDrawGizmos(), but there are things that cannot be achieved with Gizmos and can be drawn with Handles methods. For example, with Gizmos you cannot define the thickness of lines (they are always one pixel wide), while Handles primitives do have a thickness parameter. Handles also lets you paint dashed lines, as well as arcs and rectangles with translucent fills.

Many people take advantage of these benefits and use Handles not only for entering new values but also for drawing, completely replacing Gizmos. The problem is that this forces all the drawing logic to be extracted into a custom editor like the one we saw before, which means creating one more file and saving it in the Editor folder.

In any case, it is not about following dogma, but about knowing what Handles and Gizmos offer so we can choose what best suits us on each occasion.

15 January 2025

Node Configuration Alerts in Godot

Godot emphasizes composition. Each atomic functionality is concentrated in a specific node, and complex objects (what Godot calls scenes) are formed by grouping and configuring nodes to achieve the desired functionality. This creates node hierarchies in which nodes complement each other.

Because of this, some nodes cannot provide complete functionality unless complemented by other nodes attached to them. A classic example is the RigidBody3D node, which cannot function without being complemented by a CollisionShape3D that defines its physical shape.

Godot offers an alert system to notify you when a node depends on another to function properly. You’ve probably seen it many times: a yellow warning triangle that displays an explanation when you hover your mouse over it.

Warning message indicating a missing child node

When developing scenes in Godot, they end up becoming nodes within other scenes. If you take good design principles seriously and separate responsibilities, sooner or later you'll find yourself designing nodes that depend on other nodes for customization.

At that point, you might wonder if you, too, can emit alerts if one of your nodes lacks a complementary node. The answer is yes, you can, and I’m going to show you how.

For example, let’s assume we have a node named MovingAgent. Its implementation doesn’t matter, but for illustration, let’s suppose this node defines the movement characteristics (speed, acceleration, braking, etc.) of an agent. To define how we want the agent to move, we aim to implement nodes with different movement algorithms (e.g., straight line, zigzag, reverse). These nodes have diverse implementations but adhere to the ISteeringBehavior interface, offering a set of common methods that can be called from MovingAgent. Thus, the agent’s movement will depend on the ISteeringBehavior-compliant node attached to MovingAgent.

In this case, we’d want to alert the user of the MovingAgent node if it’s used without an ISteeringBehavior node attached to it.

To trigger this dependency alert, all base nodes in Godot provide the _GetConfigurationWarnings() method. To have our node issue warnings, we simply need to implement this method. For MovingAgent, this could look like the following implementation:

Implementation of _GetConfigurationWarnings()
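As a reference, a sketch of what such an implementation could look like in Godot C# follows. The node's base class, the warning text, and the exact signature of the FindChild&lt;T&gt;() helper discussed later in the article are assumptions; the line numbers cited in the text refer to the article's own listing, not to this sketch:

```csharp
using System.Collections.Generic;
using Godot;

// Minimal stub for illustration; the real interface defines steering methods.
public interface ISteeringBehavior { }

[Tool]
public partial class MovingAgent : CharacterBody2D
{
    public override string[] _GetConfigurationWarnings()
    {
        var warnings = new List<string>();

        // Look for a child node implementing ISteeringBehavior, using a
        // search-by-type helper like the extension method the article describes.
        ISteeringBehavior steeringBehavior = this.FindChild<ISteeringBehavior>();

        if (steeringBehavior == null)
        {
            // No suitable child found: report it to the editor.
            warnings.Add("This node needs a child implementing " +
                         "ISteeringBehavior to work properly.");
        }

        // An empty array means everything is fine and no icon is shown.
        return warnings.ToArray();
    }
}
```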

The method is expected to return an array with all detected error messages (line 148). If Godot detects that the method returns an empty array, it interprets that everything is correct and won’t display the warning icon.

As you can see, the first thing the method does is check whether the MovingAgent node has a child node implementing ISteeringBehavior (line 149). If no such child is found (line 154), an error message is generated (line 156).

There’s no reason to limit ourselves to checking for just one type of node. We can search for multiple nodes, check their configurations, and generate multiple error messages, as long as we store the generated error messages in an array to return as the method’s result.

In this example, I store the error messages in a list (line 152) and convert it into an array before the method ends (line 159).

_GetConfigurationWarnings() runs in the following situations:

  • When a new script is attached to the node.
  • When a scene containing the node is opened.
  • When a property is changed in the node’s inspector.
  • When the script attached to the node is updated.
  • When a new node is attached to the one containing the script.

Therefore, you can expect the script to refresh the warning, displaying or removing the alert, in any of these scenarios.

And that’s it—there’s no more mystery to it... or maybe there is. Observant readers may have noticed that in line 150, I searched for a child node solely by type. Developers familiar with Unity are accustomed to searching for components by type because this engine provides a native method for it (GetComponent<>). However, Godot doesn’t offer a native method to search by type. The native FindChild implementations search for nodes by name, not by type. This was inconvenient for me because I wanted to attach nodes with different names (indicative of functionality) to MovingAgent, as long as they adhered to the ISteeringBehavior interface. So, lacking a native method, I implemented one via an extension method:

Extension method for searching child nodes by type
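A sketch of such an extension method, assuming a generic signature (the article's actual parameter names and the shape of its recursion flag may differ):

```csharp
using Godot;

public static class NodeExtensions
{
    // Search-by-type counterpart to Godot's name-based FindChild().
    public static T FindChild<T>(this Node node, bool recursive = true)
        where T : class
    {
        foreach (Node child in node.GetChildren())
        {
            // If this child is of the requested type, return it and stop.
            if (child is T match)
                return match;

            // Otherwise, optionally keep searching in the child's children.
            if (recursive)
            {
                T found = child.FindChild<T>(recursive);
                if (found != null)
                    return found;
            }
        }
        return null;
    }
}
```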

The extension method iterates through all child nodes (line 20) and checks if they are of the type passed as a parameter (line 22).

If it locates a node of the desired type, it returns it and ends the method (line 24). Otherwise, it can continue searching recursively in the children’s children (line 28) if requested via parameters (line 26).

Thanks to this extension method, any instance of a Godot class inheriting from Node (is there any that doesn’t?) will offer the ability to search by type, as seen in line 150 of _GetConfigurationWarnings().

In your case, it might suffice to search child nodes by name. If not, the extension method solution for type-based searches might suit you better. This is the only complexity I see in this alert system, which is otherwise extremely easy to use.

Resulting alert from our implementation

11 January 2025

Creating custom Gizmos in Unity

I've written several articles explaining how to implement Gizmos in Godot, but I realized I haven’t explained how to do the equivalent in Unity. Godot has several advantages over Unity, mainly its lightweight nature and the speed it allows during development iterations, but it’s worth recognizing that Unity is very mature, and this maturity shows when it comes to Gizmos.

As you may recall, Gizmos are visual aids used to represent magnitudes related to an object’s fields and properties. They are primarily used in the editor to assist during the development phase. For example, it’s always easier to visualize the shooting range of a tank in a game if that range is represented as a circle around the tank. Another example is when you edit the shape of a Collider; what is displayed on the screen is its Gizmo.

In Unity, any MonoBehaviour can implement two methods for drawing Gizmos: OnDrawGizmos() and OnDrawGizmosSelected(). The first one draws Gizmos at all times, while the second only does so when the GameObject containing the script is selected. It's important to note a significant caveat about OnDrawGizmos(): it is not called if its script is minimized in the inspector.

The script SeekSteeringBehavior is minimized.

In theory, you can interact with the Gizmos drawn in OnDrawGizmos(), but not with those drawn in OnDrawGizmosSelected(). In practice, the knowledge of how to interact with Gizmos was lost long ago: it used to be possible to click on them, but this functionality seems to have disappeared around Unity 2.5. The mention in Unity's OnDrawGizmos() documentation of being able to click on Gizmos looks more like a sign that the documentation hasn't been fully updated.

In any case, Unity’s editor is full of Gizmos you can interact with, but that’s because they include an additional element: Handles. In this article, we’ll focus on Gizmos as a means of passive visual representation, leaving the explanation of interactive Gizmos via Handles for a future article. To simplify further, I’ll refer only to OnDrawGizmos(); the other method is identical, but is only called when its GameObject is selected in the hierarchy.

The OnDrawGizmos() method is only called in the editor during an update or when the focus is on the Scene View. We should avoid overloading this method with complex calculations, as we could degrade the editor’s performance. Although it’s only called from the editor, we could implement it as-is, knowing that Gizmos won’t appear in the final compiled game. However, I prefer to wrap the method’s implementation in #if UNITY_EDITOR ... #endif. It’s an old habit. While redundant when using only Gizmos, this guard becomes necessary if you include Handles in the method, as we’ll see in a later article.

Let’s assume we’re designing an agent that interposes itself between two others (Agent-A and Agent-B). The movement algorithm for the interposing agent isn’t relevant here, but its effect will be to measure a vector between Agents A and B and position the interposing agent at the midpoint. In such a case, we’d want to draw this midpoint on the screen to verify that the interposing agent is actually heading toward it. This is an ideal use case for Gizmos.

The MonoBehaviour responsible for calculating this midpoint also implements the OnDrawGizmos() method with the following code:

Example of OnDrawGizmos() implementation.
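The screenshot is not reproduced here, but a sketch of the method could look like the following. The field names, colors and sphere radius are taken from or inferred from the article's description; the agent fields and exact values are assumptions, and the line numbers cited in the text refer to the article's own listing:

```csharp
using UnityEngine;

public class InterposeSteeringBehavior : MonoBehaviour
{
    // Hypothetical fields matching the article's description.
    [SerializeField] private bool predictedPositionMarkerVisible = true;
    [SerializeField] private GameObject agentA;
    [SerializeField] private GameObject agentB;
    private GameObject _predictedPositionMarker;

#if UNITY_EDITOR
    private void OnDrawGizmos()
    {
        // Skip drawing if the marker is hidden or not yet created.
        if (!predictedPositionMarkerVisible || _predictedPositionMarker == null)
            return;

        // Line from Agent A to the midpoint marker.
        Gizmos.color = Color.blue;
        Gizmos.DrawLine(agentA.transform.position,
                        _predictedPositionMarker.transform.position);

        // Filled circle at the marker's position (radius is an assumption).
        Gizmos.color = Color.magenta;
        Gizmos.DrawSphere(_predictedPositionMarker.transform.position, 0.3f);

        // Line from Agent B to the marker, with its own color.
        Gizmos.color = Color.red;
        Gizmos.DrawLine(agentB.transform.position,
                        _predictedPositionMarker.transform.position);
    }
#endif
}
```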

Let’s analyze it line by line to understand how to draw any figure within this method.

Lines 127–128: These lines prevent the method from executing if we’ve decided to make the Gizmos invisible by setting the predictedPositionMarkerVisible variable to false, or if _predictedPositionMarker is null. This variable refers to the implementation of the interposing agent. For reasons not covered here, when the MonoBehaviour starts, I create a GameObject linked to _predictedPositionMarker. As the script calculates the midpoints between Agents A and B, it positions this GameObject at those midpoints. For our purposes, _predictedPositionMarker is a GameObject that acts as a marker for the position where the interposing agent should be. If it’s null, there’s nothing to draw.

Line 130: This line sets the color used for drawing all Gizmos until a different value is assigned to Gizmos.color.

Lines 131–133: Here, we use the Gizmos.DrawLine() call to draw a line between Agent A’s position and the marker.

Line 134: This line changes the drawing color to magenta (purple) to draw a circle at the marker’s position using Gizmos.DrawSphere(). This method draws a filled circle. If we only wanted an outline, we could use Gizmos.DrawWireSphere().

Lines 137–139: These lines use Gizmos.DrawLine() to draw another line (with its own color) between Agent B’s position and the marker.

The result can be seen when running the game from the editor:

Gizmos drawn when running the game from the editor.

Agents A and B are colored blue and red, respectively, while the interposing agent is green. The Gizmos are the blue and red lines and the purple circle.

And that’s it! Using these primitives, along with the rest of the module’s Gizmos offerings, we can draw any shape we want. These shapes will update in the editor when we change the fields they depend on in the inspector or, if tied to variables, as those variables change while running the game in the editor.

One last note: I often use booleans like predictedPositionMarkerVisible to decide which specific scripts can draw their Gizmos. However, Unity’s editor allows you to disable the drawing of all Gizmos. To do this, just click the button on the far right of the toolbar at the top of the Scene tab.

Button to toggle Gizmo visibility.

I recommend ensuring this button is enabled. The internet is full of posts from people asking why their Gizmos aren’t being drawn... only to realize they had inadvertently disabled this button.

Implementation of Gizmos in Godot 2D

Remember that Gizmos are visual aids activated in the editor, associated with a specific node, to visually represent the magnitudes of some of a node's fields. For example, if a node had a field called Direction, we could implement a Gizmo that draws an arrow originating from the node and pointing in the direction configured in the field. This way, it would be easier to configure the field because we would see the result of the different values we input.

On the other hand, Handles are grip points drawn alongside the Gizmo that allow us to manipulate it visually and, consequently, the field it corresponds to. In the example mentioned, a Handle would be a point drawn at the end of the Gizmo's arrow. By selecting that point and dragging it with the mouse, the arrow would change direction and, in doing so, the value of the Direction field of the node associated with the Gizmo would also change.

In a previous article, I discussed how to implement Gizmos and even Handles in Godot projects. The issue with that implementation is that it only worked for 3D projects. Godot's 3D and 2D APIs are so distinct that what works in one does not work in the other. The fact that everything revolved around the EditorNode3DGizmo node should have been a clue.

Therefore, in a 2D Godot project, we won’t be able to use the EditorNode3DGizmo method to draw Gizmos and Handles on the screen. In this article, we'll explore an alternative method for including Gizmos in our 2D projects. I won’t discuss Handles for reasons I’ll explain at the end of the article.

First of all, it's worth noting that, unlike Unity, Godot does not offer specialized nodes for drawing Gizmos in 2D. It doesn't need to: its native drawing API is powerful enough, as long as we make the node execute within the editor. The key lies in how to run the node inside the editor while distinguishing whether it is executing in the editor or as part of the game.

As an example, let's implement a circular distance Gizmo. It will be in the form of a node that we can associate with others that have fields related to circular distances relative to their position. To see a possible implementation, we’ll associate it with an escape agent, which defines a panic distance. If another agent gets closer than the panic distance, the escape agent will move away. The Gizmo's purpose will be to visually represent the panic distance field.

A Gizmo used to represent the field PanicDistance

As shown in the illustration, the Gizmo draws a circle around the agent, with a radius equal to the PanicDistance field's value. We need to ensure that the circle updates every time the PanicDistance value is changed.

In the level scene, the Gizmo node is not visible because it is included within the FleeSteeringBehavior scene (selected in the illustration). If we open that scene's implementation, we'll see the Gizmo node.

Our Gizmo node

In the illustration above, our Gizmo takes the form of a custom node named CircularRange. This node offers a very simple API. In the inspector, it exports a color variable (RangeColor) so we can choose the color of the drawn circle. In the code, the node provides a public property for the circle's radius. The parent node using CircularRange only needs to update the radius property for CircularRange to redraw the circle with the updated radius.

However, you won’t find CircularRange among the available nodes in Godot. We’ll have to implement and register it in Godot so it appears in the node list. This can be done via a plugin.

Creating a Plugin

This part is very similar to the article on 3D Gizmos and Handles. You need to use Godot’s wizard to create the structure for our plugin. The wizard can be found at Project → Project Settings... → Plugins → Create New Plugin. A window like the following will appear:

Plugin creation wizard

To give you an idea, here are some of the settings I configured for mine:

Plugin Configuration

In the “Subfolder” field, I entered res://addons/InteractiveRanges so that the wizard would create that folder and build the plugin's skeleton there. I left the “Activate now?” checkbox unchecked because, in C#, you have to implement and compile your node's code before activating the plugin that uses it.

Once you create the plugin, the specified folder will be generated and populated with a plugin.cfg file containing the data you entered via the wizard.

plugin.cfg Contents
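For reference, a plugin.cfg for this setup would typically look something like the following. Only the script file name (InteractiveRanges.cs, mentioned later in the article) is grounded in the text; the name, description, author and version values here are placeholders:

```ini
[plugin]

name="InteractiveRanges"
description="Interactive range Gizmos for 2D nodes."
author="<your name>"
version="1.0"
script="InteractiveRanges.cs"
```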

In my case, I plan to include several types of Gizmos in the plugin, so I created a specific folder for the example Gizmo: res://addons/InteractiveRanges/CircularRanges. In this folder, I placed all the resources needed for the node implementation.

Final content for the plugin folder

Implementing the Gizmo Node

To implement the node, we only need a script with its code and an icon to represent it in Godot's node list.

For the icon, I used an SVG file, for easier scaling. The simpler and more conceptual, the better. In my case, it's just a circle with a line from its center to its perimeter. For consistency with the rest of Godot's icons, I gave it the typical purple color of 2D nodes.

For the Gizmo’s code, inheriting from Node2D sufficed.

First Part of the Node Implementation in CircularRange.cs
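A sketch of this first part follows. The member names (RangeColor, Radius) come from the article; the default color and exact layout are assumptions:

```csharp
using Godot;

// [Tool] makes the node's code execute inside the editor.
[Tool]
public partial class CircularRange : Node2D
{
    // Color of the drawn circle, configurable from the inspector.
    [Export] public Color RangeColor { get; set; } = Colors.Purple;

    private float _radius;

    // Radius is public (but not exported) so parent nodes can update it.
    public float Radius
    {
        get => _radius;
        set
        {
            _radius = value;
            // Force the Gizmo to redraw every time the radius changes.
            QueueRedraw();
        }
    }
}
```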

Notice that I decorated the class with the [Tool] attribute. This is essential for its implementation to execute within the editor. Otherwise, the node would be inert until the game runs, but this node is only useful as an editor aid.

As described earlier, the node exports a RangeColor property so we can configure the Gizmo's color via the inspector.

There's also another property, Radius, which is not exported to the inspector but is public so that parent nodes using the Gizmo can update the represented radius. In line 27, each time the Radius value is updated, the Gizmo is forced to redraw via QueueRedraw().

Implementing the Gizmo's Drawing in CircularRange.cs
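A sketch of the drawing method, as it would appear inside the CircularRange class (the line numbers cited in the text refer to the article's own listing):

```csharp
// Inside the CircularRange class sketched above.
public override void _Draw()
{
    // Only draw the Gizmo when running inside the editor.
    if (!Engine.IsEditorHint())
        return;

    // Draw a hollow circle centered on the node, with the configured
    // radius and color; "filled: false" draws only the perimeter.
    DrawCircle(Vector2.Zero, Radius, RangeColor, filled: false);
}
```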

As you can see, drawing a circle doesn’t need to be complicated. A simple call to DrawCircle on line 35, with the circle's center position (relative to the parent node), radius, and color, is enough. The "filled" parameter lets you specify whether you want a filled circle (true) or just a perimeter outline (false), which is hollow inside but colored on its edge.

Take note of line 33—it’s important. The call to Engine.IsEditorHint() returns true if the node is running within the editor and false if it’s executing in the game. Since we only want the Gizmo to be drawn in the editor, the _Draw() method will exit if Engine.IsEditorHint() returns false.

It is crucial to use Engine.IsEditorHint() in all methods of a [Tool] node. Some nodes of this type are intended to run only in the editor, so every method must start with a safeguard like the one above. There will also be cases of mixed nodes, where you want them to have one functionality in the editor and another during gameplay. For these, each method should include a safeguard (or multiple ones) to limit the execution of code depending on what Engine.IsEditorHint() returns. While this example is simple, that won't always be the case. Adding these safeguards can complicate your code, and you might encounter errors where editor-only code is executed during gameplay. For this reason, it’s recommended to decorate nodes with the [Tool] attribute only when absolutely necessary. In fact, if you have mixed nodes (with both editor and gameplay functionality), my advice is to separate their functionality into two nodes: one for gameplay and another for the editor.

Registering the Gizmo Node in Godot

Let’s revisit a step we also covered in the article on 3D Gizmos: registering our Gizmo node so that Godot shows it in the node list.

The script responsible for the registration is the one configured in the plugin creation wizard. In my case, this is InteractiveRanges.cs, which I’ve placed in the root folder of the plugin.

Registering the Gizmo Node in InteractiveRanges.cs

As you may recall, registering the node is quite simple. A call to the AddCustomType() method (line 30) is all it takes. You pass the method the node's name, the base node it inherits from, the location of its script, and the icon to represent it.

In my case, since I plan to create more Gizmos, I consolidated the entire process into the RegisterCustomNode() method (line 22), which I call every time I want to register a new Gizmo. That call is made from the _EnterTree() method (line 8), which executes whenever the plugin is activated.

After completing this last script, our plugin has everything it needs to work. It’s time to compile it and go to Project → Project Settings... → Plugins to activate the plugin.

Once we’ve done this, we’ll be able to select our node from Godot's list.

Using Our Custom Gizmo

After including the Gizmo node in the scene hierarchy, we need to integrate it with the rest of the code.

Example Scene Structure

In our example, the node that will use the Gizmo by passing its data is the FleeSteeringBehavior node.

FleeSteeringBehavior locates its child nodes and obtains references to them in its _Ready() method.

FleeSteeringBehavior Gets References to Its Children in _Ready()

Don’t be misled by the call to FindChild(). It’s an extension method I implemented to search for child nodes by type. I had to implement it myself because the default FindChild() method in Godot only searches for nodes by name. Searching by name wouldn’t have been a big problem, but in this case, I preferred to search by type.

Implementation of the Extension Method for Finding Child Nodes by Type

Once it has a reference to the Gizmo, FleeSteeringBehavior must use it every time the value of the field represented by the Gizmo changes.

Updating the Gizmo from FleeSteeringBehavior

This kind of feature is the reason properties exist. In this case, whenever the PanicDistance property of FleeSteeringBehavior is updated, it also updates the Radius property of the Gizmo (line 50). Remember, Radius is a property that forces the Gizmo to redraw itself whenever its value changes. In this way, the Gizmo will update every time PanicDistance changes.
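The property-forwarding idea described above is engine-agnostic. Here is a minimal Python sketch of the same pattern; the class and member names only stand in for the C# ones, and the redraw flag is a stand-in for Godot's QueueRedraw() call:

```python
# Hypothetical sketch: setting panic_distance forwards the value to the
# gizmo's radius, whose own setter flags a redraw (QueueRedraw() stand-in).

class CircularRangeGizmo:
    """Stand-in for the CircularRange gizmo node."""
    def __init__(self):
        self._radius = 0.0
        self.redraw_requested = False

    @property
    def radius(self):
        return self._radius

    @radius.setter
    def radius(self, value):
        self._radius = value
        self.redraw_requested = True  # equivalent of calling QueueRedraw()

class FleeSteeringBehavior:
    """Stand-in for the node that owns the gizmo."""
    def __init__(self, gizmo):
        self._panic_distance = 0.0
        self._gizmo = gizmo

    @property
    def panic_distance(self):
        return self._panic_distance

    @panic_distance.setter
    def panic_distance(self, value):
        self._panic_distance = value
        self._gizmo.radius = value  # forward the change to the gizmo

gizmo = CircularRangeGizmo()
behavior = FleeSteeringBehavior(gizmo)
behavior.panic_distance = 5.0
print(gizmo.radius, gizmo.redraw_requested)  # 5.0 True
```

The point of routing the update through a setter, rather than assigning a plain field, is exactly what the text describes: the dependent object reacts the instant the value changes, with no polling.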

It’s worth mentioning that FleeSteeringBehavior must also be marked as [Tool] so that the logic tied to the PanicDistance property executes in the editor.

Conclusion

Gizmos are extremely useful when shaping your game. They allow you to see the effects of node field values without running the game, which speeds up development.

Unity has a very mature implementation for Gizmos and Handles. In fact, the APIs for these are redundant in some aspects. I hope to write an article about this soon.

In contrast, Godot’s implementation is not as developed. There seems to be some effort to reach Unity's level, but limitations are still noticeable. Gizmos and Handles, as such, only seem to exist in the 3D realm. They have limitations (e.g., only lines can be drawn) and are poorly documented, as I mentioned in my article on the subject.

The case for 2D Gizmos is even more striking because there doesn’t seem to be a specialized API. Instead, the engine’s drawing primitives are used. This isn’t inherently bad, but it’s surprising that no effort has been made to create auxiliary classes that make 3D and 2D Gizmo management similar.

The worst part comes with 2D Handles in Godot. They don’t exist as such, and their manual implementation is complex or even impossible. In my case, I managed to implement something similar to a Handle by having InteractiveRanges.cs intercept mouse clicks. This involved implementing the _ForwardCanvasGuiInput() and _Handles() methods from the EditorPlugin parent class and calculating positions over a circle drawn on the Gizmo as a makeshift Handle. However, this implementation only worked when the Gizmo node was selected in the hierarchy, making it useless. Typically, the Gizmo would be hidden as an auxiliary node within other scenes, so when selecting those scenes, the root nodes of those scenes would be clicked instead of the Gizmo, preventing it from being drawn.

I spent a lot of time on this, searched online countless times, and eventually gave up. My impression is that Handles are not widely used in Godot development. Perhaps I’m wrong, but very few people ask about this topic online, and no one provides a solution or an alternative to the issues I encountered.

Hopefully, I’ll find some time to explain how to achieve very similar functionality in Unity so you can compare the support both engines offer.

Interpolation in Unity

Interpolation is a mathematical method used to find an intermediate value between two known points. When those two known points are connected by a straight line, we call it linear interpolation.

Although the mathematical definition might sound a bit technical, the truth is that our daily lives are filled with examples of interpolation:

  • If my automatic car has a maximum speed of 160 km/h, how fast will it go if I press the accelerator halfway?
  • If my plane has a range of 300 km, how many more kilometers can it fly when one-third of the fuel tank is used up?
  • If my fully stocked fridge lasts me 7 days and half of the shelves are empty, when will I need to go shopping?

Although these examples seem different, they are actually not. All of them can be modeled using a linear function, with a graph that looks like this:

A linear function

The difference lies in what the axes represent in each case:

  • In the automatic car example, the X-axis represents the accelerator depth, while the Y-axis represents the car's speed.
  • For the plane, the X-axis is the amount of fuel consumed, and the Y-axis is the distance traveled.
  • In the fridge example, the X-axis is the days that pass, and the Y-axis is the number of empty shelves.

In all these cases, the structure of the graph (and the function it represents) is the same: a straight line.

What differentiates one case from another is the slope of the line, which represents the rate of change of the Y-axis as we progress along the X-axis.

If in real life we often need to perform interpolations, you can imagine that in game development we do too. After all, what are video games if not recreations of real life?

Let’s consider a similar example. Suppose I’m developing a simulator and want to provide HOTAS support.

A HOTAS controller

When the player moves the throttle lever (the left one), Unity's Input System will return a value between 0 and 1: 0 when the lever is at its minimum position (closest to the player) and 1 when it's at its maximum position (farthest from the player). With this information, we need to perform interpolation to calculate the vehicle's speed at all intermediate lever positions, knowing that the vehicle should be stationary at 0 and move at maximum speed at 1.

If we return to our graph, we will have the same structure, except that the X-axis will represent the lever position (which ranges from X = 0 to X = 1), and the Y-axis will represent the speed of the game’s vehicle.

How do we perform this calculation? There are several options, depending on the ranges represented on the X and Y axes. Let’s study them from simplest to most complex.

Interpolation with X-axis range between 0 and 1

This is the simplest case: the X-axis varies between 0 and 1, while the Y-axis varies between an initial value (V1) and a final value (V2).

Mathematically, the interpolated value (Y) is calculated as follows:

The formula for a linear function


We could write a function to implement this formula, but it would be reinventing the wheel, because Unity already provides a method that does exactly this: Mathf.Lerp().

Lerp() accepts three parameters:

  • An initial value.
  • A final value.
  • A value between 0 and 1.

If the third parameter is 1, the function returns the final value; if it’s 0, it returns the initial value; and if it’s between 0 and 1, it returns an intermediate value calculated using the line connecting the two values.

In our throttle lever example, we would pass Lerp() the vehicle’s minimum speed, maximum speed, and the value between 0 and 1 returned by the Input System based on the lever’s position. The method’s return value would be the vehicle’s new speed.
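The formula is simple enough to sketch directly. Here is a minimal Python equivalent of Mathf.Lerp() (Unity's version clamps the third parameter to [0, 1], which the sketch reproduces), applied to the throttle example with an assumed maximum speed of 160 km/h:

```python
# Linear interpolation: Y = V1 + X * (V2 - V1), with X in [0, 1].
def lerp(v1, v2, x):
    """Return the value a fraction x of the way from v1 to v2."""
    x = min(max(x, 0.0), 1.0)  # Mathf.Lerp clamps its third parameter
    return v1 + x * (v2 - v1)

# Throttle lever at 50% of a 0-160 km/h speed range (illustrative numbers).
print(lerp(0.0, 160.0, 0.5))  # 80.0
```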

Sometimes, we might need the inverse of an interpolation.

For example, suppose the shields on our game’s spaceship have been hit by enemy projectiles. The shield has a strength value, and we’ve been reducing it with each impact. We want to take the current shield value and display it on the cockpit’s control panel as a percentage so the player knows how strong their shields are. That is, we have the shield's maximum value (let's say 300), its minimum value (typically 0), and its current value (let’s say 150). We want to calculate the fraction of the total shield capacity that this current value represents, so we can present a percentage to the player.

Reorganizing the previous formula gives us the inverse function:

The formula for the inverse function

Using our example:

  • V2, the maximum shield value: 300
  • V1, the minimum shield value: 0
  • Y, the current shield value: 150

This yields a fraction of 0.5. To calculate the percentage, simply multiply by 100, resulting in 50%.

Inverse function, applied to our example

Fortunately, we don’t have to implement this formula manually either. Unity provides the Mathf.InverseLerp() method.

InverseLerp() accepts three parameters:

  • An initial value, 0 in our example.
  • A final value, 300 in our case.
  • The intermediate value for which we want the inverse interpolation, 150 in our example.

With these inputs, InverseLerp() returns 0.5.

It’s important to note that if the intermediate value lies outside the range—either below the initial value or above the final value—InverseLerp() returns 0 or 1, respectively.
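Rearranging the linear formula gives X = (Y − V1) / (V2 − V1). A minimal Python equivalent of Mathf.InverseLerp(), including the clamping behavior just described, applied to the shield example:

```python
def inverse_lerp(v1, v2, y):
    """Return the fraction of the way y lies between v1 and v2, clamped to [0, 1]."""
    fraction = (y - v1) / (v2 - v1)
    return min(max(fraction, 0.0), 1.0)

# Shield example: current value 150 in a 0-300 range.
print(inverse_lerp(0.0, 300.0, 150.0))  # 0.5
print(inverse_lerp(0.0, 300.0, 400.0))  # 1.0 (value above the range is clamped)
```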

Angular Interpolation

When working with angles, it can be tempting to use Lerp(), but this should be avoided. Angles are a special case because they wrap around when they reach their maximum of 360°. For instance, an angle of 380° is equivalent to 20°.

To handle this peculiarity, Unity provides the Mathf.LerpAngle() method.

This method accepts three parameters:

  • The initial angle in degrees.
  • The final angle in degrees.
  • The intermediate value between 0 and 1.

Be cautious, as the method doesn’t operate within the 0° to 360° range but instead within -180° to 180°.

For example, calling LerpAngle(0.0f, 190.0f, 1.0f) returns -170.0f: since 190° and -170° are the same angle, rotating 170° in the negative direction is shorter than rotating 190° in the positive direction. In other words, LerpAngle() always takes the shortest rotation. If you’re interested in the longer rotation (e.g., rotating the full 190° to the right), you would need to use Lerp().
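The shortest-rotation behavior comes from wrapping the difference between the two angles before interpolating. A minimal Python sketch of the same idea (clamping t to [0, 1], as Mathf.LerpAngle does):

```python
def lerp_angle(a, b, t):
    """Interpolate between angles a and b (degrees) along the shortest rotation."""
    delta = (b - a) % 360.0   # wrap the difference into [0, 360)
    if delta > 180.0:
        delta -= 360.0        # the other way around is shorter
    t = min(max(t, 0.0), 1.0)
    return a + delta * t

print(lerp_angle(0.0, 190.0, 1.0))  # -170.0
```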

Linear Interpolation with X-axis Values Beyond 0 and 1

By default, Lerp() only accepts intermediate values between 0 and 1, but there may be cases where we want values outside this range. Unity provides the Mathf.LerpUnclamped() method for this.

Its parameters are the same as Lerp(), but it allows intermediate values below 0 and above 1. The method simply extends the line beyond the [0,1] range to calculate the resulting value.
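Removing the clamp is the only difference; a minimal Python sketch:

```python
def lerp_unclamped(v1, v2, x):
    """Like lerp(), but extends the line beyond the [0, 1] range of x."""
    return v1 + x * (v2 - v1)

print(lerp_unclamped(0.0, 160.0, 1.5))   # 240.0
print(lerp_unclamped(0.0, 160.0, -0.5))  # -80.0
```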

Non-Linear Interpolation

So far, we have assumed that our range on the X-axis was limited to 0 and 1. All the intermediate values we fed into Lerp() were constrained to that range.

However, there may be situations where we want the X-axis to represent a different range, for example, because we want to apply different linear functions as we progress along the X-axis.

Imagine we have a spaceship, and we want its speed to be affected by accumulated damage, slowing the ship down as more damage is taken. However, we don't want the slowdown to be uniform; instead, we prefer that beyond a certain damage threshold, the slowdown accelerates.

We would have a graph where damage is represented on the X-axis and the slowdown on the Y-axis. From 0 to the damage threshold, the graph would be a line ascending gradually in the first segment, but at the threshold, it would have a bend, and from there, a second segment would begin, ascending much more steeply. Instead of a continuous straight line, we would have one that bends at a certain point. These types of functions are called nonlinear.

There are several ways to implement something like this. We could normalize the X-axis for the first segment using InverseLerp() with the initial, final, and current values of that segment on the X-axis, and use the resulting value to perform a Lerp() for the minimum and maximum values of the Y-axis for that segment. The problem is that we would need to repeat all those operations for the second segment.

We could simplify the calculations slightly by using the remap() function from Unity's Mathematics package. This method allows us to pass a source range (on the X-axis), a destination range (on the Y-axis), and an intermediate value within the source range to obtain the equivalent value in the destination range. This would save us from having to chain InverseLerp() and Lerp(), but it would still require applying the remap() method to each segment. Additionally, it would require us to install the Mathematics package via the Package Manager.
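The chaining that remap() saves us can be sketched in a few lines of Python; the damage and slowdown numbers below are made up for illustration:

```python
def remap(src_start, src_end, dst_start, dst_end, value):
    """Map value from the source range to the destination range by
    chaining an inverse interpolation with a forward one."""
    fraction = (value - src_start) / (src_end - src_start)
    return dst_start + fraction * (dst_end - dst_start)

# First segment of the damage graph: damage 0-50 mapped to slowdown 0-10.
print(remap(0.0, 50.0, 0.0, 10.0, 25.0))  # 5.0
```

Each segment of the graph would still need its own remap() call with that segment's source and destination ranges, which is the scaling problem the next section addresses.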

This method becomes unsustainable if our graph has multiple segments, and moreover, the transition between segments might be too abrupt.

In such cases, the best approach is to use AnimationCurves. As their name implies, they are designed to be used in the animation window, but we can also use them in our code. AnimationCurves allow us to define our function graphically, including as many segments, curves, and straight lines as we want.

For example, let's say we want to implement a vehicle with smooth acceleration and braking. If these were linear, they would feel unnatural. Instead, we are going to apply curves. To include them in our code, we could do something like this:

Including AnimationCurves in our code

The above fields would appear in the inspector as follows:

AnimationCurves in the inspector

Clicking on any of the AnimationCurves allows us to edit its shape. For example, the acceleration curve might look like this:

Acceleration curve

Notice that the curve above is normalized on both the X and Y axes, as both are confined to values between 0 and 1. However, this shouldn't be a problem given what we've learned so far. For example, to calculate the speed during acceleration, we could do:

Sampling an AnimationCurve

Given that the variable accelerationRadius defines the distance from the start at which the vehicle reaches its maximum speed, we know that this distance corresponds to X=1 on the acceleration curve. Therefore, to determine where we are on the acceleration curve, we perform an InverseLerp(), passing our distance from the starting point as the intermediate value (line 2).

The point obtained from the InverseLerp() is then passed to the Evaluate() method of the AnimationCurve, which returns the graph value at that point (line 3). Since the Y-axis of the graph corresponds to the speed, and Y=1 represents the maximum speed, we simply multiply the value returned by the curve by the maximum speed.
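The whole sampling step can be mimicked outside Unity with a piecewise-linear curve standing in for the AnimationCurve (a real AnimationCurve also smooths between keys, which this sketch does not). The keys, accelerationRadius, and maximum speed below are made-up example values:

```python
def inverse_lerp(v1, v2, y):
    """Fraction of the way y lies between v1 and v2, clamped to [0, 1]."""
    return min(max((y - v1) / (v2 - v1), 0.0), 1.0)

def evaluate(keys, t):
    """Sample a piecewise-linear curve defined by sorted (x, y) keys,
    analogous to AnimationCurve.Evaluate()."""
    if t <= keys[0][0]:
        return keys[0][1]
    for (x0, y0), (x1, y1) in zip(keys, keys[1:]):
        if t <= x1:
            fraction = (t - x0) / (x1 - x0)
            return y0 + fraction * (y1 - y0)
    return keys[-1][1]

acceleration_curve = [(0.0, 0.0), (0.5, 0.8), (1.0, 1.0)]  # eases off near the top
acceleration_radius = 20.0   # distance at which max speed is reached
max_speed = 100.0
distance_from_start = 10.0

t = inverse_lerp(0.0, acceleration_radius, distance_from_start)  # normalize distance
speed = evaluate(acceleration_curve, t) * max_speed              # scale by max speed
print(speed)  # 80.0
```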

As you can see, using an AnimationCurve spares you from having to apply different calculations for each segment and makes transitions much more natural and smooth.

Conclusion

Interpolations are invaluable in game development. The simplest cases can be resolved using Lerp and InverseLerp, with LerpAngle for angles. For more complex scenarios, AnimationCurve provides a powerful and flexible solution, especially when exposed in the inspector for visual editing.