
22 August 2025

C# Interfaces and the Unity Inspector

Interface contract
In C#, an interface is a contract that defines a set of methods, properties, events, or indexers that a class or structure must implement, without providing the actual implementation. It's like a blueprint that specifies "what" must be done, but not "how". When a class adheres to an interface, it publicly commits to providing at least the methods defined by that interface. Therefore, the class must implement the code for each of the interface's methods.

Interfaces in C# offer several advantages that make them fundamental in software design. The main ones are:

  • Abstraction: They define a contract without implementation details, allowing focus on "what" an object does, not "how" it does it.
  • Flexibility and extensibility: They make it easy to add new classes that implement the same interface without modifying existing code.
  • Multiple inheritance: They allow a class to implement multiple interfaces, overcoming C#'s single inheritance limitation. A class can only inherit from one base class but can implement several interfaces simultaneously.
  • Decoupling: They reduce dependency between components, promoting modular design and facilitating dependency injection.
  • Maintenance and testing: They simplify replacing implementations (e.g., in unit tests with mocks) by working with contracts instead of concrete classes.
  • Consistency: They ensure that classes implementing the interface adhere to a standard behavior, improving interoperability.
  • Support for design patterns: They are essential in patterns like Strategy, Factory, or Repository, which rely on abstractions.

All this is better understood with examples. In my case, I developed an abstract class called SteeringBehavior, which implements the basic functionality for the movement of an intelligent agent. Many child classes inherit from it and implement specialized movements.

The abstract class SteeringBehavior

Some (not all) of these child classes need a target to calculate their movement—whether to reach it, pursue it, look at it, etc. That target is defined with a Target field.

I could have included that field within the definition of the child classes, but that would have reduced flexibility. The problem arises when chaining SteeringBehavior classes. Some SteeringBehavior classes provide their functionality by composing the outputs of simpler SteeringBehavior classes. The flexibility of this approach comes from being able to build the dependency tree of the different SteeringBehavior classes in the inspector.

An agent built by composing several SteeringBehavior classes


Inspector references to child SteeringBehavior classes


There are some SteeringBehavior classes for which we know exactly the type of child SteeringBehavior they will need. In the screenshot above, you can see that ActiveWallAvoiderSteeringBehavior specifically needs a child of type PassiveWallSteeringBehavior. But there are also cases where the parent SteeringBehavior doesn't care about the specific type of the child, as long as it inherits from SteeringBehavior and provides a Target field (so the parent can pass its own Target on to it). In the screenshot, I passed a SeekSteeringBehavior, but I could have passed an ArriveSteeringBehavior or a PursuitSteeringBehavior to get slightly different movement types. If I had implemented the Target field through inheritance, I would have lost the flexibility to pass any SteeringBehavior with a Target field, and would have been forced to specify a concrete SteeringBehavior subclass.

Instead, I created an interface.

The ITargeter Interface


As shown in the screenshot, the interface couldn't be simpler. It's declared with the interface keyword and, in this case, simply states that classes adhering to this interface must provide a Target property.
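Since the screenshot isn't reproduced here, this is a minimal sketch of what such an interface looks like. The GameObject property type is an assumption on my part; the contract only requires a Target property.

```csharp
using UnityEngine;

// Sketch of the ITargeter contract described above. The property type
// (GameObject) is an assumption; only the Target property is required.
public interface ITargeter
{
    GameObject Target { get; set; }
}
```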

A class adhering to this interface only needs to implement that property.

A class implementing the ITargeter interface

As shown in the screenshot, the SeekSteeringBehavior class inherits from SteeringBehavior and also implements the ITargeter interface.
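A hedged sketch of that combination (this is not the article's exact code; the SteeringBehavior base class and the backing-field layout are my reconstruction):

```csharp
using UnityEngine;

// Sketch: a class that both inherits from the SteeringBehavior base
// class and fulfills the ITargeter contract. Details are illustrative.
public class SeekSteeringBehavior : SteeringBehavior, ITargeter
{
    [SerializeField] private GameObject target;

    // The only member ITargeter demands.
    public GameObject Target
    {
        get => target;
        set => target = value;
    }
}
```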

You may notice some overlap between abstract classes and interfaces. Although both are abstraction mechanisms, they have important differences in purpose and use. An interface defines a pure contract with method, property, event, or indexer signatures, without any implementation. It's ideal for specifying behaviors that different unrelated classes can share. An abstract class, on the other hand, is a class that cannot be instantiated directly and can contain both abstract (unimplemented) and concrete (implemented) members. It serves as a base for related classes that share common logic. In general, if you want to provide common behavior to related classes through a base class, you'd use abstract classes (possibly making the base class abstract); whereas if the common behavior occurs in heterogeneous classes, interfaces are typically used.

While an abstract class can include concrete methods, fields, constructors, and shared logic that derived classes can use directly, an interface cannot include method implementations or fields (only from C# 8.0 onward does it allow default methods, but with restrictions—generally everything must be implemented by the adopting class). Additionally, abstract classes allow access modifiers (public, private, protected, etc.) for their members, while in an interface all members are implicitly public and access modifiers cannot be specified. Also, abstract classes only allow single inheritance, while interfaces allow multiple inheritance.
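To make the contrast concrete, here is a small generic example of my own (the names are illustrative, not from the article): a class extends exactly one base class but may adopt several interface contracts at once.

```csharp
using System;

public interface IMovable { void Move(); }
public interface IDamageable { void TakeDamage(int amount); }

// An abstract base class can carry state and shared logic...
public abstract class Entity
{
    public string Name { get; set; }
    public int Health { get; protected set; } = 100;
}

// ...while a concrete class inherits from one base class only,
// but can implement any number of interfaces simultaneously.
public class Enemy : Entity, IMovable, IDamageable
{
    public void Move() { /* movement logic would go here */ }
    public void TakeDamage(int amount) => Health -= amount;
}
```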

Unity adds an additional difference, which is crucial for this article: it allows abstract classes to be shown in the inspector, but not interfaces. In one of the previous screenshots, you saw the inspector for the ActiveWallAvoiderSteeringBehavior class. Pay attention to its Steering Behavior field. The code that declares this field and shows it in the inspector is:

Declaration of the steeringBehavior field


Note that steeringBehavior is of type SteeringBehavior (an abstract class), and the inspector has no problem displaying it. Therefore, we can use the inspector to assign any script that inherits from SteeringBehavior to this field.

If we wanted this field to accept any script that implements ITargeter, we should be able to write [SerializeField] private ITargeter steeringBehavior. The problem is that while C# allows this, Unity does not allow interface-type fields to be exposed in the inspector.

So, how can we ensure that the steeringBehavior field receives not only a class that inherits from SteeringBehavior, but also implements the ITargeter interface? The answer is that Unity doesn't offer any built-in safeguard, but we can develop our own, as I’ll explain.

It will be easier if I first show you what we want to achieve and then explain how to implement it. The solution I found was to design my own PropertyAttribute to decorate fields that I want to verify for compliance with one or more interfaces.

Custom attribute to confirm interface compliance


In the screenshot above, you can see that I named my attribute InterfaceCompliant. In the example, the attribute verifies that the SteeringBehavior passed to the steeringBehavior field complies with the ITargeter interface.
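In use, the decoration might look like this (field and type names are taken from the article, but the exact syntax of this sketch may differ from the screenshot):

```csharp
using UnityEngine;

public class ActiveWallAvoiderSteeringBehavior : SteeringBehavior
{
    // The attribute warns in the inspector if the assigned
    // SteeringBehavior does not also implement ITargeter.
    [SerializeField]
    [InterfaceCompliant(typeof(ITargeter))]
    private SteeringBehavior steeringBehavior;
}
```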

If the script passed to the field does not comply with the interface, the attribute will display a warning message in the inspector.

Warning message when a script does not comply with the interface


Now that we understand what we want to achieve, let’s see how to implement it. Like other custom attributes, we need to create a class that inherits from Unity's PropertyAttribute. In it, we define a constructor whose parameters are the arguments passed to the attribute when it decorates a field.

Implementation of our custom attribute


Note that the name of this class must be the name of our attribute followed by the suffix "Attribute". The class should be placed inside the Scripts folder.

This class serves as the data model for the attribute. We’ll store in it all the information needed to perform our check and display the warning message if necessary. In this case, the attribute accepts a variable number of parameters (hence the params keyword). These parameters are the types of our interfaces and will be passed in an array stored in the InterfaceTypes field.
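Reconstructed from the description above, the attribute's data model might look roughly like this (a sketch, not the article's exact code):

```csharp
using System;
using UnityEngine;

// Data model for the attribute: it just stores the interface types
// that the decorated field must comply with.
public class InterfaceCompliantAttribute : PropertyAttribute
{
    public readonly Type[] InterfaceTypes;

    // params lets callers pass a variable number of interface types.
    public InterfaceCompliantAttribute(params Type[] interfaceTypes)
    {
        InterfaceTypes = interfaceTypes;
    }
}
```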

To represent the custom attribute in the inspector, as well as the field it decorates, we’ll use a PropertyDrawer. These are classes that inherit—surprise!—from PropertyDrawer and must be placed in an Editor folder, separate from the Scripts folder. We can name our class whatever we want, but we must decorate it with a CustomPropertyDrawer attribute to indicate which attribute this PropertyDrawer draws in the inspector.

PropertyDrawer to draw our attribute and the field it decorates

In our case, I used the CustomPropertyDrawer attribute to indicate that this PropertyDrawer should be used whenever a field is decorated with an InterfaceCompliantAttribute.

PropertyDrawer offers a method called CreatePropertyGUI that we override. The inspector calls this method when it wants to draw a field decorated with this attribute.

Implementation of the CreatePropertyGUI method of InterfaceCompliantAttributeDrawer

A class inheriting from PropertyDrawer has a reference to the attribute it draws via the attribute field. That’s why in line 22 we cast it to the InterfaceCompliantAttribute type, which we know it actually is, to retrieve the list of interfaces to check in line 23.

Note that while the attribute can be retrieved through PropertyDrawer’s attribute field, the decorated field itself is passed to the method as a parameter (property), encapsulated in a SerializedProperty (line 19).

Next, in line 26, we generate the visual container in which we’ll place the widgets to represent our field in the inspector, as well as the warning message if necessary. For now, that visual container is empty, but we’ll add content to it in the following lines. As we add elements to the container, they will be displayed top to bottom in the inspector.

If you look at the screenshot showing the warning message in the inspector, you’ll see that the message appears before the field, so the first element we’ll add to the container is the warning message, if necessary (line 28). This part is quite involved, so I moved it to its own method (AddErrorBoxIfNeeded) to make CreatePropertyGUI easier to read and to allow reuse of that method in line 45. Once we finish explaining the implementation of CreatePropertyGUI, we’ll look at AddErrorBoxIfNeeded.

After adding the warning message (if necessary) to the container, we need to display the decorated field. We want to show it with its default appearance in the inspector, so we simply create a PropertyField (line 31). PropertyField will detect the type of the field encapsulated in property and display it using Unity’s default widget for that type.

Now pay attention. We want the interface compliance to be re-evaluated every time a new script is assigned to the decorated field. To do this, in line 43 we instruct the container to monitor the value of property and re-evaluate whether to show the warning message (line 45), redrawing the field (line 46).

Once this is done, we add the decorated field to the container (line 50) and return this container for the inspector to draw.
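Pulling the walkthrough together, a hedged reconstruction of the drawer might look like this. The AddErrorBoxIfNeeded signature is my guess; the container-tracking call assumes UI Toolkit's TrackPropertyValue extension, which matches the "monitor the value of property" behavior described above.

```csharp
using System;
using UnityEditor;
using UnityEditor.UIElements;
using UnityEngine.UIElements;

// Sketch of the drawer, reconstructed from the article's walkthrough.
[CustomPropertyDrawer(typeof(InterfaceCompliantAttribute))]
public class InterfaceCompliantAttributeDrawer : PropertyDrawer
{
    public override VisualElement CreatePropertyGUI(SerializedProperty property)
    {
        // The drawer reaches its attribute through the inherited field.
        var interfaceAttribute = (InterfaceCompliantAttribute)attribute;

        var container = new VisualElement();

        // Warning message first (only added if the check fails)...
        AddErrorBoxIfNeeded(container, property, interfaceAttribute.InterfaceTypes);

        // ...then the decorated field with its default widget.
        var propertyField = new PropertyField(property);

        // Re-run the check whenever a new value is assigned to the field.
        container.TrackPropertyValue(property, changedProperty =>
        {
            AddErrorBoxIfNeeded(container, changedProperty,
                interfaceAttribute.InterfaceTypes);
        });

        container.Add(propertyField);
        return container;
    }

    // Checks compliance and adds/clears the warning box in the container.
    private void AddErrorBoxIfNeeded(VisualElement container,
        SerializedProperty property, Type[] interfaceTypes)
    {
        // Implementation analyzed in the article's walkthrough.
    }
}
```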

As promised, let’s now analyze the implementation of AddErrorBoxIfNeeded.

Checking the decorated field for interface compliance


To access the field encapsulated within a SerializedProperty, we use its objectReferenceValue field (line 70).

This field is of type Object, so we’ll need to cast it to the specific type we want to check. We do this in the GetNotComplyingInterfaces method on line 74. We’ll look at its implementation shortly, but at a high level, this method takes a list of interfaces (line 75) and an object (line 76), then casts that object against each interface. If any of those casts fail, the type of the failed interface is added to a list that is returned as the method’s output (line 74).

This list is passed to the GenerateErrorBox method, which draws the warning message if the list contains any elements. If the list is empty, the method draws nothing. The drawn message is returned as an ErrorBox (line 79), which is added to the container (line 85) after clearing its previous content (line 82).

The interface check itself is very simple.

Interface check on the script in the field


Starting at line 120, for each interface in the list (passed as parameters to the attribute), we check whether the script (checkedObject) implements it. To do this, we use C#’s IsInstanceOfType method (line 123). I’ve always found IsInstanceOfType misleading, because read left to right it seems to check whether "interfaceType is an instance of checkedObject", when in reality, it checks whether checkedObject is an instance of interfaceType.

If checkedObject does not implement the interface, IsInstanceOfType returns false, and the interface type is added to the list of non-compliant interfaces (line 125), which is returned as the method’s output.
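The check can be sketched in plain C# like this. In the real drawer, checkedObject would be the UnityEngine.Object taken from property.objectReferenceValue; here I use plain object (and a helper class name of my own) so the sketch stands alone:

```csharp
using System;
using System.Collections.Generic;

// Illustrative reconstruction: returns the interfaces that
// checkedObject fails to implement.
public static class InterfaceChecker
{
    public static List<Type> GetNotComplyingInterfaces(
        Type[] interfaceTypes, object checkedObject)
    {
        var notComplying = new List<Type>();
        foreach (Type interfaceType in interfaceTypes)
        {
            // IsInstanceOfType reads "backwards": it checks whether
            // checkedObject is an instance of interfaceType.
            if (!interfaceType.IsInstanceOfType(checkedObject))
                notComplying.Add(interfaceType);
        }
        return notComplying;
    }
}
```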

Finally, for completeness, let’s look at how the error message is generated in the GenerateErrorBox method.

Generating the warning message


If the script being checked complies with all the interfaces, the list passed as a parameter to GenerateErrorBox would be empty, and the method would simply return null (line 90).

However, if there were any non-compliance, the names of the non-compliant interfaces would be concatenated, separated by commas, as shown in lines 93 to 95.

That list of interface names, concatenated with commas, would be added to the end of the warning message (line 98), and with it, an error message would be created and returned to be added to the container (line 102).
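A hedged sketch of that logic, using UI Toolkit's HelpBox widget (the helper class name and the message wording are mine):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using UnityEngine.UIElements;

// Sketch of the error-box generation described above.
public static class ErrorBoxGenerator
{
    public static HelpBox GenerateErrorBox(List<Type> notComplyingInterfaces)
    {
        // Full compliance: nothing to draw.
        if (notComplyingInterfaces.Count == 0) return null;

        // Chain the offending interface names with commas.
        string names = string.Join(", ",
            notComplyingInterfaces.Select(type => type.Name));

        return new HelpBox(
            $"Assigned object does not implement: {names}",
            HelpBoxMessageType.Warning);
    }
}
```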

And with that, you have everything. It's true that you won't be able to prevent scripts that don't comply with the interfaces from being assigned to the decorated field, but at least a message will appear warning of the issue. It will be very difficult not to notice the message and correct the oversight. On the other hand, this entire example has served to show you how to implement a custom attribute that leverages the flexibility of Unity's inspector. I hope you found it interesting.

02 August 2025

Resources in Godot and the Risk of Sharing Them

In Godot, Resources are a type of object used to store and manage reusable data within a project. They are modular components that can be created, saved, loaded, and shared among different nodes or scenes within the engine. Resources are essential for organizing and structuring projects efficiently.

They are designed to store specific information, such as configurations, properties, or custom data, which can be accessed or modified by nodes or other systems. Resources can be saved as files (usually with the .tres extension for textual resources or .res for binary ones) and loaded into the project using ResourceLoader. This allows resources to be shared across scenes or even projects, as they are not tied to a specific scene and can be referenced by any node or script.

Advantages of using resources include:

  • Modularity: Facilitates data reuse without duplication.
  • Organization: Helps keep data separate from code or scenes.
  • Flexibility: You can modify a resource and the changes will be reflected in all nodes that use it.

Godot comes with many predefined resources, but you can also create your own by extending the Resource class. For example, in a project I'm developing in Godot C#, I created the following resource:

Custom Resource Example in Godot C#

As you can see, it's straightforward. You just inherit from Resource and define the fields/properties you want the resource to have. What deserves special mention are the attributes.

As in any other Godot script, the [Export] attribute makes the decorated field editable in the inspector when the resource is used.

The [GlobalClass] attribute on line 7 is needed to make your custom resource available in the editor's search tool when you want to use it.

The [Tool] attribute is only necessary if you need to access the contents of your resources from code that runs in the editor (not just in the game).

This particular resource is used to create an array of them in another node and store links to other nodes present in the scene (that link is precisely the NodePath field). I could have added methods to the resource—nothing prevents you from doing so—for example, to validate the data stored in the resource, but in my case, it wasn’t necessary.
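A hedged reconstruction of such a resource (the NodePath field comes from the description above; the Weight field and the class name are assumptions based on the WeightedBehavior class mentioned later):

```csharp
using Godot;

// Sketch of a custom Godot C# resource like the one described above.
[Tool]        // only needed if editor-time code reads the resource
[GlobalClass] // makes the resource appear in the editor's search tool
public partial class WeightedBehavior : Resource
{
    // Link to another node present in the scene.
    [Export] public NodePath NodePath { get; set; }

    // Illustrative extra datum; the real resource's fields may differ.
    [Export] public float Weight { get; set; } = 1.0f;
}
```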

Example of Using My Custom Resource

When you click on each resource, it expands and you can view and edit its contents.

Editing Resource Contents

If you come from the Unity world, resources are equivalent to ScriptableObjects since they also allow saving their instances as assets in the project. 

They also fill the role of Unity’s serializable classes. Godot C# does have access to C#’s [Serializable] attribute, but unlike Unity, Godot cannot display serializable classes in the inspector. The alternative is to create a class that inherits from Resource with the fields you would have put in the serializable class in Unity. I have a Unity version of the entire example above, and there I solved it by making WeightedBehavior a serializable class.

Resources are ubiquitous in Godot—you’ll find them everywhere. For example, whenever you want to use an Area2D node to create a detection area, that node requires attaching another node of type CollisionShape2D to define the boundaries of the detection area.

An Area2D Node Uses a CollisionShape2D Node to Define Its Area

Interestingly, you can’t define the shape of the CollisionShape2D until you create a new resource in its Shape field.

A CollisionShape2D Node with an Associated RectangleShape2D Resource

This reflects Godot’s obsession with separating responsibilities. In this case, the CollisionShape2D node simply implements functionality in an area, but the shape of that area is stored in a resource (in this case, RectangleShape2D). 

If you expand the combo box, you’ll see that RectangleShape2D is not the only resource you can associate with that field.

Different Resources That Can Be Associated with the Shape Field

If you’re familiar with Unity’s ScriptableObjects, you’ll intuit the usefulness of Godot’s resources. If not, you’ll realize it as you use the engine. Ultimately, you’ll likely model resources in your mind as “little boxes of data.”

One of the uses of these “little boxes” is that multiple nodes can read data from the same box instead of duplicating that data in each node. This is an efficiency measure that Godot follows by default, transparently to the user.

Usually, this is fine, but sometimes it can cause surprises if you’re not aware of this behavior. Here’s an example I just experienced firsthand.

In my project, I developed a cone sensor. It’s essentially a vision cone. The scene that makes up the sensor includes the following nodes:

Nodes That Make Up My Sensor

The earlier screenshots of CollisionShape2D were taken from this scene. 

The general functionality is that the definition of the cone’s range and angle parameters (defined from another node) causes the BoxRangeManager node to resize the CollisionShape2D so that only objects within its area are detected. Then, a finer filtering is done by angle and distance, but the first “coarse” filtering is whether the detected object is inside the box.

An Agent in My Project with the Sensor Added

In the screenshot above, the blue box is the area of the RectangleShape2D. With just one agent, everything worked perfectly. The problem started when I added a second agent with another cone sensor.

Example of a Problem When Instantiating Scenes with Resources

To my surprise, the respective RectangleShape2D resources (note the italics) insisted on having the same size. When I changed the dimensions in one agent, both areas adopted that agent’s dimensions. When I changed them in the other agent, both areas adopted the new dimensions.

It took me a while to realize the problem. The issue was that there weren’t two independent RectangleShape2D resources. Since RectangleShape2D is a resource, all sensor instances were using and modifying the same resource. Hence the italics earlier—there weren’t “respective” RectangleShape2D resources, but a single one shared by both agents’ sensors.

How do you fix this? Are we doomed to use a single sensor instance? Obviously not, but the solution isn’t immediately obvious.

The key is to go back to the CollisionShape2D node’s configuration and click on the RectangleShape2D resource to configure it internally.

Internal Configuration of the RectangleShape2D Resource

Inside that configuration, the Size field is obvious and corresponds to the resource’s dimensions. The key field is within the Resource section and is called Local To Scene. By default, it’s unchecked, meaning all instances of the current scene will share this resource. In other cases, this configuration might be beneficial, but in my case, it caused the problem I just described.

Therefore, the solution was to check the Local To Scene field. Doing so causes each scene instance to create its own copy of the resource and work with it. After checking it and saving the scene, both agents correctly displayed their respective (now truly) RectangleShape2D configurations.

Problem Solved

So be careful when using resources in your Godot scenes. Keep in mind that Godot’s default behavior is to share resources among different scene instances. Therefore, if you detect a case where instances need to customize the resource’s values, you should anticipate problems and check the Local To Scene option so that each instance has its own copy of the resource.
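If you prefer to handle this from code instead of (or in addition to) the checkbox, one option is to duplicate the shared resource when the scene instance enters the tree, so each sensor works on its own copy. This is a sketch under assumptions: the node path and class name are mine, and I'm relying on Resource.Duplicate() returning an independent copy.

```csharp
using Godot;

// Illustrative sensor script: give this instance its own copy of the
// collision shape so resizing it no longer affects other instances.
public partial class ConeSensor : Node2D
{
    public override void _Ready()
    {
        // The node path is an assumption about the scene layout.
        var collisionShape = GetNode<CollisionShape2D>("Area2D/CollisionShape2D");

        // Duplicate() returns an independent copy of the resource.
        collisionShape.Shape = (Shape2D)collisionShape.Shape.Duplicate();
    }
}
```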

I hope this article helps you avoid problems like the one I had.

13 July 2025

Using Timers in Unity and Godot


Timers in video game development are tools or mechanisms that allow measuring and controlling time within the game. They are used to manage events, actions, or behaviors that depend on time, whether in real-time or in game update cycles (frames).

They are a fundamental tool for controlling temporal logic in video games, from basic mechanics to complex systems, and are therefore essential in many aspects of game design and development.

There are many uses and cases for timers. Here are some of the most notable examples:

  • Control of timed events: For example, executing actions after a specific time, such as triggering an animation, spawning enemies, or displaying a message on screen.
  • Time-based game mechanics: For instance, implementing time limits to complete a mission, solve a puzzle, or survive a wave of enemies.
  • Synchronization of animations and effects: To coordinate animations to occur at the right moment, such as explosions, transitions, or visual effects.
  • Cooldown management: To control the cooldown time of abilities, special attacks, weapon reloads, or player actions.
  • System update frequency: To regulate how often certain systems (like enemy AI or item generation) update to optimize performance.
  • Cyclic or repetitive events: To create patterns that repeat at regular intervals, such as enemy waves, weather changes, or day/night cycles.
  • Multiplayer synchronization: Used to ensure that events in multiplayer games are synchronized across all clients, such as match start or state updates.
  • Delay or wait effects: Introduce intentional pauses to improve gameplay or narrative, such as waiting before showing dialogue or triggering an event.

Based on the above, we can distinguish the following types of timers:

  • Game-time based: Progresses at the same rate as game time. For example, if the game is slowed down (e.g., bullet time), the timer slows down accordingly.
  • Real-time based: Uses the system clock and progresses uniformly regardless of game time speed.
  • Countdown: Starts with a value and decreases to zero, triggering an event when finished.
  • Cyclic timers: Automatically restart for repetitive events, like automatic shooting or health regeneration.

General Implementation

All engines offer a function that executes periodically. Unreal calls it Tick(), Unity uses Update(), and Godot C# uses _Process(). These functions usually receive a value called deltaTime, which is the time elapsed since the last call.

Assuming we call this function Update(), we could use it to create a timer with the following pseudocode:

Pseudocode to implement a timer
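The idea can be fleshed out in plain C# roughly like this (class and member names are mine, not from the screenshot):

```csharp
using System;

// Minimal sketch of a manual countdown timer driven by a per-frame
// update, as described in the pseudocode above.
public class ManualTimer
{
    public float WaitTime = 2f;   // timer duration in seconds
    public bool OneShot = false;  // restart automatically when false
    public bool Finished { get; private set; }
    public event Action Timeout;

    private float _elapsed;

    // Call once per frame with the time elapsed since the previous call.
    public void Update(float deltaTime)
    {
        if (Finished) return;
        _elapsed += deltaTime;
        if (_elapsed >= WaitTime)
        {
            Timeout?.Invoke();
            if (OneShot) Finished = true;
            else _elapsed -= WaitTime; // keep the remainder for accuracy
        }
    }
}
```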

Many engines offer two versions of this periodic function: one that runs when a new frame is rendered (idle frame), and another that runs at fixed time intervals (physics frame). The latter is better for implementing timers, as graphics cards do not render frames at a constant rate.

Implementation in Godot

Following its "a node for each task" philosophy, Godot has a specialized timer node called Timer. This node is ideal for using timers without implementing them from scratch.

The node offers the following configuration fields:

Timer node fields

  • Process Callback: Defines whether the timer progresses with idle frames or physics frames.
  • Wait Time: Duration of the timer in seconds.
  • One Shot: If checked, the timer runs once and stops. Otherwise, it restarts automatically.
  • AutoStart: If checked, the timer starts when added to the scene tree. Otherwise, it must be started manually.
  • Ignore Time Scale: If checked, the timer uses real time; otherwise, it uses game time.

The node emits the timeout() signal when the timer reaches the set Wait Time.

timeout signal

By configuring the Timer node in the inspector and connecting its timeout() signal to a method, most use cases are covered.

This node can also be manipulated from code. Let’s look at an example of how to use it. Imagine we want to program a cooldown, such that once a certain method is executed, it cannot be executed again until the timer has finished. Let’s suppose this method is called GetSteering() and it’s called from the _Process() method of another node.

To start the cooldown, we could call an activation function right at the end of the GetSteering() method.

Example of starting a cooldown

The cooldown activation function is called StartAvoidanceTimer() in this case, and it simply starts the Timer node’s countdown and sets a flag to true:

Implementation of the cooldown start method

If a Timer does not have the AutoStart field enabled, we’ll need to call its Start() method for it to work, as seen on line 235.

The flag set to true on line 236 can be used as a guard at the beginning of the method we want to limit execution of.

Using the flag as a guard to control whether the method executes

The example in the screenshot might be a bit convoluted if you’re not familiar with the development context in which the method is used, but it essentially works by calculating a path to a target (line 157). If there’s no other agent we might collide with, and no timer is active, the calculated path is followed (line 160). If, on the other hand, a timer is running, the agent continues along the escape route calculated in the previous call to GetSteering() (line 162). The method is only fully executed if a potential collision is detected and no timer is active, in which case it continues calculating an escape route to avoid the collision (from line 164 onward).

Without a cooldown, my agent would detect a possible collision ahead, calculate an escape route, and turn to avoid the collision. The problem is that in the next frame, the obstacle would no longer be in front of it, so it would see the shortest path to its target and turn to face it again, ending up right back in front of the obstacle we were trying to avoid. This cycle would repeat. To prevent this, an escape route is calculated and followed for a set time (the cooldown) so the agent moves significantly away from the obstacle, avoiding the collision; only then is the path to the target reevaluated.

The Timer configuration can be done from the _Ready() method of the node that owns the Timer.

Configuring a Timer in the _Ready method

Ignore the lines before line 83, they have nothing to do with the Timer, but I’ve left them so you can see a real example of Timer usage.

In line 83, I load a reference to the Timer node that is a child of the node executing this script. With that reference, we could configure all the fields available in the Timer’s inspector. In my case, it wasn’t necessary because I didn’t need to change the Timer’s configuration already set in the inspector, but I did manually connect a method to the Timer’s signal (line 84). In this case, the connected method is OnAvoidanceTimeout(). I could have connected it using the "Node" tab in the inspector, but I preferred to do it via code.

The OnAvoidanceTimeout() method couldn’t be simpler, as it just sets the flag to false, so that in the next frame the GetSteering() method knows there are no active timers.
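Gathering the cooldown pieces described above into one place, a hedged sketch might look like this. The class name and the "AvoidanceTimer" node name are assumptions of mine; only the method names come from the article.

```csharp
using Godot;

// Consolidated sketch of the cooldown pattern described above.
public partial class AvoidanceBehavior : Node
{
    private Timer _avoidanceTimer;
    private bool _waitingForAvoidanceTimeout;

    public override void _Ready()
    {
        // Get the child Timer node (its name here is an assumption).
        _avoidanceTimer = GetNode<Timer>("AvoidanceTimer");

        // Connect the timeout() signal from code; the inspector's
        // "Node" tab would work just as well.
        _avoidanceTimer.Timeout += OnAvoidanceTimeout;
    }

    // Called at the end of GetSteering() to begin the cooldown.
    private void StartAvoidanceTimer()
    {
        _avoidanceTimer.Start(); // required when AutoStart is off
        _waitingForAvoidanceTimeout = true;
    }

    private void OnAvoidanceTimeout()
    {
        // Lower the guard so the next GetSteering() call runs fully.
        _waitingForAvoidanceTimeout = false;
    }
}
```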

Callback connected to the Timer signal

Implementation in Unity

Unity doesn’t have a specialized component like Godot’s Timer node, but creating one is very easy using coroutines.

Let’s see how we could create a component that emulates the functionality provided by Godot’s node. We could call our component CustomTimer:

Start of our Timer class for Unity

The fields this component could offer would be the same as those of the aforementioned node.

Class fields exposed to the inspector

With this, what we’d see in the inspector would be the following:

Inspector view of our CustomTimer class

If autoStart is set to true, the component would simply call the StartTimer() method from Awake(), as soon as the script starts.

Timer start

The method called on line 50 of the Awake method simply starts a coroutine and stores a reference to it so it can be controlled externally (line 55).

The body of the launched coroutine is very simple:

Coroutine body of the Timer

The coroutine simply waits for the number of seconds set in waitTime. Depending on the type of time we want to use (game time or real time), it waits for game seconds (line 65) or real-time seconds (line 68).

After that time, the timeout event is triggered (line 72).

If the timer is single-use, the coroutine ends (line 74); otherwise, the cycle repeats (line 60).

What’s the purpose of the reference we stored on line 55 of the StartTimer() method? It allows us to stop the coroutine early if needed, using the StopCoroutine method.

Code for early stopping of the timer
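Putting the pieces described above together, a possible sketch of the whole component could look like this. Only `autoStart`, `waitTime`, `StartTimer()`, and the coroutine behavior come from the article; `oneShot`, `ignoreTimeScale`, `StopTimer()`, and the `Timeout` event are names I'm assuming to fill in the parts shown only in the screenshots:

```csharp
// Hypothetical reconstruction of the CustomTimer component.
using System;
using System.Collections;
using UnityEngine;

public class CustomTimer : MonoBehaviour
{
    [SerializeField] private float waitTime = 1f;   // Seconds to wait.
    [SerializeField] private bool oneShot = true;   // Stop after one cycle?
    [SerializeField] private bool autoStart;        // Start from Awake()?
    [SerializeField] private bool ignoreTimeScale;  // Real time vs game time.

    public event Action Timeout;                    // Like Godot's timeout signal.

    private Coroutine _timerCoroutine;

    private void Awake()
    {
        if (autoStart) StartTimer();
    }

    public void StartTimer()
    {
        StopTimer();  // Never let two countdowns overlap.
        _timerCoroutine = StartCoroutine(TimerCoroutine());
    }

    public void StopTimer()
    {
        if (_timerCoroutine != null)
        {
            StopCoroutine(_timerCoroutine);
            _timerCoroutine = null;
        }
    }

    private IEnumerator TimerCoroutine()
    {
        do
        {
            // Wait in game seconds or real-time seconds.
            if (ignoreTimeScale)
                yield return new WaitForSecondsRealtime(waitTime);
            else
                yield return new WaitForSeconds(waitTime);
            Timeout?.Invoke();
        } while (!oneShot);  // Single-use timers end here.
        _timerCoroutine = null;
    }
}
```

Note that `StartTimer()` stops any running coroutine first, so restarting the timer before a countdown ends behaves predictably.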

A GameObject with a CustomTimer component like the one described could use it from another GameObject’s script by getting a reference to the component:

Getting a reference to CustomTimer

From that reference, using the Timer would be identical to what we saw with Godot’s node.

Using the CustomTimer component

Since this example is the Unity version of what we already saw for Godot, I won’t go into more detail.

Implementation in C#

What do Godot C# and Unity have in common? Exactly, their programming language: C#. This is a general-purpose language and comes “batteries included,” meaning it has lots of modules and libraries for doing many things. One of those modules offers a timer with very similar functionality to what we’ve already seen.

If C# already offers a timer, why is it so common to see implementations like the ones above in Godot and Unity? Why not use the C# timer directly?

I think the answer is simplest in Godot’s case. Godot’s default language is GDScript, which doesn’t come “batteries included”; or rather, its “batteries” are the thousands of nodes Godot offers. That’s why Godot C# inherits access to the Timer node: it’s already available from GDScript code.

The Unity case is harder to answer, and I think it’s due to a lack of awareness of what C# offers on its own. Also, we’ve seen that creating a custom timer is very easy using a mechanism as familiar to Unity developers as coroutines. I think that ease of creating your own timer has led many to never explore what C# offers in this regard.

The timer C# offers is System.Timers.Timer, and its usage is very similar to what we’ve seen in Unity and Godot.

Using C#’s native timer

In line 103 of the screenshot, you can see that you need to pass the desired duration, in milliseconds, to its constructor.

The AutoReset field on line 105 is the inverse of Godot’s One Shot: with AutoReset set to true, the timer restarts after each countdown instead of stopping after the first one.

In line 106, you can see that the event this timer triggers is Elapsed (equivalent to Godot’s timeout signal). You can subscribe callbacks (like OnTimerTimeout) to this event just like we did in the previous versions for Godot and Unity.

Finally, this C# timer also has Start() (line 117) and Stop() methods to start and stop it.

The only precaution to take with this timer is to never call its Start() method unless you’re sure the previous countdown has finished. If you need to start a new countdown before the previous one ends, call Stop() first and then Start(). The reason is that this timer is designed for multithreaded environments: successive calls to Start() don’t cancel the previous countdowns but let them keep running on separate threads, so the countdowns run in parallel and the events fire as each one finishes, which can look like irregular timing since they come from different countdowns.
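As a hedged sketch (pure .NET, outside any engine), the usage described above could look like this; the `OnTimerTimeout` name mirrors the article's callback, the rest is illustrative:

```csharp
using System.Timers;

// Duration goes to the constructor, in milliseconds.
var timer = new System.Timers.Timer(2000);
timer.AutoReset = true;          // Repeat after each countdown.
timer.Elapsed += OnTimerTimeout; // Subscribe to the Elapsed event.
timer.Start();

// ... later, before starting a new countdown, stop the previous one:
timer.Stop();
timer.Start();

void OnTimerTimeout(object sender, ElapsedEventArgs e)
{
    // Beware: this runs on a ThreadPool thread, not the game's main thread.
}
```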

Conclusion

If you program in C# in both Godot and Unity, there are very few reasons not to use the C# timer. It’s lightweight, efficient, and being cross-platform, it allows you to reuse code between Unity and Godot more easily.

The only reason I can think of to keep using Godot’s node is for manipulation from the inspector when integrating with other game elements; but beyond that, I don’t see much use in continuing to use either Godot’s node or Unity’s coroutine-based timers.

25 June 2025

How to implement Gizmos and Handles in Godot

A cone view gizmo

Gizmos allow you to draw lines and shapes in the editor window to visually represent an object’s values.

For example, imagine you want to create a component that implements a vision cone. A vision cone is modeled with two parameters:

  • Range: This is the distance between the observer and the observed object beyond which the latter becomes invisible.
  • Aperture: This is the angle of the cone. It is generally calculated as an angle from the vector marking the observer’s gaze direction. If the object is at an angle relative to the observer greater than the aperture angle, then the object is not visible.

Thus, your component can have these two parameters as fields, and you can edit them from the component’s inspector. The problem is that it’s very difficult to set the ideal values for these parameters without a visual reference of the result. It’s easier to understand what falls within the vision cone if, when setting the aperture in the inspector, a triangle representing the vision cone with that angle appears in the scene editor.

Representing these geometric shapes, which help us visualize the values of our component’s fields, is exactly what Gizmos are for.

Of course, Gizmos will only appear in the editor. They are aids for the developer and level designer but will be invisible when the final game starts.

In fact, if you’ve been practicing with Godot, you’ve likely already used Gizmos, for example, when setting the shape of a CollisionShape. The boxes, circles, and other geometric shapes that appear in the editor when you configure a CollisionShape are precisely Gizmos, drawn as we will see here.

If you look closely at CollisionShapes, you’ll notice that, in addition to the Gizmo, there are points you can click and drag to change the component’s shape. These points are called Handles and are the “grabbers” for manipulating what we represent in the editor. In this article, we’ll also see how to implement our own Handles.

Side view of a CollisionShape with a box shape. The blue edges are the Gizmo, representing the shape, and the red points are the Handles, for changing the shape.

Gizmos are associated with the nodes they complement, so Godot knows which Gizmos to activate when a specific node is included in the scene. As we’ve seen, many of Godot’s nodes already have associated Gizmos (as seen with CollisionShape nodes).

Creating a Custom Node

Since I don’t want to mess with the default Gizmos of Godot’s nodes, we’ll start by adding a custom node to Godot’s node list. We’ll associate a Gizmo with its respective Handles to this custom node to serve as an example.

In our example, to create this custom node, I’ve created a folder called nodes/CustomNode3D inside the project folder. In that folder, we can create the script for our custom node by right-clicking the folder and selecting Create New > Script.... A pop-up window like the one below will appear, where I’ve filled in the values for this example:

The script creation window

Once the script is generated, we only need it to implement two public exported Vector3 properties. I’ve called them NodeMainPoint and NodeSecondaryPoint:


[Export] public Vector3 NodeMainPoint { get; set; }

[Export] public Vector3 NodeSecondaryPoint { get; set; }

 

I’m not including a screenshot because we’ll add code to the setter part later.

The idea is that dragging the Handles in the editor updates the values of the two properties above. The reverse should also work: if we change the property values in the inspector, the Handles should reposition to the locations indicated by the properties. Additionally, we’ll draw a Gizmo in the form of a line, from the node’s origin to the positions of the properties.

This should be enough to illustrate the main mechanics: representing a node’s properties with Gizmos and modifying those properties using Handles.

The next step will be to create an addons folder inside the Godot project. Custom Gizmos and Handles are considered plugins, so the consensus is to place them in an addons folder within the project.

Once that’s done, go to Project > Project Settings ... > Plugins and click the Create New Plugin button. A window like the one below will appear, where I’ve already filled in the values for the example:

The plugin creation window

Note that the folder specified in the Subfolder field of the previous window will be created inside the addons folder we mentioned earlier. The same applies to the plugin script, defined in the Script Name field.

Also, notice that I’ve unchecked the Activate now? checkbox. GDScript plugins can be activated immediately with the generated template code, but C# plugins require some prior configuration, so they will throw an error if activated with the default template code. The error won’t break anything, but it displays an error window that needs to be closed, which looks messy. So, it’s best to uncheck that box and leave the activation for a later step, as we’ll see below.

After doing this, a folder will be generated inside addons, containing a plugin.cfg file and the C# script from the previous window. The purpose of this script is to register the type represented by CustomNode3D in Godot so that we can select it from the engine’s node list. Remember that CustomNode3D inherits from Node3D, so it makes sense to include it alongside other nodes.

As with any other plugin, CustomNode3DRegister will need to inherit from the EditorPlugin class and implement the _EnterTree() and _ExitTree() methods. In the first method, we’ll register CustomNode3D as an eligible node in the node list, and in the second, we’ll deregister it so it no longer appears in the list. The implementation is straightforward:

addons/custom_node3D_register/CustomNode3DRegister.cs

As you can see, in the _EnterTree() method, we load two things: the script associated with the custom node and the icon we want to use to represent the node in Godot’s node list. For the icon, I’ve used the one included in all Godot projects, copying it from the root into the custom node’s folder.

Then, we associate these elements with a base node using the AddCustomType() method, which registers the custom node in the node list. Since the custom node’s script inherits from Node3D, we’ve used that as the base class in the AddCustomType() call. With this call, when we select CustomNode3D from the node list, a Node3D will be created in the scene, and the script we defined will be associated with it.

The implementation of _ExitTree() is the opposite: we use the RemoveCustomType() method to remove the custom node from the node list.
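The real code is in the screenshot, but a plausible reconstruction of the plugin might look like this; the icon path is an assumption:

```csharp
// Hedged sketch of the registration plugin described above.
#if TOOLS
using Godot;

[Tool]
public partial class CustomNode3DRegister : EditorPlugin
{
    public override void _EnterTree()
    {
        // Load the custom node's script and the icon to show in the node list.
        var script = GD.Load<Script>("res://nodes/CustomNode3D/CustomNode3D.cs");
        var icon = GD.Load<Texture2D>("res://nodes/CustomNode3D/icon.svg");
        // Register CustomNode3D in the node list, using Node3D as its base.
        AddCustomType("CustomNode3D", "Node3D", script, icon);
    }

    public override void _ExitTree()
    {
        // Remove the custom node from the node list.
        RemoveCustomType("CustomNode3D");
    }
}
#endif
```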

To execute the registration, we’ll compile the game to apply the changes to CustomNode3DRegister.cs. After that, go to Project > Project Settings ... > Plugins and ensure the Enable checkbox for the CustomNode3DRegister plugin is checked. This will trigger its logic and register our custom node in the node list. From there, we can locate our node in the list and add it to the scene:

The node list with our custom node

Add it to the scene before proceeding.

Creation of a Gizmo for the Custom Node

Now that we have our custom node, let’s create a Gizmo to visually represent its properties.

Gizmos are considered addons, so it makes sense to create a folder addons/CustomNode3DGizmo to house their files. These files will be two: a script to define the Gizmo, which will be a class inheriting from EditorNode3DGizmoPlugin, and another script to register the Gizmo, which will inherit from EditorPlugin and will be quite similar to the one we used to register the custom node.

The Gizmo script is where the real substance lies. I’ve called it CustomNode3DGizmo.cs. As I mentioned, it must inherit from EditorNode3DGizmoPlugin and implement some of its methods.

The first of these methods is _GetGizmoName(). This method simply returns a string with the Gizmo’s name:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

Somewhat more intriguing is the _HasGizmo() method, which is passed all the nodes in the scene until the method returns true for one of them, indicating that the Gizmo should be applied to that node. Therefore, in our case, the method should return true when a node of type CustomNode3D is passed:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

Here, we need to consider a specific issue that occurs in C# but not in GDScript. Although the comparison with is is syntactically correct, in practice, it doesn’t work in Godot C# unless the class we’re comparing against is marked with the [Tool] attribute. So, this is a good time to add that attribute to the header of the CustomNode3D class:

nodes/CustomNode3D/CustomNode3D.cs

In reality, this is an anomaly. We shouldn’t need the [Tool] attribute to make that comparison work. In fact, the equivalent GDScript code (which appears in the official documentation) doesn’t require it. This is a bug reported multiple times in Godot’s forums and is still pending resolution. Until it’s fixed, the workaround in C# is to use the [Tool] attribute.
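Putting the two methods together, a minimal sketch inside our CustomNode3DGizmo class might be:

```csharp
// Hedged sketch of the two methods described above.
public override string _GetGizmoName() => "CustomNode3DGizmo";

public override bool _HasGizmo(Node3D forNode3D)
{
    // Only return true for our custom node. Remember that CustomNode3D
    // must carry the [Tool] attribute for this comparison to work in C#.
    return forNode3D is CustomNode3D;
}
```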

The next method to implement is the constructor of our Gizmo class. In GDScript, we would use the _init() method, but in C#, we’ll use the class constructor:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

In this constructor, we’ll create the materials to apply to our Gizmo and its Handles. These aren’t full materials like those created with a shader, but rather a set of styles to apply to the lines we draw for our Gizmo. They are created with the CreateMaterial() method for the Gizmo and CreateHandleMaterial() for the Handles. Both accept a string as their first parameter: the name we want to give the material. That name can later be passed to GetMaterial() to obtain a reference to the material, which is useful, for example, to assign it to a StandardMaterial3D variable and customize it in depth by setting its properties.

In practice, that level of customization is rarely needed: it’s usually enough to set the line color with the second parameter of CreateMaterial(). CreateHandleMaterial(), however, doesn’t accept that second parameter, so for the Handles we have no choice but to call GetMaterial() (lines 19 and 20 of the previous screenshot) to obtain references to the materials and set their AlbedoColor property (lines 21 and 22).

In the example constructor, I’ve configured the lines drawn from the coordinate origin to the position marked by the NodeMainPoint property to use the color red. The lines going to the position of the NodeSecondaryPoint property will use green. I’ve configured the materials for the respective Handles to use the same color.
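A sketch of that constructor could be the following. The material names ("main_line", etc.) are assumptions, and the EditorUndoRedoManager parameter anticipates the instance that, as the article describes later, the registration plugin passes in through this constructor:

```csharp
// Hedged sketch of the gizmo constructor described above.
private readonly EditorUndoRedoManager _undoRedo;

public CustomNode3DGizmo(EditorUndoRedoManager undoRedo)
{
    _undoRedo = undoRedo;

    // Line materials: the second parameter sets the line color directly.
    CreateMaterial("main_line", Colors.Red);
    CreateMaterial("secondary_line", Colors.Green);

    // Handle materials accept no color parameter, so we fetch them back
    // with GetMaterial() and set their AlbedoColor by hand.
    CreateHandleMaterial("main_handle");
    CreateHandleMaterial("secondary_handle");
    GetMaterial("main_handle").AlbedoColor = Colors.Red;
    GetMaterial("secondary_handle").AlbedoColor = Colors.Green;
}
```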

Finally, we have the _Redraw() method. This is responsible for drawing the Gizmos every time the UpdateGizmo() method, available to all Node3D nodes, is called:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

The _Redraw() method is like our canvas, and it’s common to clear a canvas at the start before drawing on it. That’s why the Clear() method is typically called at the beginning of the method (line 29 of the previous screenshot).

Then, we collect the positions of the lines we want to draw in a Vector3 array. In this case, we want to draw a line from the coordinate origin to the position marked by the NodeMainPoint property, so we store both points in the array (lines 33 to 37 of the previous screenshot).

For the Handles, we do the same, storing the points where we want a Handle to appear in another array. In this case, since we want a Handle to appear at the end of the line, marked by the NodeMainPoint position, we only add that position to the Handles array (lines 38 to 41 of the previous screenshot).

Finally, we use the AddLines() method to draw the lines along the positions collected in the array (line 42) and the AddHandles() method to position Handles at the positions collected in its array (line 43). Note that, in both cases, we pass the material defining the style with which we want the elements to be drawn.

I didn’t include it in the previous screenshot, but the process for drawing the line and Handle for a second point (in this case, NodeSecondaryPoint) would be the same: we’d assemble their position arrays and pass them to the AddLines() and AddHandles() methods.
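The steps above can be sketched like this; MainHandleId is an assumed name for the handle id constant the article mentions, and the material names match the ones assumed earlier:

```csharp
// Assumed name for the handle id constant.
private const int MainHandleId = 0;

public override void _Redraw(EditorNode3DGizmo gizmo)
{
    gizmo.Clear();  // Wipe the canvas before drawing.

    var node = (CustomNode3D)gizmo.GetNode3D();

    // Line from the node's origin to NodeMainPoint.
    Vector3[] mainLine =
    {
        Vector3.Zero,
        node.NodeMainPoint,
    };
    // One handle, placed at the end of the line.
    Vector3[] mainHandles = { node.NodeMainPoint };

    gizmo.AddLines(mainLine, GetMaterial("main_line", gizmo));
    gizmo.AddHandles(mainHandles, GetMaterial("main_handle", gizmo),
        new[] { MainHandleId });

    // NodeSecondaryPoint would follow the same pattern.
}
```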

Manipulating a Gizmo Using Handles

At this point, our Gizmo will draw lines and Handles based on the values stored in the properties of the node it’s associated with (in this example, CustomNode3D). However, if we click on the Handles, nothing will happen. They will remain static.

To interact with the Handles, we need to implement a few more methods from EditorNode3DGizmoPlugin in our CustomNode3DGizmo class. However, Godot’s official documentation doesn’t cover these implementations. If you follow the official documentation tutorial, you’ll stop at the previous section of this article. It’s bizarre, but there’s nothing in the official documentation explaining how to manipulate Handles. Everything that follows from here is deduced from trial and error and interpreting the comments of each function to implement. Perhaps, based on this article, I’ll contribute to Godot’s documentation to address this gap.

Let’s see which methods need to be implemented in CustomNode3DGizmo to manipulate the Handles we placed in the _Redraw() method.

The first is _GetHandleName(). This method must return a string with the identifying name of the Handle. It’s common to return the name of the property modified by the Handle:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

Two things stand out in the previous screenshot.

First, we could have returned the property name as a hardcoded string, but using the nameof() method ensures that if we refactor the property name using our IDE, this part of the code will update as well.

Second, each Handle is identified by an integer, so we can know which Handle’s name is being requested based on the handleId parameter passed to _GetHandleName(). The integer for each Handle depends on the order in which we added the Handles when calling AddHandles() in _Redraw(). By default, if you leave the ids parameter of AddHandles() empty, the first Handle you pass will be assigned ID 0, the second ID 1, and so on. However, if you look at the _Redraw() screenshot earlier, I didn’t leave the ids parameter empty. Instead, I passed an array with a single element, an integer defined as a constant, to force that Handle to be assigned that integer as its ID, allowing me to use that constant as an identifier throughout the code:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

Once we’ve implemented how to identify each Handle, the next step is to define what value the Handle returns when clicked. This is done by implementing the _GetHandleValue() method:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

Like _GetHandleName(), _GetHandleValue() is passed the Handle’s identifier for which the value is being requested. With the gizmo parameter, we can obtain the node associated with the Gizmo using the GetNode3D() method (line 130 of the previous screenshot). Once we have a reference to the node, we can return the value of the property associated with each Handle (lines 133 to 136).

When you click on a Handle, look at the bottom-left corner of the scene view in the editor; a string will appear, formed by what _GetHandleName() and _GetHandleValue() return for that Handle.
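A condensed sketch of both methods could be the following; MainHandleId and SecondaryHandleId are assumed names for the handle id constants:

```csharp
// Hedged sketch of the handle name/value methods described above.
public override string _GetHandleName(EditorNode3DGizmo gizmo, int handleId,
    bool secondary)
{
    // nameof() keeps these strings in sync with IDE refactorings.
    return handleId switch
    {
        MainHandleId => nameof(CustomNode3D.NodeMainPoint),
        SecondaryHandleId => nameof(CustomNode3D.NodeSecondaryPoint),
        _ => string.Empty,
    };
}

public override Variant _GetHandleValue(EditorNode3DGizmo gizmo, int handleId,
    bool secondary)
{
    // Get the node associated with the Gizmo...
    var node = (CustomNode3D)gizmo.GetNode3D();
    // ...and return the property associated with each handle.
    return handleId switch
    {
        MainHandleId => node.NodeMainPoint,
        SecondaryHandleId => node.NodeSecondaryPoint,
        _ => default,
    };
}
```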

Now comes what may be the most challenging part of this tutorial: using the Handle to assign a value to the associated node’s property. This is done by implementing the _SetHandle() method:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

This method is passed a gizmo parameter, which allows access to the associated node using GetNode3D(), as we did in _GetHandleValue(). It’s also passed the Handle’s identifier for which the value is being set. Most importantly, it’s passed the camera viewing the scene and the Handle’s screen position.

In this method, we need to interpret the Handle’s screen position to set the node’s property value associated with the Handle based on that position. In this case, it seems simple: the Handle’s position should be the value stored in the associated property, since both NodeMainPoint and NodeSecondaryPoint are positions. The problem is that Handles are dragged on the two-dimensional surface of the screen, which is why the screenPos parameter is a Vector2, so it’s not immediately clear which three-dimensional scene coordinate corresponds to that screen point.

When we add a Camera3D node to a scene, that node is represented with the following Gizmo:

Camera3D Gizmo

I find it very clarifying to think of our head being at the tip of the Gizmo’s pyramid, looking at a screen at the base. The scene in front of the camera is back-projected onto that screen.

Suppose we have an object A in the scene. The screen position where A is drawn (let’s call this position Ap) is the result of drawing a straight line from the object to the camera’s focus and seeing where it intersects the camera’s back-projection plane:

Back-projection diagram
3D object projection onto the flat surface of the screen


So far, this is straightforward. In fact, the Camera3D class has the UnprojectPosition() method, which takes a three-dimensional position (Vector3) of an object in the scene and returns its two-dimensional position (Vector2) on the screen. In our case, if we passed A’s position to UnprojectPosition(), the method would return Ap (understood as a two-dimensional screen position).

Now suppose we have a Handle at the screen position Ap representing A’s position, and we drag the Handle to the screen position Bp. How would we calculate the object’s new position in three-dimensional space? (Let’s call this new position B.) The logical approach is to apply the inverse process of back-projecting the object onto the camera’s plane. To do this, we’d draw a line from the camera’s focus through Bp. Following this reasoning, the object’s new position would lie along that line—but where? At what point on the line?

The key is to realize that the object moves in a plane (Pm) parallel to the camera’s plane. The intersection of that plane with the line from the camera’s focus passing through Bp will be the object’s new position (B):

Object moved across the screen

The Camera3D node has a ProjectPosition() method, which is used to convert two-dimensional screen coordinates into three-dimensional scene coordinates. The method accepts two parameters. The first is a two-dimensional screen position (in our example, Bp). With this parameter, the method draws a line from the camera’s focus through the two-dimensional camera coordinate (Bp). The second parameter, called zDepth, is a float indicating the distance from the camera’s focus at which the plane Pm should intersect the line.

zDepth

This distance is the length of the line from the camera’s focus that intersects perpendicularly with the plane Pm. In the previous diagram, it’s the distance between the focus (F) and point D.

But how do we calculate this distance? Using trigonometry. Recalling our high school lessons, the cosine of the angle between segments FA and FD equals the ratio |FD| / |FA|. So multiplying |FA| by that cosine gives us |FD|.

FD distance calculation

This calculation is so common that game engine vector math libraries include it under the name Dot Product. With this operator, we can transform the previous formula into:

Dot product

This formula means that if we compute the Dot Product of vector FA with the normalized vector of FD, we get the distance FD (a scalar; multiplying FD’s normalized direction by that scalar recovers the vector FD).

It’s common to visualize the Dot Product as a projection of one vector onto another. If you placed a powerful light behind FA, shining perpendicularly onto the direction of FD, the shadow that FA would cast onto that direction would be exactly FD.

Therefore, to obtain the distance FD to use as the zDepth parameter, we only need to compute the Dot Product of FA onto the normalized FD, which is the Forward vector of the Camera3D node (by default, the inverse of its local Z-axis).

All this reasoning boils down to a few lines in the GetZDepth() method:

addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

In this method, the variable vectorToPosition corresponds to FA, and cameraForwardVector to FD. The zDepth result returned by the method is FD and is used in the calls to ProjectPosition() in _SetHandle() to set the new positions of NodeMainPoint and NodeSecondaryPoint.
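All of this can be sketched as follows. MainHandleId is an assumed name for the handle id constant, and for brevity the node's own transform is ignored (i.e., the node is assumed to sit untransformed at the world origin):

```csharp
// Hedged sketch of GetZDepth() and its use from _SetHandle().
private float GetZDepth(Camera3D camera, Vector3 position)
{
    // FA: vector from the camera's focus to the handle's current position.
    Vector3 vectorToPosition = position - camera.GlobalPosition;
    // Normalized FD: the camera's forward vector (-Z in Godot).
    Vector3 cameraForwardVector = -camera.GlobalTransform.Basis.Z;
    // Dot product of FA with normalized FD gives the distance FD.
    return vectorToPosition.Dot(cameraForwardVector);
}

public override void _SetHandle(EditorNode3DGizmo gizmo, int handleId,
    bool secondary, Camera3D camera, Vector2 screenPos)
{
    var node = (CustomNode3D)gizmo.GetNode3D();
    if (handleId == MainHandleId)
    {
        // Back-project the 2D handle position into the scene, on the
        // plane passing through the point's current depth.
        float zDepth = GetZDepth(camera, node.NodeMainPoint);
        node.NodeMainPoint = camera.ProjectPosition(screenPos, zDepth);
    }
    // NodeSecondaryPoint would be handled the same way.
}
```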

Having resolved _SetHandle(), the only method left to implement is _CommitHandle(). This method is responsible for building the history of modifications we make to our Handles, so we can navigate through it when performing undo/redo (Ctrl+Z or Ctrl+Shift+Z):

_CommitHandle() method 1 of 2

_CommitHandle() method 2 of 2
addons/custom_node3D_gizmo/CustomNode3DGizmo.cs

The history is built on an object of type EditorUndoRedoManager (in this case, _undoRedo), which is obtained from the EditorPlugin object that registers the Gizmo (in this example, CustomNode3DGizmoRegister, which we’ll discuss soon) and passes an instance of EditorUndoRedoManager through its constructor.

With the EditorUndoRedoManager, each history entry is created with the CreateAction() method (line 78 of the previous screenshot). Each entry must include actions to execute for Undo and Do (called during Redo). These actions can involve setting a property with the AddDoProperty() and AddUndoProperty() methods or executing a method with AddDoMethod() and AddUndoMethod(). If the direct action only involved changing a property’s value, it’s usually sufficient to set the property back to its previous value to undo it. However, if the direct action triggered a method, in addition to changing the property, you’ll likely need to call another method to undo what the first did.

In this example, I only change the values of customNode3D’s properties, so for the action history, it’s enough to use the Add...Property() methods. These methods require as parameters the instance owning the properties to modify, the string with the property’s name to manipulate, and the value to set the property to. Each action captures the value we pass to the Add...Property() method. For AddDoProperty(), we pass the property’s current value (lines 84 and 91); for AddUndoProperty(), we pass the restore parameter’s value, which contains the value retrieved from the history when performing an Undo.

When _CommitHandle() is called with the cancel parameter set to true, it’s equivalent to an Undo on the Handle, so we restore the restore value to the property (lines 101 to 106).

Finally, but no less important, once we’ve shaped the property changes and method calls that make up the history entry, we register it with CommitAction() (line 109).
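A condensed sketch of _CommitHandle(), for the main handle only, could look like this; _undoRedo is the EditorUndoRedoManager field received through the constructor, as the article describes:

```csharp
// Hedged sketch of the history handling described above.
public override void _CommitHandle(EditorNode3DGizmo gizmo, int handleId,
    bool secondary, Variant restore, bool cancel)
{
    var customNode3D = (CustomNode3D)gizmo.GetNode3D();

    if (cancel)
    {
        // Equivalent to an Undo on the handle: put the old value back.
        customNode3D.NodeMainPoint = (Vector3)restore;
        return;
    }

    _undoRedo.CreateAction("Move NodeMainPoint");
    // Do (used by Redo) captures the property's current value...
    _undoRedo.AddDoProperty(customNode3D,
        nameof(CustomNode3D.NodeMainPoint), customNode3D.NodeMainPoint);
    // ...while Undo captures the value stored in the history.
    _undoRedo.AddUndoProperty(customNode3D,
        nameof(CustomNode3D.NodeMainPoint), restore);
    _undoRedo.CommitAction();
}
```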

Updating the Gizmo After Changes

The visual representation of a Gizmo may need to be updated for two reasons:

  1. Because we have modified the fields of the represented node from the inspector.
  2. Because we have manipulated the Gizmo’s Handles.

The Gizmo is updated by calling the UpdateGizmos() method, which is available to all Node3D nodes.

The question is where to call this method to ensure that both types of changes mentioned above are updated. In the previous code screenshots, you’ll notice several commented-out calls to UpdateGizmos(). These were tests of possible places to execute that method. All the commented-out calls had issues: either they didn’t trigger for one of the two cases above, or they updated in a choppy manner, as if there were some performance problem.

In the end, my tests led me to conclude that, in my case, the best place to call UpdateGizmos() is from the properties of CustomNode3D that we’re modifying. For example, in the case of NodeMainPoint:

nodes/CustomNode3D/CustomNode3D.cs

By calling UpdateGizmos() from the setter of the property, which is exported, we ensure that the method is called both when the property is modified from the inspector and from the _SetHandle() method of the Gizmo.
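A sketch of that property, with its backing field, might look like this:

```csharp
// Hedged sketch of CustomNode3D's exported property.
using Godot;

[Tool]
public partial class CustomNode3D : Node3D
{
    private Vector3 _nodeMainPoint;

    [Export]
    public Vector3 NodeMainPoint
    {
        get => _nodeMainPoint;
        set
        {
            _nodeMainPoint = value;
            // Refresh the gizmo whether the change came from the
            // inspector or from _SetHandle().
            UpdateGizmos();
        }
    }

    // NodeSecondaryPoint would follow the same pattern.
}
```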

Registering the Gizmo

Just like with the custom node, our Gizmo also needs to be registered in the editor so that it knows to account for it. For this, we’ll again use a plugin to handle the registration:

addons/custom_node3D_gizmo/CustomNode3DGizmoRegister.cs

We’ll create the plugin using the same method we used to create the CustomNode3DRegister plugin, but this time, the plugin will be called CustomNode3DGizmoRegister and will be based on a C# script of the same name.

In this case, we load the C# script where we configured the Gizmo and instantiate it, passing an instance of EditorUndoRedoManager to its constructor by calling the GetUndoRedo() method (lines 13 and 14).

Once that’s done, we register the plugin instance by passing it to the AddNode3DGizmoPlugin() method (line 15).

Similarly to how we handled the registration of the custom node, we also use the _ExitTree() method here to deregister the Gizmo using the RemoveNode3DGizmoPlugin() method (line 21).
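Put together, a plausible shape for this registration plugin would be:

```csharp
// Hedged sketch of the gizmo registration plugin described above.
#if TOOLS
using Godot;

[Tool]
public partial class CustomNode3DGizmoRegister : EditorPlugin
{
    private CustomNode3DGizmo _gizmoPlugin;

    public override void _EnterTree()
    {
        // Pass the editor's undo/redo manager to the gizmo's constructor.
        _gizmoPlugin = new CustomNode3DGizmo(GetUndoRedo());
        AddNode3DGizmoPlugin(_gizmoPlugin);
    }

    public override void _ExitTree()
    {
        RemoveNode3DGizmoPlugin(_gizmoPlugin);
    }
}
#endif
```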

Once the script is complete, we can activate the plugin from Project > Project Settings... > Plugins, and the next time we add a CustomNode3D to the scene, we’ll be able to use the Gizmo’s Handles.

Initially, the Handles might not be clearly distinguishable because both will coincide at the origin of the coordinates:

Handles at coordinate origin

However, they are there. They’re the small points visible at the origin of the coordinate axis. If we click and drag these points, we’ll see them move, altering the values of the CustomNode3D properties:

Dragging Handles

Conclusions

This article has been extremely long, but I wanted to address a glaring gap in the official documentation on this topic.

My previous experience was with Unity, which also has its own Gizmos and Handles API. Compared to Unity, Godot’s approach seems more compact and straightforward, as it centralizes both Gizmo and Handle configuration in a single class inheriting from EditorNode3DGizmoPlugin. In contrast, to achieve the same in Unity, you need to spread the Gizmo and Handle code across different classes.

That said, Unity’s documentation on this topic seems much more comprehensive.

It’s also worth noting that Unity’s Gizmos and Handles API covers both 2D and 3D games, whereas in Godot, everything we’ve covered in this article applies only to 3D games. There is no EditorNode2DGizmoPlugin class, or at least I’m not clear on what the equivalent of this code would be for a 2D Godot game. I’ll investigate this, and when I figure it out, I’ll likely write another article. However, at first glance, the official documentation doesn’t make it clear how to do this in 2D.

Code for This Tutorial

The code for the project used as an example is available for download in my GitHub repository GodotCustomGizmoAndHandleExample. Feel free to download it to examine the code in detail and try it out for yourself.