26 December 2024

Automated testing (TDD) in Godot - GdUnit4

GdUnit4 logo
TDD (Test-Driven Development) is a software development methodology where you write tests first and then the code necessary to make those tests pass. By having to define the tests beforehand, TDD forces you to consider what functionality you want to achieve before writing any code. You need to design what inputs your test will receive and what outputs will be considered correct. These inputs and outputs will define your functionality, and having defined them in advance allows you to implement it in a cleaner and more encapsulated way.

Many people dislike this methodology because they find it boring to start with tests and prefer to dive straight into writing code and then test it manually. They often justify this by claiming they can't afford to waste time writing tests. The problem arises when their project starts growing in size and complexity, and introducing new features becomes a nightmare because older functionalities break. This often happens unnoticed, as it becomes impossible to manually test the entire application with every change. Consequently, hidden bugs accumulate with each update. By the time one of these bugs is discovered, it becomes incredibly difficult to determine which update caused it, making the resolution process a torment.

With TDD, this doesn't happen because the tests you design for each feature are added to the test suite and can be automatically re-executed when incorporating new functionalities. When a new feature breaks something from previous ones, the corresponding tests will fail, serving as an alarm to indicate where the problem lies and allowing you to resolve it efficiently.

Games are just another kind of application. TDD can be applied to them equally well, which is why every major engine has automated testing frameworks available. In this article, we will focus on one of the most popular for Godot Engine: GdUnit4.

GdUnit4 allows test automation using both GdScript and C#. Since I use the latter to develop in Godot, we will focus on it throughout this article. However, if you're a GdScript user, I recommend you keep reading because many concepts are similar in that language.

Plugin Installation

To install it, go to the AssetLib tab at the top of the Godot editor. There, search for "gdunit4" and click on the result.

GdUnit4 in the AssetLib
GdUnit4 in the AssetLib

In the pop-up window, click "Download" and accept the default installation folder.

After doing this, the plugin will be installed in your project folder but will remain disabled. To enable it, go to Project → Project Settings... → Plugins and check the Enabled box.

GdUnit4 plugin activation
GdUnit4 plugin activation

I recommend restarting Godot after enabling the plugin; otherwise, errors might appear.

C# Environment Configuration

The next step is to configure your C# environment to use GdUnit4 without issues.

Ensure you have the .NET 8.0 SDK installed.

Open your .csproj file and make the following changes in the <PropertyGroup> section:

  • Change TargetFramework to net8.0.
  • Add <LangVersion>11.0</LangVersion>.
  • Add <CopyLocalLockFileAssemblies>true</CopyLocalLockFileAssemblies>.
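
Put together, the edited <PropertyGroup> might look something like this. This is a sketch, not a full project file: the Sdk version should match your Godot release, and any other properties Godot generated for your project stay as they are.


&lt;Project Sdk="Godot.NET.Sdk/4.3.0"&gt;

    &lt;PropertyGroup&gt;

        &lt;TargetFramework&gt;net8.0&lt;/TargetFramework&gt;

        &lt;LangVersion&gt;11.0&lt;/LangVersion&gt;

        &lt;!-- Copies NuGet dependencies next to the build output,
             which the test adapter needs in order to find its assemblies. --&gt;

        &lt;CopyLocalLockFileAssemblies&gt;true&lt;/CopyLocalLockFileAssemblies&gt;

    &lt;/PropertyGroup&gt;

&lt;/Project&gt;
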

Create an <ItemGroup> section (if it doesn’t already exist) and add the following:


<ItemGroup>

    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.9.0" />

    <PackageReference Include="gdUnit4.api" Version="4.3.*" />

    <PackageReference Include="gdUnit4.test.adapter" Version="2.*" />

</ItemGroup>


As an example, here's my .csproj file:

A csproj file adapted for GdUnit4
A csproj file adapted for GdUnit4

If you rebuild your project (either from your IDE or MSBuild) without errors, your configuration is correct.

IDE Configuration

GdUnit4 officially supports Visual Studio, Visual Studio Code, and Rider. Configuration varies slightly between them.

I use Rider, so I'll focus on that IDE.

GdUnit4 expects an environment variable named GODOT_BIN with the path to the Godot executable. I use ChickenSoft's GodotEnv tool to manage Godot versions, which maintains a GODOT variable pointing to the editor executable. I created a user-level GODOT_BIN variable pointing to GODOT. If you don't use GodotEnv, manually set the path to the executable.
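
If you prefer the terminal over the system dialog, the variable can also be set from a shell. The paths below are placeholders; point them at your own Godot executable:

```shell
# Windows (cmd): setx persists the variable for your user account.
# The path is an example only:
#   setx GODOT_BIN "C:\Tools\Godot\Godot_v4.3-stable_mono_win64.exe"

# POSIX shells (Git Bash, Linux, macOS): add this to your shell profile.
export GODOT_BIN="$HOME/tools/godot/Godot_v4.3-stable_mono.x86_64"
echo "$GODOT_BIN"
```

Remember that already-open terminals and IDEs won't see the new variable until they are restarted.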

Env var configuration in Windows
Env var configuration in Windows

In Rider, ensure the Godot support plugin is enabled and enable VSTest adapter support:

Enable the "Enable VSTest adapters support" option and add a wildcard (*) in the "Projects with unit tests" list.

Configuration for VSTest support
Configuration for VSTest support

Create a .runsettings file at your project root with the following content:


<?xml version="1.0" encoding="utf-8"?>

<RunSettings>

    <RunConfiguration>

        <MaxCpuCount>1</MaxCpuCount>

        <ResultsDirectory>./TestResults</ResultsDirectory>

        <TargetFrameworks>net7.0;net8.0</TargetFrameworks>

        <TestSessionTimeout>180000</TestSessionTimeout>

        <TreatNoTestsAsError>true</TreatNoTestsAsError>

    </RunConfiguration>

    <LoggerRunSettings>

        <Loggers>

            <Logger friendlyName="console" enabled="True">

                <Configuration>

                    <Verbosity>detailed</Verbosity>

                </Configuration>

            </Logger>

        </Loggers>

    </LoggerRunSettings>

    <GdUnit4>

        <DisplayName>FullyQualifiedName</DisplayName>

    </GdUnit4>

</RunSettings>


Point Rider to this file in its Test Runner settings.

Path to .runsettings configuration
Path to .runsettings configuration

Once all of the above is done, you need to restart Rider to apply the configuration.

If you want to verify that the setup is correct, you can create a folder named Tests in your project and, inside it, a C# file (you can call it ExampleTest.cs) with the following content:

Tests/ExampleTest.cs
Tests/ExampleTest.cs

The example is designed so that Success() will be a passing test, while Failed() will intentionally fail.
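
The pictured file is roughly equivalent to the following sketch. It is not necessarily the exact code from the screenshot: the method names come from the article, and the assertion style is GdUnit4's C# API.

```csharp
// Tests/ExampleTest.cs - a minimal GdUnit4 suite with one passing
// and one deliberately failing test.
namespace Tests;

using GdUnit4;
using static GdUnit4.Assertions;

[TestSuite]
public class ExampleTest
{
    [TestCase]
    public void Success()
    {
        // This assertion holds, so the test passes.
        AssertThat(true).IsTrue();
    }

    [TestCase]
    public void Failed()
    {
        // This assertion cannot hold, so the test fails on purpose.
        AssertThat(false).IsTrue();
    }
}
```
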

You can run this test either from Rider or from the Godot editor.

In Rider, you first need to enable the test panel by navigating to View → Tool Windows → Tests. From the test panel, you can run an individual test or all tests at once.

Test Execution Tab in Rider
Test Execution Tab in Rider


To run them from Godot, you need to go to the GdUnit tab.

Test Execution Tab in Godot
Test Execution Tab in Godot

If the Godot tab doesn’t display the tests, it might not be looking in the correct folder. To verify this, click the tools button in the top-left corner of the tab and ensure that the "Test Lookup Folder" parameter points to the folder where your tests are located.

Path to the Test Folder
Path to the Test Folder

If the tests still don’t appear, I recommend restarting. Both Godot and Rider sometimes fail to detect changes or newly added tests. When this happens, restarting Godot or Rider (whichever is failing) usually resolves the issue. I assume this problem will be addressed over time. In any case, from the tests I’ve conducted, those run through Rider have consistently worked much better than those executed directly from the Godot editor.

Testing a Game

All the previous setup has been arduous, but the good thing is that you only need to do it once. From then on, it’s just a matter of creating tests and running them.

There are many things to test in a game. Unit tests focus on testing specific methods and functions, but I’m going to explain something broader: integration tests. These tests evaluate the game at the same level a player would, interacting with objects and verifying whether the game reacts as expected. For this purpose, it’s common to create special test levels, where functionalities are concentrated to allow for quick and efficient testing.

To make this clear, let’s use a simple example. Imagine we have a game where there’s an element (an intelligent agent) that must move towards a specific marker's position. To verify the agent's correct behavior, we would place it at one end of the scenario, the marker at the other, and wait a few seconds before evaluating the agent's position. If it has moved close enough to the marker, we can conclude that it’s working correctly.

You can download the Godot code for this example from a specific commit in one of my repositories. If you open it in the Godot editor, load the scene Tests/TestLevels/SimpleBehaviorTestLevel.tscn, set it as the main scene (Project → Project Settings... → General → Application → Run → Main Scene) and run the game, you’ll see that it behaves just as described earlier: the red crosshair will move to where you click, and the green ball will move towards the crosshair’s position. So, manually, we can verify that the game behaves as expected. Now let’s automate this verification.

The game we’re testing
The game we’re testing

The first step is to set up the level with the necessary elements for testing. These auxiliary elements would only get in the way in the levels accessed by players, which is why dedicated test levels are usually created (hence, this one is in the folder Tests/TestLevels).

Our test level has the following structure:

Test Level Structure
Test Level Structure

The elements are as follows:

  • ClearCourtyard: This is the box where our agent moves. It’s a simple TileMap where I’ve drawn dark gray walls (impassable) and a light gray floor (where movement happens).
  • Target: This is a scene whose main node is a Marker2D, with a Sprite2D underneath containing the crosshair image. The main node has a script that listens to Input events and reacts to left mouse button clicks, placing itself at the clicked screen position.
  • SeekMovingAgent: This is the agent whose behavior we want to verify.
  • StartPosition1: This is the position where we want the agent to start at the beginning of the test.
  • TargetPosition1: This is the position where we want the Target to start at the beginning of the test.

The code for our test will consist of one or more C# files located in the Tests folder. In this case, we’ll only have one file, but we could have several if we were testing different functionalities. Each file can contain multiple tests. A test is essentially a method marked with the [TestCase] attribute. The class containing that method must be marked with the [TestSuite] attribute so that the test runner includes it when executing tests.

Our example test is located in the file Tests/SimpleBehaviorTests.cs.

The first half of the file focuses on preparations.

Test Preparation
Test Preparation

As you can see in line 9, I’ve stored the path to the level we want to test in a constant.

We load this level in line 17. This line belongs to the method LoadScene(), which we can name however we like as long as we mark it with the [BeforeTest] attribute. This signals that LoadScene() should run just before each test. This way, we ensure that the level starts fresh each time, and that previous tests don’t interfere with subsequent ones.

There are other useful attributes:

  • [Before]: Runs the decorated method once before all tests in the class execute. Useful for initializing resources shared across tests that don’t need resetting.
  • [AfterTest] and [After]: These are the counterparts of [BeforeTest] and [Before], used for cleaning up after tests.
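
A skeleton of how these hooks fit together in a suite. The method names here are arbitrary; only the attributes matter:

```csharp
[TestSuite]
public class LifecycleExample
{
    [Before]      // Runs once, before the first test in the class.
    public void SetupSuite() { /* initialize shared, read-only resources */ }

    [BeforeTest]  // Runs before every single test.
    public void SetupTest() { /* reload the scene so each test starts fresh */ }

    [TestCase]
    public void SomeTest() { /* the actual test */ }

    [AfterTest]   // Runs after every single test.
    public void TeardownTest() { /* free per-test nodes */ }

    [After]       // Runs once, after the last test in the class.
    public void TeardownSuite() { /* release shared resources */ }
}
```
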

Once we have a reference to the freshly loaded level, we can start the test:

The code for our test
The code for our test

Between lines 24 and 30, we gather references to the test elements using the FindChild() method. In each call, we pass the name of the node we want to retrieve. Since FindChild() returns a Node type, we need to cast it to its actual type.

Once we have the references to the elements, we place them at their starting positions. In line 33, we place _target at the position marked by _targetPosition. In line 34, we place _movingAgent at the position marked by _agentStartPosition.

We could have set these positions via code, but if they need adjustment later, it’s much easier to tweak them visually in the editor rather than editing the code.

In line 37, we activate the agent so it starts moving. In general, I recommend keeping agents deactivated by default and activating only those needed for each specific test, avoiding interference.

In line 39, we wait for 2.5 seconds, which we estimate is sufficient for the agent to reach the target.

Finally, the moment of truth arrives. In line 41, we measure the distance between the agent and the target. If this distance is smaller than a tiny threshold (as floating-point math isn’t perfectly precise), we assume the agent successfully reached its target (line 43).
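
Pulling the walkthrough together, the test reads roughly like the sketch below. This is not the author's literal code: the node names come from the scene described above, the "enabled" flag and the distance threshold are assumptions, and I'm using GdUnit4's ISceneRunner to load the level and wait, which may differ in detail from the pictured file.

```csharp
namespace Tests;

using System.Threading.Tasks;
using Godot;
using GdUnit4;
using static GdUnit4.Assertions;

[TestSuite]
public class SimpleBehaviorTests
{
    private const string TestScenePath =
        "res://Tests/TestLevels/SimpleBehaviorTestLevel.tscn";

    private ISceneRunner _sceneRunner;

    [BeforeTest]
    public void LoadScene()
    {
        // A fresh instance of the level before every test.
        _sceneRunner = ISceneRunner.Load(TestScenePath);
    }

    [TestCase]
    public async Task SeekAgentReachesTarget()
    {
        Node level = _sceneRunner.Scene();

        // Gather references. FindChild() returns Node, so we cast.
        var target = (Node2D)level.FindChild("Target");
        var agent = (Node2D)level.FindChild("SeekMovingAgent");
        var startPosition = (Marker2D)level.FindChild("StartPosition1");
        var targetPosition = (Marker2D)level.FindChild("TargetPosition1");

        // Place everything at its starting position.
        target.GlobalPosition = targetPosition.GlobalPosition;
        agent.GlobalPosition = startPosition.GlobalPosition;

        // Activate the agent (assuming it exposes an "enabled" property).
        agent.Set("enabled", true);

        // Give the agent time to travel.
        await _sceneRunner.AwaitMillis(2500);

        // The agent should have closed in on the target.
        float distance =
            agent.GlobalPosition.DistanceTo(target.GlobalPosition);
        AssertThat(distance).IsLess(10.0f);
    }
}
```
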

Conclusion

Like this example, many different tests can be created. The usual approach is to group related tests into a single class (a .cs file) for organizational clarity.

This test was highly visual, as it evaluated movement through position changes. However, other tests might focus on internal values of the agent, verifying specific properties using assertions.

Lastly, I highly recommend checking out the official GdUnit4 documentation. While originally focused on GDScript, it has started supporting C#. The documentation contains specific installation guidelines for C# and example code with tabs for both languages.

21 December 2024

Course "Unity Cutscenes: Master Cinematics, Animation, and Trailers" by GameDev.tv

I just finished another course from GameDev.tv included in the Humble Bundle pack I bought recently. This time, it was "Unity Cutscenes: Master Cinematics, Animation, and Trailers". Like the others, this course is available both on GameDev.tv's own platform and on Udemy. Its name is quite descriptive of its focus: teaching you the tools Unity provides for creating cinematic scenes.

As you probably know, a cinematic scene is a sequence generated in real-time, using the game engine, that narrates an important part of the story. Using cinematics has two significant advantages: on one hand, it saves the storage space required for pre-rendered video sequences; on the other hand, since it runs using the same assets as the rest of the game, there’s no risk of breaking the game's aesthetic, allowing smoother transitions between gameplay and cutscenes, and vice versa.

The course structures its 10 hours into five sections. In my opinion, the first two are the most interesting because they introduce genuinely new concepts. In the first section, it covers importing animations from Mixamo, explains the basics of rigging and skeletons (from Unity's perspective), and dives deeply into the Timeline. The second section is, in my opinion, the best because it thoroughly explains Unity's gem: the Cinemachine system. Specifically, it covers the use and transitions between multiple cameras, tracking cameras, "travelling" shots using Dolly Tracks, and automatic camera switching when a character leaves the field of view (the foundation of a camera system like the one in the first Resident Evil). All of this is explained clearly and in detail. I truly enjoyed that part.

The problem starts after that. The last three sections span just under seven hours of the course, but I think they could have been condensed into a third of that time. In these sections, the instructor stops introducing new concepts and focuses on illustrating the previously covered topics through examples. While this isn’t inherently bad, dedicating only 3.5 hours to introducing concepts and then spending over 6.5 hours repeatedly fine-tuning the same examples feels very unbalanced. It became quite tedious to keep making adjustments in the same windows just to frame the shot at the "cool" angle the instructor wanted. Admittedly, the examples are impressive, and you can tell the instructor enjoys polishing them, but I believe it would have been far more productive and engaging if he had stuck to simpler examples and used the time saved to explain more of the concepts that fit within the course's scope. Topics like lighting, VFX (with VFX-Graph), or shaders are only briefly touched upon and could have been explored in more depth.

Additionally, the instructor has a very thick accent, making him hard to understand even if you're used to following courses in English. To make matters worse, the course lacks subtitles (which is unusual for GameDev.tv), so you can’t rely on them during moments when the instructor’s explanation isn’t clear. A real shame.

Therefore, my recommendation is to buy this course during a sale if you're curious about learning Cinemachine or Timeline, and focus on the first two sections. The following ones can be watched at double speed, slowing down only when the instructor does something genuinely new (which doesn’t happen often), or you can skip them altogether — in my opinion, you won’t miss much.

20 December 2024

Introduction to particle systems in Godot (Part II) - The rain

In one of my previous articles, I explained how we could use Godot’s particle systems to simulate falling cherry blossom petals. It was a very subtle effect, but it encapsulates the basic principles of what particle systems offer. Another much more spectacular and common effect is rain. If we think about it, adding rain to our game using a particle system is very simple. In this article, we’ll see how.

As with the other articles, I’ve shared the code and resources for this one on GitHub. I recommend downloading it and opening it in the Godot editor to comfortably follow along with what I explain here. The code is in the SakuraDemo repository, although I recommend downloading this specific commit to ensure you see exactly the version I’ll explain.

Once you’ve done that, you can open the project in Godot. The new feature, compared to previous SakuraDemo examples, is that I’ve introduced three new nodes: two particle emitters (Rain and RainSplash) and a LightOccluder2D (WaterRainCollider). However, I’ve disabled the latter, making it invisible in the editor (by closing the eye icon in the hierarchy) and setting its Process → Mode to Disabled in the inspector (so the node doesn’t execute any logic on each frame).

Scene Hierarchy
Scene Hierarchy

In this article, we’ll explain the two particle emitters. The first one, Rain, generates the raindrops, while the second, RainSplash, activates at the point where a drop ceases to exist to simulate the small splashes that would occur when rain falls on the lake’s water. In particle system terms, this is called a sub-emitter. They’re named this way because they originate at the endpoints (either due to lifespan expiration or collision) of particles in a parent particle system.

We mentioned that particles can cease to exist either because their lifespan expires or because they collide. The thing is, particle systems are managed by the GPU, which has no knowledge of the CollisionShapes that populate our scene, as those are managed by the CPU. What particles can detect are polygons defined on the screen with a LightOccluder2D. In theory, when a particle touches one of the edges of a LightOccluder2D, it will cease to exist. In practice, it’s a bit more complex. If the particles are too small and fast, they may pass through the edges of a LightOccluder2D without being detected. To solve this, you need to increase the physical (but not visible) size of the particles by adjusting the Collision → Base Size parameter of the GPUParticles2D node in the particle system. If you experiment with increasing the particles' physical size, you’ll see they penetrate less into the LightOccluder2D polygon.

Another difficulty with LightOccluder2D nodes is that, in my opinion, they’re only useful for simulating surfaces that rain can hit if your game is purely side-scrolling. However, in a perspective-based scene like this one, we don’t want our raindrops to collide with the edge of a polygon but rather on the visual area covered by the water sprite. While reducing the physical size of the particles can allow them to reach the water area, I haven’t observed much difference compared to the effect achieved by randomizing the particles’ lifetimes, as we’ll see below. That’s why, in the end, I ended up disabling the LightOccluder2D. Still, I recommend activating it and experimenting with the particles' physical size to observe its effect.

LightOccluder2D Shape
LightOccluder2D Shape

Let’s focus on the primary rain effect. If you access the Rain node, you’ll see I’ve made the following configurations:

  • Amount: Set to 1000 to ensure a sufficient number of raindrops.
  • Sub Emitter: Drag the particle system node you want to activate when your raindrops die here. In this case, it’s RainSplash, which we’ll discuss later in this article.
  • Time → Lifetime: I set it to 0.7 because this is the minimum time a raindrop needs to travel from its generation point above the top edge of the screen to the area of the screen where the water is located.
  • Time → Preprocess: It would look odd if the drops appeared nearly stationary at the top edge of the screen and then accelerated from there. Intuitively, you’d expect the drop to already be moving at high speed, having originated from a great height above the screen. This parameter allows you to simulate that prior acceleration, making particles appear with physical characteristics as if they started falling 0.5 seconds earlier. Through trial and error, I found 0.5 seconds sufficient to achieve a believable effect.
  • Drawing → Visibility Rect: This is the rectangle within which the particles will be visible. In my case, I wanted the rain to cover the entire screen, so I configured a rectangle spanning the entire screen.
  • CanvasItem → LightMask and Visibility Layer: I placed the particle system in its own layer. For this specific case, I think it has no effect, but I like to keep things tidy.
  • Ordering → Z Index: This parameter is important. Raindrops should not be obscured by anything, so their Z-Index must be higher than that of the other sprites. For this reason, I set it to 1,200.

As with the cherry blossom particle system, the ParticleProcessMaterial created for the Process Material parameter has its own settings:

  • Lifetime Randomness: Increasing this value expands the range of possible lifetimes. The value you set here is subtracted from 1.0 to calculate the lower bound of the range. In my case, I set it to 0.22, meaning the raindrops will have random lifetimes between 0.78 times the lifetime and 1.0 times the lifetime. With this parameter, we can make the raindrops cover the entire lake sprite. The lower range values represent drops that will reach the lake’s shore, while the upper range values represent drops that will live long enough to reach the bottom edge of the screen. All intermediate values will be distributed across the screen area occupied by the lake.
  • Disable Z: Since this is a 2D scene, it doesn’t make sense for the particles to move along the Z-axis.
  • Spawn → Position → Emission Shape: I chose Box because I want the particles to generate more or less simultaneously along the entire top edge of the screen. This aligns well with the volume of a long box parallel to the screen.
  • Spawn → Position → Emission Box Extents: This defines the shape of the box where particles will generate. In my case, it’s narrow (Y=1) but long enough to cover the top edge of the screen (X=700).
  • Accelerations → Gravity: Raindrops are generated in the sky, so they fall at high speed by the time they reach the ground. We can simulate this high velocity by setting gravity to 3,000 on the vertical axis (Y).
  • Display → Scale: It would be boring if all raindrops were the same size. Instead, I added variety by setting a minimum size of 0.2 and a maximum of 1.
  • Display → Scale Over Velocity Curve: The acceleration of the drops is more visually engaging if they stretch as they speed up. This can be achieved using this parameter by creating a curve resource and making it rise along the Y-axis as it approaches the maximum velocity. In my case, I set the drops to have an initial size of 15 when they start falling and 30 when they reach 25% of their maximum velocity.

Curve to scale particle size along the Y-axis as velocity increases.
Curve to scale particle size along the Y-axis as velocity increases.

  • Display → Color Curves → Color Initial Ramp: Another way to add variety to the drops is to give them slightly different colors. With this parameter, we can set a gradient so each drop adopts a color from it.
  • Display → Color Curves → Alpha Curve: Another curve, this time to vary opacity throughout the particle’s lifespan. I configured a bell curve so the particle is transparent at the beginning and end of its life but visible in the middle.
  • Collision → Mode: Defines what happens when a particle collides. If you’re using a LightOccluder2D to simulate impact zones, you’d likely want the drop to disappear on impact, so you’d set this to Hide On Contact. However, if you’re simulating other types of particles and want them to bounce, you’d use Rigid.
  • Sub Emitter → Mode: Defines when to activate the secondary particle system, the sub-emitter. In our case, this is RainSplash. I set this parameter to At End so the secondary particle system activates at the position where a particle extinguishes due to its lifespan ending. If we had simulated collisions, we’d set it to At Collision here.
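
Everything above is set in the inspector, but the same configuration can also be applied from code, which is handy if you spawn emitters at runtime. Here's a sketch of some of the Rain settings in C#; the property names follow Godot 4's C# API, the values are the ones discussed above, and the node path is an assumption based on the scene described earlier:

```csharp
public override void _Ready()
{
    var rain = GetNode<GpuParticles2D>("Rain");
    rain.Amount = 1000;
    rain.Lifetime = 0.7;
    rain.Preprocess = 0.5;   // Particles spawn as if already falling.
    rain.ZIndex = 1200;      // Draw raindrops above everything else.

    var material = (ParticleProcessMaterial)rain.ProcessMaterial;
    material.LifetimeRandomness = 0.22;   // Lifetimes between 0.78x and 1.0x.
    material.ParticleFlagDisableZ = true; // 2D scene: no Z-axis movement.
    material.EmissionShape = ParticleProcessMaterial.EmissionShapeEnum.Box;
    material.EmissionBoxExtents = new Vector3(700, 1, 0); // Top-edge strip.
    material.Gravity = new Vector3(0, 3000, 0);           // Fast fall.
    material.ScaleMin = 0.2f;
    material.ScaleMax = 1.0f;
    material.SubEmitterMode = ParticleProcessMaterial.SubEmitterModeEnum.AtEnd;
}
```
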

If we only wanted to simulate the drops, we could stop here. However, as I’ve mentioned throughout the article, we want to simulate the splashes produced when raindrops hit the lake water. To do this, we need to configure the RainSplash node, the particle system that activates whenever a drop extinguishes.

Just like with the Rain node, there are two levels of configuration: general node settings and specific Process Material settings.

At the general node level, I used the following parameters:

  • Emitting: I set this to false because the Rain node will handle its activation whenever one of its drops extinguishes.
  • Amount: I set this to 200. However, the effect is so subtle and quick that there aren’t significant differences between values for this parameter.
  • Time → Lifetime: The splash effect is so brief that I left this time at 0.2.
  • CanvasItem → LightMask and Visibility Layer: I kept these splashes in the same layer as the rain.
  • Ordering → Z Index: I placed it above the raindrops with a value of 1,300.

At the Process Material level, the settings I used were:

  • Lifetime Randomness: I set this to 1 for maximum randomness in the splashes' lifetimes.
  • Spawn → Velocity → Direction: To make the splashes rise before falling, I set an initial velocity vector of (0, -1, 0).
  • Spawn → Velocity → Spread: In my case, I noticed little difference when modifying this parameter, but in theory, it should randomize the initial velocity vector within an angular range.
  • Accelerations → Gravity: The splashes should fall quickly but not as fast as the raindrops. So, I set it to 500.
  • Display → Color Curves → Alpha Curve: I used a curve similar to the one for the Rain node.

And that’s it. With this configuration, I achieved a sufficiently convincing rain and splash effect. However, this is neither the only nor the best way to do it. All the parameters we’ve adjusted can be set to different values, and there are many others we haven’t even touched. I encourage you to experiment with other parameters and values to see their effects. I have no doubt that, with a bit of time, you’ll achieve effects better than mine.

Final Rain Effect
Final Rain Effect

06 December 2024

Course "Mastering Game Feel in Unity: Where Code Meets Fun!" by GameDev.tv

I'm still working through the courses from the latest Humble Bundle pack I purchased, featuring courses by GameDev.tv. This time, I tackled "Mastering Game Feel in Unity: Where Code Meets Fun!". It's an 8-hour course aimed at developers who have already completed introductory courses and want to dip their toes into more advanced topics. You can purchase it either on the GameDev.tv platform or on Udemy.

The author starts with a pre-made game based on what is taught in the beginner-level course. It’s a 2D shoot 'em up where you control your character with the keyboard and aim with the mouse. At this stage, the game is fully functional but flat and boring. The author's thesis is that by adding small, specific details, you can make the game incredibly engaging and appealing. Almost all these details revolve around the idea of providing feedback to the player, visually reinforcing their actions. If the player shoots, the camera should shake slightly; if they throw a grenade, the camera should shake more intensely. If the player moves, their character should sway in the direction of motion and kick up a slight dust trail. And so on, with a long list of examples, each one used to demonstrate a different Unity feature.

For camera shakes, the course briefly introduces Cinemachine impulses. Dust trails and explosions serve as an excuse to overcome the fear of particle systems. The main character is given the ability to fly with a jetpack, which serves as a way to configure a Trail Renderer. The course breathes life into the game environment using Unity's 2D lighting capabilities in the URP, and gets unexpected mileage out of HDR colors with a simple Bloom post-processing effect. Sound isn't overlooked either, providing an opportunity to use Unity’s AudioMixer for the first time (at least for me).

As a result, the course touches on a lot of features. All of them are presented through examples, none explored in great depth but with enough detail to get comfortable with Unity tools and mentally map out where they might be useful. When the time comes to use them, you'll have the chance to dive deeper.

I diverged quite a bit from the course at the coding level. I'm not saying the author writes poor code—far from it—but their organizational style didn't quite resonate with me. It's a matter of personal preference. For instance, they tend to use events to subscribe methods within the same class as the event, while also leaning heavily (in my opinion) on direct component references. I usually work the other way around: using direct method calls within the same class, references to components below the caller, and events to communicate upwards in the hierarchy or between components at the same level. I'm not claiming my way is better, but I feel more comfortable with it than the author’s approach. In the end, I implemented the code in my own way, and as I progressed through the course, my version of the code became increasingly different from the author’s. It’s not a big deal. I'm mentioning this because if you take the course and find the author's coding style doesn’t work for you, don’t be afraid to do things your way. You'll probably learn much more that way than by just copying the instructor’s code.

It’s also true that GameDev.tv courses give you room to do things your way. A hallmark of their approach is the frequent challenges they present. In each one, the instructor describes what they want you to achieve functionally and then asks you to pause the video and try to implement it yourself. When you resume, the instructor explains how they solved it. If you get stuck, their solution helps you out; and if you manage to solve it, the instructor’s approach might reveal new options you hadn’t considered. Either way, it’s a much more interesting, fun, and rewarding way to learn than just following the instructor’s steps.

The only thing to keep in mind is that with this method, it’s normal for your implementation to gradually diverge from the instructor’s. By the final chapters, it’s not uncommon to spend extra time adapting what the instructor does to your own code version. This means you often spend much more time than the course estimates per chapter, but I think the learning process is much more effective.

In conclusion, I don’t regret taking the course. I found it entertaining and interesting, and I finished it having learned a lot of Unity features I wasn’t aware of before. That's not a bad return for a single course.