When developing software, performance is one of the most important considerations, especially when targeting platforms like web and mobile.
Creating and destroying objects costs a lot of memory and processing power relative to our other game actions, but we can reduce the impact of instantiation in Unity by simply reusing objects.
In Unity, we can do this by Instantiating all of the objects first, then storing references to them.
We will explore this concept in ‘slashdot’, an open-source example game I created, which also contains the shaders from the last two posts.
We will begin by creating the class that will handle our pooled objects. When working with pooled GameObjects rather than simply instantiating and destroying them, we need to be careful about a few key concepts. First, we want to disable objects for later reuse instead of destroying them. You will rarely need to create and destroy components at initialization; the vast majority of components, or the GameObject itself, can simply be disabled and re-enabled.
public GameObject enemyPrefab;
public Queue<Enemy> PooledEnemies = new Queue<Enemy>();
public List<Enemy> TrackedActiveEnemies = new List<Enemy>();
Assign an enemy prefab through the Inspector. Next, we will create our pools.
Creating the Objects
Call the setup function from the class's Awake method to set up the pool.
void Awake()
{
    SetupPools();
}

void SetupPools()
{
    // Instantiate the whole pool up front and disable each object until needed.
    for (int i = 0; i < 100; i++)
    {
        var enemy = Instantiate(enemyPrefab, Vector3.zero, Quaternion.identity);
        PooledEnemies.Enqueue(enemy.GetComponent<Enemy>());
        enemy.SetActive(false);
    }
}
This will instantiate all of the objects up front and keep references to them for us.
Using the Objects
Now, when we want to use a GameObject, we can simply call a function on our pool instance that returns one for us to manipulate.
A super simple implementation might look something like the below.
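For instance, using the Queue declared earlier (a minimal sketch):

public GameObject GetEnemy()
{
    // Take the next available enemy from the front of the queue.
    if (PooledEnemies.Count == 0) return null; // pool exhausted
    return PooledEnemies.Dequeue().gameObject;
}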
That works if we only use the Queue type and plan for a single enemy type. However, we want multiple enemy types, so we can make our pooled enemies a List for more flexibility. An example implementation of this logic is an EnemyType enum that the GetEnemy function checks, like so.
public List<Enemy> PooledEnemies = new List<Enemy>();
public GameObject GetEnemy(Enemy.EnemyType enemyType)
{
    foreach (var enemy in PooledEnemies)
    {
        if (enemy.CurrentEnemyType == enemyType)
        {
            PooledEnemies.Remove(enemy);
            return enemy.gameObject;
        }
    }
    return null; // no pooled enemy of the requested type is available
}
Now we can simply use this as we would an instantiated object.
// Weighted pick: roughly one in three enemies is type 1.
int randomEnemyType = Random.Range(0, 3) == 0 ? 1 : 0;
var enemy = GetEnemy((Enemy.EnemyType)randomEnemyType);
enemy.transform.position = new Vector3(Random.Range(0, 100), Random.Range(0, 100), 0f);
enemy.SetActive(true);
var enemyComponent = enemy.GetComponent<Enemy>();
enemyComponent.Init();
TrackedActiveEnemies.Add(enemyComponent);
Returning the Object to the Pool
We can use a function like the one below to return a used object to the pool after we are done with it.
public void RemoveEnemy(Enemy enemy)
{
    enemy.gameObject.SetActive(false);
    TrackedActiveEnemies.Remove(enemy);
    PooledEnemies.Add(enemy);
}
Simply call RemoveEnemy() wherever needed.
Manager.Instance.RemoveEnemy(this);
Re-using Objects
Most of the quirks that you’ll encounter from pooling GameObjects like this stem from figuring out how to reset everything nicely. Unity doesn’t run most code on disabled objects; it’s usually preferable to reset things on Init to avoid unexpected behavior.
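As a sketch, a hypothetical Enemy.Init() might clear whatever state was left over from the object's previous use (the specific fields here are illustrative, not from the actual project):

public void Init()
{
    // Reset any state left over from this enemy's previous life.
    health = maxHealth;
    GetComponent<Collider2D>().enabled = true;
    GetComponent<SpriteRenderer>().enabled = true;
}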
I recently saw these UI effects in a game called Cult of the Lamb and they were very satisfying to watch. Let’s learn how to create our own effects like these.
We want to begin with a base shader to manipulate, so let’s start by displaying a sprite.
To set our texture, the shader must expose it to the editor. Add a line under our Properties block defining a main texture.
_MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
And the matching variables under the SubShader.
sampler2D _MainTex;
float4 _MainTex_ST;
The _ST value will contain the tiling and offset fields for the material texture properties. This information is passed into our shader in the format we specified.
We want to add some movement and distortion to our sprite. Begin with movement.
How can we manipulate our shader’s pixels? Let’s show an example by modifying our main texture. We’ll simply change the position: something as simple as shifting the texture coordinates down and to the left.
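In the fragment function, that can be as simple as offsetting the UV by time before sampling. A minimal sketch, assuming a standard vert/frag sprite shader with a v2f struct carrying the UV (the 0.1 scroll speed is an arbitrary value to tune):

fixed4 frag (v2f i) : SV_Target
{
    // Shift the sampling coordinate down and to the left over time.
    float2 uv = i.uv - _Time.y * 0.1;
    return tex2D(_MainTex, uv);
}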
If you examine your sprite at this point, you may notice some odd distortion as it moves.
Set your sprite’s import settings correctly! Mesh Type: Full Rect; Wrap Mode: Repeat.
Once you ensure your sprite has the correct import settings, it’s time to introduce the final 2D sprite we want to manipulate with the shader to achieve our effect.
This image will greatly change the shader appearance, and you should try different gradients and patterns. Here’s my image scaled up:
But I recommend using the smallest resolution that looks good for your project, for memory and performance reasons.
We also need a seamless noise texture, for the distortion.
Let’s add another variable for it.
_NoiseTex ("Base (RGB) Trans (A)", 2D) = "white" {}
Once we’ve assigned our noise texture, it’s time to start moving it.
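A sketch of how the moving noise can drive the distortion, assuming the same v2f setup as before (the scroll speed and the 0.05 distortion strength are arbitrary values to tune):

sampler2D _NoiseTex;

fixed4 frag (v2f i) : SV_Target
{
    // Scroll the noise texture over time.
    float2 noiseUV = i.uv + _Time.y * 0.1;
    float noise = tex2D(_NoiseTex, noiseUV).r;

    // Re-center the noise around zero and use it to offset the main texture lookup.
    float2 distortedUV = i.uv + (noise - 0.5) * 0.05;
    return tex2D(_MainTex, distortedUV);
}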
Shaders are a useful way to enhance the visual presentation of your project through subtle (or not so subtle) effects. Beyond code, the engine provides a built-in visual scripting tool for creating shaders, Shader Graph, from version 2019 onwards.
We will create an effect that allows us to highlight the player and obscure the rest of our stage. With scripting, we can also modify our exposed shader properties to adjust the intensity of the transparency effect, and transition to having no highlight. Examples will be shown later in the post.
Prerequisites
Ensure you have the Shader Graph package installed in your version of Unity. I am using 2022.3.17f for this post.
Creating the Shader
Right click in your Unity Project window and select Create > Shader Graph > Blank Shader Graph.
Now that we have a Shader Graph file, simply open the editor by double clicking it.
Let’s add some basic shader properties first. Navigate to the Graph Settings and add Built In as a target. We want the ability to control the transparency of our pixels, so also add the Alpha property to our fragment.
In order to properly utilize the Alpha property, we will need to set the Surface Type to Transparent in the Built In settings.
Shader Inputs
The first thing to consider is the Player’s world position. Since we want the highlight effect to follow the player, we’ll need some sort of input into the shader.
In the Shader Graph editor, ensure the ‘Blackboard’ option is checked and visible, then click the plus button on the left side of the editor to create an input variable. Make it a Vector3. The ‘Name’ is for display purposes, while the ‘Reference’ field is what scripts use to access the property. Give it a value like “_PlayerPosition” and drag it onto the stage.
Since that’s simply a vector, we need to translate it into something usable for our shader: subtracting the input player position from each pixel’s world position gives us the local area to affect.
Right click, and create a Position and Subtract node.
Connect the player position and world position node to the subtract node. At this point your graph should look similar to below.
Next we will need a Length node to translate our position into a distance.
At this point, if we connect the output of our length to our Base Color on our Fragment, we can see a strange divine light.
How can we control the actual effect size?
We need a multiply node and some additional input here to control the highlight amount.
Let’s create a new Multiply node, and a Float input.
Name the Float input something like _EffectStrength, and feed the length output into the new multiply node.
You should have something similar to this, and the shader will go black again. This is simply because we haven’t given it an effect strength yet.
Save this Shader Graph asset and assign it to an object in our scene if you haven’t already.
Notice the warning. This refers to the fact that we aren’t rendering a sprite. This is correct, and can be safely ignored.
Assuming a reference to the sprite renderer component, we can then use the material set property functions to pass along our game values in an Update function or whenever needed.
Set the effect to something visible like 1 for now. We can also set a default through the Shader Graph editor.
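A minimal sketch of that scripting, assuming a hypothetical component holding references to the renderer and the player's transform, and using the Reference names we set earlier:

public SpriteRenderer targetRenderer; // the renderer using our Shader Graph material
public Transform player;

void Update()
{
    // Push the player's world position and the effect strength into the shader.
    targetRenderer.material.SetVector("_PlayerPosition", player.position);
    targetRenderer.material.SetFloat("_EffectStrength", 1f);
}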
All of this grey is pretty boring, so let’s add some color. The ability to edit our colors through scripting is pretty important, so let’s create two new Color variables.
The shader will lerp between these two colors for the highlight effect. We could use only one color considering our goal of mixing the effect with transparency, but the additional color gives more control over the effect appearance.
Create a Lerp node. Connect the output of the previous multiply node to the lerp T input, and the two new colors to the A and B inputs, respectively.
I set BGColor to blue, and PlayerRevealColor to red through the graph inspector to clearly show the shader effect.
If all goes well, you should have a circular gradient in the input colors you’ve specified.
And something like this in your Shader Graph.
That gradient isn’t really the look we want. Instead, we want a tight circular highlight around the player position.
To achieve this, we can add a Step node.
Insert it between the multiply and lerp node at the end, and it will produce a gated circular output.
Adjusting the EffectStrength should make the circle appear larger. Try values from 0 to 1; values above 1 will make the highlight smaller.
Now we just need to connect our transparency logic.
Add another Multiply node that we will use for the Alpha property on the Fragment. Its input should be the output of our earlier multiply node, taken before the Step node; the second input controls the strength of the highlight fade. I went with 1.5.
You’re pretty much finished!
We can adjust the colors to create screen wave effects like this, which could be enhanced with particle effects.
Or as a game over effect where you hide the rest of the stage and highlight the player. I added a purple background sprite behind the player to show the masking effect.
Force fields, lights for dark mazes etc all follow a similar concept.
There are plenty of capable JavaScript game engines available nowadays, but you can often get good performance from writing your own simple engine or renderer, depending on your use case. The code for this project is on my GitHub, linked below.
What goes into writing a game engine?
Ideally, we want to handle a few important things: rendering, input, and state, whether that be the states of individual objects (alive, dead, moving, the type of enemy) or the overall game state.
We approach this task with an object-oriented mindset instead of a functional programming mindset. Although there are a few global variables such as the overall running game state or the object pool arrays, most of the memory or information we need to remember occurs on a per-object basis.
We will be using a ‘Canvas‘ to draw our simple asteroid graphics. Writing a 3D renderer in JS is a much more complex task, although libraries like three.js exist to get you started.
To begin with, we want to define a Vector2D class that we can reuse throughout our game. I’m familiar with Unity so I imagine an implementation similar to their engine’s GameObject setup, but any class that can read / write an X and Y will work.
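A minimal sketch of such a class, whose method names mirror how the vectors are used later in this post (Vec2D.create, the X/Y accessors, getLength, setLength, setAngle, mul):

function Vec2D(x, y) {
    this.x = x;
    this.y = y;
}

// Factory used throughout the code.
Vec2D.create = function (x, y) {
    return new Vec2D(x, y);
};

Vec2D.prototype.getX = function () { return this.x; };
Vec2D.prototype.getY = function () { return this.y; };
Vec2D.prototype.setX = function (x) { this.x = x; };
Vec2D.prototype.setY = function (y) { this.y = y; };

// Magnitude of the vector.
Vec2D.prototype.getLength = function () {
    return Math.sqrt(this.x * this.x + this.y * this.y);
};

// Replace the length while keeping the direction.
Vec2D.prototype.setLength = function (len) {
    var a = Math.atan2(this.y, this.x);
    this.x = Math.cos(a) * len;
    this.y = Math.sin(a) * len;
};

// Replace the direction while keeping the length.
Vec2D.prototype.setAngle = function (a) {
    var len = this.getLength();
    this.x = Math.cos(a) * len;
    this.y = Math.sin(a) * len;
};

// Scale the vector in place.
Vec2D.prototype.mul = function (s) {
    this.x *= s;
    this.y *= s;
};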
This will let us reference positions more easily. Our renderer needs a few vital capabilities: it must be able to draw an object to the canvas at a specified position, and to clear the canvas in preparation for the next frame the game renders.
To draw a line, we can write JavaScript such as:
var c = document.getElementById("canvas");
var ctx = c.getContext("2d");
ctx.moveTo(0, 0);
ctx.lineTo(200, 100);
ctx.stroke();
And if we wanted to clear our canvas, we can use clearRect:
ctx.clearRect(0, 0, canvas.width, canvas.height);
We can define a render function to handle our different objects.
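For the player's ship, a render method might translate and rotate the canvas context, then stroke a triangle. A sketch (the Ship class and its fields are illustrative):

Ship.prototype.render = function (ctx) {
    ctx.save();
    // Move the origin to the ship's position and rotate to its heading.
    ctx.translate(this.pos.getX(), this.pos.getY());
    ctx.rotate(this.angle);
    // Draw a simple triangle pointing along the ship's angle.
    ctx.beginPath();
    ctx.moveTo(10, 0);
    ctx.lineTo(-7, -6);
    ctx.lineTo(-7, 6);
    ctx.closePath();
    ctx.stroke();
    ctx.restore();
};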
We are handling rendering and state management from inside an object now. All that just for a triangle.
We aren’t done yet. Next we need to handle Input. The goal with creating object classes is reusability and extensibility. We don’t need to spawn multiple instances of an input, so we can handle that globally. Your Input function may look something like this:
window.onkeydown = function (e) {
    switch (e.keyCode) {
        // key A or LEFT
        case 65:
        case 37:
            keyLeft = true;
            break;
        // key W or UP
        case 87:
        case 38:
            keyUp = true;
            break;
        // key D or RIGHT
        case 68:
        case 39:
            keyRight = true;
            break;
        // key S or DOWN
        case 83:
        case 40:
            keyDown = true;
            break;
        // key Space or K
        case 32:
        case 75:
            keySpace = true;
            break;
        // key Shift
        case 16:
            keyShift = true;
            break;
    }
    e.preventDefault();
};
window.onkeyup = function (e) {
    switch (e.keyCode) {
        // key A or LEFT
        case 65:
        case 37:
            keyLeft = false;
            break;
        // key W or UP
        case 87:
        case 38:
            keyUp = false;
            break;
        // key D or RIGHT
        case 68:
        case 39:
            keyRight = false;
            break;
        // key S or DOWN
        case 83:
        case 40:
            keyDown = false;
            break;
        // key Space or K
        case 75:
        case 32:
            keySpace = false;
            break;
        // key Shift
        case 16:
            keyShift = false;
            break;
    }
    e.preventDefault();
};
e.preventDefault() stops users from accidentally hitting keys such as Ctrl + L and losing focus from the window, or scrolling the page with Space, for instance.
function updateShip() {
    ship.update();
    if (ship.hasDied) return;
    if (keySpace) ship.shoot();
    // Shift acts as a rotation speed modifier.
    if (keyLeft && keyShift) ship.angle -= 0.1;
    else if (keyLeft) ship.angle -= 0.05;
    if (keyRight && keyShift) ship.angle += 0.1;
    else if (keyRight) ship.angle += 0.05;
    if (keyUp) {
        ship.thrust.setLength(0.1);
        ship.thrust.setAngle(ship.angle);
    } else {
        ship.vel.mul(0.94);
        ship.thrust.setLength(0);
    }
    // Wrap the ship around the screen edges.
    if (ship.pos.getX() > screenWidth) ship.pos.setX(0);
    else if (ship.pos.getX() < 0) ship.pos.setX(screenWidth);
    if (ship.pos.getY() > screenHeight) ship.pos.setY(0);
    else if (ship.pos.getY() < 0) ship.pos.setY(screenHeight);
}
...etc
function checkDistanceCollision(obj1, obj2) {
    // Circle collision: compare the distance between centers to the combined radii.
    var vx = obj1.pos.getX() - obj2.pos.getX();
    var vy = obj1.pos.getY() - obj2.pos.getY();
    var vec = Vec2D.create(vx, vy);
    return vec.getLength() < obj1.radius + obj2.radius;
}
...etc
Once we have the ability to render a reusable object to a canvas and read / write a position that can be checked, we use that as a template to create other objects (particles, asteroids, other ships).
You can make interesting graphics with just basic shapes. We handle collision by assigning each object either an xWidth and yWidth plus an xOffset and yOffset for box checks, or a radius for circle checks. Again, these values are stored on the object itself.
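A box check under the same scheme might look like this (a sketch; the field names mirror the ones just described):

function checkBoxCollision(obj1, obj2) {
    // Axis-aligned overlap test using each object's offsets and widths.
    var left1 = obj1.pos.getX() + obj1.xOffset;
    var top1 = obj1.pos.getY() + obj1.yOffset;
    var left2 = obj2.pos.getX() + obj2.xOffset;
    var top2 = obj2.pos.getY() + obj2.yOffset;
    return left1 < left2 + obj2.xWidth && left2 < left1 + obj1.xWidth &&
           top1 < top2 + obj2.yWidth && top2 < top1 + obj1.yWidth;
}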
Further Techniques
Since we control the rendering manually, we can leave an ‘afterimage’ on our canvas by painting a translucent rectangle over it before rendering the next frame, as opposed to clearing it entirely.
// Get the canvas element and its 2D rendering context
const canvas = document.getElementById('myCanvas');
const ctx = canvas.getContext('2d');

// Set the initial alpha value
let alpha = 0.1; // You can adjust this value to control the fading speed

// Function to create the afterimage effect
function createAfterimage() {
    // Fill the whole canvas with a semi-transparent color instead of clearing it,
    // fading the previous frame rather than erasing it
    ctx.fillStyle = `rgba(255, 255, 255, ${alpha})`;
    ctx.fillRect(0, 0, canvas.width, canvas.height);

    // Decrease alpha for the next frame
    alpha *= 0.9; // You can adjust this multiplier for a different fade rate

    // Request the next animation frame
    requestAnimationFrame(createAfterimage);
}

// Call the function to start creating the afterimage effect
createAfterimage();
And simple localStorage calls can be used to save scores.
function checkLocalScores() {
    if (localStorage.getItem("rocks") != null) {
        visualRocks = localStorage.getItem("rocks");
    }
    if (localStorage.getItem("deaths") != null) {
        visualDeaths = localStorage.getItem("deaths");
    }
    if (localStorage.getItem("enemyShips") != null) {
        visualEnemyShips = localStorage.getItem("enemyShips");
    }
    updateVisualStats();
}

function saveLocalScores() {
    localStorage.setItem("rocks", visualRocks);
    localStorage.setItem("deaths", visualDeaths);
    localStorage.setItem("enemyShips", visualEnemyShips);
}
Researchers have recently released a new paper and accompanying model, “Simple and Controllable Music Generation”, which they highlight “is comprised of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models”. In practice, this means music generation can now be completed in fewer steps, and is getting more efficient as progress is made across different types of models.
I expect AI to hit every industry at an increasingly rapid pace as more research becomes available and progress leapfrogs from one model to the next. MUSICGEN was trained on about 20K hours of licensed music, and the results are impressive.
Here are some interesting generations I thought sounded nice. As more models trained on massive datasets reach the public, we will see more community efforts and models as well, just like with art.
Medium Model
I used the less performant medium model (1.5B parameters, approx. 3.7 GB) to demonstrate that even on relatively modest hardware you can achieve reasonable results. Here is some lofi generated by the medium model.
Large Model
A step up is the 6.5 GB large model. This produces slightly better-sounding results.
What is that melody?
There is also a ‘Melody’ model that is a refined 1.5B parameter version.
Limitations
There are a few limitations to this model, namely the lack of vocals. From the model card:
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
However, future models and efforts should remedy these points. It’s only a matter of time before a trained vocal model is released, given how fast machine learning advancements are accelerating.
Starbound has been one of my favorite games of all time, so I’m happy to say that I have the latest Starbound source code, last commit August 7th, 2019. I will not be explaining how I got these files. It is the actual source, not just a decompilation, and as such includes build scripts, unused stuff, old migration code, comments, a stored test player, etc.
Source Screenshots
The source has minimal comments, and the structure is reasonable. I found the code easy to read and understand, but perhaps that’s because I’ve been modding Starbound for years now and am familiar with its behavior.
Languages Breakdown (GitHub)
StarEnvironmentPainter.cpp
StarMixer.cpp (audio related)
StarTools.cpp
Building
And of course, we can build it. I compiled this version without the Steam API or the Discord rich presence API, but those are easily included.
Funny Developer Comments
Here’s a look at some of the best (in my opinion) developer comments in the source. This is not intended as mockery; far from it, I’m ecstatic that I can take a peek into the minds of the developers. Enjoy.
Example: Simple Re-implementation of Vapor Trail and Sitting Toolbar Usage
At some point during development, Chucklefish had the idea to add a vapor trail when the player was falling fast. I could’ve sworn I saw a post on their news about it back when the game was in beta, but I can’t find it now. Anyway, we can add a small snippet to restore it, as an example of further engine work Starbound can benefit from.
Allowing us to use our inventory while sitting down
Further Thoughts
Future work on the engine can lead to further modding capabilities and engine optimizations. There are many potential client side performance improvements that could be made without touching any network code. This would maintain compatibility with the vanilla client. The netcode could be updated as well, but this would break compatibility once major changes were made. If both (or more) parties are willing to use a modified client, any theoretical modification could be made. The possibilities are endless.
As of 2024, there now exists a few Starbound open source community projects with the aim of enhancing the base game’s experience. : )
AI will improve developer efficiency, not replace developers.
One of the most significant use cases I’ve found for AI in my development work is automating repetitive tasks, such as declaring a group of similarly named variables. I was recently creating a ‘Human’ class and needed variables for all the body parts. Copilot picked up the pattern almost immediately after a couple of lines, and the whole class was done in mere seconds instead of a few minutes. This adds up, and means I can focus on other creative tasks, such as developing new features, sketching new UI ideas, or focusing on user feedback. The result is increased productivity and faster software development.
I imagine a future where I can describe the architecture of my Android app in as much detail as possible, then go in and manually clean up the resulting code to match a specific vision. Developers will be fast-tracked into a more active management role.