Articles

Genericizing character functionality for fast iteration

I have always had a passion for competitive multiplayer games, especially ability-based ones such as MOBAs. I decided to try my hand at making one as a solo project in Unreal. However, I quickly ran into a roadblock: my blueprints were getting very messy, very fast. So I set about developing subsystems to abstract and genericize core functionality, keeping my project clean and easy to debug and iterate on.

The first thing I created was a CharacterStats object, which I could use both to abstract character data and to verify whether an object is a character. I attached the CharacterStats object to any actor that I wanted to classify as a character. It houses the functionality to initialize, “destroy” and update a character. I chose to manage characters this way for three reasons. First, I can make any actor in the world a “character” without having to write new code. Second, as mentioned earlier, I can now easily check whether an actor is a character by seeing if it has a CharacterStats object attached. This was especially useful for raycasting and collisions where I wanted to ignore characters or non-characters. Third, if I get to the stage of networking, I only need to worry about synchronizing the CharacterStats object, as it serves as an abstraction layer for the actual character actor.
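
To make the idea concrete, here is a minimal, engine-agnostic sketch of what I mean. The real project used an Unreal component and Blueprints, so the names below (Actor, CharacterStats, IsCharacter) are just illustrative stand-ins:

#include <memory>

//a character is simply "some actor that owns a CharacterStats object"
struct CharacterStats
{
    float maxHealth = 100.0f;
    float currentHealth = 100.0f;

    void Initialize() { currentHealth = maxHealth; }
    void Destroy() { currentHealth = 0.0f; }          //"destroy" the character, not the actor
    void Update(float deltaTime) { /* per-frame character logic goes here */ }
};

struct Actor
{
    //any actor becomes a "character" by being given a CharacterStats object
    std::unique_ptr<CharacterStats> stats;
};

//raycasts and collisions can filter characters with a single check
bool IsCharacter(const Actor& actor)
{
    return actor.stats != nullptr;
}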

After creating the CharacterStats object, I thought about how I wanted to handle effects with durations that could be applied to a character, such as crowd control (CC) or damage-over-time/healing-over-time effects. In my pre-implementation planning I identified some major problems that I had to solve. First, I didn’t want CC effects of the same type to stack on top of each other. My solution was quite simple: store all CC and over-time effects in two separate arrays. When a new effect is added to the CC array, parse the array for effects of the same type and keep only the one with the strongest effect. The second, similar, problem was characters being affected by multiple effects of the same name. My solution was much the same: I attached a name tag to each effect, and when an effect is added to an array that already contains an effect with the same name, I refresh the duration on the existing one. Because all of my CC/over-time effects were already stored in arrays, updating them was easy. I store the arrays in the actor’s CharacterStats object, then walk through them and update each effect in the CharacterStats update function. With this functionality in place, I now had a way to make new CC and over-time effects very quickly without duplicating code.
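
As a rough sketch of that add/refresh rule (the Effect struct and its field names here are hypothetical; in the real project this logic lives inside the CharacterStats object):

#include <string>
#include <vector>

enum class EffectType { Stun, Slow, Root };

struct Effect
{
    std::string name;       //used to refresh duplicates of the same effect
    EffectType  type;       //used to prevent stacking of the same CC type
    float       strength;   //e.g. slow percentage
    float       duration;   //seconds remaining
};

void AddCrowdControlEffect(std::vector<Effect>& ccEffects, const Effect& incoming)
{
    //invariant: the array never holds two effects of the same type
    for (Effect& existing : ccEffects)
    {
        //same effect re-applied: refresh the duration on the existing one
        if (existing.name == incoming.name)
        {
            existing.duration = incoming.duration;
            return;
        }

        //same CC type from a different source: keep only the strongest effect
        if (existing.type == incoming.type)
        {
            if (incoming.strength > existing.strength)
                existing = incoming;
            return;
        }
    }

    //nothing conflicts, so just add the new effect
    ccEffects.push_back(incoming);
}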

Writing your core character functionality in this way, despite the initial time investment, not only allows for fast iteration down the line, but also makes your code easier to read and debug. However, I found that the time investment is not always worth it. Since I was working on a short, solo project, I dumped so much time into genericizing my functionality that I ended up with very little content. That was fine since this was for academic purposes, but in a real-world scenario I’d consider prioritizing content a bit more.

Using Bilinear Interpolation and Nearest Neighbor techniques to scale your images

When you zoom into an image or change your window’s resolution, you are trying to redraw whatever images were in the window without knowing all of the pixel information. In order to accurately render the scaled images, you must use information that you do know, and predict the information you do not. Two techniques commonly used for this are Bilinear Interpolation and Nearest Neighbor.

When faced with implementing these techniques, I decided to approach the task in a way that was easy to visualize and debug. I started by making a simple Pixel class that holds my color information and makes image operations straightforward. It looks something like this:

class Pixel
{
public:
    Pixel() : r(0), g(0), b(0), a(255) {}

    Pixel& operator=(const Pixel& rhs)
    {
        r = rhs.r;
        g = rhs.g;
        b = rhs.b;
        a = rhs.a;
        return *this;
    }

    //index the channels (0 = r, 1 = g, 2 = b, 3 = a) so the resize
    //functions later in this post can loop over them
    unsigned char& operator[](int i)
    {
        unsigned char* channels[4] = { &r, &g, &b, &a };
        return *channels[i];
    }

    unsigned char operator[](int i) const
    {
        const unsigned char* channels[4] = { &r, &g, &b, &a };
        return *channels[i];
    }

    unsigned char r;
    unsigned char g;
    unsigned char b;
    unsigned char a;
};

You can add more functionality of course, but for the sake of this tutorial let’s keep it nice and simple.

Now that we have our Pixel class set up, store your loaded image in a 2D vector of Pixels. This will make it easy to visualize the algorithms I am covering today, as well as future image processing algorithms. And now, without further ado, let's get into the two resizing algorithms.
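
If your loader hands you a flat array of 8-bit RGBA bytes (a common case, though your image format may differ), converting it into the 2D layout looks something like this:

#include <vector>

std::vector<std::vector<Pixel>> ToImage(const unsigned char* rgbaData, int width, int height)
{
    std::vector<std::vector<Pixel>> image(height, std::vector<Pixel>(width));
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            //4 bytes per pixel: r, g, b, a
            const unsigned char* p = rgbaData + 4 * (y * width + x);
            image[y][x].r = p[0];
            image[y][x].g = p[1];
            image[y][x].b = p[2];
            image[y][x].a = p[3];
        }
    }
    return image;
}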

Nearest Neighbor:
Nearest Neighbor is a very simple algorithm. In order to get the missing pixel information, we simply round to get the closest corresponding pixel in the original image. Here are the variables we will need:
imageIn: the 2D vector of Pixels representing the original image
imageOut: the 2D vector of Pixels that will represent the resized image
width: will be the width of the resized image
height: will be the height of the resized image
xRatio: the ratio of the amount that the window has resized in the X direction
yRatio: the ratio of the amount that the window has resized in the Y direction

void NearestNeighbor(const std::vector<std::vector<Pixel>>& imageIn, std::vector<std::vector<Pixel>>& imageOut, int& width, int& height, double xRatio, double yRatio)
{
    //dimensions of the original image
    int originalHeight = (int)imageIn.size();
    int originalWidth = (int)imageIn[0].size();

    //calculate the new image size based on how the screen was resized
    width = (int)(originalWidth / xRatio);
    height = (int)(originalHeight / yRatio);

    imageOut.resize(height);
    for (int y = 0; y < height; ++y)
    {
        //round to the nearest row in the original image
        int yIndex = (int)(y * yRatio + 0.5);
        if (yIndex >= originalHeight)
            yIndex = originalHeight - 1;

        imageOut[y].resize(width);
        for (int x = 0; x < width; ++x)
        {
            //round to the nearest column in the original image
            int xIndex = (int)(x * xRatio + 0.5);
            if (xIndex >= originalWidth)
                xIndex = originalWidth - 1;

            imageOut[y][x] = imageIn[yIndex][xIndex];
        }
    }
}

Bilinear Interpolation:
Bilinear Interpolation, although more complicated, will often produce a more accurate result than Nearest Neighbor. It takes the information from the four pixels surrounding the position we are looking at and interpolates between them, giving us a good estimate of what that pixel should contain. We will use the same inputs as we did for Nearest Neighbor:
imageIn: the 2D vector of Pixels representing the original image
imageOut: the 2D vector of Pixels that will represent the resized image
width: will be the width of the resized image
height: will be the height of the resized image
xRatio: the ratio of the amount that the window has resized in the X direction
yRatio: the ratio of the amount that the window has resized in the Y direction

void BilinearInterpolation(const std::vector<std::vector<Pixel>>& imageIn, std::vector<std::vector<Pixel>>& imageOut, int& width, int& height, double xRatio, double yRatio)
{
    //dimensions of the original image
    int originalHeight = (int)imageIn.size();
    int originalWidth = (int)imageIn[0].size();

    //calculate the new image size based on how the screen was resized
    width = (int)(originalWidth / xRatio);
    height = (int)(originalHeight / yRatio);

    imageOut.resize(height);
    for (int y = 0; y < height; ++y)
    {
        //map the output row back into the original image and take the rows on either side
        float srcY = (float)(y * yRatio);
        int y1 = (int)srcY;
        int y2 = y1 + 1;

        //the interpolation value for the y direction (fractional part of the source coordinate)
        float beta = srcY - (float)y1;

        //clamp so we never sample outside the original image
        if (y1 >= originalHeight)
            y1 = originalHeight - 2;
        if (y2 >= originalHeight)
            y2 = originalHeight - 1;

        imageOut[y].resize(width);
        for (int x = 0; x < width; ++x)
        {
            //map the output column back into the original image and take the columns on either side
            float srcX = (float)(x * xRatio);
            int x1 = (int)srcX;
            int x2 = x1 + 1;

            //the interpolation value for the x direction (fractional part of the source coordinate)
            float alpha = srcX - (float)x1;

            //clamp so we never sample outside the original image
            if (x1 >= originalWidth)
                x1 = originalWidth - 2;
            if (x2 >= originalWidth)
                x2 = originalWidth - 1;

            Pixel pixel;
            Pixel fxy1, fxy2;

            //interpolate horizontally along the y1 and y2 rows
            //(only the RGB channels are interpolated; alpha keeps its default of 255)
            for (int i = 0; i < 3; ++i)
            {
                fxy1[i] = (unsigned char)((1.0f - alpha) * (imageIn[y1][x1])[i] + alpha * (imageIn[y1][x2])[i]);
                fxy2[i] = (unsigned char)((1.0f - alpha) * (imageIn[y2][x1])[i] + alpha * (imageIn[y2][x2])[i]);
            }

            //then interpolate vertically between the two intermediate results
            for (int i = 0; i < 3; ++i)
            {
                pixel[i] = (unsigned char)((1.0f - beta) * fxy1[i] + beta * fxy2[i]);
            }

            imageOut[y][x] = pixel;
        }
    }
}

And that's that! Now all you have to do is convert the 2D vector of Pixels into whatever format you will be passing to your graphics pipeline.
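
For example, if your pipeline expects a flat RGBA buffer, the conversion back is just the reverse of the load step. The FlattenImage helper below is only a sketch; the actual upload call depends on your graphics API:

std::vector<unsigned char> FlattenImage(const std::vector<std::vector<Pixel>>& image)
{
    std::vector<unsigned char> out;
    out.reserve(image.size() * image[0].size() * 4);
    for (const auto& row : image)
    {
        for (const Pixel& p : row)
        {
            out.push_back(p.r);
            out.push_back(p.g);
            out.push_back(p.b);
            out.push_back(p.a);
        }
    }
    return out;
}

//usage: resize first, then flatten for upload
//BilinearInterpolation(original, resized, width, height, xRatio, yRatio);
//std::vector<unsigned char> rgba = FlattenImage(resized);

Have fun!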

Using a custom memory manager to boost performance

Writing a custom game engine is a great way to have full control over whatever game you are making. However, one of the major drawbacks of using a custom engine is the potential drop in performance when compared to a robust commercial engine. Today I wanted to share with you my implementation of a custom memory manager that solved our FPS performance problems.

While working on Event Horizon, we noticed that our FPS would drop significantly when loading and unloading a level that contained a lot of background tiles. This is because we were relying on the operating system to allocate and deallocate memory for each game object. When transitioning between levels, each object had to be new’d and delete’d even though most of them were being reused. Because there was so much reuse going on, I decided that the best way to solve our problem was to use a custom memory manager rather than relying on the operating system. The actual implementation of a custom memory manager, especially for a smaller project, is rather simple. Essentially, the idea is that at load time, you allocate a large chunk of memory and then draw from it instead of calling new. In the case of Event Horizon, I allocated a different chunk for each type of thing I wanted to store (GameObjects, Textures, etc.). Since GameObjects are typically the most used object type in a game, I will be focusing on them.

Lists::Lists()
{
   //pre-allocate each pool up front so nothing needs to be new'd during gameplay;
   //the object pool itself is filled with inactive GameObjects at load time
   TextureList = new Texture*[MAX_TEXTURES];
   ObjectList.reserve(MAX_GAME_OBJECTS);
   MeshList.reserve(MAX_MESH);

   //200 is an arbitrary number of background tiles
   for(int i = 0; i < 200; ++i)
   {
      BackGround[i] = new GameObject();
   }
}

I added a bool to the GameObject class that I called mIsActive; it is set to true while the GameObject is in use and false when it is not. At load time, I allocated a very large std::vector of GameObjects and set each GameObject’s mIsActive flag to false. Whenever I wanted to make a new GameObject, all I had to do was traverse the vector until I found a GameObject whose mIsActive flag was set to false.

GameObject* Level::FindEmptyObject()
{
   for(int i = 0; i != gameLists->GetMaxGameObjects(); ++i)
   {
      //an inactive GameObject is free to be reused
      if(gameLists->GetObjectList()[i]->mIsActive == false)
         return gameLists->GetObjectList()[i];
   }

   //the pool has been exhausted
   return nullptr;
}
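
In practice, handing a pooled object out looks something like this (SpawnObject is an illustrative name, not the engine's actual API):

GameObject* Level::SpawnObject()
{
   GameObject* object = FindEmptyObject();
   if (object == nullptr)
      return nullptr;            //the pool is exhausted

   object->mIsActive = true;     //only active objects are drawn
   //...set position, texture, and any other data the object needs here
   return object;
}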

Then, I could take that GameObject, set its mIsActive flag to true, and update its other data as needed. Similarly, if I wanted to delete a GameObject, all I had to do was set its mIsActive flag to false, since the engine only draws GameObjects that are active. Clearing the lists at the end of each level was very simple as well. All I did was call the ClearObjects function, which goes through and sets the mIsActive flag to false on each object.


void Lists::ClearObjects()
{
   for(int i = 0; i < MAX_GAME_OBJECTS; ++i)
   {
      //Destroy() just resets the object and sets its mIsActive flag to false
      ObjectList[i]->Destroy();
   }

   CollisionList.clear();

   //the background tiles live in their own pool, so clear them separately
   for(int i = 0; i < 200; ++i)
   {
      BackGround[i]->Destroy();
   }
}

By doing this, I was able to force the engine to only use the operating system to manage memory at the very beginning and the very end of the game, effectively increasing our FPS by 20 to 40 frames, depending on the number of objects in the level.

Overall, if you notice that your game’s frame rate is suffering, implementing a custom memory manager can be an easy way to fix the problem. See you next time!