Project Categories
Project Setting | Academic
Team Size | 1
Role(s) | Creator / Developer
Languages | C++, HLSL
Software/Tools Used | DirectX 11 (framework), Visual Studio (code)
Status | Complete
Time Period | Jan 2022 - Apr 2022
About
This project was an ongoing series of assignments for a class called "Foundations of Game Graphics Programming" during my last semester at RIT. We started with a very barebones DX11 starter template that handled the core windowing and input so we could focus on the actual graphics programming; out of the box, all it could do was read input and draw a single RGB triangle to the screen.
We started by implementing a mesh class, followed by vertex shading (with vertex colors), transforms, and eventually cameras. After that came importing 3D models and defining a material class to handle our own pixel shaders, lighting, textures, normal maps, cube maps, and finally physically-based rendering materials.
It was incredibly satisfying to see this project come to life over time, especially since each assignment was divided in a way that not only made sense but kept each step quick to implement (usually between 30 minutes and at most 2 hours, including any problems or cleanup I handled along the way). It was also fun to hear that the professor wanted to use my randomized pixel shader as an example for future assignments, and that he planned changes to the starter similar to ones I had already made myself (like making texture loading and path retrieval available outside DXCore). I also implemented some optional features beyond the requirements, such as reflection maps, a dynamic light count, emissive maps and emission colors, and making various texture maps optional in the standard shader.
For the final project, we had a choice of either making a small game-like experience or adding some sort of advanced feature to the project. I went the advanced-feature route and, because of how much I enjoyed working with the project, chose a couple of options: adding transparency and creating a toon shader. For transparency, I supported both alpha cutoff and alpha blending. For the toon shader, I studied the professor's demo as well as existing Unity toon shaders (most implemented in a mix of Cg and HLSL), creating my own that additionally supported outlines on top of emissive, albedo, and specular maps. In theory it supports alpha, but it had an issue where the rim light would apply to the entire interior (backfaces) of an object. It also has a branching problem (the shader takes different paths for different pixels). I decided these were minor issues in context, as I had to focus on other projects at the end of the semester.
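The two transparency modes can be sketched on the CPU side. This is an illustrative model only, with names and a 0.5 cutoff of my own choosing; in the actual project the cutoff happens in the pixel shader (via `clip()`) and the blending in the output-merger blend state.

```cpp
#include <array>
#include <cstddef>

// Alpha cutoff: a pixel is either fully kept or fully discarded.
// In HLSL this is clip(alpha - cutoff), which kills the pixel when negative.
bool passesAlphaCutoff(float alpha, float cutoff = 0.5f)
{
    return alpha >= cutoff;
}

// Alpha blending: the standard "over" blend the output merger performs,
// src * a + dst * (1 - a) per channel.
std::array<float, 3> alphaBlend(const std::array<float, 3>& src,
                                const std::array<float, 3>& dst,
                                float alpha)
{
    std::array<float, 3> out{};
    for (std::size_t i = 0; i < 3; i++)
        out[i] = src[i] * alpha + dst[i] * (1.0f - alpha);
    return out;
}
```

Cutoff is cheap and order-independent but gives hard edges; blending gives smooth transparency but requires drawing transparent objects back-to-front.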
Samples
These are some samples from the project.
Screenshots

Screenshot showing PBR, Standard, and Toon shaders implemented in the project
Code
Below is how lighting is handled in the toon shader. Lighting is simplified to getting the light direction and, where applicable, attenuation, then ramping the calculated diffuse and specular values using a ramp map (essentially just 2-4 shades of gray). If there is no ramp map, the ramping is done manually, which branches at a pixel level, so ramp maps are preferred. The shader also currently has a problem where the rim and outline checks are both branching paths at a pixel level.
// calculate lighting
float3 light = ambient * surface;
for (int i = 0; i < lightCount; i++)
{
    float3 toLight = float3(0, 0, 0);
    float attenuate = 1;
    switch (lights[i].Type)
    {
        case LIGHT_TYPE_DIRECTIONAL:
            toLight = normalize(lights[i].Direction);
            break;
        case LIGHT_TYPE_POINT:
            toLight = normalize(lights[i].Position - input.worldPosition);
            attenuate = getAttenuation(lights[i].Position, input.worldPosition, lights[i].Range);
            break;
    }

    // applies the step-like effect of toon shading to the diffuse/specular of the lighting
    float diffuse = 0;
    float specular = 0;
    if (hasRampDiffuse > 0)
        diffuse = RampDiffuse.Sample(ClampSampler, float2(getDiffuse(normal, toLight), 0)).r;
    else
        diffuse = GetRampDiffuse(getDiffuse(normal, toLight));
    if (hasRampSpecular > 0)
        specular = RampSpecular.Sample(ClampSampler, float2(calculateSpecular(normal, toLight, view, specularValue, diffuse) * roughness, 0)).r;
    else
        specular = GetRampSpecular(calculateSpecular(normal, toLight, view, specularValue, diffuse) * roughness);

    light += (diffuse * surface.rgb + specular) * attenuate * lights[i].Intensity * lights[i].Color;
}

// get emission; use emissive map if there is one
float3 emit = float3(1, 1, 1);
if (hasEmissiveMap > 0)
    emit = Emissive.Sample(BasicSampler, input.uv).rgb;

// calculate rim/outline value (i.e. whether there is any at this pixel)
float vDotN = (1 - dot(view, input.normal));
float rimValue = GetRampSpecular(vDotN * pow(light, rimCutoff));
float outlineValue = GetRampSpecular(vDotN * outlineThickness);

// return rim lighting if there is any; takes priority over outline
if (rimValue > 0)
    return float4(light + (emit * emitAmount) + rimTint, alphaValue);

// return outline if there is any
if (outlineValue > 0)
    return float4(outlineTint, alphaValue);
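The manual fallback mentioned above (ramping without a ramp texture) can be modeled in plain C++. The band thresholds and values here are stand-ins of my own, not the project's actual `GetRampDiffuse` constants; the point is the per-pixel branching that a ramp texture lookup replaces with a single sample.

```cpp
// Hypothetical model of ramping diffuse without a ramp map: quantize the
// lit value (n . l) into a few flat bands. Each comparison is a branch the
// GPU has to evaluate per pixel, which is why sampling a small grayscale
// ramp texture at coordinate (nDotL, 0) is the preferred path.
float rampDiffuseFallback(float nDotL)
{
    if (nDotL > 0.8f)  return 1.0f; // fully lit band
    if (nDotL > 0.3f)  return 0.6f; // mid-tone band
    if (nDotL > 0.05f) return 0.3f; // shadow band
    return 0.0f;                    // unlit
}
```

A 4-pixel ramp texture encoding the same four bands produces identical results with no branching, and artists can repaint it without touching shader code.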
Below is how I handle texture loading for materials. The CreateWICTextureFromFile call and the texture pushing (which simply stores the SRV in an `std::unordered_map` keyed by the `_type` name) were demonstrated in class, but I made a few changes to keep the project cleaner. First, texture loading was originally handled in the main `Game.cpp` class, because `DXCore::GetFullPathTo_Wide` was a private, non-static method; I made it a public static method so it could be used anywhere in the project, such as the `Material` class. Second, I made several `constexpr` `TEXTYPE_` string constants that match their `Texture2D` register names in the shaders. When calling `LoadTexture`, one of these is passed as the `_type` argument, and it is used both when saving the texture and when passing it to the shader during the draw phase. Finally, alongside these definitions, I set booleans marking which texture maps the material has, so a single shader can support missing maps without needing a whole family of shaders. Ideally I would create shader variants for optimization purposes, but that was out of scope for this project, and something like that could be automated (e.g. Unity's shader variants at build time).
/// <summary>
/// Loads and adds a texture to the material
/// </summary>
/// <param name="_path">The path of the texture relative to the root where the executable is located</param>
/// <param name="_type">The type of texture this is (see TEXTYPE_{types}; should match shader Texture2D buffers)</param>
/// <param name="_device">The DirectX device</param>
/// <param name="_context">The DirectX context</param>
void Material::LoadTexture(const wchar_t* _path, const char* _type, ID3D11Device* _device, ID3D11DeviceContext* _context)
{
    Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> shaderResourceView;
    DirectX::CreateWICTextureFromFile(_device, _context, DXCore::GetFullPathTo_Wide(_path).c_str(), nullptr, shaderResourceView.GetAddressOf());
    PushTexture(_type, shaderResourceView);

    // compare by contents (strcmp) rather than by pointer (==), so matching
    // works even when an equal string comes from another translation unit
    if (strcmp(_type, TEXTYPE_ALBEDO) == 0) hasAlbedoMap = true;
    else if (strcmp(_type, TEXTYPE_EMISSIVE) == 0) hasEmissiveMap = true;
    else if (strcmp(_type, TEXTYPE_SPECULAR) == 0) hasSpecularMap = true;
    else if (strcmp(_type, TEXTYPE_NORMAL) == 0) hasNormalMap = true;
    else if (strcmp(_type, TEXTYPE_REFLECTION) == 0) hasReflectionMap = true;
    else if (strcmp(_type, TEXTYPE_RAMPDIFFUSE) == 0) hasRampDiffuse = true;
    else if (strcmp(_type, TEXTYPE_RAMPSPECULAR) == 0) hasRampSpecular = true;
}
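As a rough usage sketch, the `TEXTYPE_` constants and the type matching might look like the following. The string values and helper name here are hypothetical, not the project's exact identifiers; what matters is that each string matches the `Texture2D` register name in the shader, so the same constant serves as both the map key and the binding name at draw time.

```cpp
#include <cstring>

// Hypothetical TEXTYPE_ constants; each value should match the shader's
// Texture2D register name so the SRV can be bound by the same string.
constexpr const char* TEXTYPE_ALBEDO = "Albedo";
constexpr const char* TEXTYPE_NORMAL = "Normal";

// Comparing with strcmp (rather than ==) matches by contents, so the check
// still works if two translation units hold different string addresses.
bool isTextureType(const char* type, const char* expected)
{
    return std::strcmp(type, expected) == 0;
}
```

A call site would then read something like `material->LoadTexture(L"Assets/Textures/rock_albedo.png", TEXTYPE_ALBEDO, device, context);` (the path is a made-up example), which both stores the SRV and flips `hasAlbedoMap` in one step.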
Places
GitHub Repo | Repository