
I’m writing a terrain renderer with C++ and OpenGL using nested grids and heightmaps, and am having trouble with the higher-detail (closer) grids looking blocky/terraced.

Initially I thought the problem was with the 8-bit heightmaps I was using, but 16-bit ones produce the same result (I’m using l3dt, World Machine and Photoshop to generate different maps).

My code needs to stay abstracted from the engine pipeline, so the heightmap is applied to the grids using transform feedback in a vertex shader:

void main()
{
    float texOffset = 1.0 / mapWidthTexels, mapOffset = scale / mapWidthWorld; //Size of a texel in [0, 1] coordinates and size of a quad in world space
    vec2 texCoord = (vertPos.xz * scale + offset) / mapWidthWorld + 0.5; //Texture coordinate to sample heightmap at. vertPos is the input vertex, scale is pow(2, i) where i is the nested grid number, offset is eye position
    position = vertPos * scale;

    if(vertPos.y == 0.0) //Y coordinate of the input vertex is used as a flag to tell if the vertex is bordering between nested grids
        position.y = texture(heightmap, texCoord).r; //If it's not, just sample the heightmap
    else
    {
        //Otherwise get the two adjacent heights and average them
        vec2 side = vec2(0.0);
        if(abs(vertPos.x) < abs(vertPos.z))
            side.x = mapOffset;
        else
            side.y = mapOffset;
        float a = texture(heightmap, texCoord + side).r, b = texture(heightmap, texCoord - side).r;
        position.y = (a + b) * 0.5;
    }

    float mapF = mapWidthWorld * 0.5;
    position.xz = clamp(position.xz + offset, -mapF, mapF) - offset; //Vertices outside of the heightmap are clamped, creating degenerate triangles
    position.y *= heightMultiplier; //Y component so far is in the [0, 1] range, now multiply it to create the desired height

    //Calculate normal
    float leftHeight = texture(heightmap, texCoord + vec2(-texOffset, 0.0)).r * heightMultiplier, rightHeight = texture(heightmap, texCoord + vec2(texOffset, 0.0)).r * heightMultiplier;
    float downHeight = texture(heightmap, texCoord + vec2(0.0, -texOffset)).r * heightMultiplier, upHeight = texture(heightmap, texCoord + vec2(0.0, texOffset)).r * heightMultiplier;
    normal = normalize(vec3(leftHeight - rightHeight, 2.0, upHeight - downHeight));

    tex = vertTex; //Pass through texture coordinates
}
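The normal at the end of the shader comes from central differences on the four neighbouring texels. The same math as a standalone C++ sketch (the function name `normalFromHeights` is mine, not from the shader):

```cpp
#include <array>
#include <cmath>

// Central-difference normal, mirroring the shader's
// normalize(vec3(left - right, 2.0, up - down)).
// The constant 2.0 assumes a texel spacing of one world unit;
// scale it by the real spacing for correct slopes.
std::array<float, 3> normalFromHeights(float left, float right,
                                       float down, float up)
{
    float x = left - right;
    float y = 2.0f;
    float z = up - down;
    float len = std::sqrt(x * x + y * y + z * z);
    return { x / len, y / len, z / len };
}
```

On a flat patch (all four heights equal) this yields the straight-up normal (0, 1, 0), which is a quick sanity check for the shader version too.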

RAW 16-bit heightmaps are loaded as follows:

std::ifstream file(_path, std::ios::ate | std::ios::binary);
int size = file.tellg();
file.seekg(0, std::ios::beg);
m_heightmapWidth = sqrt(size / 2); //Assume 16-bit greyscale
unsigned short *data = new unsigned short[size / 2];
file.read(reinterpret_cast<char*>(data), size);

if (m_flip16bit) //Dirty endianness fix
{
    for (int i = 0; i < size / 2; i++)
        data[i] = (data[i] << 8) | ((data[i] >> 8) & 0xFF);
}

glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, m_heightmapWidth, m_heightmapWidth, 0, GL_RED, GL_UNSIGNED_SHORT, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
delete[] data;

Other formats are loaded similarly with stb_image.
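The "dirty endianness fix" loop above can be factored into a small helper; a minimal sketch (the name `swap16` is mine), with an explicit cast to avoid relying on truncation after integer promotion:

```cpp
#include <cstdint>

// Byte-swaps one 16-bit heightmap sample, equivalent to
// data[i] = (data[i] << 8) | ((data[i] >> 8) & 0xFF).
uint16_t swap16(uint16_t v)
{
    return static_cast<uint16_t>((v << 8) | (v >> 8));
}
```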

The resulting terrain looks like this:
https://imgur.com/a/d8tDPGO

As you can see, areas with little to no slope have this terraced appearance. What am I doing wrong?

2 Answers
  1. Chosen as BEST ANSWER

    Turns out l3dt's textures were the problem: parts that were meant to be underwater came out terraced. Also, if the height range used in l3dt doesn't match heightMultiplier in the shader, artefacts can arise from that mismatch.


  2. RAW 16-bit heightmaps are loaded as such:

    [...]
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, m_heightmapWidth, m_heightmapWidth, 0, GL_RED, GL_UNSIGNED_SHORT, data);
                                   ^^^^^^
    

    Nope. The internalFormat parameter controls the format the texture is stored in on the GPU, and GL_RED is just 8 bits per channel in any realistic scenario. You most likely want GL_R16 for a normalized 16-bit unsigned integer format.
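    To see why an 8-bit internal format produces terraces: a 16-bit height collapses to one of only 256 levels when stored that way. A rough sketch of the precision loss (plain C++, no GL, assuming round-to-nearest conversion):

    ```cpp
    #include <cstdint>
    #include <cmath>

    // Normalized value a 16-bit height effectively becomes after
    // being stored in an 8-bit-per-channel texture.
    float storedAs8Bit(uint16_t h)
    {
        float norm = h / 65535.0f;                  // value the shader wants
        return std::round(norm * 255.0f) / 255.0f;  // value it actually samples
    }
    ```

    Nearby 16-bit heights snap to the same 8-bit step, so with a large heightMultiplier each step becomes a visible terrace on gentle slopes; GL_R16 keeps the full 65536 levels.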
