
I am trying to implement normal mapping in my vertex shader (and later in my fragment shader, too). I added attributes for the needed tangents and bitangents (aNormalTangent, aNormalBiTangent), a uniform array for a maximum of 10 lights (uLightPos[10]), and an int uniform to globally enable/disable normal mapping.

When I compile the shader program, I get no errors. But when I query the attribute handles [with GL.GetAttribLocation(currentProgram, "aNormalTangent")], I always get -1 for the attributes aNormalTangent and aNormalBiTangent. Every other attribute location and every uniform works.

Where is the error in my code? I tried commenting out parts, but I could not find the source of the error.

Here is my vertex shader code:

#version 330

in vec3 aPosition;
in vec3 aColor;
in vec2 aTexture;
in vec3 aNormal;
in vec3 aNormalTangent;
in vec3 aNormalBiTangent;

out vec4 vPosition;
out vec4 vColor;
out vec2 vTexture;
out vec3 vNormal;

// for shadows:
out vec2 vTexCoordinate;
out vec4 vShadowCoord;

// for normal mapping:
out vec3 vLightPosTanSpace[10];

uniform mat4 uMVP;
uniform mat4 uM;
uniform mat4 uMV;
uniform mat4 uNormalMatrix;
uniform mat4 uShadowMVP;
uniform int uUseNormalMap;
uniform vec4 uLightPos[10];

void main()
{
    vShadowCoord = uShadowMVP * vec4(aPosition, 1.0);
    vPosition = uM * vec4(aPosition, 1.0);
    vColor = vec4(aColor, 1.0);
    vTexture = aTexture;
    vNormal = normalize(vec3(uNormalMatrix * vec4(aNormal, 0.0)));

    // Normal mapping calculations:
    if (uUseNormalMap > 0)
    {
        mat3 mv3x3 = mat3(uMV[0].xyz, uMV[1].xyz, uMV[2].xyz);
        vec3 vertexNormal_cameraspace = mv3x3 * normalize(aNormal);
        vec3 vertexTangent_cameraspace = mv3x3 * normalize(aNormalTangent);
        vec3 vertexBitangent_cameraspace = mv3x3 * normalize(aNormalBiTangent);
        mat3 TBN = transpose(mat3(vertexTangent_cameraspace,
                                  vertexBitangent_cameraspace,
                                  vertexNormal_cameraspace));
        vLightPosTanSpace[0] = TBN * vec3(uLightPos[0]);
        // --- 9 other array slots filled with vec3(0,0,0) for testing ---
    }
    else
    {
        vLightPosTanSpace = vec3[10](vec3(0,0,0), vec3(0,0,0), vec3(0,0,0),
                                     vec3(0,0,0), vec3(0,0,0), vec3(0,0,0),
                                     vec3(0,0,0), vec3(0,0,0), vec3(0,0,0),
                                     vec3(0,0,0));
    }

    gl_Position = uMVP * vec4(aPosition, 1.0);
}
  • Please edit your question to include the fragment shader code. A return value of -1 from glGetAttribLocation is usually an indication that the variable is not used by (i.e. does not affect the output of) the program and has been optimized out. Commented Jan 5, 2018 at 10:00
  • Damn, @G.M., you are my hero! The fragment shader had one misspelled variable. I thought that because the attributes are only used in the vertex shader, I would not need to look at the fragment shader. Commented Jan 5, 2018 at 10:05

1 Answer


Okay, even though I got downvotes (for whatever reason), I want to share my final solution with you:

I decided to convert the normal (sampled from the normal map) to world space. Because all my other calculations happen in world space, I think that is the easiest solution (for now). I know that converting everything to tangent space in the vertex shader would be cheaper per fragment, but right now I am not aiming for the best possible performance. I just want it to work; cleaning and optimization will follow.
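As a quick sanity check of the idea (a standalone Python sketch with hypothetical values, not shader code): GLSL's mat3(t, b, n) takes its three arguments as columns, so multiplying TBN by a tangent-space vector is just a change of basis into world space, and the "straight up" tangent-space vector (0, 0, 1) must land exactly on the geometric normal.

```python
# GLSL's mat3(t, b, n) uses t, b and n as COLUMNS, so
# TBN * v == v.x * t + v.y * b + v.z * n.

def tbn_transform(t, b, n, v):
    """Multiply the column matrix [t | b | n] by tangent-space vector v."""
    return [v[0] * t[i] + v[1] * b[i] + v[2] * n[i] for i in range(3)]

# Hypothetical world-space basis for one vertex (already orthonormal):
tangent   = [1.0, 0.0, 0.0]
bitangent = [0.0, 0.0, 1.0]
normal    = [0.0, 1.0, 0.0]

# A "flat" normal-map sample decodes to (0, 0, 1) and must map
# exactly onto the geometric normal after the TBN transform:
flat = tbn_transform(tangent, bitangent, normal, [0.0, 0.0, 1.0])
assert flat == normal
```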

So, vertex shader:

#version 330

in vec3 aPosition;
in vec3 aColor;
in vec2 aTexture;
in vec3 aNormal;
in vec3 aNormalTangent;
in vec3 aNormalBiTangent;

out vec4 vPosition;
out vec4 vColor;
out vec2 vTexture;
out vec3 vNormal;

// for shadows:
out vec2 vTexCoordinate;
out vec4 vShadowCoord;

// for normal mapping:
out mat3 TBN;

uniform mat4 uMVP;
uniform mat4 uM;
uniform mat4 uMV;
uniform mat4 uNormalMatrix; // inverse transpose of model matrix
uniform mat4 uShadowMVP;
uniform int uUseNormalMap;

void main()
{
    vShadowCoord = uShadowMVP * vec4(aPosition, 1.0);
    vPosition = uM * vec4(aPosition, 1.0);
    vColor = vec4(aColor, 1.0);
    vTexture = aTexture;
    vNormal = normalize(vec3(uNormalMatrix * vec4(aNormal, 0.0)));

    // Build the world-space TBN matrix (tangent, bitangent and normal
    // become the matrix columns):
    vec3 tangent = normalize(vec3(uM * vec4(aNormalTangent, 0.0)));
    vec3 biTangent = normalize(vec3(uM * vec4(aNormalBiTangent, 0.0)));
    vec3 normal = normalize(vec3(uM * vec4(aNormal, 0.0)));
    TBN = mat3(tangent, biTangent, normal);

    gl_Position = uMVP * vec4(aPosition, 1.0);
}

...and fragment shader:

// (Excerpt from main(); diffuseComponent, diffuseComponentTotal,
// ambient and outputColor are declared elsewhere in the shader.)

vec3 normal = vec3(0, 0, 0);
if (uUseNormalMap > 0)
{
    // Sample the normal from the normal map texture and remap it
    // from [0, 1] to [-1, 1]:
    normal = normalize(texture(uTextureNormalMap, vTexture).xyz * 2.0 - 1.0);
    // Convert the normal to world space by multiplying it with the
    // TBN matrix:
    normal = normalize(TBN * normal);
}
else
{
    // If no normal map is available, use the interpolated normal
    // from the vertex shader instead:
    normal = vNormal;
}

vec4 colorComponentTotal = vec4(0, 0, 0, 1);
for (int i = 0; i < 10; i++)
{
    if (uLightPos[i].w > -1) // is it a real light or just an empty dummy value?
    {
        vec3 lightPos = vec3(uLightPos[i]);
        vec4 lightColor = uLightColor[i];
        vec3 lightTargetPos = vec3(uLightTargetPos[i]);

        vec3 lightVector = lightPos - vec3(vPosition);
        float distance = length(lightVector);
        lightVector = normalize(lightVector);
        float dotProductNormalLight = max(dot(normal, lightVector), 0.0);

        if (uLightPos[i].w > 0) // is it a directional light?
        {
            // calculate diffuse component depending on the light's
            // distance, with fixed intensity (8.0):
            diffuseComponent = calculateDiffuseComponent(dotProductNormalLight, distance, 8.0);
            // calculate light cone for directional light:
            diffuseComponent = calculateFallOff(diffuseComponent, lightVector, normalize(lightTargetPos));
        }
        else
        {
            diffuseComponent = calculateDiffuseComponentPoint(dotProductNormalLight, distance, 2.0);
        }

        colorComponentTotal = mix(colorComponentTotal, lightColor, min(lightColor.w, diffuseComponent));
        diffuseComponentTotal += diffuseComponent;
    }
    else
    {
        break;
    }
}

outputColor = (diffuseComponentTotal * mix(colorComponentTotal, vColor, 0.1)
              + ambient * vec4(1, 1, 1, 1)) * texture(uTexture, vTexture);
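One detail in the fragment shader that is easy to get wrong is the `* 2.0 - 1.0` remap: normal-map texels store each component in [0, 1], so a neutral light-blue texel (0.5, 0.5, 1.0) must decode to the unperturbed +Z normal. A tiny standalone Python sketch (hypothetical values, not shader code) of that remap:

```python
# Normal maps store components in [0, 1]; the shader remaps them back
# to [-1, 1] with tex * 2.0 - 1.0 before normalizing.

def decode_normal(texel):
    return [c * 2.0 - 1.0 for c in texel]

# The typical light-blue "flat" texel decodes to the +Z normal:
assert decode_normal([0.5, 0.5, 1.0]) == [0.0, 0.0, 1.0]
# A component stored as 0.0 decodes to -1:
assert decode_normal([0.0, 0.5, 1.0]) == [-1.0, 0.0, 1.0]
```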

I hope this solution might help others. Cheers!


2 Comments

Looks pretty good! I was curious: a lot of shaders I have written use uNormalMatrix as you have done, but only against the normal, not the tangent/bitangent. While reviewing other people's approaches, I noticed many simply multiply uM against all three ( learnopengl.com/Advanced-Lighting/Normal-Mapping ). Another multiplies by uMV ( opengl-tutorial.org/intermediate-tutorials/… ). Just wondering if you know what the differences are about? Thanks!
@MikeWeir As far as I remember, the normal matrix is the inverse transpose of the model matrix, and it is used because multiplying by the model matrix alone distorts the normals under non-uniform scaling (they stop being perpendicular to the surface): lighthouse3d.com/tutorials/glsl-12-tutorial/the-normal-matrix You may have seen multiplications with uMV because some people prefer to do every calculation in view space rather than world space. If all your other calculations are in view space, then multiplying by uMV makes sense. I prefer world space because it is easier for me to reason about.
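The distortion under non-uniform scaling can be demonstrated in a few lines of plain Python (a standalone sketch, not shader code): transform a tangent and its normal by the model matrix and perpendicularity breaks; use the inverse transpose for the normal and it is preserved.

```python
# Why normals use the inverse transpose of the model matrix.

def transpose(m):
    return [list(row) for row in zip(*m)]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Model matrix: scale x by 2 (non-uniform). It is diagonal, so its
# inverse is just the reciprocals of the diagonal entries.
M     = [[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
M_inv = [[0.5, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
normal_matrix = transpose(M_inv)

tangent = [1.0,  1.0, 0.0]   # lies in the surface
normal  = [1.0, -1.0, 0.0]   # perpendicular to it: dot(tangent, normal) == 0

t_world = mat_vec(M, tangent)
# Transforming the normal by the model matrix breaks perpendicularity:
assert dot(t_world, mat_vec(M, normal)) != 0.0
# Transforming it by the inverse transpose preserves it:
assert dot(t_world, mat_vec(normal_matrix, normal)) == 0.0
```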
