gl_Position is a homogeneous coordinate. Homogeneous coordinates are needed for perspective projection.
Note that if a vector vec4(x, y, z, 1.0) is multiplied by a perspective projection matrix, the result is a homogeneous coordinate whose w component is in general no longer 1.0.
The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport. It transforms from view space to clip space. The coordinates in clip space are transformed to normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing by the w component of the clip coordinates.
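As an illustration, a minimal vertex shader sketch could produce the clip space coordinate like this (the uniform names u_projection, u_view, u_model and the attribute a_position are assumptions, not names from the question):

```glsl
#version 330 core

layout (location = 0) in vec3 a_position;   // vertex position in model space

uniform mat4 u_model;        // model space -> world space
uniform mat4 u_view;         // world space -> view (eye) space
uniform mat4 u_projection;   // view space  -> clip space

void main()
{
    // gl_Position receives the clip space coordinate; the fixed-function
    // pipeline performs the perspective divide (division by gl_Position.w)
    // afterwards to get normalized device coordinates.
    gl_Position = u_projection * u_view * u_model * vec4(a_position, 1.0);
}
```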
With perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen by a pinhole camera, to 2D points on the viewport.
The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
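For reference, a gluPerspective-style projection matrix can be built like the following sketch (the function and parameter names are my own; note that GLSL mat4 constructors take their elements in column-major order):

```glsl
// Sketch of a gluPerspective-style perspective projection matrix.
// fovy   - vertical field of view in radians
// aspect - viewport width / height
// near   - distance to the near plane (> 0)
// far    - distance to the far plane  (> near)
mat4 perspective(float fovy, float aspect, float near, float far)
{
    float f = 1.0 / tan(fovy / 2.0);
    return mat4(
        f / aspect, 0.0, 0.0,                              0.0,
        0.0,        f,   0.0,                              0.0,
        0.0,        0.0, (far + near) / (near - far),     -1.0,
        0.0,        0.0, 2.0 * far * near / (near - far),  0.0);
}
```

Multiplying a view space point by such a matrix sets the clip space w component to the negated eye space z, which is what makes the subsequent perspective divide shrink distant geometry.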

This means the position in normalized device space is calculated like this:
```glsl
vec3 ndc = gl_Position.xyz / gl_Position.w;
```
If you manually set the w component of gl_Position, this scales the position in normalized device space by the reciprocal of w.
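For example, in a sketch like the following (with no matrix transformation at all, and a hypothetical attribute name in_position), the w component you write directly rescales the result of the perspective divide:

```glsl
#version 330 core

layout (location = 0) in vec3 in_position;

void main()
{
    // No matrix transformation (a hypothetical case for illustration).
    // With w = 2.0 the fixed-function perspective divide yields
    // ndc = in_position / 2.0, i.e. the geometry appears at half scale.
    gl_Position = vec4(in_position, 2.0);
}
```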