
I have two 32-bit floats that I calculate in a pixel shader which represent a texture mask. What would be the best way to pack them into the alpha channel of a R16G16B16A16_FLOAT texture without losing too much precision?

I rarely do this type of bit manipulation, so I'm not sure how to work around the exponent and mantissa bits when doing the packing.

  • You want to pack two numbers into 16 bits? Commented Oct 18, 2024 at 5:03
  • Do the two numbers have a specific range? For example [0, 1] Commented Oct 18, 2024 at 7:56
  • @Thomas Yes, the range should be [0, 1] Commented Oct 18, 2024 at 17:20
  • @DmitriUrbanowicz Yes, I have two 32-bit floats that I would like to pack together into 16 bits while losing as little precision as possible; I accept the loss of precision for my use case Commented Oct 18, 2024 at 17:24
  • There's no universally best solution for this. Depending on what you do with those numbers, you may get the best result from a uniform fixed-point representation, a non-linear encoding, or even introducing exponent bits into your 8-bit representation. So what exactly are these numbers in your case? Commented Oct 20, 2024 at 7:14

1 Answer

Just a quick note: the method outlined below is used across APIs. There are "snorm" and "unorm" texture formats used solely for encoding floating-point values in the ranges [-1, 1] and [0, 1] with an 8-bit representation. Every API I have worked with has specific rules about those encodings, but what I outline below works for putting two 8-bit normalized values into an RGBA 16-bit texture, and it is the general conversion process that API specifications like Vulkan recommend for storing values in these ranges. Also, most shader languages, including GLSL and HLSL, have specific functions that reinterpret bits directly between int and float without any implicit type conversion. This is the reason I suggest an unsigned texture type rather than a float type.

What API? Also, I recommend using a 16-bit unsigned texture type to avoid unwanted type conversions when reading the texture.

The basic idea is to multiply each floating-point value by 255, then shift and OR the two results into a single 16-bit value. That value goes into the alpha channel, i.e.

uint16_t packed = static_cast<uint16_t>(low * 255.0f + 0.5f) | (static_cast<uint16_t>(high * 255.0f + 0.5f) << 8);

Inside the shader, read the value back using a type that will not reinterpret the bits, such as an unsigned sampler. Extract the two 8-bit values, convert them to float, and divide by 255 to get back values between 0 and 1.
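The pack/unpack round trip described above can be sketched in C++ (`pack` and `unpack` are hypothetical helper names; this assumes both inputs are already in [0, 1]):

```cpp
#include <cstdint>
#include <cmath>

// Pack two [0, 1] floats into one 16-bit value: low byte and high byte.
uint16_t pack(float low, float high) {
    uint16_t lo = static_cast<uint16_t>(std::round(low  * 255.0f));
    uint16_t hi = static_cast<uint16_t>(std::round(high * 255.0f));
    return static_cast<uint16_t>(lo | (hi << 8));
}

// Reverse the packing: extract each byte and rescale to [0, 1].
void unpack(uint16_t packed, float& low, float& high) {
    low  = static_cast<float>(packed & 0xFF) / 255.0f;
    high = static_cast<float>(packed >> 8)   / 255.0f;
}
```

The shader-side unpack is the same arithmetic, applied to the raw unsigned value read from the alpha channel. The maximum round-trip error per value is half a step, i.e. about 1/510.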

HLSL has a 16-bit half type you might be able to use, but GLSL doesn't have universal support for 16-bit float types, so it is generally better to store the values in 16-bit integer texture formats. The other channels can be handled in the same way.

Basically, you want the raw value read from the alpha channel, so that the bits are exactly what you put in the texture. Reading a 16-bit float into a 32-bit float goes through a conversion that reinterprets the bits.

  • The API is actually the Unreal Engine 5 RHI, so it will vary between DX12, DX11, and the various gen 9 console APIs. Unfortunately I can't change the texture format here; it really has to be R16G16B16A16_FLOAT Commented Oct 21, 2024 at 17:22
  • It is doable, but you basically have to hand-write a conversion function: float32 to uint (HLSL has a function for this), then a hand-written conversion from the 32-bit float representation to a 16-bit float representation, then finally extract the values. You might also consider going to a two-channel 32-bit format; under Unity I think it is called RGFloat or RGInt. That would give relatively easy access to all 5 values. Commented Oct 21, 2024 at 18:03
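The hand-written float32-to-float16 conversion mentioned in the comment above might look like the following C++ sketch (`f32_to_f16_bits` is a hypothetical name, analogous to HLSL's `f32tof16` intrinsic; this minimal version truncates the mantissa, flushes denormals to zero, and does not handle NaN inputs):

```cpp
#include <cstdint>
#include <cstring>

// Convert a 32-bit float's bit pattern to a 16-bit (IEEE half) bit pattern.
// float32: 1 sign bit, 8 exponent bits (bias 127), 23 mantissa bits.
// float16: 1 sign bit, 5 exponent bits (bias 15), 10 mantissa bits.
uint16_t f32_to_f16_bits(float value) {
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof(bits));           // type-pun safely

    uint16_t sign = static_cast<uint16_t>((bits >> 16) & 0x8000u);
    int32_t  exp  = static_cast<int32_t>((bits >> 23) & 0xFFu) - 127 + 15;
    uint32_t mant = bits & 0x7FFFFFu;

    if (exp >= 31) return sign | 0x7C00u;  // overflow -> infinity
    if (exp <= 0)  return sign;            // underflow -> flush to signed zero

    // Rebias the exponent and keep the top 10 mantissa bits (truncation).
    return sign
         | static_cast<uint16_t>(exp << 10)
         | static_cast<uint16_t>(mant >> 13);
}
```

The reverse direction (extracting the 16-bit pattern and widening it back to float32) follows the same field layout in the other direction.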
