Just a quick note: the method outlined below is used across APIs. There are "snorm" and "unorm" texture types used solely for encoding floating point numbers in the ranges [-1, 1] and [0, 1] using an 8-bit representation. Every API I have worked with has specific rules about those encodings, but what I outline below works for putting 8-bit unorm values into an RGBA 16-bit texture, and it is the general conversion process API specs like Vulkan's describe for storing values in these ranges. Also, most shader languages have intrinsics that reinterpret bits between int and float without a value conversion (floatBitsToUint/uintBitsToFloat in GLSL, asuint/asfloat in HLSL). This is the reason I suggest an unsigned texture type rather than a float type.
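For reference, this is the round trip those unorm rules boil down to; a minimal C++ sketch, assuming the input is already clamped to [0, 1] (the function names are mine, not from any API):

#include <cmath>
#include <cstdint>

// Unorm8 round trip: encode with round(x * 255), decode with u / 255.
// Assumes x is already in [0, 1].
uint8_t encodeUnorm8(float x) { return static_cast<uint8_t>(std::round(x * 255.0f)); }
float decodeUnorm8(uint8_t u) { return static_cast<float>(u) / 255.0f; }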
What API? Also, I recommend using a 16-bit unsigned texture type to avoid unwanted type conversions when reading the textures.
The basic idea is to multiply the floating point values by 255 (rounding to nearest), then shift and OR the two results into a single 16-bit value. That value goes into the alpha channel, i.e.
uint16_t packed = static_cast<uint16_t>(low * 255.0f + 0.5f) | (static_cast<uint16_t>(high * 255.0f + 0.5f) << 8);
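A slightly fuller sketch of that packing step, with clamping added in case the inputs stray outside [0, 1]; packUnorm2x8 is just a name I picked, not an API function:

#include <algorithm>
#include <cmath>
#include <cstdint>

// Pack two [0, 1] floats into one 16-bit value: low byte = first value, high byte = second.
uint16_t packUnorm2x8(float low, float high)
{
    auto toByte = [](float x) {
        x = std::clamp(x, 0.0f, 1.0f);                        // guard against out-of-range input
        return static_cast<uint16_t>(std::round(x * 255.0f)); // unorm8 encode
    };
    return static_cast<uint16_t>(toByte(low) | (toByte(high) << 8));
}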
Inside the shader, read the value back using a type that will not reinterpret the bits, such as an unsigned sampler (usampler2D in GLSL). Extract the two 8-bit values with masking and shifting, convert them to floats, and divide by 255 to get back to values between 0 and 1.
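The shader-side logic is the same bit manipulation; here it is sketched in C++ for clarity. In GLSL the packed value would come from a usampler2D texel read instead, but the mask/shift/divide steps are identical:

#include <cstdint>

// Unpack the 16-bit value back into two [0, 1] floats (mirrors the shader-side math).
void unpackUnorm2x8(uint16_t packed, float& low, float& high)
{
    uint8_t lowByte  = static_cast<uint8_t>(packed & 0xFFu);        // low 8 bits
    uint8_t highByte = static_cast<uint8_t>((packed >> 8) & 0xFFu); // high 8 bits
    low  = static_cast<float>(lowByte)  / 255.0f;  // unorm8 decode
    high = static_cast<float>(highByte) / 255.0f;
}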
HLSL has a 16-bit half type you might be able to use, but GLSL doesn't have universal support for 16-bit float types, so it is generally better to store them in 16-bit integer texture formats. The other channels can be handled in a similar way, as in the sketch below.
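To show "the other channels handled in a similar way": a sketch packing eight [0, 1] values into one RGBA 16-bit unsigned texel, two per channel, reusing the packUnorm2x8 helper from above. The struct is my own illustration; the texture format would be something like VK_FORMAT_R16G16B16A16_UINT in Vulkan:

#include <array>
#include <cstdint>

// One texel of an RGBA 16-bit unsigned integer texture.
struct Rgba16uiTexel { uint16_t r, g, b, a; };

// Pack eight [0, 1] values into one texel, two unorm8 values per 16-bit channel.
Rgba16uiTexel packTexel(const std::array<float, 8>& v)
{
    return { packUnorm2x8(v[0], v[1]),
             packUnorm2x8(v[2], v[3]),
             packUnorm2x8(v[4], v[5]),
             packUnorm2x8(v[6], v[7]) };
}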
Basically you want the raw value read from the alpha channel so that the bits are exactly what you put in the texture. Reading a 16-bit float into a 32-bit float goes through a value conversion, so the bit pattern you stored is not the bit pattern you get back.