Setup: I have the following function in Python, where x can get very large:
    import numpy as np

    def function(x, pi):
        d = len(pi)
        output = 0
        for r in range(d):
            output += pi[r] * np.exp(-x)
        return output

Input description: x can be very large, causing np.exp(-x) to evaluate to zero, which makes the entire function zero; pi is just a vector of probabilities (e.g., [0.5, 0.5]).
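The underflow itself is easy to reproduce (a quick check, assuming standard 64-bit NumPy floats):

    import numpy as np

    # the smallest positive double is about 5e-324, and exp(-800) is far
    # below that, so the result silently underflows to exactly 0.0
    print(np.exp(-800))  # 0.0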
Question: Is there a more stable way to implement this function so that the output doesn't underflow to zero? Thanks.
Edit: I have decided to give more details, since they were requested in the comments. The entire function is:
    def entire_function(x_array, pi, r):
        d = len(pi)
        numerator = np.exp(-x_array[r])
        denominator = 0
        for r_prime in range(d):
            denominator += pi[r_prime] * np.exp(-x_array[r_prime])
        return numerator / denominator

Even trying to use np.log doesn't really help. For example:
    a = np.array([np.exp(-900), np.exp(-800)])
    print(np.log(a[0] + a[1]))

This gives me -inf. The summation in the denominator is the nasty part that is giving me trouble, since it is preventing me from accessing the exponents (to make the computation more numerically stable). I guess this issue is similar to the logsumexp examples in machine learning, with the extra pi[r] factors in front.
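Since the denominator is exactly a weighted logsumexp, one option is scipy.special.logsumexp, whose b argument can carry the pi[r_prime] weights. A minimal sketch, assuming SciPy is acceptable here (the name entire_function_stable is just for illustration):

    import numpy as np
    from scipy.special import logsumexp

    def entire_function_stable(x_array, pi, r):
        # log(denominator) = log(sum_j pi[j] * exp(-x_array[j]));
        # the b argument supplies the pi[j] weights, and logsumexp
        # shifts by the largest exponent internally, so the sum itself
        # never underflows
        log_denominator = logsumexp(-np.asarray(x_array), b=np.asarray(pi))
        # form the ratio in log space and exponentiate once at the end
        return np.exp(-x_array[r] - log_denominator)

With x_array = np.array([900, 800]), pi = [0.5, 0.5], and r = 0, log_denominator comes out around -800.69 and the ratio around 7.4e-44, where the original entire_function returns 0/0 = nan. If even the ratio is too small to represent, the log-space value -x_array[r] - log_denominator can be kept and worked with directly instead of exponentiating.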
Comments:

- Your loop just computes np.sum(pi) * np.exp(-x), but that doesn't help if x is so big that the result is too small for floating point to handle. The smallest storable float is on the order of 1e-308: stackoverflow.com/questions/1835787/…. If you need to represent values smaller than that, you could think about storing the logarithm of the value rather than the value itself.

- There's no avoiding exp(-x) being too small for a Python float if x is large enough and positive, but you've got a few options: avoid having to evaluate exp(-x) by rearranging whatever overall math expression you are computing, transform the data you're working with somehow so that it lives in a different range, or increase precision by using something like mpmath...
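To illustrate the arbitrary-precision route from the last comment, here is a minimal sketch assuming the mpmath package is installed (entire_function_mp is a hypothetical name):

    from mpmath import mp, exp

    mp.dps = 50  # work with 50 significant decimal digits

    def entire_function_mp(x_array, pi, r):
        # mpmath floats have a very large exponent range, so exp(-900)
        # is a tiny but nonzero number instead of underflowing to 0.0
        denominator = sum(p * exp(-x) for p, x in zip(pi, x_array))
        return exp(-x_array[r]) / denominator

Calling entire_function_mp([900, 800], [0.5, 0.5], 0) gives roughly 7.4e-44, matching the log-space result above; the tradeoff is that every mpmath operation is much slower than native floats.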