Your "32-byte Keccak hash" doesn't look like a canonical, binary Keccak output; rather, it appears to be encoded with some Base64 variant—to me it looks like the MIME version. Your comments on the matter leave me unsure whether that is actually the case; you should check your language or library's documentation.
If it's Base64, then it appears to be encoding 18 bytes of raw binary output, which is 144 bits. That is an unusual output size. I would double-check how your software is generating that output, and verify that all of that Base64 string really reflects a 144-bit Keccak output, and that there aren't some constant bytes being prefixed or suffixed to it. Again, your description and comments are not helpful in this regard.
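You can check the raw length yourself by decoding the string and counting bytes. A minimal sketch (the `ENCODED` constant below is a hypothetical 18-byte placeholder; substitute your actual string):

```java
import java.util.Base64;

public class DecodeCheck {
    // Hypothetical stand-in for your string; replace with the real one.
    static final String ENCODED = "AAECAwQFBgcICQoLDA0ODxAR";

    public static void main(String[] args) {
        // The MIME decoder tolerates line breaks and ignores non-alphabet
        // characters; Base64.getDecoder() is the stricter basic variant.
        byte[] raw = Base64.getMimeDecoder().decode(ENCODED);
        System.out.println(raw.length + " bytes = " + (raw.length * 8) + " bits");
    }
}
```

If the decoded length is 18 bytes (144 bits) rather than 32 bytes (256 bits), something upstream is truncating or mangling the hash.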
Taking the modulo is fine in practice if you do it over a large enough input set. Appendix B of NIST SP 800-185 contains a recommendation that uses modulo to convert the output of an extendable output or variable length hash function into a near-uniform natural number $0 ≤ n < R$ for arbitrary $R$. The trick is that they make sure to do it over a hash output that is at least 128 bits longer than the number of bits needed to encode values in the output range:
- Compute $k = \lceil\log_2(R)\rceil + 128$, where $\lceil\log_2(R)\rceil$ is the number of whole bits needed to encode $R$ distinct values. For $R = 500$, this is $k = 9 + 128 = 137$.
- Hash your input to an output $Z$ with a length of at least $k$ bits. Since your outputs are at least 144 bits, they meet this requirement.
- Convert $Z$ to some "bigint" data type (arbitrary-precision integers; check library support in your language—e.g., in Java it's `BigInteger`). The bits_to_integer algorithm they detail in the document just parses $Z$ as a binary numeral with the least significant bits first, although that latter detail is not critical (e.g., Java's `BigInteger` parses them in the opposite order, and that's just fine).
- Take $N = \mathit{bits\_to\_integer}(Z) \bmod R$ as your result.
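The steps above can be sketched in Java. This is an illustration, not your exact setup: it uses the JDK's `SHA3-256` (available in `MessageDigest` since Java 9, 256 bits ≥ the 137 required) as a stand-in for whatever Keccak variant you actually use, and `hashToRange` is a name I made up:

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashToRange {
    /** Map input to a near-uniform integer in [0, r), per the Appendix B recipe. */
    static BigInteger hashToRange(byte[] input, BigInteger r)
            throws NoSuchAlgorithmException {
        // Hash the input; the digest Z must be at least ceil(log2(r)) + 128 bits.
        byte[] z = MessageDigest.getInstance("SHA3-256").digest(input);
        // BigInteger reads the bytes big-endian (most significant first);
        // bits_to_integer is little-endian, but as noted, either order is fine.
        return new BigInteger(1, z).mod(r);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        BigInteger n = hashToRange("example input".getBytes(), BigInteger.valueOf(500));
        System.out.println(n); // some value in [0, 500)
    }
}
```

Note the `1` sign argument to `BigInteger(int, byte[])`: it forces a non-negative interpretation, so a leading hash byte ≥ 0x80 doesn't produce a negative number.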
The trick that allows them to use modulo is that they're willing to accept a biased distribution if they can guarantee the bias is astronomically small. That's why they insist on taking a hash output 128 bits longer than the number of bits needed to encode $R$. Suppose we used $R = 500$ but $k = 9$ (instead of the recommended $k = 137$). The problem here is that since $2^k = 2^9 = 512$, when you take the modulo, 488 of the values have a probability of $1/512$ but 12 of them have a probability of $2/512$—they're twice as likely as the others.
Now suppose we take $k = 10$ instead, so $2^k = 1,024$. Now when you take modulo 500, 476 of the values in the range have a probability of $2/1,024$ while 24 of them have a probability of $3/1,024$—they're only 50% more likely than the rest. For $k = 11$, $2^k = 2,048$, so you get 452 values with probability $4/2,048$ and 48 with $5/2,048$ (only 25% more likely), and so on. By the time you get to $k = 137$ the bias is negligible; footnote 12 of the appendix gives:
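These counts fall out of simple division: of the $2^k$ possible hash values, $2^k \bmod R$ residues are hit one extra time. A quick sketch that reproduces the numbers above:

```java
public class BiasCount {
    public static void main(String[] args) {
        int r = 500;
        for (int k : new int[] {9, 10, 11}) {
            long space = 1L << k;     // 2^k possible hash values
            long q = space / r;       // hits for each "common" residue
            long extra = space % r;   // residues hit one extra time
            System.out.println("k=" + k + ": " + (r - extra) + " values with "
                + q + "/" + space + ", " + extra + " values with "
                + (q + 1) + "/" + space);
        }
    }
}
```

For $k = 9$ this prints 488 values at $1/512$ and 12 at $2/512$, matching the figures above; each extra bit of $k$ roughly halves the relative bias.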
$$ |\mathrm{Prob}(N=t) - 1/R| ≤ 2^{-w} $$
...where $w ≥ \lceil\log_2(R)\rceil + 128$.