Introductory remarks
First, the high degree of the monomial leads to a very small integral. This tricks Mathematica's heuristic into thinking the integral might be zero and that the small value is a by-product of numerical error. The integrand is essentially a polynomial, so the warning message is not particularly worrisome.
Second, Boole gets special treatment by NIntegrate and in this case is better to use than HeavisideTheta. So this definition would work faster:
f2[x1_, x2_, x3_] := x1^23 x2^45 x3^123 Boole[x1 >= 1/20 && x2 >= 1/20]
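As a quick check, one can time the two forms side by side. Here f is assumed to be the OP's HeavisideTheta-based definition (it is not shown in this answer); timings vary by machine, so no results are given:

```mathematica
(* assumed OP definition, for comparison with the Boole version f2 *)
f[x1_, x2_, x3_] := x1^23 x2^45 x3^123 HeavisideTheta[x1 - 1/20] HeavisideTheta[x2 - 1/20]

NIntegrate[f[r1, r2, r3], {r1, 0, 1}, {r2, 0, 1 - r1}, {r3, 0, 1 - r1 - r2}] // AbsoluteTiming
NIntegrate[f2[r1, r2, r3], {r1, 0, 1}, {r2, 0, 1 - r1}, {r3, 0, 1 - r1 - r2}] // AbsoluteTiming
```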
Basically, the Boole/HeavisideTheta factors simply restrict the domain. It seems better to just calculate the actual domain and omit those factors. Reduce can do this. Use strict inequalities to avoid cases for the bounding surfaces of the region.
Reduce[
 And @@ Cases[f[r1, r2, r3], HeavisideTheta[e_] :> e > 0] &&
  0 < r1 < 1 && 0 < r2 < 1 - r1 && 0 < r3 < 1 - r1 - r2,
 {r1, r2, r3} (* order according to the reverse order in N/Integrate *)
 ]
(* 1/20 < r1 < 19/20 && 1/20 < r2 < 1 - r1 && 0 < r3 < 1 - r1 - r2 *)
We can convert this expression to an integration domain with this:
domain = Sequence @@ Cases[
   List @@ Reduce[
     And @@ Cases[ff @@ vars[3], HeavisideTheta[e_] :> e > 0] &&
      0 < r[1] < 1 && 0 < r[2] < 1 - r[1] && 0 < r[3] < 1 - r[1] - r[2],
     Reverse@vars[3]], (* pick order according to decreasing degree *)
   h_[a_, ___, b : Alternatives @@ vars[3], ___, c_] :> {b, a, c}
   ]
(* Sequence[{r3, 0, 9/10}, {r2, 1/20, 1/20 (19 - 20 r3)}, {r1, 1/20, 1 - r2 - r3}] *)
Over this region, we can integrate using the integrand
f0[x1_, x2_, x3_] := x1^23 x2^45 x3^123
Next, high-dimensional integrals take a long time. If there are 100 sample points in each direction, a three-dimensional integral will require 1,000,000 function evaluations. One might try to deal with this by demanding less accuracy, but there are limits. For a ten-dimensional integral, sampling on a 10-point tensor grid leads to $10^{10}$ function evaluations.
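One can count how many evaluations NIntegrate actually performs with the EvaluationMonitor option. A small sketch, assuming f0 and domain as defined in this answer:

```mathematica
count = 0;
NIntegrate[f0[r1, r2, r3], Evaluate@domain, EvaluationMonitor :> count++];
count (* number of sample points used *)
```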
With the integrand and integration domain in this shape, there are two approaches to consider, symbolic integration and numerical integration.
(Some timings are reported via the timeAvg function; code at the end.)
Symbolic integration
The straightforward way takes a while:
exact = Integrate[f0[r1, r2, r3], Evaluate@domain]; // AbsoluteTiming // First
N[exact, 30]
(* 39.832165 *)
(* 2.82584140053428314442718045642*10^-78 *)
Iterating single integrals is faster:
Fold[Integrate, f0[r1, r2, r3], Reverse@{domain}] // AbsoluteTiming // First
(* 4.913279 *)
Note: The order of the variables in domain matters. If instead of Reverse @ vars[3] in the Reduce command that computes domain above, we were to put simply vars[3], then the iterated Integrate takes 10+ seconds.
Numerical integration
The default method of NIntegrate is not that bad, with a precision of just under 9 digits or about half of MachinePrecision:
res = NIntegrate[f0[r1, r2, r3], Evaluate@domain]; // AbsoluteTiming // First
(res - exact)/exact
NIntegrate::slwcon: ...
(* 1.409120 2.7603*10^-9 *)
There are two rules worth considering because they can be exact on polynomials, "GaussKronrodRule" and "ClenshawCurtisRule". (See tutorial/NIntegrateIntegrationRules.) There is also the "LobattoKronrodRule", but I'll leave that aside; it is similar in performance to "GaussKronrodRule" but not exact. Each rule has a "Points" suboption that controls its order, and setting it to an appropriate value can speed up convergence.

By default, NIntegrate uses the "CartesianRule" on multidimensional integrals; one can specify an integration rule for each dimension, and for each of these a value of "Points" appropriate to the degree of the integrand in that variable. To get an exact result, use a setting of roughly 1/3 the degree for Gauss-Kronrod and 1/2 the degree for Clenshaw-Curtis; in practice, one should set them lower for the sake of speed. Gauss-Kronrod works faster with the default "GlobalAdaptive" strategy, and Clenshaw-Curtis works faster with the "LocalAdaptive" strategy. (See tutorial/NIntegrateIntegrationStrategies.)
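To see the exactness on polynomials concretely, here is a sketch that uses the rule data directly (assuming, as described later in this answer, that NIntegrate`GaussKronrodRuleData returns abscissas and weights for the interval $[0,1]$). A rule of "Points" -> 5 is exact on monomials up to roughly degree 16 (about 3 times the points setting), so it should reproduce $\int_0^1 x^{14}\,dx = 1/15$ up to round-off:

```mathematica
(* abscissas, weights, and error weights on [0, 1] *)
{absc, weights, errweights} = NIntegrate`GaussKronrodRuleData[5, MachinePrecision];
(#^14 & /@ absc).weights - 1/15 (* should be ~ 0, up to round-off *)
```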
Gauss-Kronrod:
Grid@Prepend[
  Table[
   Flatten@{pts,
     res = NIntegrate @@ {f0[r1, r2, r3], domain,
         Method -> {"GaussKronrodRule", "Points" -> pts}}; // timeAvg,
     res, (res - exact)/exact},
   {pts, 5, 10}],
  {"Points", "Timing", "Integral", "% Error"}]

Clenshaw-Curtis:
Grid@Prepend[
  Table[
   Flatten@{pts,
     res = NIntegrate @@ {f0 @@ vars[3], domain,
         Method -> {"LocalAdaptive",
           Method -> {"ClenshawCurtisRule", "Points" -> pts}}}; // timeAvg,
     res, (res - exact)/exact},
   {pts, 5, 35, 5}],
  {"Points", "Timing", "Integral", "% Error"}]

We can see that at lower precision ("Points" -> 15), Clenshaw-Curtis is faster than Gauss-Kronrod (see the next example), which might make an acceptable estimate in some cases; at higher precisions, Gauss-Kronrod is faster.
Gauss-Kronrod with "LocalAdaptive" varying "Points"
We get an improvement by increasing the points for the higher-degree variables and using the "LocalAdaptive" strategy. It seems that for both the Gauss-Kronrod and Clenshaw-Curtis rules, "LocalAdaptive" results in much less recursive subdivision. This makes it faster, but you also cannot be as certain of the error. (PrecisionGoal seems to make no difference.)
Grid@Prepend[
  Table[
   Flatten@{StandardForm@{16 + pts, 8 + pts, pts},
     res = NIntegrate @@ {f0[r1, r2, r3], domain,
         Method -> {"LocalAdaptive",
           Method -> {"CartesianRule",
             Method -> {{"GaussKronrodRule", "Points" -> 18 + pts},
               {"GaussKronrodRule", "Points" -> 6 + pts},
               {"GaussKronrodRule", "Points" -> pts}}}}}; // timeAvg,
     res, (res - exact)/exact},
   {pts, 5, 10}],
  {"Points", "Timing", "Integral", "% Error"}]

Manual numerical integration
One can get a bit more speed by constructing an integrator specifically for this problem. The problem is amenable to this because the Gauss-Kronrod rule is a good way to integrate monomials, and NIntegrate will supply all the data you need to do it. From the tutorial tutorial/NIntegrateIntegrationRules, you can see how to perform the integration step. After determining how many "Points" (npoints) you want for each dimension and what precision, then
{absc, weights, errweights} = NIntegrate`GaussKronrodRuleData[npoints, precision]
returns the abscissas, the weights, and the error weights for the integral over the interval $[0,1]$. (For an integral over another interval, one can perform a substitution to transform the integral into this form.) Then the integral can be computed as
integral = (f /@ absc) . weights
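For instance, to integrate $x^{23}$ over $[0,1]$, a setting of 8 points suffices for exactness (roughly a third of the degree, as noted above). This sketch compares against the exact value $1/24$:

```mathematica
{absc, weights, errweights} = NIntegrate`GaussKronrodRuleData[8, MachinePrecision];
(#^23 & /@ absc).weights - 1/24 (* should be ~ 0, up to round-off *)
```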
For a multidimensional integral, one does a similar thing. Transform the integral to one over $[0,1]^n$. Then form Cartesian products of the abscissas and of the weights. One can use a different npoints for each dimension. Next, evaluate the function at the abscissas, multiply by the corresponding weights, and total the results.
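A two-dimensional sketch of this tensor-product construction, for the monomial $x^3 y^5$ over the unit square (exact value $1/24$), with the same rule-data convention as above:

```mathematica
{a, w} = Most@NIntegrate`GaussKronrodRuleData[2, MachinePrecision];
sample = Tuples[{a, a}];         (* Cartesian product of abscissas *)
wts = Times @@@ Tuples[{w, w}];  (* products of the corresponding weights *)
(#[[1]]^3 #[[2]]^5 & /@ sample).wts - 1/24 (* ~ 0, up to round-off *)
```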
The function intrule[degrees, parameters, opts] (code below) does that. The argument degrees is a list of the degrees of the variables of the monomial, and parameters are the $p_i$ in the OP's question. It does no recursive subdivision or error tracking. For WorkingPrecision -> MachinePrecision, it compiles the transformed integrand and uses packed arrays for speed. (It will also work efficiently with arbitrary-precision numbers.) It computes the number of "Points" needed for an exact integral. The option "Scale" multiplies these numbers to yield a faster but less accurate result. The setting MinPoints -> Automatic prevents the smallest value of "Points" from being scaled -- this greatly improves the accuracy on the OP's example, at least. (When the number of "Points" is small, the error is sensitive to scaling.)
Grid@Prepend[
  Table[
   {sc,
    res = intrule[{23, 45, 123}, {1/20, 1/20, 0},
        Method -> "GaussKronrodRule", Scale -> sc,
        MinPoints -> Automatic,
        WorkingPrecision -> MachinePrecision]; // timeAvg,
    res, (res - exact)/exact},
   {sc, 0.5, 1., 0.1}],
  {"Scale", "Timing", "Integral", "% Error"}]

For "Scale" -> 1 there should be no truncation error; the error you see is round-off error. For scales above about 0.8, the truncation error on this example is less than MachinePrecision.
Code dump
timeAvg:
SetAttributes[timeAvg, HoldFirst]
timeAvg[func_] := Do[
   If[# > 0.3, Return[#/5^i]] & @@ AbsoluteTiming@Do[func, {5^i}],
   {i, 0, 15}];
intrule:
Please forgive me for not having time to explain all the code. It kept growing because the NIntegrate framework made it easy to keep extending features. A generic description is given above.
ClearAll[intrule];
ClearAll["intrule`*"];
Options[intrule] = {WorkingPrecision -> 20, Scale -> 1,
   MinPoints -> 2, Method -> "GaussKronrodRule"};

intrule`points[deg_, "GaussKronrodRule"] := deg/3;
intrule`points[deg_, "LobattoKronrodRule"] := 1 + deg/3;
intrule`points[deg_, "ClenshawCurtisRule"] := (1 + deg)/2;
intrule`points[deg_List, scale_, min_, method_] :=
  With[{points0 = intrule`points[#, method] & /@ deg},
   Ceiling[Max[scale #, min /. Automatic -> Min[points0, 2]]] & /@ points0];

intrule`ruledata["GaussKronrodRule"] := NIntegrate`GaussKronrodRuleData;
intrule`ruledata["LobattoKronrodRule"] := NIntegrate`LobattoKronrodRuleData;
intrule`ruledata["ClenshawCurtisRule"] := NIntegrate`ClenshawCurtisRuleData;

intrule[deg_?(VectorQ[#, IntegerQ] &), param_?(VectorQ[#, NumericQ] &),
    OptionsPattern[]] /; Length[deg] == Length[param] :=
  Module[{integrand, integrandC, domain, dim, vars0, order, map,
    points, absc, weights, errweights, sample, w, vals},
   order = Ordering[deg, All, Greater];
   vars0 = Array[K, dim = Length[deg]][[order]];
   domain = Cases[
     List @@ Reduce[
       And @@ Thread[vars0 - param > 0] &&
        And @@ FoldList[
          0 < #2 < Last[#] - First[Cases[#, Alternatives @@ vars0]] &,
          0 < First[vars0] < 1, Rest[vars0]],
       vars0[[order]]],
     h_[a_, ___, b : Alternatives @@ vars0, ___, c_] :> {b, a, c}];
   map = (* transforms the OP's function to a function over the unit cube *)
    Simplify[
     vars0 //. Thread[vars0 ->
        Table[domain[[order[[i]], {2, 3}]].{1 - t, t} /. t -> Slot[i],
         {i, Length[deg]}]]];
   integrand = Evaluate[
      Inner[Power, map, deg, Times] Simplify@Det@D[map, {Array[Slot, dim]}]] &;
   integrandC = If[OptionValue[WorkingPrecision] === MachinePrecision,
     (* compile if MachinePrecision *)
     Compile @@ {vars0, integrand @@ vars0,
       RuntimeAttributes -> {Listable}, Parallelization -> True},
     integrand];
   points = With[{points0 =
       intrule`points[Exponent[integrand @@ vars0, #],
          OptionValue[Method]] & /@ vars0},
     Ceiling[Max[OptionValue[Scale] #,
         OptionValue[MinPoints] /. Automatic -> Min[points0], 2]] & /@ points0];
   Do[{absc[n], weights[n]} =
     Developer`ToPackedArray /@ (* a quick no-op if not MachinePrecision *)
      Most@intrule`ruledata[OptionValue[Method]][n, OptionValue[WorkingPrecision]],
    {n, points}];
   sample = Transpose@Tuples[Table[absc[n], {n, points}]];
   w = Times @@ Transpose@Tuples[Table[weights[n], {n, points}]];
   vals = integrandC @@ sample;
   vals.w]