I have a model with two between-participants predictors -- one continuous (a), and one categorical with two levels (b) -- and two within-participants predictors, both categorical with two levels (x and z). All of my categorical predictors have been dummy-coded (i.e. contrasts set to 0 and 1).
My regression model shows a significant interaction between x and z:
    Call:
    lm(formula = y ~ a + b + x * z, data = df,
       contrasts = list(b = "contr.treatment", x = "contr.treatment",
                        z = "contr.treatment"))

                Estimate Std. Error t value Pr(>|t|)
    (Intercept)   3.9392     0.1538   25.62  < 2e-16 ***
    a            -0.2155     0.0821   -2.63   0.0091 **
    bY           -0.3525     0.1409   -2.50   0.0129 *
    xY            1.3770     0.1952    7.06  1.3e-11 ***
    zY            1.1740     0.1958    6.00  6.2e-09 ***
    xY:zY        -0.5754     0.2755   -2.09   0.0376 *

A quick plot of the data appears to show that there is a larger effect of z when x is not present, but that presence of z still contributes significantly even when x is present.
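If I'm reading the dummy coding correctly, these coefficients imply the following simple effects of z (the second line is my own arithmetic, so it may be off):

    simple effect of z when x is absent (reference level):  1.174
    simple effect of z when x is present (xY = 1):          1.174 + (-0.575) = 0.599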

My question is: how can I statistically test whether the above is true (or not)?
I've read other answers that involve changing the reference level when treatment coding. I've tried this method (a sketch of what I mean is below), but everything comes out as significant and I'm unsure how to interpret that. My intuition says I probably can't do a t-test, even though it's an interaction of purely categorical predictors.
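For concreteness, my releveling attempt was along these lines (the level name "Y" is a guess based on the coefficient labels in the output above):

    # refit with "Y" as the reference level of x, so the z dummy's coefficient
    # becomes the simple effect of z when x is present
    df$x <- relevel(factor(df$x), ref = "Y")
    fit2 <- lm(y ~ a + b + x * z, data = df)
    summary(fit2)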
What is the recommended method to interpret interactions arising from a multiple regression analysis?