The presumption is that the raw data the OP has is normally distributed and that there are no outliers. It is very possible that the high value in the original dataset, approximately 589933, is an outlier. Let's create a Quantile-Quantile (Q-Q) plot of a randomly generated dataset:
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm

n = 100
np.random.seed(0)
df = pd.DataFrame({"price": np.random.normal(25000, 3000, n)})
qqplt = sm.qqplot(df["price"], line='s', fit=True)
plt.show()
```

However, we can completely skew this with a single outlier.
```python
outlier = 600000
df.loc[n] = outlier
qqplt = sm.qqplot(df["price"], line='s', fit=True)
plt.show()
```

Anytime we talk about outlier removal and it "doesn't feel right", we really need to take a step back and look at the data. As @kndahl suggests, using a package that includes heuristics and methods for data removal is a good approach. Otherwise, gut feelings should be backed up with your own statistical analysis.
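As one example of backing it up with statistics: the IQR rule (Tukey's fences) is more robust than a mean/std cut, because quartiles are barely moved by a single extreme value. A minimal sketch (the `iqr_outliers` helper is hypothetical, not from the OP's code):

```python
import numpy as np
import pandas as pd

def iqr_outliers(s: pd.Series, k: float = 1.5) -> pd.Series:
    """Flag values outside Tukey's fences: [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return (s < q1 - k * iqr) | (s > q3 + k * iqr)

np.random.seed(0)
prices = pd.Series(np.random.normal(25000, 3000, 100))
prices.loc[100] = 600000  # the extreme high value
prices.loc[101] = 0       # the suspicious zero

mask = iqr_outliers(prices)
print(prices[mask])  # both the 600000 and the 0 rows are flagged
```

Because the fences are built from quartiles rather than the mean and standard deviation, the 600,000 value cannot drag the thresholds far enough to hide itself or the 0.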
Finally, as to why 0 was still in the final dataset, let's take another look. We will add 0 to the dataset and run your outlier removal twice: first on the data as-is, then after first removing the extremely high $600,000 value.
```python
## simulated data with 0 also added
df.loc[n+1] = 0
df_w_o = df[np.abs(df.price - df.price.mean()) <= (1 * df.price.std())]
print(f"With the high outlier of 600,000 still in the original dataset, the new range is \nMin:{df_w_o.price.min()}\nMax:{df_w_o.price.max()}")
## With the high outlier of 600,000 still in the original dataset, the new range is
## Min:0.0
## Max:31809.263871962823

## now let's remove the high outlier first before doing our outlier removal
df = df.drop(n)
df_w_o = df[np.abs(df.price - df.price.mean()) <= (1 * df.price.std())]
print(f"\n\nWith the outlier of 600,000 removed prior to analyzing the data, the new range is \nMin:{df_w_o.price.min()}\nMax:{df_w_o.price.max()}")
## With the outlier of 600,000 removed prior to analyzing the data, the new range is
## Min:21241.61391985022
## Max:28690.87204218316
```
In this simulated case, the high outlier skewed the statistics so much that 0 fell within one standard deviation of the mean. Once we scrubbed the data before processing, that 0 was removed. Relatedly, this question may be better suited to Cross Validated, with a more complete dataset provided.
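To see the arithmetic behind that, a quick sketch (same simulated prices as above): one 600,000 value pulls the mean up and inflates the standard deviation so much that the interval mean ± 1·std stretches below zero.

```python
import numpy as np

np.random.seed(0)
prices = np.random.normal(25000, 3000, 100)

# Clean data: mean ~25000, std ~3000, so 0 is far outside mean - 1*std
clean_mean, clean_std = prices.mean(), prices.std()

# Add the single 600,000 outlier (and the 0) and recompute
skewed = np.append(prices, [600000, 0])
m, s = skewed.mean(), skewed.std()
print(m - s, m + s)  # the mean ± 1*std interval now contains 0
```

The std is a sum of squared deviations, so one point nearly 600,000 away from the mean dominates it entirely, which is exactly why 0 survived the filter.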
> `> mean+2*std` and `< mean-2*std` in a normal distribution, two-tailed. Should `df_w_o = df[(df['z_score'] < 1) & (df['z_score'] > -1)]` be `df_w_o = df[(df['z_score'] < std) & (df['z_score'] > -std)]`? My reasoning for using 1 std is: since it's a price dataset for a narrow geographic area, I assumed 1 times the std should be more accurate.
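On that question: there is no need to compare the z-score to `std`. A z-score already divides by the standard deviation, so its threshold is expressed in units of standard deviations, and `|z| < 1` is the same cut as `|price - mean| < 1 * std`. A quick check (simulated data, assuming a `z_score` column built the usual way):

```python
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame({"price": np.random.normal(25000, 3000, 100)})

# z-scores are already in units of the standard deviation
df["z_score"] = (df.price - df.price.mean()) / df.price.std()

by_raw = df[np.abs(df.price - df.price.mean()) <= 1 * df.price.std()]
by_z = df[np.abs(df["z_score"]) <= 1]
print(by_raw.index.equals(by_z.index))  # True: identical rows survive
```

Whether 1 std is the right cutoff is a separate modeling question; for a narrow geographic area it may be reasonable, but note that for normal data it discards roughly a third of legitimate observations.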