PySpark DataFrame GroupBy and Count Null Values
Referring to the solution in the link above, I am trying to apply the same logic, but grouping by "country" and counting the nulls of other columns, and I am getting a "Column is not iterable" error. Can someone help with this? Here is my attempt, where columns is a list of the column names I want null counts for:
df7.groupby("country").agg(*(sum(col(c).isNull().cast("int")).alias(c) for c in columns))