DataFrames are column-oriented structures, so you cannot add a column to only some rows. Instead, leverage the support for nullable values in DataFrames: rather than adding an extra column to selected rows, add the column to every row and fill it with an optional (nullable) value based on some criteria.
An example: Let's take a DF of users and pages:
```scala
// in spark-shell these implicits are already in scope
import spark.implicits._
import org.apache.spark.sql.functions.udf

val users = Seq("Alice", "Bob", "Charly", "Dean", "Eve", "Flor", "Greta")
val pages = (1 to 9).map(i => s"page_$i")
val userPages = for { u <- users; p <- pages } yield (u, p)
val userPagesDF = sparkContext.parallelize(userPages).toDF("user", "page")

// A user-defined function that takes the last digit of the page name and
// uses it to compute a "rank". It only ranks pages with a number higher than 7
val rankUDF = udf((p: String) => if (p.takeRight(1).toInt > 7) "top" else null)

// New DF with the extra column "rank", which contains values for only some rows
val ranked = userPagesDF.withColumn("rank", rankUDF($"page"))
ranked.show
```

```
+-----+-------+----+
| user|   page|rank|
+-----+-------+----+
|Alice| page_1|null|
|Alice| page_2|null|
|Alice| page_3|null|
|Alice| page_4|null|
|Alice| page_5|null|
|Alice| page_6|null|
|Alice| page_7|null|
|Alice| page_8| top|
|Alice| page_9| top|
|  Bob| page_1|null|
|  Bob| page_2|null|
|  Bob| page_3|null|
|  Bob| page_4|null|
|  Bob| page_5|null|
|  Bob| page_6|null|
|  Bob| page_7|null|
|  Bob| page_8| top|
|  Bob| page_9| top|
+-----+-------+----+
```

```scala
ranked.printSchema
```

```
root
 |-- user: string (nullable = true)
 |-- page: string (nullable = true)
 |-- rank: string (nullable = true)
```
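If you'd rather avoid a UDF, the same optional column can be produced with Spark's built-in `when` function: a `when` without an `otherwise` yields `null` for non-matching rows. A minimal sketch, assuming the `userPagesDF` defined above:

```scala
import org.apache.spark.sql.functions.{when, substring}

// substring(col, -1, 1) takes the last character of the page name;
// rows where the condition is false get null in "rank" automatically
val ranked2 = userPagesDF.withColumn(
  "rank",
  when(substring($"page", -1, 1).cast("int") > 7, "top"))
```

Built-in column expressions like this are generally preferable to UDFs, since Catalyst can optimize them and no serialization of a Scala closure is needed.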
mapPartitions? That's just the code I've stitched together from consulting the Google Gods and the Spark docs for the last few hours.