If it's an RDD of (String, Seq[(Double, Double, Int)]), you can iterate over it with a standard map.
    val data: RDD[(String, Seq[(Double, Double, Int)])] = ??? // your RDD here

    data.map { case (key, value) =>
      value.map { case (first, second, third) => first * second * third }
    }

Note that RDD lives in org.apache.spark.rdd, so you need import org.apache.spark.rdd.RDD for the type annotation. (A val can't be initialized with _, hence the ??? placeholder.)
I would consider using a DataFrame or restructuring your data in some other fashion, as a nested Seq of tuples is a pretty unwieldy way to structure it.
You can find information about DataFrames/Datasets here: http://spark.apache.org/docs/latest/sql-programming-guide.html. They might be better suited to your problem, and if you are not comfortable with maps over tuples, they let you write more SQL-like statements instead.
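For comparison, here is a rough sketch of the DataFrame approach. The column names (id, first, second, third) are made up for illustration; the key idea is to flatten the nested Seq into one row per tuple, after which the computation becomes a simple column expression.

    import org.apache.spark.sql.SparkSession

    object DataFrameExample extends App {
      val spark = SparkSession.builder()
        .master("local[*]")
        .appName("DataFrameExample")
        .getOrCreate()
      import spark.implicits._

      // One row per inner tuple, instead of a Seq nested inside a pair
      val df = Seq(
        ("id1", 1.1, 2.2, 3),
        ("id1", 4.4, 5.5, 6),
        ("id2", 10.10, 11.11, 12)
      ).toDF("id", "first", "second", "third")

      // A SQL-like expression replaces the map over tuples
      df.selectExpr("id", "first * second * third AS product").show()

      spark.stop()
    }

Flattening up front is the design choice that makes this work: once each tuple is its own row, grouping back by id (e.g. with groupBy("id")) is also straightforward.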
Here is a complete, if somewhat rough, example:
    import org.apache.spark._
    import org.apache.spark.rdd.RDD

    object SparkExample extends App {
      val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("App")
      val sess: SparkContext = new SparkContext(conf)

      val data: Seq[(String, Seq[(Double, Double, Int)])] = Seq(
        ("id1", Seq((1.1, 2.2, 3), (4.4, 5.5, 6), (7.7, 8.8, 9))),
        ("id2", Seq((10.10, 11.11, 12), (13.13, 14.14, 15)))
      )

      val rdd: RDD[(String, Seq[(Double, Double, Int)])] = sess.parallelize(data)

      // Sum the elements of each tuple, keeping the per-key grouping
      val d: Array[Seq[Double]] = rdd.map { case (key, value) =>
        value.map { case (first, second, third) => first + second + third }
      }.collect()

      println(d.mkString(", "))
    }