I came across Structured Streaming with Spark; it has an example of continuously consuming from an S3 bucket and writing the processed results to a MySQL DB:
    // Read data continuously from an S3 location
    val inputDF = spark.readStream.json("s3://logs")

    // Do operations using the standard DataFrame API and write to MySQL
    inputDF.groupBy($"action", window($"time", "1 hour")).count()
      .writeStream.format("jdbc")
      .start("jdbc:mysql://...")

How can this be used with the Spark Streaming Kafka integration?
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )

Is there a way to combine these two examples without using stream.foreachRDD(rdd => {})?
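Ideally, I'm imagining something like the sketch below, using the Structured Streaming Kafka source so the topic arrives as a streaming DataFrame that could feed the writeStream call above. The broker address and topic name here are placeholders, and it assumes the spark-sql-kafka-0-10 package is on the classpath:

    // Sketch only: read the Kafka topic as a streaming DataFrame rather than a DStream.
    // "localhost:9092" and "logs" are placeholder values for this example.
    val kafkaDF = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "logs")
      .load()

    // Kafka delivers key and value as binary, so cast value to a string before use
    val events = kafkaDF.selectExpr("CAST(value AS STRING) AS value")

That would let the grouping and JDBC write from the first example run directly on the Kafka data, with no foreachRDD involved. Is that the intended approach, or can the DStream from createDirectStream be bridged into Structured Streaming some other way?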