I have a map reduce .scala file like this:

```scala
import org.apache.spark._

object WordCount {
  def main(args: Array[String]) {
    val inputDir = args(0)   // val inputDir = "/Users/eksi/Desktop/sherlock.txt"
    val outputDir = args(1)  // val outputDir = "/Users/eksi/Desktop/out.txt"

    val cnf = new SparkConf().setAppName("Example MapReduce Spark Job")
    val sc = new SparkContext(cnf)

    val textFile = sc.textFile(inputDir)
    val counts = textFile.flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.saveAsTextFile(outputDir)
    sc.stop()
  }
}
```

When I run my code with `setMaster("local[1]")`, it works fine.
I want to package this code into a .jar and upload it to S3 so it can run on AWS EMR. I use the following `build.sbt` to do so:
```scala
name := "word-count"
version := "0.0.1"
scalaVersion := "2.11.7"

// additional libraries
libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.10" % "1.0.2"
)
```

It generates a jar file; however, none of my Scala code is in there. All I see when I extract the .jar is a manifest file.
When I run `sbt package`, this is what I get:

```
[myMacBook-Pro] > sbt package
[info] Loading project definition from /Users/lele/bigdata/wordcount/project
[info] Set current project to word-count (in build file:/Users/lele/bigdata/wordcount/)
[info] Packaging /Users/lele/bigdata/wordcount/target/scala-2.11/word-count_2.11-0.0.1.jar ...
[info] Done packaging.
[success] Total time: 0 s, completed Jul 27, 2016 10:33:26 PM
```

What should I do to create a proper jar file that runs like `WordCount.jar WordCount`?
Comments:

- Have you tried `sbt clean compile package` from the terminal where `build.sbt` lives?
- Weird, I'd expect you'd only see that without the `spark-core` dependency. Have you looked at the `sbt package` build log? Anything special? (`sbt clean compile package`)

Answer:

`scalaVersion` is set to 2.11.7 while your dependency is cross versioned to `_2.10`. You should use `"org.apache.spark" %% "spark-core" % "1.0.2"` instead.
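To illustrate the fix, a corrected `build.sbt` might look like the sketch below. The `%%` operator appends the project's Scala binary version to the artifact name, so the dependency stays in sync with `scalaVersion` automatically. Note one assumption: the Spark version you pin must actually have been published for your Scala binary version; if resolution fails for `spark-core_2.11` at a given version, either pick a newer Spark release or set `scalaVersion` to match what the artifact was built for.

```scala
name := "word-count"
version := "0.0.1"
scalaVersion := "2.11.7"

// %% resolves the artifact cross-built for the project's Scala binary
// version (here spark-core_2.11), avoiding the _2.10 / 2.11.7 mismatch
// that made sbt fail to pick up the dependency as expected.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.0.2"
)
```

After fixing the build file, `sbt clean package` should produce a jar under `target/scala-2.11/` that contains the compiled `WordCount` classes, which you can verify with `jar tf target/scala-2.11/word-count_2.11-0.0.1.jar`.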