
I am trying to run a simple scala program in IntelliJ.

My build.sbt looks like this:

name := "UseThis"
version := "0.1"
scalaVersion := "2.12.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0"

And my import code looks like this:

package WorkWork.FriendsByAge

import com.sun.glass.ui.Window.Level
import com.sun.istack.internal.logging.Logger
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.log4j._

I don't get why the import fails. It tells me the dependency failed to load or wasn't found, but I put the line in build.sbt as required. Is there some other step I need to have done? I've installed Spark. Is it the version number at the end of the dependency line? I don't even know how to check which version of Spark I have. I'm trying to teach myself Scala (not a noob, though; I know Python, R, various flavors of SQL, and C#), but my word, even setting it up is nigh on impossible, and apparently getting it to even run is too. Any ideas?

  • Did you check that your version of spark works with 2.12.4? Is there even a spark in the wild out there that supports 2.12.4 already? They usually lag behind a little. Commented Feb 28, 2018 at 2:25

2 Answers

4

Take a look at this page here: Maven Central (Apache Spark core)

Unless you have configured some other repositories, the dependencies that sbt loads usually come from there.

There is a Version column with numbers like 2.2.1, and next to it a Scala column with numbers like 2.11 and 2.10. For Spark and Scala to work together, you have to pick a valid combination from this table.

As of Feb 28, 2018, there are no versions of Spark published for Scala 2.12.4. The latest Scala version for which Spark 1.2.0 is published is 2.11, so you will probably want to set scalaVersion to 2.11.
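For example, a build.sbt along these lines should resolve (a minimal sketch; the exact 2.11 patch version is an assumption, and whether to stay on Spark 1.2.0 at all is a separate question, see below):

name := "UseThis"
version := "0.1"
// Scala 2.11.x pairs with the spark-core_2.11 artifacts on Maven Central
scalaVersion := "2.11.12"
// 1.2.0 is published for Scala 2.11; pick whichever Spark version the table confirms for 2.11
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0"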

Also note that the %% syntax in your build.sbt, in

"org.apache.spark" %% "spark-core" % "1.2.0" 

will automatically append the suffix _2.12 to the artifact ID (the spark-core part). Since there is no spark-core_2.12, sbt cannot resolve the dependency, and you can't import anything in your code.
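To make that concrete, with scalaVersion := "2.12.4" the two lines below ask Maven Central for the same coordinates (the first spells the suffix out by hand):

// what sbt actually looks for: groupId org.apache.spark, artifactId spark-core_2.12, version 1.2.0
libraryDependencies += "org.apache.spark" % "spark-core_2.12" % "1.2.0"
// equivalent, with the _2.12 suffix derived automatically from scalaVersion
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0"

Neither form can resolve here, because spark-core_2.12 simply isn't published for 1.2.0.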

By the way: there was a big difference between Spark 1.2 and Spark 1.3, and then again a big difference between 1.x and 2.x. Does it really have to be 1.2?


4 Comments

I realized I was screwing that up too, yeah :P Man, I hate Scala, but big data requires what it requires, I guess
Y U hate scala, maaan! xD You don't even have a golden haskell badge yet, you're not allowed to hate scala until then ;)
Minor correction: the suffix is _2.12, not _2_12, and appended to the spark-core part, not to org.apache.spark.
@AlexeyRomanov Yes, indeed, why in the world would it append a scala version to the groupID... No Idea why I wrote that^^ Fixed. Thank you.
2

It's because Spark 1.2 is not available for Scala 2.12:

https://mvnrepository.com/artifact/org.apache.spark/spark-core

1 Comment

a little bit of explanation would be fruitful :)
