Spark_streaming_examples
Create Spark Streaming Context:
==========================================
scala:
---------------
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._ // not necessary since Spark 1.3

// Create a local StreamingContext with two working threads and a batch interval of 1 second.
// The master requires 2 cores to prevent a starvation scenario.
val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
val ssc = new StreamingContext(conf, Seconds(1))

// Create a DStream that will connect to hostname:port …
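The excerpt cuts off at the DStream creation. A minimal sketch of how this example typically continues, assuming the standard localhost:9999 socket source from the Spark Streaming docs (the host and port are an assumption, not taken from the post):
---------------
// connect to a socket source; localhost:9999 is an assumed example endpoint
val lines = ssc.socketTextStream("localhost", 9999)

// classic streaming word count over each 1-second batch
val words = lines.flatMap(_.split(" "))
val wordCounts = words.map(w => (w, 1)).reduceByKey(_ + _)
wordCounts.print()

ssc.start()             // start the computation
ssc.awaitTermination()  // wait for it to terminate
---------------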
Spark_projects_build_commands
How to Build Eclipse Projects:
--------------------------------------------
1. Scala + Maven + Eclipse project
--------------------------------------------
cd /home/orienit/spark/workspace/KalyanScalaProjectMaven
mvn clean eclipse:eclipse
mvn package
scala -cp /home/orienit/spark/workspace/KalyanScalaProjectMaven/target/KalyanScalaProjectMaven-0.0.1-SNAPSHOT.jar com.orienit.scala.training.HelloWorld

2. Scala + Sbt + Eclipse project
--------------------------------------------
cd /home/orienit/spark/workspace/KalyanScalaProjectSbt
sbt clean eclipse
sbt package
scala -cp /home/orienit/spark …
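The run command above expects a com.orienit.scala.training.HelloWorld main class inside the packaged jar. A minimal sketch of what that class could look like; only the package and object name come from the command line, the body is an assumption:
--------------------------------------------
package com.orienit.scala.training

// hypothetical entry point matching the class name passed to `scala -cp` above
object HelloWorld {
  def main(args: Array[String]): Unit = {
    println("Hello World")
  }
}
--------------------------------------------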
Spark_mllib_examples
----------------------------------------------
hadoop fs -put $SPARK_HOME/data data

Basic Statistics:
--------------------------
$SPARK_HOME/bin/run-example org.apache.spark.examples.mllib.MultivariateSummarizer --input $SPARK_HOME/data/mllib/sample_linear_regression_data.txt

$SPARK_HOME/bin/run-example org.apache.spark.examples.mllib.Correlations --input $SPARK_HOME/data/mllib/sample_linear_regression_data.txt

$SPARK_HOME/bin/run-example org.apache.spark.examples.mllib.RandomRDDGeneration

Classification and regression …
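The run-example commands wrap example programs bundled with Spark; MultivariateSummarizer is built on the mllib summary-statistics API. A minimal spark-shell sketch of that API with a small made-up dataset (the vectors are an assumption; the bundled example reads sample_linear_regression_data.txt instead):
--------------------------
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.stat.{MultivariateStatisticalSummary, Statistics}

// tiny made-up dataset of dense vectors
val observations = sc.parallelize(Seq(
  Vectors.dense(1.0, 10.0, 100.0),
  Vectors.dense(2.0, 20.0, 200.0),
  Vectors.dense(3.0, 30.0, 300.0)
))

val summary: MultivariateStatisticalSummary = Statistics.colStats(observations)
println(summary.mean)         // column-wise mean
println(summary.variance)     // column-wise variance
println(summary.numNonzeros)  // column-wise non-zero counts
--------------------------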
Spark_hadoop_commands
===================================================================
Hadoop Spark Examples using scala
===================================================================
val key = sc.parallelize(List(1,2,3,4))
val value = sc.parallelize(List("a","b","c","d"))
val mapRdd = key.zip(value)

val tabData = mapRdd.map(x => x._1 + "\t" + x._2)
val tupleData = mapRdd.map(x => (x._1, x._2))

tabData.saveAsTextFile("/home/orienit/spark/output/tabData")          // default file system
tabData.saveAsTextFile("file:///home/orienit/spark/output/tabData")   // local file system
tupleData.saveAsSequenceFile("/home/orienit/spark/output/tupleData")
t …
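A short sketch of reading the saved output back, assuming the paths written above; the key/value types passed to sequenceFile must match the written (Int, String) pairs:
-------------------------------------------------------------------
// read the tab-separated text output back as plain lines
val tabBack = sc.textFile("file:///home/orienit/spark/output/tabData")

// read the sequence-file output back with matching key/value types
val tupleBack = sc.sequenceFile[Int, String]("/home/orienit/spark/output/tupleData")

tabBack.collect().foreach(println)
tupleBack.collect().foreach(println)
-------------------------------------------------------------------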
Spark_graphx_examples
------------------------------------------------
$SPARK_HOME/bin/run-example org.apache.spark.examples.graphx.LiveJournalPageRank file:/home/orienit/spark/input/graphx/pagerank.txt --numEPart=10 --output=file:/home/orienit/spark/output/pagerank-1
------------------------------------------------
$SPARK_HOME/bin/run-example org.apache.spark.examples.graphx.Analytics pagerank file:/home/orienit/spark/input/graphx/pagerank.txt --numEPart=10 --output=file:/home/orienit/spark/output/pagerank-2

$SPARK_HOME/bin/run-example org.apache.spark.examples.graphx.Analytics cc file:/home/orienit/spark/inpu …
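The same PageRank computation can be run directly from the spark-shell with the GraphX API. A minimal sketch, assuming pagerank.txt is a plain edge list (one "sourceId destId" pair per line), which is the format GraphLoader expects:
------------------------------------------------
import org.apache.spark.graphx.GraphLoader

// load the edge list used by the run-example commands above
val graph = GraphLoader.edgeListFile(sc, "file:///home/orienit/spark/input/graphx/pagerank.txt")

// run PageRank until the ranks converge within the given tolerance
val ranks = graph.pageRank(0.0001).vertices
ranks.take(5).foreach(println)
------------------------------------------------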
Spark_Day2_2
---------------------------------------------------
Find the `average` of 1 to 10 using the `aggregate` function
---------------------------------------------------
val list = List(1,2,3,4,5,6,7,8,9,10)
val rdd = sc.parallelize(list, 2)
---------------------------------------------------
Template (not runnable, shows the shape of the three pieces):

val zeroValue = _
def seqOp(Unit, Int) : Unit = {}
def combOp(Unit, Unit) : Unit = {}

Note: `Unit` here is a placeholder for the expected `Return Type`
---------------------------------------------------
Note: replace `Unit` with `(Int, Int)` in all the places

val zeroValue = (0, 0)
def seqOp(res: (Int, Int), data: Int …
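The excerpt cuts off mid-definition. A sketch of how the (Int, Int) version typically completes, carrying a running (sum, count) pair as the accumulator; the final averaging step is an assumption based on the exercise statement:
---------------------------------------------------
val zeroValue = (0, 0)  // (running sum, running count)

// fold one element of a partition into the accumulator
def seqOp(res: (Int, Int), data: Int): (Int, Int) = (res._1 + data, res._2 + 1)

// merge the accumulators of two partitions
def combOp(res1: (Int, Int), res2: (Int, Int)): (Int, Int) =
  (res1._1 + res2._1, res1._2 + res2._2)

val (sum, count) = rdd.aggregate(zeroValue)(seqOp, combOp)
val avg = sum.toDouble / count  // 5.5
---------------------------------------------------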
Spark_Day3_3
---------------------------------------------------
val t1 = List((1, "kalyan"), (2, "raj"), (3, "venkat"), (4, "raju"))
val t2 = List((1, 10000), (2, 20000), (3, 30000), (5, 50000))
val prdd1 = sc.parallelize(t1, 1)
val prdd2 = sc.parallelize(t2, 1)
---------------------------------------------------
scala> val t1 = List((1, "kalyan"), (2, "raj"), (3, "venkat"), (4, "raju"))
t1: List[(Int, String)] = List((1,kalyan), (2,raj), (3,venkat), (4,raju))

scala> val t2 = List((1, 10000), (2, 20000), (3, 30000), (5, 50000))
t2: List[(Int, Int)] = List((1,10000), (2,20000), (3,30000), (5,50 …
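The two pair RDDs share keys 1 to 3 but differ on 4 and 5, the classic setup for comparing join variants. A sketch of the operations one would typically run next (an assumption, since the excerpt cuts off at the REPL output):
---------------------------------------------------
// inner join keeps only keys present in both RDDs (1, 2, 3)
prdd1.join(prdd2).collect()

// left outer join also keeps key 4, with None on the right side
prdd1.leftOuterJoin(prdd2).collect()

// full outer join keeps keys 4 and 5 as well
prdd1.fullOuterJoin(prdd2).collect()
---------------------------------------------------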
SPARK_DAY1_PRACTICE
Spark:
---------------------
Apache Spark™ is a fast and general engine for large-scale data processing.

Spark Libraries on Spark-Core:
--------------------------------
1. Spark SQL
2. Spark Streaming
3. Spark MLLib
4. Spark GraphX

Spark Supports 4 Programming Languages:
------------------------------------------
Scala, Java, Python, R

How to Start Spark from the Command Line:
--------------------------------------------
Scala  => $SPARK_HOME/bin/spark-shell
Python => $SPARK_HOME/bin/pyspark
R      => $SPARK_HOME/bin/sparkR

Spark-2.x:
--------------------------------------------
S …
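Once the shell is up, a minimal first-session sketch in spark-shell; sc is the SparkContext the shell creates automatically:
--------------------------------------------
// the shell prints "Spark context available as sc" on startup
val rdd = sc.parallelize(1 to 10)
rdd.count()  // res: Long = 10
rdd.sum()    // res: Double = 55.0
--------------------------------------------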
Spark_day_3_practice
-------------------------------------------------------
def myfunc[T](index: Int, it: Iterator[T]): Iterator[String] = {
  it.toList.map(x => s"index -> $index, value -> $x").toIterator
}

val names = List("raj", "venkat", "anil", "ravi", "sunil", "anvith", "rajesh", "kiran", "surya", "kalyan")
val rdd1 = sc.parallelize(names, 2)
val prdd1 = rdd1.map(x => (x.length, x))
-------------------------------------------------------
val r1 = sc.parallelize(List(1,2,3,4,5), 2)
val p1 = r1.map(x => (x, 'a'))
val p2 = r1.map(x => (x, 'b'))
val p3 = r1.map(x => (x, 'c'))
val p4 …
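prdd1 keys each name by its length, the usual setup for key-based operations. A sketch of what one would typically run on it next (an assumption, since the excerpt cuts off at p4):
-------------------------------------------------------
// group names that share a length
prdd1.groupByKey().collect().foreach(println)

// or concatenate them per key with reduceByKey
prdd1.reduceByKey((a, b) => a + "," + b).collect().foreach(println)
-------------------------------------------------------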
Spark_day_2_practice
---------------------------------------------------
def myfunc[T](index: Int, it: Iterator[T]): Iterator[String] = {
  it.toList.map(x => s"index -> $index, value -> $x").toIterator
}

val names = List("raj", "venkat", "anil", "ravi", "sunil", "anvith", "rajesh", "kiran", "surya", "kalyan")
val rdd1 = sc.parallelize(names, 2)

val nums = List(1, 2, 3, 4, 5)
val rdd2 = sc.parallelize(nums, 2)
---------------------------------------------------
scala> val names = List("raj", "venkat", "anil", "ravi", "sunil", "anvith", "rajesh", "kiran", "surya", "kalyan")
names: List[String] = L …
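myfunc has exactly the (Int, Iterator[T]) => Iterator[U] shape that mapPartitionsWithIndex expects, so it is typically used to see which element landed in which partition. A short sketch:
---------------------------------------------------
// tag every element with its partition index
rdd1.mapPartitionsWithIndex(myfunc).collect().foreach(println)
rdd2.mapPartitionsWithIndex(myfunc).collect().foreach(println)

// glom() shows the same partition layout as raw arrays
rdd2.glom().collect()  // e.g. Array(Array(1, 2), Array(3, 4, 5))
---------------------------------------------------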