Log Analysis
scala> import org.apache.spark.sql.types._
scala> import org.apache.spark.sql.Row
scala> val logRDD = sc.textFile("hdfs://master:9000/student/2016113012/data/log.txt").map(_.split("#"))
logRDD: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[2] at map at <console>:21

val schema = StructType(
  Array(
    StructField("ipAddress", StringType, true),
    StructField("clientIdentd", StringType, true),
    StructField("userId", StringType, true),
    StructField("dateTime", StringType, true),
    StructField("protocol", StringType, true),
    StructField("responseCode", StringType, true),
    StructField("contentSize", IntegerType, true)
  )
)
val rowRDD = logRDD.map(p => Row(p(0), p(1), p(2), p(3), p(4), p(5), p(6).toInt))
val logDF = sqlContext.createDataFrame(rowRDD, schema)
logDF.registerTempTable("logs")

// Average, minimum, and maximum of the requested content size
scala> sqlContext.sql("select avg(contentSize),min(contentSize),max(contentSize) from logs").show()
+------+----+----+
|   _c0| _c1| _c2|
+------+----+----+
|3506.0|2000|5554|
+------+----+----+

// Number of requests per response code
scala> sqlContext.sql("select responseCode,count(*) from logs group by responseCode").show()
+------------+---+
|responseCode|_c1|
+------------+---+
|         304|  1|
|         200|  2|
+------------+---+

// IP addresses that appear more than once
scala> sqlContext.sql("select ipAddress,count(1) as total from logs group by ipAddress having total > 1").show()
+----------+-----+
| ipAddress|total|
+----------+-----+
|10.0.0.153|    3|
+----------+-----+
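Note that rowRDD calls p(6).toInt unconditionally, so a single malformed line (fewer than seven "#"-separated fields, or a non-numeric size) fails the whole job with an ArrayIndexOutOfBoundsException or NumberFormatException. A minimal defensive sketch, keeping the same field layout as above and simply dropping bad records (the cleanRDD/cleanRowRDD names are illustrative):

val cleanRDD = sc.textFile("hdfs://master:9000/student/2016113012/data/log.txt")
  .map(_.split("#"))
  .filter(p => p.length == 7 && scala.util.Try(p(6).toInt).isSuccess)  // drop malformed lines instead of crashing
val cleanRowRDD = cleanRDD.map(p => Row(p(0), p(1), p(2), p(3), p(4), p(5), p(6).toInt))
val cleanDF = sqlContext.createDataFrame(cleanRowRDD, schema)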
Question: how can the contents of p(4) be split further?
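One way, as a sketch, is a nested split inside the map. This assumes p(4) holds a space-separated request string such as "GET /index.html HTTP/1.1"; the actual layout of log.txt may differ, and the method/endpoint/protocol field names below are illustrative:

val rowRDD2 = logRDD.map { p =>
  val req = p(4).split(" ")          // second-level split on the single field
  Row(p(0), p(1), p(2), p(3), req(0), req(1), req(2), p(5), p(6).toInt)
}
val schema2 = StructType(Array(
  StructField("ipAddress", StringType, true),
  StructField("clientIdentd", StringType, true),
  StructField("userId", StringType, true),
  StructField("dateTime", StringType, true),
  StructField("method", StringType, true),       // first token of p(4)
  StructField("endpoint", StringType, true),     // second token of p(4)
  StructField("protocol", StringType, true),     // third token of p(4)
  StructField("responseCode", StringType, true),
  StructField("contentSize", IntegerType, true)
))
val logDF2 = sqlContext.createDataFrame(rowRDD2, schema2)
logDF2.registerTempTable("logs2")

With the field split out, queries can then group on the new columns directly, e.g. select endpoint, count(*) from logs2 group by endpoint.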