sql – Including null values in an Apache Spark join
I would like to include null values in an Apache Spark join. Spark doesn't include rows with null values by default. Here is the default Spark behavior:

val numbersDf = Seq(
  ("123"),
  ("456"),
  (null),
  ("")
).toDF("numbers")

val lettersDf = Seq(
  ("123", "abc"),
  ("456", "def"),
  (null, "zzz"),
  ("", "hhh")
).toDF("numbers", "letters")

val joinedDf = numbersDf.join(lettersDf, Seq("numbers"))
This is the output of joinedDf.show():

+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|       |    hhh|
+-------+-------+

This is the output I would like:

+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|       |    hhh|
|   null|    zzz|
+-------+-------+
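A note on setup: the snippets in this question assume spark-shell, where a SparkSession and the implicits needed by toDF are already in scope. As a minimal sketch for a standalone application (the app name and local master here are illustrative, not part of the original question):

import org.apache.spark.sql.SparkSession

// Build a session; spark-shell provides this automatically as `spark`.
val spark = SparkSession.builder()
  .appName("null-safe-join-example") // illustrative name
  .master("local[*]")                // run locally for the example
  .getOrCreate()

// Needed for Seq(...).toDF(...) to compile outside the shell.
import spark.implicits._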
Solution

Spark provides a special NULL-safe equality operator:

numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .drop(lettersDf("numbers"))
+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|   null|    zzz|
|       |    hhh|
+-------+-------+

Be careful not to use it with Spark 1.5 or earlier. Prior to Spark 1.6 it required a Cartesian product (SPARK-11111 – Fast null-safe join).
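If you want to confirm what your Spark version actually does with the null-safe condition, you can print the physical plan; on 1.6+ you should see an ordinary equi-join (broadcast or sort-merge) rather than a Cartesian product. A quick sketch:

// Inspect the physical plan of the null-safe join.
numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .explain()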
("123",),(None,)
]).toDF(["numbers"])
letters_df = sc.parallelize([
("123","hhh")
]).toDF(["numbers","letters"])
numbers_df.join(letters_df,numbers_df.numbers.eqNullSafe(letters_df.numbers))
+-------+-------+-------+
|numbers|numbers|letters|
+-------+-------+-------+
|    456|    456|    def|
|   null|   null|    zzz|
|       |       |    hhh|
|    123|    123|    abc|
+-------+-------+-------+
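eqNullSafe is also available on the Scala Column API as the named equivalent of <=>, so the earlier Scala join can be written the same way; a small sketch (the trailing show() is just for illustration):

// eqNullSafe is the named form of the <=> operator.
numbersDf
  .join(lettersDf, numbersDf("numbers").eqNullSafe(lettersDf("numbers")))
  .drop(lettersDf("numbers")) // drop the duplicate join column
  .show()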
In SparkR, the same test is available as %<=>%:

numbers_df <- createDataFrame(data.frame(numbers = c("123", "456", NA, "")))
letters_df <- createDataFrame(data.frame(
  numbers = c("123", "456", NA, ""),
  letters = c("abc", "def", "zzz", "hhh")
))

head(join(numbers_df, letters_df, numbers_df$numbers %<=>% letters_df$numbers))
  numbers numbers letters
1     456     456     def
2    <NA>    <NA>     zzz
3                     hhh
4     123     123     abc
With SQL (Spark 2.2.0), you can use IS NOT DISTINCT FROM:

SELECT * FROM numbers JOIN letters
ON numbers.numbers IS NOT DISTINCT FROM letters.numbers

This can be used with the DataFrame API as well:

numbersDf.alias("numbers")
  .join(lettersDf.alias("letters"))
  .where("numbers.numbers IS NOT DISTINCT FROM letters.numbers")