Renaming column names of a DataFrame in Spark Scala

I am trying to convert all the headers / column names of a DataFrame in Spark Scala. As of now I have come up with the following code, which only replaces a single column name. Please help.

    for (i <- 0 to origCols.length - 1) {
      df.withColumnRenamed(df.columns(i), df.columns(i).toLowerCase)
    }
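One likely cause, reading the snippet: withColumnRenamed returns a new DataFrame rather than mutating df in place, so the result of each iteration is discarded. A minimal sketch of a fix, assuming the goal is simply to lowercase every column name (variable names here are illustrative, not from the original post):

    // Sketch: lowercase every column name by reassigning the result of each
    // withColumnRenamed call (it returns a new DataFrame; it does not mutate df).
    var renamed = df
    for (c <- df.columns) {
      renamed = renamed.withColumnRenamed(c, c.toLowerCase)
    }
    renamed.printSchema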

If the structure is flat:

    val df = Seq((1L, "a", "foo", 3.0)).toDF

    df.printSchema
    // root
    //  |-- _1: long (nullable = false)
    //  |-- _2: string (nullable = true)
    //  |-- _3: string (nullable = true)
    //  |-- _4: double (nullable = false)

the simplest thing you can do is to use the toDF method:

    val newNames = Seq("id", "x1", "x2", "x3")
    val dfRenamed = df.toDF(newNames: _*)

    dfRenamed.printSchema
    // root
    //  |-- id: long (nullable = false)
    //  |-- x1: string (nullable = true)
    //  |-- x2: string (nullable = true)
    //  |-- x3: double (nullable = false)

If you want to rename individual columns you can use select with alias:

    df.select($"_1".alias("x1"))

which can be easily generalized to multiple columns:

    import org.apache.spark.sql.functions.col

    val lookup = Map("_1" -> "foo", "_3" -> "bar")
    df.select(df.columns.map(c => col(c).as(lookup.getOrElse(c, c))): _*)

or withColumnRenamed:

    df.withColumnRenamed("_1", "x1")

which can be combined with foldLeft to rename multiple columns:

    lookup.foldLeft(df)((acc, ca) => acc.withColumnRenamed(ca._1, ca._2))
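Applied to the original question, the same foldLeft pattern can lowercase every column name. A small sketch of my own (not part of the answer above), assuming df is the DataFrame to rename:

    // Sketch: lowercase all column names in one pass with foldLeft.
    val lowered = df.columns.foldLeft(df)((acc, c) => acc.withColumnRenamed(c, c.toLowerCase))
    lowered.printSchema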

With nested structures (structs), one possible option is renaming by selecting a whole structure:

    import org.apache.spark.sql.functions.struct

    val nested = spark.read.json(sc.parallelize(Seq(
      """{"foobar": {"foo": {"bar": {"first": 1.0, "second": 2.0}}}, "id": 1}"""
    )))

    nested.printSchema
    // root
    //  |-- foobar: struct (nullable = true)
    //  |    |-- foo: struct (nullable = true)
    //  |    |    |-- bar: struct (nullable = true)
    //  |    |    |    |-- first: double (nullable = true)
    //  |    |    |    |-- second: double (nullable = true)
    //  |-- id: long (nullable = true)

    @transient val foobarRenamed = struct(
      struct(
        struct(
          $"foobar.foo.bar.first".as("x"),
          $"foobar.foo.bar.second".as("y")
        ).alias("point")
      ).alias("location")
    ).alias("record")

    nested.select(foobarRenamed, $"id").printSchema
    // root
    //  |-- record: struct (nullable = false)
    //  |    |-- location: struct (nullable = false)
    //  |    |    |-- point: struct (nullable = false)
    //  |    |    |    |-- x: double (nullable = true)
    //  |    |    |    |-- y: double (nullable = true)
    //  |-- id: long (nullable = true)

Note that it may affect nullability metadata. Another possibility is to rename by casting:

    nested.select($"foobar".cast(
      "struct<location:struct<point:struct<x:double,y:double>>>"
    ).alias("record")).printSchema

    // root
    //  |-- record: struct (nullable = true)
    //  |    |-- location: struct (nullable = true)
    //  |    |    |-- point: struct (nullable = true)
    //  |    |    |    |-- x: double (nullable = true)
    //  |    |    |    |-- y: double (nullable = true)

or:

    import org.apache.spark.sql.types._

    nested.select($"foobar".cast(
      StructType(Seq(
        StructField("location", StructType(Seq(
          StructField("point", StructType(Seq(
            StructField("x", DoubleType),
            StructField("y", DoubleType)))))))))
    ).alias("record")).printSchema

    // root
    //  |-- record: struct (nullable = true)
    //  |    |-- location: struct (nullable = true)
    //  |    |    |-- point: struct (nullable = true)
    //  |    |    |    |-- x: double (nullable = true)
    //  |    |    |    |-- y: double (nullable = true)

For those of you interested in the PySpark version:

    merchants_df_renamed = merchants_df.toDF(
        'merchant_id', 'category', 'subcategory', 'merchant')

    merchants_df_renamed.printSchema()
    # root
    #  |-- merchant_id: integer (nullable = true)
    #  |-- category: string (nullable = true)
    #  |-- subcategory: string (nullable = true)
    #  |-- merchant: string (nullable = true)
    import org.apache.spark.sql.DataFrame

    // Adds an optional prefix p and suffix s to every column name of t.
    def aliasAllColumns(t: DataFrame, p: String = "", s: String = ""): DataFrame = {
      t.select(t.columns.map { c => t.col(c).as(p + c + s) }: _*)
    }
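A hypothetical call (not part of the original answer), using the flat df from above, which adds a prefix and suffix to every column name:

    // Hypothetical usage: turns columns _1, _2, ... into raw__1_col, raw__2_col, ...
    val prefixed = aliasAllColumns(df, "raw_", "_col")
    prefixed.printSchema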