PySpark — Flatten JSON/Struct Data Frame dynamically

Flatten JSON data dynamically in tabular structure

Subham Khandelwal
3 min read · Oct 7, 2022

We often have use cases where we need to flatten a complex JSON/Struct Data Frame into a simple, flat Data Frame, renaming nested fields just like the example below:

example.this.that => example_this_that
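The renaming itself is just a dot-to-underscore substitution on the nested field path. As a plain-Python illustration (outside Spark; `flat_name` is a hypothetical helper, not part of the article's code):

```python
# Map a nested field path to its flattened column name
def flat_name(path: str) -> str:
    """Replace the '.' separators in a nested path with '_'."""
    return path.replace('.', '_')

print(flat_name("example.this.that"))  # example_this_that
```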

Flatten JSON/Struct Data Frame Data

The following code snippet does exactly this, dynamically. No manual effort is required to expand the data structure or to determine the schema.

Let's first create an example Data Frame for the job.

# Let's create an example Data Frame with a column holding JSON data
_data = [
    ['EMP001', '{"dept" : "account", "fname": "Ramesh", "lname": "Singh", "skills": ["excel", "tally", "word"]}'],
    ['EMP002', '{"dept" : "sales", "fname": "Siv", "lname": "Kumar", "skills": ["biking", "sales"]}'],
    ['EMP003', '{"dept" : "hr", "fname": "MS Raghvan", "skills": ["communication", "soft-skills"], "hobbies" : {"cycling": "expert", "computers":"basic"}}']
]

# Columns for the data
_cols = ['emp_no', 'raw_data']

# Let's create the raw Data Frame
df_raw = spark.createDataFrame(data=_data, schema=_cols)

# Determine the schema of the JSON payload from the column
json_schema_df = spark.read.json(df_raw.rdd.map(lambda row: row.raw_data))
json_schema = json_schema_df.schema

# Apply the schema to the payload to read the data
from pyspark.sql.functions import from_json
df_details = df_raw.withColumn("emp_details", from_json(df_raw["raw_data"], json_schema)).drop("raw_data")
df_details.show(truncate=False)
Create Example Data Frame
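If you want to eyeball what one of those JSON payloads contains before Spark parses it, plain Python's `json` module works on the raw string. This step is only illustrative and is not part of the Spark job:

```python
import json

# Third example row's raw JSON payload, parsed with plain Python
raw = '{"dept" : "hr", "fname": "MS Raghvan", "skills": ["communication", "soft-skills"], "hobbies" : {"cycling": "expert", "computers":"basic"}}'
payload = json.loads(raw)

print(payload["hobbies"]["cycling"])  # expert
print(payload["skills"])              # ['communication', 'soft-skills']
```

Note the `hobbies` field is itself a nested object, which is exactly what the flattening function below has to handle.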

Create Python function to do the magic

# Python function to flatten the data dynamically
from pyspark.sql import DataFrame

# Create outer method to return the flattened Data Frame
def flatten_json_df(_df: DataFrame) -> DataFrame:
    # List to hold the dynamically generated column names
    flattened_col_list = []

    # Inner method to iterate over the Data Frame to generate the column list
    def get_flattened_cols(df: DataFrame, struct_col: str = None) -> None:
        for col in df.columns:
            if df.schema[col].dataType.typeName() != 'struct':
                if struct_col is None:
                    flattened_col_list.append(f"{col} as {col.replace('.', '_')}")
                else:
                    t = struct_col + "." + col
                    flattened_col_list.append(f"{t} as {t.replace('.', '_')}")
            else:
                chained_col = struct_col + "." + col if struct_col is not None else col
                get_flattened_cols(df.select(col + ".*"), chained_col)

    # Call the inner method
    get_flattened_cols(_df)

    # Return the flattened Data Frame
    return _df.selectExpr(flattened_col_list)
Python function to do the magic
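If the recursion is hard to follow, here is the same idea sketched in plain Python over a nested dict (illustrative only, not part of the Spark job): for each key, a non-dict value emits a `path as path_with_underscores` expression, while a dict value triggers a recursive call with the chained path.

```python
# Plain-Python analogue of get_flattened_cols: walk a nested dict
# and build the same "<path> as <path_with_underscores>" expressions.
def flatten_cols(record: dict, struct_col: str = None) -> list:
    cols = []
    for key, value in record.items():
        path = key if struct_col is None else struct_col + "." + key
        if isinstance(value, dict):
            cols.extend(flatten_cols(value, path))  # recurse into the nested struct
        else:
            cols.append(f"{path} as {path.replace('.', '_')}")
    return cols

row = {"dept": "hr", "hobbies": {"cycling": "expert", "computers": "basic"}}
print(flatten_cols(row))
# ['dept as dept', 'hobbies.cycling as hobbies_cycling', 'hobbies.computers as hobbies_computers']
```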

Now, let's run our example Data Frame through the Python function to get the flattened Data Frame.

# Generate the flattened DF
flattened_df = flatten_json_df(df_details)

# Print the schema and data
flattened_df.printSchema()
flattened_df.show(truncate=False)
Flattened Data Frame

Now, if you are new to Spark or PySpark, or want to learn more — I teach Big Data, Spark, Data Engineering & Data Warehousing on my YouTube channel — Ease With Data

YouTube — Tutorials

In case we want to explode the Array data further:

# In case we now want to explode the Array/List field - emp_details_skills
from pyspark.sql.functions import explode
flattened_df.withColumn("skills", explode("emp_details_skills")).drop("emp_details_skills").show()
Exploded Data Frame
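`explode` produces one output row per array element. A plain-Python sketch of the same transformation (illustrative only; the data is a trimmed-down version of the example above):

```python
# One output row per element of the skills array, mirroring explode()
rows = [("EMP001", ["excel", "tally"]), ("EMP002", ["biking"])]

exploded = [(emp, skill) for emp, skills in rows for skill in skills]
print(exploded)
# [('EMP001', 'excel'), ('EMP001', 'tally'), ('EMP002', 'biking')]
```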

Check out the complete iPython Notebook on GitHub —

Check out the EaseWithApacheSpark series —

Wish to Buy me a Coffee: Buy Subham a Coffee