Can Spark read .png or .pdf files? The answer is YES. Spark can read almost any type of file as binary content into a DataFrame.
Spark ships with a built-in binaryFile format that loads any binary file and stores its content as raw bytes. Each row of the resulting DataFrame carries the file's path, modificationTime, length, and content (the bytes themselves), and the binary content can later be written back to the appropriate file format as required.
Let's read some binary files for a quick demonstration. The files we are going to read:
```shell
%%sh
ls -lhtr dataset/files
```
Let's read one .png file and check the output of the DataFrame:
```python
# Let's read a .png file (the file name below is illustrative)
df_spark_png = spark \
    .read \
    .format("binaryFile") \
    .load("dataset/files/example.png")
```
Now let's read all .png files from the path:
```python
# Let's read all .png files, filtering by name with pathGlobFilter
df_spark_png = spark \
    .read \
    .format("binaryFile") \
    .option("pathGlobFilter", "*.png") \
    .load("dataset/files")
```
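The `pathGlobFilter` option performs ordinary glob matching on file names. Outside Spark, the same selection can be sketched with Python's `fnmatch` module (the directory and file names here are made up purely for illustration):

```python
import fnmatch
import os
import tempfile

# Create a throwaway directory with mixed file types (illustrative names)
tmp = tempfile.mkdtemp()
for name in ["a.png", "b.png", "c.pdf", "d.txt"]:
    open(os.path.join(tmp, name), "wb").close()

# Keep only names matching the glob pattern, like pathGlobFilter="*.png"
png_files = sorted(f for f in os.listdir(tmp) if fnmatch.fnmatch(f, "*.png"))
print(png_files)  # ['a.png', 'b.png']
```

The same idea applies to the `*.pdf` and `*.txt` reads below: the glob selects which files the binaryFile reader picks up from the directory.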
Can we read a PDF file? Yes
```python
# We can even read PDF files
df_spark_pdf = spark \
    .read \
    .format("binaryFile") \
    .option("pathGlobFilter", "*.pdf") \
    .load("dataset/files")
```
Can we read a TXT file as binary? Yes
```python
# We can even read text files as binary files
df_spark_txt = spark \
    .read \
    .format("binaryFile") \
    .option("pathGlobFilter", "*.txt") \
    .load("dataset/files")
```
So, can we now write the files back from the binary content? Yes
```python
# Let's generate the text file back from the binary content.
# collect() returns a list of Rows, so pull the raw bytes out of the first row.
byte_content = df_spark_txt.select("content").collect()[0][0]

# Let's write the byte content back as a file
with open("dataset/new_example.txt", "wb") as f:
    f.write(byte_content)
```
As demonstrated, Spark can read any file as binary content for storage, and that content can later be written back to its original file format as needed.
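The write-back step works because the `content` column holds the file's raw bytes, untouched by any decoding. The invariant behind the whole post, that a binary read followed by a binary write is byte-for-byte lossless, can be checked in plain Python (file names here are illustrative):

```python
import os
import tempfile

# Simulate the round trip: write some bytes, read them back as binary,
# write them out again, and confirm nothing changed.
original = b"\x89PNG\r\n\x1a\nhello binary world"

src = os.path.join(tempfile.mkdtemp(), "example.bin")
with open(src, "wb") as f:
    f.write(original)

with open(src, "rb") as f:
    content = f.read()  # analogous to what Spark stores in 'content'

dst = src + ".copy"
with open(dst, "wb") as f:
    f.write(content)

with open(dst, "rb") as f:
    assert f.read() == original  # byte-for-byte identical
```

Because nothing interprets the bytes along the way, this works equally well for PNGs, PDFs, or any other format.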
Check out the iPython Notebook on Github — https://github.com/subhamkharwal/ease-with-apache-spark/blob/master/13_binary_files.ipynb
Check out PySpark Series on Medium — https://subhamkharwal.medium.com/learnbigdata101-spark-series-940160ff4d30