The Future of Data Engineering, and the Tale of Two Data Warehouses.

I am going to peer into the crystal ball, the seeing stone, and look into the murky future of Data Engineering to see what mysteries it holds. I've seen a story, a tale of two Data Warehouses; I've seen Machine Learning, Streams, Distributed Systems, Storage, and the eternal SQL. A lot has changed in the world of Data Engineering in the last few years, but a lot has not changed as well. Articles abound about the end of ETL and the rise of ELT, about Hadoop being dead, new data paradigms, no-code data flows, managed services, yet very little has actually changed, or it changes at a snail's pace. Inevitably, the story and future of Data Engineering can be told through the tale of two Data Warehouses.

Read more

Big Data File Showdown – Part 2 – ORC with Python.

In part 1 of the big data file format showdown we reviewed Parquet vs Avro. It was apparent from the start that the two file formats were built for different things. Avro is clearly a complex row-structured file format used in communication and transactions, where schema is king and nested structures are no problem. Parquet, on the other hand, has risen to the top with the popularity of Spark; it is columnar-based storage, well suited to structured and tabular data. But, lest the annals of the inter-webs call us uncouth and forgetful, we must add the ORC file format to the list.
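For a taste of what that looks like, here is a minimal sketch of round-tripping ORC in Python with pyarrow. This assumes a pyarrow build (4.0+) compiled with ORC support; the table contents and file name are made up for illustration, not taken from the post.

```python
import pyarrow as pa
import pyarrow.orc as orc  # requires a pyarrow build with ORC support

# A small in-memory Arrow table (stand-in data, just for illustration).
table = pa.table({
    "trip_id": [1, 2, 3],
    "fare": [12.50, 7.25, 30.00],
})

# Write it out as ORC, then read it back.
orc.write_table(table, "trips.orc")
round_tripped = orc.read_table("trips.orc")
print(round_tripped.schema)
```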

Read more

Intro to Spark ML Pipelines for Data Engineers

Don’t you like stuff for free? Don’t you like it when stuff I just handed to you? I mean when is that last time you didn’t want to get a free t-shirt. How about 20 bucks in the mail from you Grandma? That’s kinda what Pipelines are in Spark ML. The Apache Spark ML library is probably one of the easiest ways to get started into Machine Learning. Leaving all the fancy stuff to the Data Scientist is fine, Data Engineers are more interested in the end-to-end. The Pipeline, and the Spark ML API’s provide a straight froward path to building ML Pipelines that lower the bar for entry into ML. So, set right up, come get your free ML Pipeline.
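As a teaser, here is roughly what one of those free Pipelines looks like in PySpark. The toy data, column names, and choice of LogisticRegression are illustrative assumptions, not the post's exact example.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

# Toy training data: two numeric features and a binary label.
train = spark.createDataFrame(
    [(1.0, 3.5, 0.0), (2.0, 1.0, 1.0), (0.5, 4.0, 0.0), (3.0, 0.5, 1.0)],
    ["feature_a", "feature_b", "label"],
)

# Each stage's output column feeds the next stage's input column.
assembler = VectorAssembler(inputCols=["feature_a", "feature_b"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

# The Pipeline chains the stages; fit() runs them all in order.
model = Pipeline(stages=[assembler, lr]).fit(train)
model.transform(train).select("label", "prediction").show()
```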

Read more

Converting CSVs to Parquets… with Python and Scala.

With Parquet taking over the big data world, as it should, and csv files being that third wheel that will just never go away… it’s becoming more and more common to see the repetitive task of converting csv files into parquets. There are lots of reasons to do this: compression, fast reads, integrations with tools like Spark, better schema handling, and the list goes on. Today I want to see how many ways I can figure out to simply convert an existing csv file to a parquet with Python, Scala, and whatever else is out there, and see how simple and fast each option is. Let’s do it!
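As a preview, here is one such option sketched in Python with pyarrow. The file names are hypothetical, and snappy compression is just one reasonable default.

```python
import pyarrow.csv as pv
import pyarrow.parquet as pq

# Read the csv into an in-memory Arrow table,
# then write that table back out as a Parquet file.
table = pv.read_csv("trips.csv")
pq.write_table(table, "trips.parquet", compression="snappy")
```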

Read more