This page provides instructions for extracting data from Amazon S3 CSV files and loading it into Delta Lake on Databricks. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is Amazon S3?
Amazon S3 (Simple Storage Service) provides cloud-based object storage through a web service interface. You can use S3 to store and retrieve any amount of data, at any time, from anywhere on the web. S3 objects, which may be structured in any way, are stored in resources called buckets. One common use is to store files in comma-separated values (CSV) format, in which each record consists of multiple values separated by commas.
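For example, a small CSV file of hypothetical signup records might look like this, with a header row followed by one record per line:

```
id,name,signup_date
1,Alice,2021-03-04
2,Bob,2021-03-05
```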
What is Delta Lake?
Delta Lake is an open source storage layer that sits on top of existing data lake file storage, such as AWS S3, Azure Data Lake Storage, or HDFS. It uses versioned Apache Parquet files to store data, and a transaction log to keep track of commits, to provide capabilities like ACID transactions, data versioning, and audit history.
Getting CSV data out of S3
AWS provides both a REST API and command-line utilities that you can use to access resources stored on the platform. To retrieve an object, you need to know its bucket and object key, as well as your AWS authorization credentials.
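As one sketch of the retrieval step, the snippet below uses boto3, the AWS SDK for Python, to download a single CSV object. The bucket name, object key, and local path are hypothetical placeholders, and credentials are assumed to be resolved from the environment or an AWS credentials file.

```python
import boto3

# Create an S3 client; boto3 picks up credentials from the environment,
# ~/.aws/credentials, or an attached IAM role.
s3 = boto3.client("s3")

# Download one CSV object from S3 to the local filesystem.
s3.download_file(
    Bucket="my-example-bucket",   # hypothetical bucket name
    Key="exports/orders.csv",     # hypothetical object key
    Filename="/tmp/orders.csv",
)
```

The equivalent with the command-line utilities is a one-liner: `aws s3 cp s3://my-example-bucket/exports/orders.csv /tmp/orders.csv`.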
Preparing CSV data
If you don't already have a data structure in which to store the data you retrieve, you'll have to create a schema for your data tables. Then, for each field in each table, you'll need to identify a predefined datatype (INTEGER, DATETIME, etc.) and build a table that can receive the values.
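Since the destination here is Delta Lake on Databricks, one way to pin down those datatypes is to declare an explicit schema when Spark reads the CSV, rather than relying on type inference. The column names and types below are hypothetical, matching the sample file above, and `spark` is the session a Databricks notebook provides.

```python
from pyspark.sql.types import StructType, StructField, IntegerType, StringType, DateType

# Explicit schema: each CSV column gets a declared, predefined datatype.
orders_schema = StructType([
    StructField("id", IntegerType(), nullable=False),
    StructField("name", StringType(), nullable=True),
    StructField("signup_date", DateType(), nullable=True),
])

# Apply the schema on read so values are parsed into their declared types
# instead of arriving as strings.
df = spark.read.csv("/tmp/orders.csv", header=True, schema=orders_schema)
```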
Loading data into Delta Lake on Databricks
To create a Delta table, you can use existing Apache Spark SQL code and change the format from parquet, csv, or json to delta. Once you have a Delta table, you can write data into it using Apache Spark's Structured Streaming API. The Delta Lake transaction log guarantees exactly-once processing, even when there are other streams or batch queries running concurrently against the table. By default, streams run in append mode, which adds new records to the table. Databricks provides quickstart documentation that explains the whole process.
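Here's a minimal sketch of both steps, assuming a Databricks cluster with Delta Lake available, the `df` and `orders_schema` objects defined above, and placeholder paths:

```python
# Batch: writing the DataFrame in delta format creates the Delta table.
df.write.format("delta").mode("overwrite").save("/delta/orders")

# Streaming: append new CSV files landing in S3 into the same Delta table.
# The checkpoint location is what lets the Delta transaction log guarantee
# exactly-once processing across restarts.
(spark.readStream
    .schema(orders_schema)                     # schema declared earlier
    .csv("s3a://my-example-bucket/exports/")   # hypothetical source prefix
    .writeStream
    .format("delta")
    .outputMode("append")                      # default: add new records
    .option("checkpointLocation", "/delta/orders/_checkpoints")
    .start("/delta/orders"))
```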
Other data warehouse options
Delta Lake on Databricks is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, PostgreSQL, or Snowflake, which are RDBMSes that use similar SQL syntax, or Panoply, which works with Redshift instances. Others choose a data lake, like Amazon S3. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Postgres, To Snowflake, To Panoply, and To S3.
Easier and faster alternatives
If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.
Thankfully, products like Stitch were built to move data from Amazon S3 CSV to Delta Lake on Databricks automatically. With just a few clicks, Stitch starts extracting your Amazon S3 CSV data, structuring it in a way that's optimized for analysis, and inserting that data into your Delta Lake on Databricks data warehouse.