Nature of ETL and Spark
satya - 10/25/2023, 11:41:37 AM
The following is largely a summary of a conversation with ChatGPT on a variety of ETL-related questions.
satya - 10/25/2023, 11:46:18 AM
Here is a quick list of deep questions about Spark and, by extension, ETL in general:
- What is the best way to store a large file for processing with Spark?
- How can I copy a CSV file onto S3 in Parquet format? (See the CSV-to-Parquet sketch after this list.)
- How do you handle errors when copying to S3 in this process?
- What are a couple of techniques for idempotency of S3 Parquet writes?
- What is the difference between a file, object, and a row in S3?
- Can you give a few examples of "files" and "folders" in S3 where folders are part of the key?
- List my questions so far.
- If processing a file in Spark requires an API call for each record, how do you go about it? And if per-record calls are expensive, how do you avoid them? (See the mapPartitions sketch after this list.)
- What is data partitioning in Spark? Can you give a few examples? (See the partitioning sketch after this list.)
- How are these partitions used during processing?
- If I have only 1 partition, is it processed on only one node?
- What is memoization?
- How does Spark caching work? (See the caching sketch after this list.)
- What are the best practices for storing Spark results in an RDBMS?
- When storing the results of Spark in a database, how do you handle commits and rollbacks?
- How are commits and rollbacks handled when storing an RDD in a relational database where each record needs an explicit commit?
- Specifically, when you store an RDD in a relational database, what is the best approach if each record needs an explicit commit? (See the foreachPartition sketch after this list.)
- How do I store errors encountered during ETL operations in Spark, and how do I report and handle them afterward?
- If you find that some records are in error during ETL via Spark, what is the best way to handle and report those errors, and to deal with them afterward? Do you store them in a database? Can you use a UI to work through those records? (See the error-quarantine sketch after this list.)
- Can you list my questions one more time, please?
- Can you list them without their categorization?
- Does foreachPartition commit or roll back the entire partition?
- What if the entire RDD needs to end up in a single file on S3? (See the single-file sketch after this list.)
- When multiple partitions are used to write an RDD, does the eventual output have one file name or multiple names?
- How do I read a file that is in multiple partitions on S3 into a partitioned RDD in Spark?
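
Below are a few PySpark sketches for some of the questions above. They are illustrative, not definitive: bucket names, paths, column names, and helper functions are hypothetical stand-ins.

Copying a CSV file to S3 as Parquet. This assumes the S3A connector and credentials are already configured; `mode("overwrite")` keeps reruns idempotent at the path level.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

# Read the source CSV (hypothetical bucket and path)
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3a://my-source-bucket/input/data.csv"))

# Overwrite makes a retry replace any previous, possibly partial, output
(df.write
   .mode("overwrite")
   .parquet("s3a://my-target-bucket/output/data_parquet/"))
```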
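
Amortizing a per-record API call with `mapPartitions`: create one client per partition (and batch requests if the API supports it) rather than one connection per record. `make_client` and `client.enrich` are hypothetical stand-ins for a real API client.

```python
def enrich_partition(rows):
    client = make_client()  # hypothetical: one client per partition
    try:
        for row in rows:
            # one call per record, but connection setup is amortized
            yield (row["id"], client.enrich(row["payload"]))
    finally:
        client.close()

enriched = df.rdd.mapPartitions(enrich_partition)
```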
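
Partitioning examples: `repartition(n)` shuffles into n roughly even partitions, `repartition(col)` co-locates rows sharing a key, and `coalesce(n)` reduces the partition count without a full shuffle. Column names are hypothetical.

```python
df16 = df.repartition(16)               # 16 roughly even partitions
by_country = df.repartition("country")  # hash-partition by a column
fewer = df16.coalesce(4)                # shrink to 4, avoiding a shuffle

print(df16.rdd.getNumPartitions())      # -> 16
```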
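
Caching: persist an expensively derived DataFrame so repeated actions reuse it instead of recomputing the whole lineage. Column names are hypothetical.

```python
from pyspark import StorageLevel

expensive = df.filter(df["amount"] > 0).groupBy("country").count()
expensive.persist(StorageLevel.MEMORY_AND_DISK)

expensive.show()       # first action materializes the cache
expensive.count()      # second action reads from the cache
expensive.unpersist()  # release the memory when done
```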
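
Per-partition commits with `foreachPartition`: the transaction boundary is the partition, not the whole RDD, so each partition's batch commits or rolls back on its own. This sketch uses the psycopg2 driver against a hypothetical table and connection string.

```python
import psycopg2

def save_partition(rows):
    conn = psycopg2.connect("dbname=etl user=etl")  # hypothetical DSN
    try:
        with conn.cursor() as cur:
            for row in rows:
                cur.execute(
                    "INSERT INTO results (id, value) VALUES (%s, %s)",
                    (row[0], row[1]))
        conn.commit()    # commit the whole partition as one unit
    except Exception:
        conn.rollback()  # undo this partition only, then rethrow
        raise
    finally:
        conn.close()

rdd.foreachPartition(save_partition)
```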
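
Quarantining error records: tag each row with an error reason, send good rows onward, and park bad rows in their own location for later review (for example, from a reporting UI). The validity rule and paths are hypothetical.

```python
from pyspark.sql import functions as F

checked = df.withColumn(
    "error",
    F.when(F.col("amount").isNull(), F.lit("amount missing")))

good = checked.filter(F.col("error").isNull()).drop("error")
bad = checked.filter(F.col("error").isNotNull())

good.write.mode("overwrite").parquet("s3a://my-bucket/clean/")
bad.write.mode("overwrite").parquet("s3a://my-bucket/errors/")
```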
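
Writing the whole dataset as a single S3 file: coalesce to one partition before writing. Note that Spark still produces a directory containing a single `part-00000-...` file, so a rename step is needed if an exact file name matters. The path is hypothetical.

```python
(df.coalesce(1)  # everything into one partition -> one part file
   .write
   .mode("overwrite")
   .csv("s3a://my-bucket/single-file-output/", header=True))
```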