Analyze Data - Best Practices for Implementing a Data Lake in Amazon S3 (Level 200)

Flexibility is key when building and scaling data lakes, and by choosing the right storage architecture you gain the agility to quickly experiment with and migrate to the latest analytics solutions. In this session, we explore best practices for building a data lake on Amazon S3 that let you leverage a broad array of AWS, open-source, and third-party analytics tools, helping you stay at the cutting edge. We cover use cases for analytics tools, including Amazon EMR and AWS Glue, and for query-in-place tools such as Amazon Athena, Amazon Redshift Spectrum, Amazon S3 Select, and Amazon S3 Glacier Select.
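As a concrete illustration of the query-in-place pattern mentioned above, here is a minimal sketch that uses Amazon S3 Select through boto3 to filter a CSV object server-side instead of downloading it in full. The bucket name, object key, and column names are hypothetical placeholders and are not taken from the session itself.

```python
# Minimal sketch of "query-in-place" with Amazon S3 Select via boto3.
# The bucket name, object key, and CSV column names below are
# hypothetical placeholders, not values used in the session.
import boto3

s3 = boto3.client("s3")

# S3 Select evaluates the SQL expression inside S3 and streams back
# only the matching rows, rather than the whole object.
response = s3.select_object_content(
    Bucket="example-data-lake-bucket",      # hypothetical bucket
    Key="raw/sales/2020/q1.csv",            # hypothetical object key
    ExpressionType="SQL",
    Expression=(
        "SELECT s.region, s.amount FROM S3Object s "
        "WHERE CAST(s.amount AS FLOAT) > 1000"
    ),
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"},
                        "CompressionType": "NONE"},
    OutputSerialization={"CSV": {}},
)

# The result arrives as an event stream; collect the Records payloads.
for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"))
```

The same query-in-place idea scales beyond a single object: Amazon Athena or Amazon Redshift Spectrum can run SQL across many objects and partitions in the data lake without moving the data first.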

Kumar Nachiketa is a Storage Partner Solutions Architect at AWS for APJ, based in Singapore. With over 13 years in the data storage industry, he loves helping partners and customers make better decisions about their data storage strategy and build robust solutions. He has recently been working with partners across APJ to create innovative data storage practices.


#AWS #AWSSummit #AWSEvents
Comments

The demo at the end helped me understand it better. Thanks

LovyGupta

These demos are completely naive ... you should demo something that shows how you can deploy from one environment to another; you are just showing the console after giving so much theory about Lake Formation. The demo is so manual, a completely useless demo.

navinsai