AWS Tutorials - Data Quality Check using AWS Glue DataBrew

Maintaining data quality is very important for a data platform. Bad data can break ETL jobs, crash dashboards and reports, and hurt the accuracy of machine learning models by introducing bias and error. AWS Glue DataBrew data profile jobs can be used for data quality checks: you define data quality rules and validate your data against them. This tutorial shows how to use Data Quality Rules in AWS Glue DataBrew to validate data quality.
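For readers who want to script what the video demonstrates in the console, here is a minimal boto3 sketch of the same flow: define a data quality ruleset against a DataBrew dataset, attach it to a profile job through a validation configuration, and start a run. All names, ARNs, and the example rule are placeholders, not values from the video.

```python
import boto3

databrew = boto3.client("databrew")

# Placeholder ARN of an existing DataBrew dataset.
dataset_arn = "arn:aws:databrew:us-east-1:111122223333:dataset/orders-dataset"

# 1. Define data quality rules as check expressions with substitution variables.
databrew.create_ruleset(
    Name="orders-dq-ruleset",
    TargetArn=dataset_arn,
    Rules=[
        {
            "Name": "order_amount_range",
            "CheckExpression": ":col1 between :val1 and :val2",
            "SubstitutionMap": {":col1": "`order_amount`", ":val1": "0", ":val2": "10000"},
        }
    ],
)

# 2. Create a profile job that validates the dataset against the ruleset.
databrew.create_profile_job(
    Name="orders-dq-profile-job",
    DatasetName="orders-dataset",
    RoleArn="arn:aws:iam::111122223333:role/DataBrewServiceRole",  # placeholder
    OutputLocation={"Bucket": "my-dq-results-bucket"},             # placeholder
    ValidationConfigurations=[
        {
            "RulesetArn": "arn:aws:databrew:us-east-1:111122223333:ruleset/orders-dq-ruleset",
            "ValidationMode": "CHECK_ALL",
        }
    ],
)

# 3. Run the profile job; the profile and validation results are written to the output bucket.
run = databrew.start_job_run(Name="orders-dq-profile-job")
print(run["RunId"])
```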
Comments

Thanks a lot, very good tutorial. The way you take a topic and explain it with a sample is excellent.

SpiritOfIndiaaa

Thanks, a very comprehensive overview of quality checking in DataBrew.

smmike

Very impressive. I have been looking at data validation frameworks and think this would be a great fit. The two open source libraries I checked are:
1) Great Expectations: found it tough to configure, and it has a steep learning curve.
2) PyDeequ: not as up to date as the Scala version (Deequ), and the community is not very active.

Having said that, I have a few queries about DataBrew. Kindly share your thoughts:
1) We have thousands of ETL processes (both batch and real time). Do you think DataBrew can handle that scale?
2) Anomaly detection: can DataBrew handle this? If not, is there an alternative approach you could suggest?
3) As we onboard new sources, I want the data validation framework to be easily extensible. For example, we should just add a rule set and it should be able to handle any new source. Do you think storing rules in some datastore (e.g. DynamoDB) is a better idea than defining them in DataBrew? DataBrew could just look up DynamoDB to check the rules defined and run them against incoming data.
4) If a certain check is not available, can we customize DataBrew to handle that logic? With an open source library like Great Expectations it can be handled. Another idea: if it can't be handled in DataBrew, use Step Functions (conditional statements) to decide between triggering DataBrew and some other validation mechanism.

jeety

I'm looking for the most code-light way (a short Python Lambda function is fine and assumed) to set up a process so that when a CSV file is dropped into my S3 bucket/incoming folder, the file is automatically validated using a DQ ruleset I would manually build earlier in the console. For any given Lambda invocation (triggered, I assume, by a file dropped into our S3 bucket), I'd like the Lambda to start the DQ ruleset run but not wait for it to finish (Step Functions?). I want to output a log file of which rows/columns failed to my S3 bucket/reports folder (using some kind of trigger that fires when a DQ ruleset finishes executing?). Again, it is important that the process be fully automated, because hundreds of files with hundreds of thousands of rows will be dropped into our S3 bucket/incoming folder every day via a different automated process. The end goal is merely to let the client know if their file does not fit the rules; there is no need to save or clean the data. I realize I may be asking a lot, so please feel free to just share the best high-level path of which AWS services to use in which order. Thank you!

scotter
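Not a definitive answer to the question above, but a rough sketch of the fire-and-forget shape it describes: an S3 upload event triggers a Lambda, which points an existing DataBrew dataset at the new file and starts an existing profile job that already has the manually built DQ ruleset attached, then returns without waiting. The resource names here are hypothetical. For the reporting part, DataBrew publishes job state-change events to EventBridge, so a second Lambda (or Step Functions) can pick up the finished run and copy the validation report from the job's output location to the reports folder.

```python
import boto3

databrew = boto3.client("databrew")

DATASET_NAME = "incoming-dq-dataset"          # hypothetical, created once in the console
PROFILE_JOB_NAME = "incoming-dq-profile-job"  # hypothetical, has the DQ ruleset attached

def handler(event, context):
    # S3 put notification: one record per uploaded object.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Point the dataset at the file that just arrived.
    databrew.update_dataset(
        Name=DATASET_NAME,
        Input={"S3InputDefinition": {"Bucket": bucket, "Key": key}},
    )

    # Start the profile job (which validates against the ruleset) and return immediately.
    run = databrew.start_job_run(Name=PROFILE_JOB_NAME)
    return {"jobRunId": run["RunId"], "validatedKey": key}
```

With hundreds of files arriving per day, updating a single shared dataset is race-prone; in practice you would more likely use DataBrew's parameterized dataset paths or create a dataset and job per file, but the sketch shows the minimal shape of the flow.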

Thank you for the tutorial, which gives a good overall understanding of the DQ part. Is it possible to view the detailed records that succeeded or failed?

shokyeeyong

Great! Do you have any plans to make a video on AWS Glue and Apache Hudi integration?

vishalchavhan

Where have you placed this code, and how is it connected with this DataBrew profile job?

nishaKulhari-wj

Thanks for the clear explanation!
I've seen that there isn't a simple way to create a rule that allows a column to match a certain condition while at the same time allowing it to be null.

For instance, consider a column "age" of int values.
I would like to create a rule with the following checks:
1- Age must be between 0 and 100
2- Age can have missing values

The problem here is that check 1 fails if there is a missing value, and there isn't a selection in the DataBrew menu for check 2.

Have you found some way to accomplish this task?

sergiozavota

This was very nicely explained! Thank you so much :)

Is it possible to have a rule set where a new data file is verified against some existing Redshift table data?
For example, let's say we are getting orders data from Kinesis into S3, and we need to verify inventory information which is in DynamoDB; whenever the inventory is lower than a threshold value, we want to run a particular pipeline. Can we do this?

spandans

This is perfect. We have thousands of datasets where we need to perform DQ checks and send reports. Is it possible to automate or create the rules programmatically instead of using the console? Something like creating the rules in a YAML/CSV file?

ladakshay
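On creating rules programmatically: the console is not required, since rulesets can be created through the API. Below is a rough sketch that reads rule definitions from a YAML file and creates one DataBrew ruleset per entry via boto3. The YAML layout (rulesets -> name/dataset_arn/rules) is invented for illustration; only create_ruleset and its Rules structure come from the DataBrew API.

```python
import boto3
import yaml  # PyYAML

databrew = boto3.client("databrew")

def create_rulesets_from_yaml(path: str) -> None:
    with open(path) as f:
        config = yaml.safe_load(f)

    # Hypothetical YAML structure:
    # rulesets:
    #   - name: orders-dq-ruleset
    #     dataset_arn: arn:aws:databrew:...:dataset/orders
    #     rules:
    #       - name: amount_range
    #         expression: ":col1 between :val1 and :val2"
    #         substitutions: {":col1": "`order_amount`", ":val1": "0", ":val2": "10000"}
    for rs in config["rulesets"]:
        rules = [
            {
                "Name": r["name"],
                "CheckExpression": r["expression"],
                "SubstitutionMap": r.get("substitutions", {}),
            }
            for r in rs["rules"]
        ]
        databrew.create_ruleset(
            Name=rs["name"],
            TargetArn=rs["dataset_arn"],  # ARN of the dataset the ruleset applies to
            Rules=rules,
        )
```

The same loop could also create or update a profile job per dataset with a validation configuration pointing at the generated ruleset, so that all of the datasets get their DQ checks wired up without touching the console.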

That was extremely helpful, thank you!

MahmoudAtef

It's a nice explanation. Do you offer any training? I am looking for training, please help me...

veerachegu

Please make a video on PyDeequ with Glue (without using EMR).

BounceBackTrader

Can you please give training for AWS Glue? We are 5 members looking for training.

veerachegu