How to Validate Data in a Kafka Stream using QuerySurge

This video demonstrates how to validate data from a Kafka stream using QuerySurge and KSQL.
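The video itself walks through the QuerySurge setup; as a rough stand-in for what a KSQL-based check looks like outside of QuerySurge, the sketch below posts a query to a KSQL server's REST /query endpoint and applies a simple row-count rule. The server address, the SALES_EVENTS stream name, the expected count, and the exact response framing are all illustrative assumptions, not details taken from the video.

```python
import json
import requests  # third-party HTTP client, assumed installed

# Illustrative assumptions (not from the video): a KSQL server listening on
# localhost:8088, and a stream named SALES_EVENTS already registered over a
# Kafka topic.
KSQL_URL = "http://localhost:8088/query"
QUERY = "SELECT * FROM SALES_EVENTS LIMIT 100;"

payload = {
    "ksql": QUERY,
    "streamsProperties": {"ksql.streams.auto.offset.reset": "earliest"},
}
headers = {"Content-Type": "application/vnd.ksql.v1+json; charset=utf-8"}

rows = []
# The /query endpoint streams results back as chunked JSON; the exact framing
# varies between KSQL versions, so lines that are not standalone JSON objects
# are simply skipped.
with requests.post(KSQL_URL, data=json.dumps(payload), headers=headers,
                   stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        try:
            chunk = json.loads(line.decode("utf-8").rstrip(","))
        except json.JSONDecodeError:
            continue
        if isinstance(chunk, dict) and chunk.get("row"):
            rows.append(chunk["row"]["columns"])

# A trivial validation rule: did the stream yield the expected number of rows?
print(f"fetched {len(rows)} rows via KSQL")
assert len(rows) == 100, "row count from the Kafka stream did not match expectations"
```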

QuerySurge is the smart Data Testing solution that automates the data validation and testing of Big Data, Data Warehouses, Business Intelligence Reports and Enterprise Applications with full DevOps functionality for continuous testing.

Kafka is open source software from the Apache community that provides a framework for storing, reading and analyzing streaming data.

Kafka was originally created at LinkedIn, where it played a part in analyzing the connections between their millions of professional users in order to build networks between people. It was given open source status and passed to the Apache Software Foundation – which coordinates and oversees the development of open source software – in 2011.

What is Kafka used for?
In order to stay competitive, businesses today rely increasingly on real-time data analysis, which gives them faster insight and quicker response times. Real-time insight allows businesses and organizations to make predictions about what they should stock, promote, or pull from the shelves, based on the most up-to-date information available.

Traditionally, data has been processed and transmitted across networks in “batches”. This is due to limitations in the pipeline – the speed at which CPUs can handle the calculations involved in reading and transferring information, or at which sensors can detect data. These “bottlenecks” in our ability to process data have existed since humans first began to record and exchange information in written records.

Due to its distributed nature and the streamlined way it manages incoming data, Kafka can operate very quickly – large clusters can monitor and react to millions of changes to a dataset every second. This means it becomes possible to work with – and react to – streaming data in real time.

Kafka was originally designed to track the behavior of visitors to large, busy websites (such as LinkedIn). By analyzing the clickstream data of every session (how the user navigates the site and what functionality they use), a greater understanding of user behavior can be built. This makes it possible to predict which news articles, or products for sale, a visitor might be interested in.

Since then, Kafka has become widely used, and it is an integral part of the stack at Spotify, Netflix, Uber, Goldman Sachs, PayPal and Cloudflare, which all use it to process streaming data and understand customer, or system, behavior. In fact, according to the Apache Kafka website, one out of five Fortune 500 businesses uses Kafka to some extent.

How does Kafka work?
Kafka takes information – which can be read from a huge number of data sources – and organizes it into “topics”. As a very simple example, one of these data sources could be a transactional log where a grocery store records every sale.

Kafka would process this stream of information and organize it into “topics” – which could be “number of apples sold”, or “number of sales between 1pm and 2pm” – that could be analyzed by anyone needing insight into the data.
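To make the idea of a “topic” concrete at the API level, here is a minimal sketch using the kafka-python admin client to create topics on a broker. The broker address and topic names simply follow the grocery-store example above and are assumptions, not anything Kafka prescribes.

```python
from kafka.admin import KafkaAdminClient, NewTopic  # pip install kafka-python

# Assumes a broker reachable at localhost:9092; topic names follow the
# grocery-store example and are purely illustrative.
admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

topics = [
    NewTopic(name="apple-sales", num_partitions=3, replication_factor=1),
    NewTopic(name="hourly-sales", num_partitions=3, replication_factor=1),
]

# Each topic is an ordered, partitioned log that producers append to and
# consumers read from.
admin.create_topics(new_topics=topics, validate_only=False)
admin.close()
```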

This may sound similar to how a conventional database lets you store or sort information, but in the case of Kafka it would be suitable for a national chain of grocery stores processing thousands of apple sales every minute.

This is achieved using a function known as a Producer, which is an interface between applications (e.g. the software monitoring the grocery store’s structured but unsorted transaction database) and the topics – Kafka’s own database of ordered, segmented data, known as the Kafka Topic Log.
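A minimal Producer sketch along these lines, using the kafka-python client: each hypothetical sale event is serialized as JSON and appended to a topic. The broker address, topic name and event fields are illustrative assumptions.

```python
import json
from datetime import datetime, timezone

from kafka import KafkaProducer  # pip install kafka-python

# Assumes a broker at localhost:9092; the topic name and event schema are
# made up for the grocery-store example.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

sale = {
    "store_id": 42,
    "item": "apple",
    "quantity": 3,
    "sold_at": datetime.now(timezone.utc).isoformat(),
}

# send() appends the event to the topic log; flush() blocks until the broker
# has acknowledged it.
producer.send("grocery-sales", value=sale)
producer.flush()
producer.close()
```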

Often this data stream will be used to fill data lakes such as Hadoop’s distributed file system, or to feed real-time processing pipelines like Spark or Storm.
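As one hedged illustration of feeding such a pipeline, the PySpark sketch below subscribes to the same hypothetical grocery-sales topic and streams incoming records to the console. It assumes a local broker and a Spark installation with the Kafka connector package (spark-sql-kafka) available.

```python
from pyspark.sql import SparkSession

# Assumes Spark with the spark-sql-kafka connector on the classpath and a
# broker at localhost:9092; the topic name is illustrative.
spark = SparkSession.builder.appName("grocery-sales-pipeline").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "grocery-sales")
    .option("startingOffsets", "earliest")
    .load()
)

# Kafka records arrive as binary key/value columns; cast the value to a
# string and write each micro-batch to the console.
query = (
    stream.selectExpr("CAST(value AS STRING) AS sale_json")
    .writeStream.format("console")
    .start()
)
query.awaitTermination()
```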

Another interface – known as the Consumer – enables topic logs to be read, and the information stored in them passed on to other applications that might need it – for example, the grocery store’s system for renewing depleted stock, or for discarding out-of-date items.
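A matching Consumer sketch, again with kafka-python and made-up topic and field names: it reads sale events and flags items for restocking, mirroring the example above. The threshold and stock-tracking logic are purely illustrative.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Assumes the same broker and topic as the producer sketch above.
consumer = KafkaConsumer(
    "grocery-sales",
    bootstrap_servers="localhost:9092",
    group_id="stock-replenishment",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

stock = {"apple": 100}  # hypothetical starting stock levels
REORDER_THRESHOLD = 20

for message in consumer:
    sale = message.value
    item, quantity = sale["item"], sale["quantity"]
    stock[item] = stock.get(item, 0) - quantity
    if stock[item] < REORDER_THRESHOLD:
        # In a real system this would call the store's restocking service.
        print(f"reorder needed for {item}: only {stock[item]} left")
```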

When you put its components together with the other common elements of a Big Data analytics framework, Kafka works by forming the “central nervous system” through which data passes between input and capture applications, data processing engines and storage lakes.

For more information on QuerySurge, please visit: