Ingest from DB to HDFS with Real-Time Dashboards - Big Data App Template
Abstract:
To make critical business decisions in real time, many businesses today rely on a variety of data, which arrives in large volumes. Variety and volume together make big data applications complex operations. Big data applications require businesses to combine transactional data with structured, semi-structured, and unstructured data for deep and holistic insights. And, time is of the essence: to derive the most valuable insights and drive key decisions, large amounts of data have to be continuously ingested into Hadoop data lakes as well as other destinations. As a result, data ingestion poses the first challenge for businesses, which must be overcome before embarking on data analysis.
With its various Application Templates for ingestion, DataTorrent allows users to:
Ingest vast amounts of data with enterprise-grade operability and performance guarantees provided by its underlying Apache Apex framework. Those guarantees include fault tolerance, linear scalability, high throughput, low latency, and end-to-end exactly-once processing.
Quickly launch template applications to ingest raw data, while also providing an easy, iterative way to add business logic and processing steps such as parse, dedupe, filter, transform, and enrich to ingestion pipelines.
Visualize various metrics on throughput, latency, and app data in real time throughout execution.
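The processing steps named above (parse, dedupe, filter, transform/enrich) can be illustrated with a minimal, framework-agnostic sketch. The stage functions and sample records below are hypothetical illustrations, not DataTorrent or Apache Apex APIs:

```python
import json

def parse(lines):
    """Parse raw JSON lines into dict records, skipping malformed input."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue

def dedupe(records, key="id"):
    """Drop records whose key value has already been seen."""
    seen = set()
    for rec in records:
        if rec[key] not in seen:
            seen.add(rec[key])
            yield rec

def filter_valid(records):
    """Keep only records with a positive amount."""
    return (r for r in records if r.get("amount", 0) > 0)

def enrich(records):
    """Add a derived field (here: the amount in cents)."""
    for rec in records:
        rec["amount_cents"] = int(rec["amount"] * 100)
        yield rec

raw = [
    '{"id": 1, "amount": 9.5}',
    '{"id": 1, "amount": 9.5}',   # duplicate, dropped by dedupe
    'not json',                   # malformed, dropped by parse
    '{"id": 2, "amount": -3.0}',  # dropped by filter_valid
    '{"id": 3, "amount": 2.25}',
]
result = list(enrich(filter_valid(dedupe(parse(raw)))))
print(result)  # records 1 and 3 survive, each with amount_cents added
```

In a real ingestion pipeline each stage would run as a separate operator that can be scaled and checkpointed independently; chained generators merely mimic that dataflow in-process.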
Template description:
The Database to HDFS Sync app template polls data records from a database and brings them into a Hadoop/big data lake for downstream processing and archival. This template will be available for download in early 2017.
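Conceptually, the template's poll-and-sync loop looks like the following sketch. It uses an in-memory SQLite table and a local file as stand-ins for the source database and the HDFS destination, and checkpointing by last-seen id is an illustrative assumption, not the actual mechanism of the template:

```python
import os
import sqlite3
import tempfile

# A toy source table standing in for the operational database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [("a",), ("b",), ("c",)])
conn.commit()

def poll_new_rows(conn, last_id):
    """Fetch rows inserted since the last checkpointed id, in order."""
    cur = conn.execute(
        "SELECT id, payload FROM events WHERE id > ? ORDER BY id", (last_id,))
    return cur.fetchall()

# A local file stands in for an HDFS destination path.
out_path = os.path.join(tempfile.mkdtemp(), "events.txt")

last_id = 0  # checkpointed position; starts before the first row
with open(out_path, "a") as out:
    for row_id, payload in poll_new_rows(conn, last_id):
        out.write(f"{row_id}\t{payload}\n")
        last_id = row_id  # advance checkpoint so a restart resumes here
```

In the real template this polling runs continuously, and the checkpointed position is persisted by the platform so that a failed or restarted pipeline resumes without re-ingesting or dropping rows.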
Presenter:
Yogi Devendra Vyavahare, Committer at Apache Apex and Engineer at DataTorrent.