Streaming Analytics with Analytic Models (R, Spark MLlib, H2O, PMML)

In March 2016, I had a talk at Voxxed Zurich about “How to Apply Machine Learning and Big Data Analytics to Real Time Processing”.

Finding Insights with R, H2O, Apache Spark MLlib, PMML and TIBCO Spotfire

“Big Data” is currently a big hype. Large amounts of historical data are stored in Hadoop or other platforms. Business intelligence tools and statistical computing are used to extract new knowledge and find patterns in this data, for example for promotions, cross-selling or fraud detection. The key challenge is how these findings from historical data can be applied to new transactions in real time in order to make customers happy, increase revenue or prevent fraud.
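The offline half of this challenge, finding a pattern in historical data, can be sketched in a few lines of plain Python. The customers, amounts and the mean-plus-three-standard-deviations rule below are hypothetical and only stand in for what a real statistical tool such as R or Spotfire would derive:

```python
import statistics

# Hypothetical historical transactions: (customer_id, amount)
history = [
    ("alice", 20.0), ("alice", 25.0), ("alice", 22.0), ("alice", 19.0),
    ("bob", 500.0), ("bob", 480.0), ("bob", 510.0), ("bob", 495.0),
]

# Group the amounts per customer.
per_customer = {}
for customer, amount in history:
    per_customer.setdefault(customer, []).append(amount)

# Derive a simple per-customer "pattern": mean + 3 standard deviations.
# Any future amount above this threshold is unusual for that customer
# and could be flagged for fraud review.
thresholds = {
    customer: statistics.mean(amounts) + 3 * statistics.stdev(amounts)
    for customer, amounts in per_customer.items()
}

print(thresholds)
```

The point is not the statistics but the output: a compact set of thresholds (or, more generally, model coefficients) that can later be embedded into real-time processing.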

Putting Analytic Models into Action via Event Processing and Streaming Analytics

“Fast Data” via stream processing is the solution for embedding patterns – obtained from analyzing historical data – into future transactions in real time. The following slide deck uses several real-world success stories to explain the concepts behind stream processing and its relation to Apache Hadoop and other big data platforms. I discuss how patterns and statistical models from R, Apache Spark MLlib, H2O, and other technologies can be integrated into real-time processing using open source stream processing frameworks (such as Apache Storm, Spark Streaming or Flink) or products (such as IBM InfoSphere Streams or TIBCO StreamBase). A live demo showed the complete development lifecycle, combining analytics with TIBCO Spotfire, machine learning via R and stream processing via TIBCO StreamBase and TIBCO Live Datamart.
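The embedding step described above can be sketched in plain Python, independent of any particular stream processing framework. The coefficients below are hypothetical placeholders for a logistic-regression fraud model trained offline (e.g. in R or Spark MLlib) and exported, for instance, as PMML:

```python
import math

# Hypothetical coefficients of a fraud model trained offline
# on historical data; a real deployment would load these from
# an exported artifact such as a PMML file.
WEIGHTS = {"amount": 0.004, "foreign_country": 2.1, "night_time": 1.3}
INTERCEPT = -5.0

def score(event):
    """Apply the pre-trained model to a single live event
    and return a fraud probability between 0 and 1."""
    z = INTERCEPT + sum(WEIGHTS[k] * event[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A stream processing engine (Storm, Spark Streaming, Flink,
# StreamBase, ...) would invoke score() per incoming event;
# here a plain list simulates the event stream.
stream = [
    {"amount": 35.0, "foreign_country": 0, "night_time": 0},
    {"amount": 900.0, "foreign_country": 1, "night_time": 1},
]

for event in stream:
    prob = score(event)
    if prob > 0.5:
        print("ALERT:", event, round(prob, 2))
```

The design point is the split of responsibilities: the heavy model training stays in the batch/analytics world, while the streaming engine only evaluates a cheap scoring function per event, which is what keeps the latency low enough for real-time decisions.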

Slide Deck from Voxxed Zurich 2016

Here is the slide deck:


Kai Waehner

bridging the gap between technical innovation and business value for real-time data streaming, processing and analytics
