About this event
This technical session examines real-world challenges and patterns in building distributed stream processing systems. We focus on scalability, fault tolerance, and latency trade-offs through a concrete case study, using specific frameworks like Apache Storm as supporting tools to illustrate production concepts.
Learn practical patterns for distributed stream processing at scale.
Perfect for: Data engineers building distributed streaming systems who need production-proven patterns.
----------------------------------------------------------------------------------------------------------------------
1. Stream Processing: Past and Now (4 minutes)
2. Distributed Stream Processing Fundamentals (5 minutes)
3. Architectural Patterns (6 minutes)
Common architecture: Kafka → Stream Processor → Sink (DB, Lake, Dashboard)
4. Designing for Scale (6 minutes)
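The common architecture named in the agenda (Kafka → Stream Processor → Sink) can be sketched in a few lines. This is an illustrative, dependency-free stand-in: the in-memory "topic", the `stream_processor` helper, and the record fields are all hypothetical, not part of any framework named above; in production the source would be a real Kafka consumer and the sink a database, lake, or dashboard writer.

```python
from collections import deque

def stream_processor(source, transform, sink):
    """Pull records from the source, apply a transform, push results to the sink."""
    for record in source:
        sink.append(transform(record))

# Simulated Kafka topic: click events with a user id and a latency in ms.
topic = deque([
    {"user": "a", "latency_ms": 120},
    {"user": "b", "latency_ms": 340},
])

sink = []  # stand-in for a DB, lake, or dashboard feed

# Stateless per-record transform: flag slow events for the dashboard.
stream_processor(topic, lambda r: {**r, "slow": r["latency_ms"] > 200}, sink)
```

The same three-stage shape holds at scale; what changes is that each stage becomes distributed, which is where the partitioning, fault-tolerance, and latency trade-offs covered in the session come in.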
Hosted by
Hasan's background is in Data Engineering, where he has:
• Designed and optimised scalable databases and cloud storage architectures.
• Built low-latency data pipelines to support real-time applications and analytics dashboards.
• Developed AI/ML-based solutions, including LSTM models and recommendation systems, to enhance user engagement.
• Collaborated across teams to drive actionable insights, ensuring data solutions align with business goals.
Harsha is a user-first GTM specialist at Datazip, transforming early-stage startups from zero to one. With a knack for technical market strategy and a startup enthusiast's mindset, she bridges the gap between innovative solutions and meaningful market adoption.
OLake is an open-source data ingestion tool, available on GitHub and developed by Datazip, Inc. Its primary function is to replicate data from transactional databases and streaming platforms (such as PostgreSQL, MySQL, MongoDB, Oracle, and Kafka) into open data lakehouse formats such as Apache Iceberg.