Confluent Current 2023 is one of the largest Kafka gatherings in the world. We look at some of the top sessions happening at this year’s event.
The Confluent Current 2023 conference is right around the corner, bringing together some of the key thinkers and companies in the Kafka space for two days of sessions and networking. The conference, held in San Jose, California, on September 26 and 27, will include over 100 sessions on the evolution of data streaming and on how the integration of new AI technologies will shape the future of work in this space.
It is an opportune time for data streaming in general, with technologies such as Apache Kafka and Flink continuing to see growth in adoption across businesses of all shapes and sizes. At the same time, data streaming is being utilized in more applications than ever before, with industries such as finance, healthcare, and the public sector investing more resources into real-time data streaming.
AI is expected to be a key topic of conversation at Current 2023, particularly how businesses can use data streaming to get the most out of their generative AI pursuits. In addition, the next stages of Kafka, Flink, and other data streaming tools will be previewed at Current, enabling business leaders and engineers to better understand what is coming down the pipeline.
Streaming into the Future (Sep 26, 8:45am)
Jay Kreps, co-founder and CEO of Confluent, and Shaun Clowes, CPO at Confluent, will be joined by Joseph Foster of NASA, Girish Rao of Warner Bros., and Daniel Sternberg of Notion to kick off the event with a discussion on the evolution and impact of data streaming platforms.
Kafka, Flink, and Beyond (Sep 27, 9:00am)
The next day opens with another early morning keynote, in which leaders of the Kafka and Flink communities discuss recent contributions to both open-source platforms and upcoming improvements. Ismael Juma and Martijn Visser of Confluent will be joined by Satish Duggana of Uber and Tobias Nothaft of BMW for this keynote, which will also explore some of the innovative applications that have recently been built in the data streaming space.
Top 10 Key Sessions
Analytics: The Final Data Frontier (Sep 26, 10:30am)
In this breakout session, Tim Berglund, VP of developer relations at StarTree, will posit that the next step for businesses is to let users outside the organization access and utilize data through Apache Pinot. Berglund will preview the power of Pinot as a tool to use alongside Apache Kafka to speed up filtering, grouping, and aggregating results.
The Top 5 Mistakes Engineers Make When Implementing Apache Flink (Sep 26, 11:30am)
Robert Metzger, PMC chair for Apache Flink, and Sharon Xie, founding engineer at Decodable, will go over some of the top mistakes engineers make at various stages of implementing and maintaining Apache Flink, alongside best practices for avoiding them.
Need for Speed: Machine Learning in the Era of Real-Time (Sep 26, 1:30pm)
Oli Makhasoeva, director of developer relations at Bytewax, will explore solutions for consumers who want data to be fast, fresh, and cheap. The talk will touch on how machine learning systems are advancing and how businesses should adopt them to improve the delivery of data.
Architecting Scalable IoT Systems with MQTT and Kafka (Sep 26, 3:00pm)
Christian Meinerding, CEO of HiveMQ, discusses the company's integration of the MQTT protocol for IoT messaging with Apache Kafka, using a recent project with Rimac's Hypercar Platform to illustrate the value of combining both platforms. It is one of the few events on the agenda with IoT as its focus, as the technology has slipped in prominence over the past few years.
Off-Label Data Mesh: A Prescription for Healthier Data (Sep 26, 4:00pm)
Data mesh is still a relatively new approach in the data processing world, and as such not many organizations know the true value of implementing it in both analytical and operational domains. In this talk, Adam Bellemare, staff technologist at Confluent, shares some of the successes he has seen with data mesh implementations and some of the features that are key to their success, such as event streams. The discussion will also look at some of the social and technical hurdles that come up when implementing data mesh, and how to avoid them.
Deeply Declarative Data Pipelines (Sep 27, 10:30am)
In this session, Ryanne Dolan, senior staff software engineer at LinkedIn, explores just how declarative streaming data pipelines can be on Kubernetes. From there, Dolan will see how far engineers can go by adding more and more operators to the stack.
Robinhood’s Kafkaproxy: Decoupling Kafka Consumer Logic from Application Business Logic (Sep 27, 11:30am)
Apache Kafka is Robinhood's most mission-critical infrastructure, powering every part of the company's app, including stock and crypto trading, self-clearing, market data, and data science. In this session, Tony Chen and Mun Yong Jang, software engineers at Robinhood, go over some of the practices and proxies the company has set up to efficiently manage Kafka inside a growing organization.
From Raw Data to an Interactive Data App in an Hour: Powered by Snowpark Python (Sep 27, 1:30pm)
Developing interactive data apps usually requires expertise in a variety of platforms and programming languages. With Snowpark Python, however, data practitioners have a way to build end-to-end data pipelines and data applications from scratch using Python. Vino Duraisamy, developer advocate at Snowflake, will take listeners through practical lessons on building a variety of data applications with this platform, and show businesses how to flip their development model to one that more users can access.
Kafkastrophies: What CashApp has Learned From Solving Kafka Related Incidents in Production (Sep 27, 2:30pm)
Another major consumer app, Cash App, also uses Apache Kafka for many of its payments, stock, and Bitcoin processes. Hamdan Javeed and David Purcell, software engineers at Block, Inc., discuss some of the high-severity incidents Cash App has encountered in which Kafka played a role, providing in-depth analysis of how the team was alerted, how it resolved each issue, and how it implemented new tools and processes to prevent a recurrence.
Seek and Destroy Kafka Under Replication (Sep 27, 3:30pm)
In this session, Edoardo Comar, software developer at IBM, explains why it is essential for Apache Kafka to keep topics fully replicated and in sync. Edoardo will show how IBM measures replication and has evolved its Kafka configuration, with the aim of creating the best possible user experience.
Save the Date
As we said at the top, Confluent Current 2023 will take place on September 26-27. A wide variety of speakers will be on the stage during those two days, from organizations such as Alibaba, AWS, Databricks, NASA, Netflix, and Uber. Registration for the event is open until September 25.