CROZ is working on the maintenance and further expansion of the client's existing Data Lake project, enabling emission reduction using data.
The fourth Schema Registry blog is out! Read more about performance and functional testing!
The third blog in our series about the Apicurio schema registry!
All about transporting the schema ID through headers and the payload in combination with the magic byte.
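The framing mentioned above can be sketched briefly. In the Confluent-compatible wire format, which Apicurio can also emulate, each message payload is prefixed with a single magic byte (0x00) followed by the schema ID as a 4-byte big-endian integer. The function names below are illustrative, not part of any registry client API:

```python
import struct

MAGIC_BYTE = 0  # 0x00 marks the Confluent-compatible framing

def encode_with_schema_id(schema_id: int, payload: bytes) -> bytes:
    # Prefix the payload with the magic byte and a 4-byte big-endian schema ID.
    return struct.pack(">bI", MAGIC_BYTE, schema_id) + payload

def decode_with_schema_id(message: bytes) -> tuple[int, bytes]:
    # Split the 5-byte prefix off and validate the magic byte.
    magic, schema_id = struct.unpack(">bI", message[:5])
    if magic != MAGIC_BYTE:
        raise ValueError(f"unexpected magic byte: {magic}")
    return schema_id, message[5:]

framed = encode_with_schema_id(42, b'{"name": "croz"}')
schema_id, payload = decode_with_schema_id(framed)
```

The alternative discussed in the blog, carrying the schema ID in a Kafka record header instead, avoids touching the payload at all; the trade-off is compatibility with consumers that expect the magic-byte prefix.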
A six-part blog series about the Apicurio schema registry! Click and read why you need a Schema Registry!
With the Plus-level Confluent partnership, we look forward to helping your organization overcome all obstacles on its real-time stream processing journey.
Check out which modifications and improvements the new major release, Apache Kafka 3.0.0, brings to the table.
The main goal was to design a Data Catalog for a Swiss insurance company so that it is fully aligned with the central Data Governance repository.
Apache Kafka is a distributed streaming platform for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.
In this blog post, we present an interesting architecture that combines proven IBM technologies with open-source big data technologies.