Apache Kafka Data

Making the most of your data with Apache Kafka

Humans have been storing information since the beginning of time, with governments, libraries, hospitals and many other organisations developing complex databases. Fast forward to the present day and computerised database systems have taken over, allowing us to store more data than we know what to do with. As these databases continue to grow, there is a need to analyse, monitor and store this data more effectively.

 

Enter Apache Kafka. Kafka is an open-source streaming platform, originally developed at LinkedIn in 2010, that aims to provide users with a unified, high-throughput, low-latency platform for handling real-time data feeds. A messaging system comparable to RabbitMQ, ActiveMQ and IBM MQSeries, Kafka can surpass 1.1 trillion messages per day, and its design is heavily influenced by transaction logs.

 

How is Kafka used?

Apache Kafka has grown rapidly over the years and has broadened its set of business and technical objectives, with usage growing significantly across a number of industries and common use cases. A survey of Kafka users showed that 1 in 4 respondents work for businesses producing more than $1 billion in annual sales, and of these respondents more than 15% are processing over a billion messages a day. In a nutshell, Apache Kafka is big news for big-business integration and has a wide range of applications.

Today, more than a third of the Fortune 500 use Kafka in production, including companies whose services we use every day:

LinkedIn: Kafka carries activity data and operational metrics at LinkedIn, powering the LinkedIn newsfeed, LinkedIn Today and offline analytics.
Twitter: Kafka forms part of Twitter's stream-processing infrastructure, which handles 5 billion sessions a day in real time.
Netflix: With more than 700 billion messages ingested on an average day, Netflix currently operates 36 Kafka clusters consisting of 4,000+ broker instances.

 

Solution requirement

High volumes of data flowing through a system can contain important metadata. This information can include customer patterns and behaviours that are useful to businesses, yet it is often ignored because of the traditionally high cost of gathering and analysing it. Kafka specialises in handling this data and metadata in a highly scalable and cost-efficient manner, enabling businesses in healthcare, retail, finance and more to make informed decisions based on information that would otherwise go unused.

Let’s say an insurance company has identified that, after rolling out a new digital quote and policy purchase application, many of its app users are using the quote functions but ultimately not purchasing policies. The insurance company wants to identify, in real time, the customers generating large numbers of quotes, so that its sales centre can call those app users to help with quoting, offer further enticements to purchase, and otherwise encourage the sale.

Among the many possible options, an Apache Kafka solution provides a lightweight, low-risk and scalable way to augment the existing insurance quote and purchase systems with targeted, real-time business intelligence.

 

How does it work?

To produce an insurance quote, an app user fills in policy information in the app and taps quote, at which point the app sends the data to the core quote system. IntegrationWorks' solution listens in on this quote data via an intermediary system and publishes quote information to a Kafka topic, as sketched below. We then use a Kafka 'KTable' in which the stream of quote data is aggregated per app user to give a quote frequency.
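As a rough sketch of the publishing step, the Java snippet below shows how an intermediary service might send each quote event to a Kafka topic, keyed by app user ID. The topic name insurance-quotes, the broker address and the JSON payload are illustrative assumptions for this example, not details of the actual solution.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class QuotePublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address is an assumption; point this at the real cluster
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key by app user ID so all of one user's quotes land on the same partition,
            // which keeps the downstream per-user aggregation straightforward
            String appUserId = "user-123";
            String quoteJson = "{\"product\":\"car\",\"premium\":412.50}";
            producer.send(new ProducerRecord<>("insurance-quotes", appUserId, quoteJson));
        }
    }
}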

The result is a constantly updated, running view of the most active users, showing who has generated the most quotes within a tuneable window of time. This information is provided to the sales centre, which can then contact the most engaged app users and encourage them to purchase policies.
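To illustrate the aggregation step, the Kafka Streams sketch below counts quotes per app user over a one-hour window and forwards users who exceed a threshold to a topic the sales centre could consume. The topic names, window size and the threshold of five quotes are assumptions for the example; the API shown is the standard Kafka Streams DSL (Kafka 3.x).

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.*;

import java.time.Duration;
import java.util.Properties;

public class QuoteFrequencyApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "quote-frequency-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption

        StreamsBuilder builder = new StreamsBuilder();

        // Quote events published by the intermediary system, keyed by app user ID
        KStream<String, String> quotes =
                builder.stream("insurance-quotes", Consumed.with(Serdes.String(), Serdes.String()));

        // KTable of quote counts per user within a tuneable one-hour window
        KTable<Windowed<String>, Long> quoteCounts = quotes
                .groupByKey()
                .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofHours(1)))
                .count(Materialized.as("quote-counts-per-user"));

        // Forward users with five or more quotes in the window to a topic for the sales centre
        quoteCounts.toStream()
                .filter((windowedUser, count) -> count >= 5)
                .map((windowedUser, count) -> KeyValue.pair(windowedUser.key(), count.toString()))
                .to("high-intent-quoters", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

In this sketch the windowed count plays the role of the "constantly updated running log": the window size and the threshold are the tuneable levers that determine which app users surface to the sales centre.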

[Diagram: Kafka solution architecture]

 

IntegrationWorks' Digital Futures

IntegrationWorks provides clients with a strategic, award-winning approach to integration, which means we're focused on driving and supporting our clients' transformational journeys. Working in dynamic organisational and technological environments allows us (and often requires us) to push the boundaries of digital business with new ways of working with emerging technologies. We stay ahead in the fast-evolving digital world because we encourage and support our team to stay ahead.

Our Apache Kafka solution is a great example of how pairing transformational clients with ahead-of-the-curve IntegrationWorkers expands horizons and accelerates understanding of new digital business opportunities.

Want to use, store and manage your business's data more effectively? To chat with us about Apache Kafka or our other Digital Futures solutions, send us an email or connect with our CTO Ian Vanstone on LinkedIn.

 
