

7/5/2018

Reading time: 21 mins

Real-time Data Integration with Kafka and Cassandra (Ewen Cheslack-Po…

by DataStax




First I want to quickly introduce myself so you know where I’m coming from. I’m an engineer at Confluent, a company founded by the co-creators of Apache Kafka, and we’re building what we call a stream data platform to help companies capture and leverage all their real-time data. I’m also a committer on the Apache Kafka project and the lead at Confluent on the Connect project, which itself is part of the open source Apache Kafka project.

More types of data stores with specialized functionality – e.g. rise of NoSQL systems handling document-oriented and columnar stores. A lot more sources of data.
Rise of secondary data stores and indexes – e.g. Elasticsearch for efficient text-based queries, graph DBs for graph-oriented queries, time series databases. A lot more destinations for data, and a lot of transformations along the way to those destinations.
Real-time: data needs to be moved between these systems continuously and at low latency.
Unfortunately, as you build up large, complex data pipelines in an ad hoc fashion by connecting different data systems that need copies of the same data with one-off connectors for those systems, or build out custom connectors for stream processing frameworks to handle different sources and sinks of streaming data, we end up with a giant, unmaintainable mess.

This mess has a huge impact on productivity and agility once you get past just a few systems. Adding any new data storage system or stream processing job requires carefully tracking down all the downstream systems that might be affected, which may require coordinating with dozens of teams and code spread across many repositories. Trying to change one data source’s data format can impact many downstream systems, yet there’s no simple way to discover how these jobs are related.

This is a real problem that we’re seeing across a variety of companies today. We need to do something to simplify this picture. While Confluent is working to build out a number of tools to help with these challenges, today I want to focus on how we can standardize and simplify constructing these data pipelines so that, at a minimum, we reduce operational complexity and make it easier to discover and understand the full data pipeline and dependencies.

We refer to this problem as data integration – by which we broadly mean making sure data gets to all the right places. We need to be able to collect data from a diverse set of sources and then feed it to several downstream applications and systems for processing.

This problem isn’t a new one. There were legacy solutions to this problem but the approach of copying data in an ad-hoc way across applications just does not scale anymore. Today data is in motion and it needs to move in real-time and at scale.

I want to start by highlighting some anti-patterns we observe in how people are tackling this problem today.

One-off tools – connect any two given specific systems.
High complexity, operational overhead
Designed to be too specific – n^2 connectors
Overly-generic data copying tools – make few assumptions, connect any and all inputs and outputs, and do a bunch of intermediate transformation as well.
Try to do too much – E, T, and L with weak interfaces
Too abstract – difficult/impossible to make guarantees even when connecting right pairs of systems
Stream processing tools for data integration
Overkill for simple EL workloads
Weaker connector ecosystem – focus is rightly on T
Generic, weak interfaces as found in generic data copying tools result in difficult-to-understand semantics and guarantees

When we get too specific, handling everything ad hoc, we end up with a ton of different tools for every connection, often many different tools for doing transformations, and probably the worst case – a lot of different tools that do *all* of ETL for specific systems.
If we have too little separation of concerns, we end up in situations where we use the stream processing framework for literally every step even though it uses a specific model that doesn’t map well to ingesting or exporting data from many types of systems. Alternatively, we use overly generic data copying & transformation tools. These tools are so abstract that they can’t provide many guarantees and become overly complex, requiring you to learn a dozen concepts just to set up a simple pipeline.
What we really need is a separation of concerns in ET&L.
One step towards getting to a separation of concerns is being able to decouple the E, T, and L steps. Kafka, when used as shown here, can help us do that.
The vision of Kafka when originally built at LinkedIn was for it to act as a common hub for real-time data.
When streaming data from data stores like an RDBMS or a K/V store, we produce data into Kafka, making it available to as many downstream consumers as want it.
Saving data to other systems, like secondary indexes and batch storage systems, is implemented with consumers.
Stream processing frameworks and custom consumer apps fit in by being both consumers and producers – reading data from Kafka, transforming it, and then possibly publishing derived data back into Kafka.
Using this model can simplify the problem as we’re now always interacting with Kafka.
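To make that concrete, the producing side of this picture is just the standard Kafka producer client. A minimal Java sketch follows; the broker address, topic name, and record contents are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class HubProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // assumed broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The key determines which partition the record lands in,
            // preserving per-key ordering within a partition.
            producer.send(new ProducerRecord<>("page-views", "user-42", "{\"url\":\"/home\"}"));
        }
    }
}
```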
To set some context, I want to just quickly list a few of the features that make it possible for Kafka to handle data at this scale. We’ll come back to many of these properties when looking at Kafka Connect.

At its core, Kafka is a pub/sub messaging system rethought as a distributed commit log.
It is based on an append-only, sequentially accessed log, which results in very high performance when reading and writing data.
Extends the model to a *partitioned stream* model for a single logical topic of data, which allows for distribution of data on the brokers and parallelism in both writes and reads. In order to still provide organization and ordering within a single partition, it guarantees ordering within each partition and uses keys to determine which partition to put data in.
As part of its append-only approach, it decouples data consumption from data retention policy, e.g. retaining data for 7 days or until we have 1TB in a topic. This both gets rid of individual message acking and allows multiple consumption of the same data, i.e. pub/sub, by simply tracking offsets in the stream.
Because data is split across partitions, we can also parallelize consumption and make it elastically scalable with Kafka’s unique automatically balanced consumer groups.

But what exactly is Kafka?

At a high level, Kafka is "just" another pub/sub message queue
A few key features make it scale to handle the requirements of a stream data platform

Multiple consumers can read the same data, and can be at different offsets in the log. Consuming data doesn't delete it from the log. Instead, Kafka uses time- or size-based retention: your data will stick around for, e.g., 7 days or until you have 100GB. This retention policy is simple and avoids having to keep accounting info for individual messages.
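As a sketch of how this retention is configured, here is a topic created with Kafka's AdminClient using illustrative time- and size-based limits; the topic name, partition count, and replication factor are placeholders:

```java
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class RetentionTopicSketch {
    public static void main(String[] args) throws Exception {
        Map<String, Object> conf =
            Collections.singletonMap(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(conf)) {
            // 6 partitions for parallelism, 3 replicas for fault tolerance (illustrative numbers).
            NewTopic topic = new NewTopic("page-views", 6, (short) 3);
            // Retention is decoupled from consumption: keep data for 7 days
            // or up to roughly 100 GB (retention.bytes applies per partition).
            topic.configs(Map.of(
                "retention.ms", String.valueOf(7L * 24 * 60 * 60 * 1000),
                "retention.bytes", String.valueOf(100_000_000_000L)));
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```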

Topics are partitioned so they can scale across multiple servers, and partitions are also replicated for fault tolerance.

As I mentioned before, Kafka is multi-subscriber: the same topic can be consumed by multiple groups of consumers, and each consumer group can subscribe to read a full copy of the data. Furthermore, every consumer group can have multiple consumer processes distributed over several machines, and Kafka takes care of assigning the partitions of the subscribed topics evenly amongst the consumer processes in a group, so that at all times every partition of a subscribed topic is being consumed by some consumer process within the group.

In addition to being easy to scale, consumption is also fault tolerant. If one consumer instance fails, the others automatically rebalance to pick up its load. So it is operationally cheap to consume large amounts of data.
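A sketch of one such consumer group member, using the standard Java consumer; the group id and topic are placeholders, and running several copies of this process spreads the partitions across them:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "indexing-service");   // all processes sharing this id split the partitions
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("page-views"));
            while (true) {
                // If another instance of this program starts or dies, Kafka rebalances
                // partition assignments across the surviving group members automatically.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                                      record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```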

Given all these properties, it’s easy to see how Kafka can fit this central role as the hub for all your real-time data, and we can simplify the original image of our data pipeline. However, with the regular Kafka clients, we’re still leaving quite a bit on the table – each connection in the image still requires its own tool or Kafka application to get data to or from Kafka. Each tool uses these relatively low-level clients and has to implement many common features.
Today, I want to introduce you to Kafka Connect, Kafka’s new large-scale, streaming data import/export tool that drastically simplifies the construction, maintenance, and monitoring of these data pipelines.

Kafka Connect is part of the Apache Kafka project, open source under the Apache license, and ships with Kafka. It’s a framework for building connectors between other data systems and Kafka, and the associated runtime to run these connectors in a distributed, fault tolerant manner at scale.

Goals:

Focus – copying only
Batteries included – framework does all the common stuff so connector developers can focus specifically on details that need to be customized for their system. This covers a lot more than many connector developers realize: beyond managing the producer or consumer, it includes challenges like scalability, recovery from faults and reasoning about delivery guarantees, serialization, connector control, monitoring for ops, and more.
Standardize – configuration, status and connector control, monitoring, etc.
Parallelism, scalability, fault tolerance built-in, without a lot of effort from connector developers or users.
Scale – in two ways. First, scale individual connectors to copy as much data as possible – ingest an entire database rather than one table at a time. Second, scale up to organization-wide data pipelines or down to development, testing, or just copying a single log file into Kafka.

With these goals in mind, let’s explore the design of Kafka Connect to see how it fulfills these.

At its core, Kafka Connect is pretty simple. It has source connectors, which copy data from another system into Kafka, and sink connectors, which copy data from Kafka into a destination system.

Here I’ve shown a couple of examples. The source and sink systems don’t necessarily have to naturally match Kafka’s data model exactly. However, we do need to be able to translate data between the two. For example, we might load data from a database in a source connector. By using a timestamp column associated with each row, we can effectively generate an ordered stream of events that are then produced into Kafka. To store data into HDFS, we might load data from one or more topics in Kafka and then write it in sequence to files in an HDFS directory, rotating files periodically. Although Kafka Connect is designed around streaming data, because Kafka acts as a good buffer between streaming and batch systems, we can use it here to load data into HDFS. Neither of these systems maps directly to Kafka’s model, but both can be adapted to the concepts of streams with offsets. More about this in a minute.
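As a rough illustration of the database-source idea, a connector could repeatedly query for rows newer than the last timestamp it saw. This sketch uses plain JDBC against a hypothetical "events" table with an "updated_at" column; the connection URL is a placeholder:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class TimestampPollSketch {
    // Returns the newest timestamp seen so the caller can resume from it on the next poll.
    static Timestamp pollSince(Connection db, Timestamp lastSeen) throws Exception {
        try (PreparedStatement stmt = db.prepareStatement(
                "SELECT id, payload, updated_at FROM events WHERE updated_at > ? ORDER BY updated_at")) {
            stmt.setTimestamp(1, lastSeen);
            Timestamp newest = lastSeen;
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    // Each row becomes one event; in a real connector it would be
                    // produced to a Kafka topic named after the table.
                    System.out.printf("event id=%d payload=%s%n",
                                      rs.getLong("id"), rs.getString("payload"));
                    newest = rs.getTimestamp("updated_at");
                }
            }
            return newest;
        }
    }

    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection("jdbc:postgresql://localhost/appdb")) {
            Timestamp last = new Timestamp(0);
            last = pollSince(db, last);   // in a connector this loop would run continuously
        }
    }
}
```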

The most important design point for Kafka Connect is that one half of a connection is always Kafka – the destination for sources, or the source of data for sink connectors. This allows the framework to handle the common functionality of connectors while maintaining the ability to automatically provide scalability, fault tolerance, and delivery guarantees without requiring a lot of effort from connector developers. This key assumption is what makes it possible for Kafka Connect to get a better set of tradeoffs than the systems I mentioned earlier.

So now, coming back to the model that connectors need to map to. Just as Kafka’s data model enables certain features around scalability, Kafka Connect’s data model can as well.

Kafka Connect requires every connector to map to a “partitioned stream” model. The basic idea is a generalization of Kafka’s data model of topics and partitions. This mapping is defined by the input system for the connector – the source system for source connectors, and Kafka topics for sink connectors -- and has the following:

A set of partitions which divide the whole set of data logically. Unlike Kafka, the number of partitions can potentially be very large and may be more dynamic than we would expect with Kafka.
Each partition contains an ordered sequence of events/messages. Under the hood these are key/value pairs with byte[], but Kafka Connect requires that they can be converted into a generic data API.
Each event/message has a unique offset representing its position in the partition. Since the mapping is determined by the input system, these offsets must be meaningful to that system – these may be quite different from the Kafka offsets you’re used to.

To give a more concrete example, we can revisit the database example from earlier. Previously I only showed a single table, but if we consider the database as a whole, we can apply this model to copy the entire database. We partition by table, delivering each into its own Kafka topic. Each event represents a row that we’ve inserted into the database. The offsets are IDs or timestamps, or even more complex representations like a combination of ID and timestamp. Although there isn’t *actually* a stream for each table, we can effectively construct one by querying the database and ordering results according to specific rules.
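Kafka Connect’s source API captures this mapping directly: each record carries a source partition (here, the table name) and a source offset (here, a row timestamp) that the framework checkpoints. A sketch with hypothetical table and topic names:

```java
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;

public class TableStreamSketch {
    SourceRecord rowToRecord(String table, long rowTimestamp, String rowJson) {
        // The "partition" identifies which logical stream this came from: one per table.
        Map<String, String> sourcePartition = Collections.singletonMap("table", table);
        // The "offset" is whatever lets us resume later: here, the row's timestamp.
        Map<String, Long> sourceOffset = Collections.singletonMap("timestamp", rowTimestamp);
        // One Kafka topic per table, e.g. "db.orders" for the "orders" table.
        return new SourceRecord(sourcePartition, sourceOffset,
                                "db." + table, Schema.STRING_SCHEMA, rowJson);
    }
}
```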

As a result of this model, we can see a few properties emerging:

First, we have a built-in concept of parallelism, a requirement for automatically providing scalable data copying. We’re going to be able to distribute processing of partitions across multiple hosts.
Second, this model encourages making copying broad by default – partitioned streams should cover the largest logical collection of data.
Finally, offsets provide an easy way to track which data has been processed and which still needs to be copied. In some cases, mapping from the native data model to streams may not be simple; however, a bit of effort in creating this mapping pays off by providing a common framework and implementation for tracking which data has been copied. Again, we’ll revisit this a bit later, but this allows the framework to handle a lot of the heavy lifting with regards to delivery semantics.

Partitioned streams are the logical data model, but they don’t directly map to physical parallelism, or threads, in Kafka Connect. In the case of the database connector, a direct mapping might seem reasonable. However, some connectors will have a much larger number of partitions that are much finer-grained. For example, consider a connector for collecting metrics data – each metric might be considered its own partition, resulting in tens of thousands of partitions for even a small set of application servers.

However, we do want to exploit the parallelism provided by partitions. Connectors do this by assigning partitions to tasks. Tasks are, simply, threads of control given to the connector code which perform the actual copying of data.
Each connector is given a thread it can use to monitor the input system for the active set of partitions. Remember that this set can be dynamic, so continuous monitoring is sometimes needed to detect changes to the set of partitions. When there are changes, the connector notifies the framework so it can reconfigure the current set of tasks.
Then, each task is given a dedicated thread for processing. The connector assigns a subset of partitions to each task, and the task is the one that actually copies the data for those partitions. Given the assignment, the connector implementation handles reading or writing data for that set of partitions.
And how do we decide how many tasks to generate? That’s up to the user, and it’s the primary way to control the total resources used by the connector. Since each task corresponds to a thread, the user can choose to dynamically increase or decrease the maximum number of tasks the connector may create in order to scale resource usage up or down.
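In code, this shows up as the connector’s taskConfigs method: given the user’s maximum task count, the connector divides its partitions among that many task configurations. A standalone sketch of that assignment logic, using table names as the partitions and a made-up "tables" config key:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TaskAssignmentSketch {
    // The body of a connector's taskConfigs(maxTasks): split the known partitions
    // (here, table names) round-robin across at most maxTasks task configurations.
    static List<Map<String, String>> taskConfigs(List<String> tables, int maxTasks) {
        int numTasks = Math.min(maxTasks, tables.size());
        List<Map<String, String>> configs = new ArrayList<>();
        for (int i = 0; i < numTasks; i++) {
            configs.add(new HashMap<>(Map.of("tables", "")));
        }
        for (int i = 0; i < tables.size(); i++) {
            Map<String, String> cfg = configs.get(i % numTasks);
            String existing = cfg.get("tables");
            cfg.put("tables", existing.isEmpty() ? tables.get(i) : existing + "," + tables.get(i));
        }
        return configs;   // the framework starts one task (thread) per returned config
    }

    public static void main(String[] args) {
        System.out.println(taskConfigs(List.of("users", "orders", "payments"), 2));
        // => [{tables=users,payments}, {tables=orders}]
    }
}
```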

So now we have some set of threads, but where do they actually execute? Kafka Connect has two modes of execution.

Standalone mode works as a single process. This is really easy to get started with and easy to configure.

We like this because it scales down really easily and stays local for testing. It’s also great for connectors that really only make sense on a single node – for example, processing log files, where you need to read the data off the local file system.

If you’ve used systems like Logstash or Flume, this mode should look familiar. It’s commonly referred to as either standalone or agent mode.

In contrast, distributed mode can scale up while providing distribution and fault tolerance.

Recall that each connector or task is a thread, and we’re considering each to be approximately equal in terms of resource usage.
Connectors and tasks are auto-balanced across workers. Failures are automatically handled by redistributing work, and you can easily scale the cluster up or down by adding or removing workers.
Cool implementation note: reuses group membership functionality of consumer groups. Note how if you replace “worker” with “consumer” and “task” with “topic partition”, the things it is doing look largely the same: assigning tasks to workers, detecting when a worker is added or fails, and rebalancing the work. Kafka already provides support for doing a lot of this, so by leveraging the existing implementation and coordinating through Kafka’s group functionality (with internal data stored in Kafka topics), Kafka Connect can provide this functionality in a relatively small code footprint.
All of this functionality can be accessed via REST API – submit connectors, see their status, update configs, and so on.
Finally, note that Kafka Connect does not own the process management at all. We don’t want to make assumptions about using Mesos, YARN, or any other tool because that would unnecessarily limit Kafka Connect’s usage. Kafka Connect will work out of the box in any of these cluster management systems, or with orchestration tools, or if you just manage your processes with your own tooling.
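For example, submitting a connector to a distributed cluster is a single POST to any worker’s REST endpoint. The sketch below assumes the Confluent JDBC source connector is installed on the workers, and the host names and settings are illustrative:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SubmitConnectorSketch {
    public static void main(String[] args) throws Exception {
        // Connector name plus its configuration, as the Connect REST API expects.
        String body = """
            {
              "name": "orders-jdbc-source",
              "config": {
                "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
                "connection.url": "jdbc:postgresql://db-host/appdb",
                "mode": "timestamp",
                "timestamp.column.name": "updated_at",
                "topic.prefix": "db.",
                "tasks.max": "4"
              }
            }""";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://connect-worker:8083/connectors"))   // any worker in the cluster
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

Status, configuration updates, and connector control go through the same REST API, so the ops team can manage everything in one place.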

I want to mention two important features that also simplify both connector developers’ and users’ lives.

The first feature is offset management, which provides standardized data delivery guarantees. Delivery guarantees are rarely provided by other systems; they generally offer some sort of best-effort, but unreliable, delivery. Ironically, stream processing frameworks often do a better job than tools specifically designed for data copying.

Kafka Connect handles offset checkpointing for connectors, and this fits in as a natural extension to Kafka’s offset commit functionality. For sources this works with offsets that have complex structure (e.g. timestamps + autoincrementing IDs in a database) and requires no implementation support from the connector beyond defining the offsets and being able to start reading from a saved offset. For sinks, we can leverage Kafka’s existing offset functionality, but in order to ensure data is completely written, sinks must also support a flush operation. Commits are automatically processed periodically. By default, this mode of managing offsets will provide at-least-once delivery; internally, both sources and sinks simply flush all data to the output and then commit offsets.
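On the sink side, the contract looks roughly like the skeleton below: the framework hands batches of records to put(), and flush() is called before offsets are committed back to Kafka. This is a sketch of the SinkTask API, not a working connector:

```java
import java.util.Collection;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

// A skeletal sink task: buffer writes in put(), make them durable in flush().
public class SketchSinkTask extends SinkTask {
    @Override public String version() { return "0.1-sketch"; }

    @Override public void start(Map<String, String> props) {
        // open connections to the destination system here
    }

    @Override public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            // stage the record for writing; record.value() has already been
            // deserialized into Connect's data API by the configured converter
        }
    }

    @Override
    public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
        // Make everything staged so far durable in the destination system.
        // Once this returns, the framework commits currentOffsets to Kafka,
        // giving at-least-once delivery after a restart.
    }

    @Override public void stop() {
        // close connections
    }
}
```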

Note that some connectors will opt out of this functionality in order to provide even stronger guarantees. For example, the HDFS connector manages its own offsets because (carefully) tracking them in HDFS along with the data allows for exactly-once delivery.

The second feature I want to mention is converters. Serialization formats may seem like a minor detail, but not separating the details of data serialization in Kafka from the details of source or sink systems results in a lot of inefficiency:

A lot of code for doing simple data conversions is duplicated across a large number of ad hoc connector implementations.
Each connector ultimately contains its own set of serialization options as it is used in more environments – JSON, Avro, Thrift, protobufs, and more.

Much like the serializers in Kafka’s producer and consumer, the Converters abstract away the details of serialization. Converters are different because they guarantee data is transformed to a common data API defined by Kafka Connect. This API supports both schema and schemaless data, common primitive data types, complex types like structs, and logical type extensions. By sharing this API, connectors write one set of translation code and Converters handle format-specific details. For example, the JDBC connector can easily be used to produce either JSON or Avro to Kafka, without any format-specific code in the connector.
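The common data API referred to here is Connect’s Schema/Struct model. A small sketch of building a record against it; whether it becomes JSON or Avro on the wire is decided by the worker’s key.converter/value.converter settings rather than by this code, and the field names are illustrative:

```java
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class ConnectDataApiSketch {
    public static void main(String[] args) {
        // A schema describing one row/event, independent of any serialization format.
        Schema orderSchema = SchemaBuilder.struct().name("order")
            .field("id", Schema.INT64_SCHEMA)
            .field("customer", Schema.STRING_SCHEMA)
            .field("total", Schema.FLOAT64_SCHEMA)
            .build();

        Struct order = new Struct(orderSchema)
            .put("id", 1001L)
            .put("customer", "alice")
            .put("total", 42.50);

        // The connector only ever deals with Schema and Struct; the configured
        // Converter handles turning this into bytes for Kafka and back.
        System.out.println(order);
    }
}
```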

Kafka Connect provides the framework, but I want to spend a few minutes describing the current state of the connector ecosystem. While the framework ships with Apache Kafka, connectors use a federated approach to development. Confluent helped kick off connector development with a few key open source connectors – JDBC, for importing data from any relational database, and HDFS, for exactly-once delivery of data into HDFS and Hive. Confluent will continue to add more open source connectors.

We’ve also started tracking connectors that the community has been developing on a page we’re calling the Connector Hub. We’ve already got a dozen or so connectors, and more are popping up every week. We’ll be working to make this index as useful to users as possible, offering information about the current state of the connector implementations and feature sets.

With all these pieces you can see how we can tie together Kafka and Kafka Connect with stream processing frameworks and applications to not only simplify building these data pipelines and solve data integration challenges, but also transform how your company manages its data pipelines.

Kafka provides the central hub for real-time data and Kafka Connect simplifies operationalization: one service to maintain, common metrics, common monitoring, and agnostic to your choice of process and cluster management.

You can run a centrally managed Kafka Connect cluster in distributed mode, accessed via the REST API, allowing your ops team to provide data integration as a service to your entire organization.
Developers who want to build a complex data pipeline can submit jobs to copy data into and out of Kafka – it’s zero coding (assuming a connector is available).
Then, they can easily leverage either the traditional clients or stream processing frameworks to transform that data. The output is stored back into another Kafka topic or served up directly.
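As one illustration of that transformation step, using Kafka Streams as an example framework, an application can read a topic populated by a source connector and write derived data back to Kafka for sink connectors to export; the topic names and transformation are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class TransformSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-enricher");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read raw events loaded by a source connector, transform them,
        // and publish the derived stream back to Kafka.
        KStream<String, String> raw = builder.stream("db.orders");
        raw.mapValues(value -> value.toUpperCase())   // placeholder transformation
           .to("orders-enriched");

        new KafkaStreams(builder.build(), props).start();
    }
}
```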
As a side benefit, standardizing on Kafka encourages reuse of existing data (both raw and transformed). Providing this service not only makes it easy to build your *own* complex data pipeline, it encourages other people in the org to build on top of your existing work.
Confluent Platform also provides additional tools that make this setup even more powerful. For example, the schema registry controls the format of data in each topic, and besides ensuring data quality and compatibility, it also encourages decoupling of teams by allowing anyone to discover what data is in a topic, grab its schema, and immediately start utilizing that data without ever adding coordination overhead with another team.

A stream data platform built around Kafka and Kafka Connect allows you to scale to handle your entire organization’s real-time data, while maintaining simple management and easy operationalization of your data pipeline.

With that, I’d like to say thanks for listening, and I’d be happy to take any questions.
