Migrating from ActiveMQ to Kafka

Lydtech

Sector

Finance

The Client

A leader in global payments, connecting customers and financial institutions worldwide.

Deliverables

    Migration of the messaging broker from ActiveMQ to Kafka for their event-driven architecture

The Challenge

A greenfield global payments system was built utilising an event-driven microservices architecture, with ActiveMQ selected as the messaging broker at the outset. The team successfully implemented and deployed the system to Production, where it moved high volumes of payments between bank accounts each day. As the system and its user base grew, the client decided to migrate the messaging broker from ActiveMQ to Kafka in order to leverage the scalability, availability, and throughput that Kafka offers.

What We Did

Evaluation Phase

We undertook a rigorous evaluation to determine whether Kafka was the right technology choice, researching the client's requirements for the event-driven system. Following this, we performed a spike on a single flow in the system:

  • Introduced Kafka alongside ActiveMQ, enabling a direct comparison between the two.
  • Exhaustively tested all aspects of the flow.
  • Undertook performance testing against the benchmark provided by ActiveMQ.
  • Performed resilience testing to demonstrate the behaviour during Kafka failure scenarios.

At this stage we were able to answer all the critical questions for our client, enabling the informed decision that the migration should proceed. In particular, we demonstrated zero message loss and message deduplication, and showed that the resilience and performance non-functional requirements could be satisfied.
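Guaranteeing zero message loss implies at-least-once delivery, so duplicates must be handled on the consuming side. One common way to achieve the deduplication mentioned above is an idempotent consumer that records the IDs of processed messages; the sketch below illustrates the idea (the `handle_once` function and in-memory ID store are hypothetical stand-ins, not the client's actual code):

```python
processed_ids = set()  # in production this would be a durable store, not a set

def handle_once(message_id: str, payload: str, apply) -> bool:
    """Apply the message only if its ID has not been seen before."""
    if message_id in processed_ids:
        return False  # duplicate delivery: skip without reprocessing
    apply(payload)
    processed_ids.add(message_id)
    return True

results = []
handle_once("m-1", "credit 100", results.append)
handle_once("m-1", "credit 100", results.append)  # redelivery of the same message
# results holds a single entry: the duplicate was discarded
```

With at-least-once delivery from the broker plus idempotent consumption, the end-to-end flow behaves as effectively exactly-once.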

Guaranteeing No Message Loss

One of the key questions to answer was whether we could guarantee that no messages would be lost. To achieve this, our microservices were architected to use the transactional outbox pattern. A single transaction spans all the writes to the database within a flow, so any entity writes or updates are performed atomically with the outbound messages being written to an outbox database table. The outbox table is polled by a process that writes the events to Kafka. The team exhaustively tested this architecture to verify that messages were not lost.
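The pattern described above can be sketched as follows. This is a minimal illustration only, using SQLite in place of the service's database and a plain list in place of a Kafka producer; the table layout and the `save_payment_with_outbox` and `poll_outbox` names are hypothetical:

```python
import json
import sqlite3

# In-memory stand-in for the service database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE payments (id TEXT PRIMARY KEY, amount INTEGER)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT, "
           "topic TEXT, payload TEXT, published INTEGER DEFAULT 0)")

def save_payment_with_outbox(payment_id: str, amount: int) -> None:
    """Write the entity and its outbound event in one atomic transaction."""
    with db:  # commits on success, rolls back on any exception
        db.execute("INSERT INTO payments VALUES (?, ?)", (payment_id, amount))
        db.execute("INSERT INTO outbox (topic, payload) VALUES (?, ?)",
                   ("payments", json.dumps({"id": payment_id, "amount": amount})))

kafka = []  # stand-in for a Kafka producer

def poll_outbox() -> int:
    """Publish any unpublished outbox rows, then mark them as published."""
    rows = db.execute(
        "SELECT id, topic, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, topic, payload in rows:
        kafka.append((topic, payload))  # producer.send(topic, payload) in reality
        db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    db.commit()
    return len(rows)

save_payment_with_outbox("p-1", 100)
published = poll_outbox()
```

Because the entity write and the outbox write either both commit or both roll back, a payment can never be persisted without its event, which is what makes zero message loss possible even if the broker or the service crashes between the two steps.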

Planning Phase

As the evaluation progressed, a migration plan was formulated for the rollout of Kafka that ensured:

  • No downtime for the system.
  • No big bang change - services could be migrated and rolled out independently of one another.
  • No interruption in processing at deployment time as a service transitions from ActiveMQ to Kafka.
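One way to satisfy these constraints is to run both brokers side by side and switch each service's messaging over with a per-service configuration toggle, so services migrate independently and without a processing gap. The sketch below is purely illustrative of that approach; the `Publisher` abstraction and the list-based broker stubs are hypothetical, not the client's actual code:

```python
activemq_out, kafka_out = [], []  # stand-ins for the two message brokers

class Publisher:
    """Routes a service's outbound messages to ActiveMQ or Kafka via a toggle."""
    def __init__(self, use_kafka: bool):
        self.use_kafka = use_kafka

    def publish(self, message: str) -> None:
        target = kafka_out if self.use_kafka else activemq_out
        target.append(message)

# Service A has been migrated; service B is still on ActiveMQ.
Publisher(use_kafka=True).publish("payment-created")
Publisher(use_kafka=False).publish("payment-settled")
```

Because the toggle is evaluated per service, each deployment flips exactly one service to Kafka while the rest continue uninterrupted on ActiveMQ, avoiding any big bang cutover.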

Once the plan was signed off by all relevant parties, the migration was undertaken by the team.

The Outcomes

Comprehensive Migration Plan

All project stakeholders understood the scope, timescales and impact of the work. The plan ensured no big bang changes were required.

Extensive Testing

Extensive testing throughout ensured there were no surprises, and a comprehensive automated test harness was developed.

Zero Downtime

The messaging broker was successfully migrated. No downtime in Production. No message loss. No disruption to system users.