Using Apache Kafka to implement event-driven microservices

A detailed example application that uses the Apache Kafka Streams API and the Confluent Platform.

Source code available on GitHub:

See also:

1st part 00:00 → “What’s so special about Kafka”
2nd part 01:46 → “The proof of concept”
3rd part 03:01 → “The diagram”
4th part 05:04 → “The streams join” (see the sketch after this list)
5th part 07:39 → “Event-driven”
6th part 08:32 → “Command Query Responsibility Segregation”
7th part 09:33 → “No event sourcing”
8th part 13:07 → “Starting the Confluent Platform”
9th part 19:29 → “Running the PoC application”
10th part 28:47 → “2nd run, without sleep statements”
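
As a rough illustration of the streams join demonstrated in part 4, below is a minimal sketch of a windowed KStream–KStream join in Java. The topic names ("orders", "payments", "paid-orders"), the String serdes, and the five-minute window are illustrative assumptions, not the actual code of the PoC application.

// Minimal sketch of a windowed KStream-KStream join.
// Topic names and serdes are illustrative assumptions.
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.StreamJoined;

public class StreamsJoinSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-join-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders =
            builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()));
        KStream<String, String> payments =
            builder.stream("payments", Consumed.with(Serdes.String(), Serdes.String()));

        // Pair up records from both streams that share a key and arrive
        // within five minutes of each other.
        KStream<String, String> joined = orders.join(
            payments,
            (order, payment) -> order + " paid by " + payment,
            JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)),
            StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));

        joined.to("paid-orders", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

Kafka Streams materializes both sides of the join in local state stores, so two records that share a key and fall within the window are paired regardless of which one arrives first.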
Comments

Great video, very detailed. Thanks for the content.

CarlosBotelhoPaulaFilho

Event sourcing doesn't mean you have to recalculate the entire history on the fly every time...

You can definitely keep occasional state snapshots and use them to make calculations more efficient, as long as you can always recalculate everything from the original raw events (for example, if your snapshot is lost).

Event sourcing just means that a persistent event log is your source of truth for your state.

No more, no less.

To make what you built an event-sourced system, all that's missing is persistence of the transactions, so that you can always replay the entire history to recover the current state.
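
To make the snapshot-plus-replay point above concrete, here is a minimal sketch in Java, assuming a hypothetical running balance as the state; the Deposit and Snapshot types are invented for illustration and are not from the video's code.

// Minimal sketch of event sourcing with an optional snapshot.
// The Deposit/Snapshot types and balance example are hypothetical.
import java.util.List;
import java.util.Optional;

public class EventSourcingSketch {

    // A raw domain event; the persistent log of these is the source of truth.
    record Deposit(long amountCents) {}

    // An occasional state snapshot; purely an optimization, never the truth.
    record Snapshot(long balanceCents, int lastEventIndex) {}

    // The current state can always be rebuilt by folding over the full log.
    static long replayFromScratch(List<Deposit> log) {
        return log.stream().mapToLong(Deposit::amountCents).sum();
    }

    // A snapshot just shortens the fold; if it is lost, replayFromScratch
    // produces the identical state from the raw events.
    static long replay(List<Deposit> log, Optional<Snapshot> snapshot) {
        long balance = snapshot.map(Snapshot::balanceCents).orElse(0L);
        int from = snapshot.map(s -> s.lastEventIndex() + 1).orElse(0);
        for (int i = from; i < log.size(); i++) {
            balance += log.get(i).amountCents();
        }
        return balance;
    }

    public static void main(String[] args) {
        List<Deposit> log = List.of(new Deposit(100), new Deposit(250), new Deposit(50));
        Snapshot snap = new Snapshot(350, 1); // state after the first two events

        System.out.println(replayFromScratch(log));         // 400
        System.out.println(replay(log, Optional.of(snap))); // 400, same state
    }
}

Losing the snapshot costs only recomputation time, never correctness, because the event log remains the single source of truth.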

Klayhamn