Kafka Consumer Offsets Explained

In this video we will be looking into Kafka consumer offsets and how Apache Kafka manages the offsets of different partitions.

The following topics will be discussed:
- What are consumer offsets?
- How are they maintained?
- Message reprocess and miss scenarios.
- Types of offset commits.

The following points will be covered:
- What consumer offsets are.
- Consumer offsets are stored in a special topic, __consumer_offsets.
- It is the consumer's responsibility to produce offset messages to this topic.
- After a rebalance, the new consumer picks up the current offset from this topic.
- Depending on when offsets are committed, messages can be reprocessed or missed.
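The points above can be sketched with a toy in-memory model. This is not real Kafka client code (the group name "my-group" and topic "orders" are made up for illustration); it only mimics how the compacted __consumer_offsets topic keeps the latest committed offset per (group, topic, partition) key, and how a consumer that takes over a partition after a rebalance reads its starting position from there:

```python
# Toy in-memory stand-in for the __consumer_offsets topic: a compacted
# topic keeps only the latest value per key, so a dict keyed by
# (group, topic, partition) models its effective contents.
offsets_store = {}

def commit_offset(group, topic, partition, offset):
    """The consumer 'produces' its position to the offsets store."""
    offsets_store[(group, topic, partition)] = offset

def fetch_committed(group, topic, partition):
    """After a rebalance, the new owner reads its starting position."""
    return offsets_store.get((group, topic, partition))

# Consumer A processes partition 0 up to offset 42 and commits.
commit_offset("my-group", "orders", 0, 42)

# Rebalance: consumer B is assigned partition 0 and resumes from 42.
print(fetch_committed("my-group", "orders", 0))  # 42
```

If no commit exists for a partition, the lookup returns nothing, which is the situation where the real consumer falls back to its auto.offset.reset policy.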

We will also be discussing the types of commits:
- Auto commit
- Manual commit
- Sync commit
- Async commit
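The reprocess-or-miss scenario comes down to when the commit happens relative to processing. The following is a minimal simulation (not Kafka code; the crash is faked with an early return) showing that committing after processing gives at-least-once behavior, while committing before processing can skip a message entirely:

```python
def run_with_crash(messages, commit_before, crash_at):
    """Process messages in order, 'crashing' before the message at
    index crash_at is handled; return (processed, committed_offset)."""
    committed = 0
    processed = []
    for i, msg in enumerate(messages):
        if commit_before:
            committed = i + 1            # commit position BEFORE processing
        if i == crash_at:
            return processed, committed  # simulated crash
        processed.append(msg)
        if not commit_before:
            committed = i + 1            # commit position AFTER processing
    return processed, committed

msgs = ["m0", "m1", "m2", "m3"]

# Commit-after: crash before m2. Committed offset is 2, so a restart
# resumes exactly at m2 -- nothing lost, but a crash between
# processing and committing would replay a message (reprocess).
done_after, pos_after = run_with_crash(msgs, commit_before=False, crash_at=2)

# Commit-before: crash before m2. Offset 3 is already committed, so a
# restart resumes at m3 and m2 is never processed (miss).
done_before, pos_before = run_with_crash(msgs, commit_before=True, crash_at=2)
```

Auto commit behaves like the commit-after case on an interval, so it can replay up to one interval's worth of messages after a crash.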

If you like the video, please like, share, and subscribe.
Comments

Thank you so much for your video. You helped me understand Kafka clearly.

elighteloy

@Aman Bro, your Kafka series helped me crack difficult interviews in difficult situations. You helped me clear up my concepts!!!! Please make more videos... Appreciated

MohitSaini-tvgo

Really nice. I have seen many videos; none were clear on these details.

raphy

Can you please explain at 8:32 how a commit can be at offset 10 when processing is happening for the messages at earlier offsets?

nikhil-zzmr

Hi,
Your videos are so informative and easy to understand. Thanks a lot.
Can you please make a video on scaling tips and on how to reduce load on consumers using configs?

lohithreddy

Great video, delivered in a concise manner. Answered most of my doubts.
Best of Luck Aman!!

manugoel

Please make such a playlist on SQS also.

ayushgarg

Hello Tech Aman, you said that in the case of manual commit you can commit messages individually once a particular message is processed. But as we know, Kafka always keeps track of the largest committed offset. Suppose the messages at offsets 2 and 3 haven't been processed yet, but the message at offset 4 gets processed and we commit offset 4; in that case offsets 2 and 3 will also be considered committed. How can we avoid this issue?
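The commenter's observation is correct: a commit at offset N implicitly marks everything before N as done, so out-of-order completion has to be tracked by the application itself. One common pattern (sketched here in plain Python, not a Kafka API; the function name is made up) is to commit only the highest contiguous completed offset and hold back until the gaps are filled:

```python
def safe_commit_position(completed, last_committed):
    """Given the set of offsets whose processing finished and the
    position already committed, advance only while the next offset
    is done -- never past a gap."""
    pos = last_committed
    done = set(completed)
    while pos in done:
        pos += 1
    return pos

# Offset 4 finished first while 2 and 3 are still in flight:
# the commit stays at 2, so nothing is falsely marked done.
held = safe_commit_position({4}, last_committed=2)

# Once 2 and 3 also complete, the commit can jump past all three.
advanced = safe_commit_position({2, 3, 4}, last_committed=2)
```

The trade-off is that a slow message delays the commit for everything behind it, which is inherent to Kafka's single-watermark offset model.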

rohitkumarnode

Also, one more doubt I have: how can we consume the same message twice across two successive consumer polls? I have a requirement where, if the previous messages didn't get processed and I have not committed, I want to receive those same messages again. But the consumer seems to fetch the next batch of messages, because Kafka somehow remembers which offsets were consumed during the previous poll. In simple words, how can we reset the in-memory offsets?
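What the commenter is seeing is that the fetch position lives in the consumer's own memory and advances on every poll regardless of commits; the real Java client lets you rewind it explicitly with seek(). A toy consumer (not the real client API) makes the behavior visible:

```python
class ToyConsumer:
    """Minimal stand-in for a Kafka consumer's position handling."""

    def __init__(self, log):
        self.log = log        # the partition's messages, by offset
        self.position = 0     # in-memory fetch position

    def poll(self, max_records=2):
        batch = self.log[self.position:self.position + max_records]
        self.position += len(batch)   # advances even with no commit
        return batch

    def seek(self, offset):
        self.position = offset        # explicit rewind

c = ToyConsumer(["m0", "m1", "m2", "m3"])
first = c.poll()    # ["m0", "m1"]
second = c.poll()   # ["m2", "m3"] -- next batch, despite no commit
c.seek(0)
again = c.poll()    # ["m0", "m1"] once more, after the explicit rewind
```

Committed offsets only matter when a consumer (re)starts or a partition is reassigned; within a running session, re-consuming requires seeking back yourself.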

rohitkumarnode

I am trying to override auto.offset.reset in the consumer.properties and connect-standalone.properties files, but it is not taking effect. Can you please tell me how to fix this? I have added it in the connect-standalone file.
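Two things commonly cause this (assumptions here, since the comment doesn't show the actual config): in a Connect standalone worker file, consumer settings are only honored when prefixed with `consumer.`, and auto.offset.reset only applies when the group has no committed offset yet (or the committed offset is out of range), so an existing committed position silently wins. A minimal sketch of the worker-config override:

```properties
# connect-standalone.properties (worker config):
# consumer overrides must carry the "consumer." prefix,
# otherwise the worker ignores them.
consumer.auto.offset.reset=earliest
```

If the group already has committed offsets, resetting requires a new group id or an explicit offset reset rather than this setting.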

satishkuppam

Hi, how do we handle duplicate values in Kafka?
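Since commit-after-processing gives at-least-once delivery, duplicates are normally handled by making the consumer idempotent. One simple approach (plain Python sketch, not a Kafka API; the key-based dedup and the handler are illustrative) is to skip records whose key has already been processed:

```python
def process_once(records, seen, handler):
    """Apply handler to each (key, value) record, skipping keys that
    were already processed -- a redelivered record becomes a no-op."""
    out = []
    for key, value in records:
        if key in seen:
            continue          # duplicate delivery: skip
        seen.add(key)
        out.append(handler(value))
    return out

seen = set()
batch1 = [("k1", 1), ("k2", 2)]
batch2 = [("k2", 2), ("k3", 3)]   # k2 redelivered, e.g. after a rebalance
r1 = process_once(batch1, seen, lambda v: v * 10)
r2 = process_once(batch2, seen, lambda v: v * 10)
# r1 == [10, 20]; r2 == [30] -- the duplicate k2 was skipped
```

In production the "seen" set is usually an external store (a database unique constraint, or keys in a cache with a TTL), since in-process memory is lost on restart.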

sudippandit

Is there any code repo for an implementation of the above?

InderjeetSingh

I have tried saving the offset in Kafka itself in my Spark Streaming app using the commitAsync API. However, it is not synced in time to Kafka's internal topic __consumer_offsets, and because of that Spark processes duplicate records whenever it is restarted, even with a graceful restart. Could you please give some pointers to fix this?
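The usual explanation is that async commits are queued and flushed in the background, so stopping right after the last batch can leave the newest offset uncommitted; a common remedy is to finish with one synchronous commit during shutdown. A toy model of that timing gap (not the Spark or Kafka API; the class and method names are made up):

```python
class ToyCommitter:
    """Models the gap between queuing an async commit and it actually
    reaching the offsets topic."""

    def __init__(self):
        self.pending = None   # latest offset queued by commit_async
        self.stored = 0       # what actually reached the offsets topic

    def commit_async(self, offset):
        self.pending = offset          # queued, flushed some time later

    def flush_sync(self):
        if self.pending is not None:   # a blocking commit drains the queue
            self.stored = self.pending
            self.pending = None

c = ToyCommitter()
c.commit_async(100)
# Stopping HERE restarts from stored == 0 and replays everything.
c.flush_sync()                         # sync commit on graceful shutdown
# Now stored == 100 and a restart resumes without duplicates.
```

Alternatively, storing offsets atomically together with the processed results (in the same external transaction) removes the dependence on commit timing entirely.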

saurav

I am not able to listen to a particular topic; @KafkaListener is not working.

sudheerkumaratru

Not good content... It could be explained in a better way...

ManishTiwari-orzt