Chunk large, complex PDFs to summarize using an LLM

In this video, I talk about a technique for context-aware chunking of large PDFs and then summarizing the content using the map-reduce framework (implemented through LangChain).
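
Below is a minimal sketch of the chunk-then-summarize flow described above. It assumes the PDF has already been extracted into per-section text (e.g. via the Adobe Extract API used in the video); the `sections` list, the model name, and the splitter parameters are illustrative placeholders, not the exact setup from the video.

```python
from langchain.docstore.document import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI

# Hypothetical output of a context-aware extraction step:
# one string per logical section of the PDF, not raw pages.
sections = [
    "Section 1: Introduction ...",
    "Section 2: Methodology ...",
    "Section 3: Results ...",
]

# Split each section independently so chunks never straddle
# section boundaries (the "context-aware" part of the chunking).
splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
docs = []
for section in sections:
    for chunk in splitter.split_text(section):
        docs.append(Document(page_content=chunk))

# Map-reduce summarization: each chunk is summarized on its own
# (map), then the partial summaries are combined into one (reduce).
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
chain = load_summarize_chain(llm, chain_type="map_reduce")
summary = chain.run(docs)
print(summary)
```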

Comments

Oh man, thanks for your videos! They are pure gold! I love the way you think and teach!

fabsync

Man, a really great explanation. I'm constantly visiting your channel for great new tutorials :)

onqount

Excellent, nice idea and very well explained! Thanks!

ianmatthews

Thanks, sir. I am actually working on a project based on this and found it difficult to find materials to understand the concept practically. After watching this video, I understood it, implemented it successfully, and moved a step ahead.
Thanks again, sir, for this video.

aarshmehtani

Maybe for images: how do GPT-4 multimodal models work?

timtensor

Hi Rajib, thanks for making this video. It has been really helpful as I try to build a RAG system for a B2B use case. However, I did try setting up the Adobe API, but I must say it's not easy, as I am getting stuck at various steps. I am not able to get a 201 response code. Can you please share the steps you followed to set up the API? Regards, Bilal

bilalzahoor

Hi Rajib,

Really insightful video, especially the Extract API for context-aware extraction of text from PDFs.
Are you aware of any open-source alternatives to the Extract API?

Regards,
Dev

elephant

Hi, could you please share your LinkedIn profile? I am doing the same PoC and need some clarification.

PrabakaranSPpraba

Thank you. Can you share your LinkedIn handle?

vikasrajpurohit