World's Most Accurate RAG? LangChain/Pinecone, LlamaIndex or EyeLevel


The winner emerged with 98% accuracy, including on difficult PDFs with tables and diagrams.

A must-see for engineers trying to build hallucination-free RAG on their documents.

The full report is here.

The source files are here if you want to replicate the test.
Comments

While this sounds like fantastic news, LlamaIndex "out of the box" will give poor results for a lot of use cases if you don't build smarter retrievers/pipelines to create a more robust and usable RAG solution.

I'd love to see the code testbed for all three of these tests; otherwise I've got no actual evidence that you used a highly performing solution in both LangChain and LlamaIndex _and_ beat it. I mean, if I said "my Ford Focus is faster than your Audi RS6" and raced them both off the line... but failed to show you the engine rebuild in the Ford Focus, then I'd sorta be misrepresenting the results.

cmcocktails

Is this a GroundX advertisement? Because in reality, everything depends on configuration. 😅
Anyway, good video. 👍

adammobile

You're comparing apples with oranges.
GroundX looks to be a sophisticated RAG application, while you use the default, very basic components and settings for both the LangChain and LlamaIndex RAGs.
The comparison is misleading this way...

attilavass
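To illustrate the commenters' point about default settings: many RAG pipelines default to naive fixed-size chunking, which can sever a table row from its labels, so no single retrieved chunk contains the complete fact. A minimal pure-Python sketch (the chunk size and sample row are illustrative assumptions, not the actual test setup from the video):

```python
def chunk_text(text: str, chunk_size: int = 40, overlap: int = 0) -> list[str]:
    """Naive fixed-size character chunking, similar to common RAG defaults."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

# A flattened table row, as it might come out of a basic PDF extraction.
row = "Region: EMEA | Q3 revenue: 4.2M | Q4 revenue: 5.1M | YoY growth: 18%"

# With a small chunk size, the Q4 figure ends up in a different chunk than
# the region label, so a query like "EMEA Q4 revenue" may match neither
# chunk well. This is the kind of failure a tuned pipeline tries to avoid.
for chunk in chunk_text(row, chunk_size=40):
    print(repr(chunk))
```

Tuning chunk size/overlap, or using structure-aware parsing for tables, changes retrieval quality dramatically, which is why a "defaults vs. tuned product" comparison is hard to interpret without the test code.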

I stopped the video at 00:37. So you work at EyeLevel, and you're going to tell us which RAG is the most accurate out of LangChain/Pinecone, LlamaIndex, or EyeLevel? Let me guess: they're all the same? Or are you going to say EyeLevel is the best, like everyone says about the project they work on? 🤣🤣🤣

kiiikoooPT

No source code; this video makes no sense.

andrew.derevo

This makes no sense, since it all depends on the pipeline, the model, and the prompts provided. There is no such thing as a "more accurate framework" in the abstract. Fail video, man; explain it better.

Zetakoner