SciAgents Graph Reasoning: Stanford vs MIT #ai

Can LLMs generate novel research ideas, and are they better than human ideas? Stanford University provides answers.

And we explore SciAgents by MIT, which automates scientific discovery through multi-agent intelligent graph reasoning.

Both brand-new pre-prints are fascinating and full of new insights (and Python code implementations) for our own multi-agent systems.
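To make the graph-reasoning idea concrete, here is a minimal sketch of the core SciAgents pattern: pick two concepts, extract a path between them from a knowledge graph, and hand that path to LLM agents as the seed for a research proposal. This is not the authors' code; the tiny example graph, the concept names, and the prompt wording are invented, and a shortest path stands in for the paper's more elaborate path sampling.

# Minimal sketch: a knowledge-graph path as the seed for an ideation agent.
# The toy graph and node names below are invented for illustration only;
# the real ontological graphs come from the authors' GitHub repos.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("silk", "biomaterials"),
    ("biomaterials", "self-assembly"),
    ("self-assembly", "energy-efficient manufacturing"),
    ("silk", "optical properties"),
])

# Extract a path between two concepts (shortest path here, for simplicity).
path = nx.shortest_path(G, source="silk", target="energy-efficient manufacturing")

# Turn the path into a prompt for the first agent in the multi-agent chain.
prompt = (
    "Propose a novel research hypothesis that connects these concepts, in order: "
    + " -> ".join(path)
)
print(prompt)

In the paper's framework, a team of specialized agents then expands, critiques, and refines the resulting proposal.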

All rights w/ Authors:
Can LLMs Generate Novel Research Ideas? (Stanford Univ)
A Large-Scale Human Study with 100+ NLP Researchers

SciAgents: Automating Scientific Discovery through Multi-Agent Intelligent Graph Reasoning (MIT)

00:00 AI agents for science by Stanford and MIT
00:45 Stanford's result: AI versus human idea generation
03:00 Stanford and MIT both put AI agents to work on science
03:30 Stanford's AI process explained: the idea-generation agent
15:25 LLM-as-a-judge fails to evaluate research ideas
18:10 MIT's multi-agent knowledge graph process
18:36 SciAgents with ontological knowledge graphs
27:41 How AI generates new ideas from a knowledge graph
29:23 MIT's adaptive multi-agent framework for research
34:29 Autonomous agentic modelling in SciAgents
38:58 Two GitHub repos and multiple Python notebooks (free)

#aiagents
#massachusettsinstituteoftechnology
#harvard
Comments

Wait a minute: "Discover AI". Good name for rebranding the channel <3

jomangrabx

I really like seeing that lots of people recognize the importance of drawing valid knowledge graphs for AI. Thanks again, and I love your new channel name :)

깐돌엄마-ge

Another interesting area to look at is the phenomenon exhibited by patients who have undergone a corpus callosotomy (a treatment for epilepsy involving the cutting of the corpus callosum) and who display surprisingly LLM-like behavior when asked questions in certain specific contexts. Super interesting that those patients in those scenarios give the exact same sort of nonsensical answers as a hallucinating LLM. Happy to do a collab paper if you're interested!

s.m.mustafaakailvi

This is awesome! Thanks for sharing. Very exciting to see what MIT is doing with graphs 💜

andydataguy

It seems you got confused with token windows as well ;-). It is about the output token length: while Gemini can take an input of 1 million tokens, the output is very limited, only 8,192 tokens. I ran into this issue many times, but sometimes I could solve it with map & reduce. For example, when writing an ebook chapter by chapter, I put the already written parts back into the input each time. That worked quite well, but it burned "some" tokens.
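Roughly, that chapter-by-chapter pattern looks like the sketch below. The call_llm helper is a placeholder for whatever SDK you actually use (not a real API), and feeding every previous chapter back in verbatim is exactly what burns the extra input tokens; summarizing earlier chapters instead would be cheaper.

# Sketch: work around a small output-token cap by generating an ebook
# chapter by chapter and feeding the already written text back in.
def call_llm(prompt: str) -> str:
    # Placeholder: plug in your actual model client here.
    raise NotImplementedError

def write_book(chapter_outlines: list[str]) -> str:
    written: list[str] = []
    for i, outline in enumerate(chapter_outlines, start=1):
        context = "\n\n".join(written)  # everything written so far (or summaries, to save tokens)
        prompt = (
            f"Book so far:\n{context}\n\n"
            f"Write chapter {i} based on this outline:\n{outline}\n"
            "Stay consistent with the previous chapters."
        )
        written.append(call_llm(prompt))  # each call stays well under the output cap
    return "\n\n".join(written)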

DannyGerst

Very interesting papers, thanks for sharing! @code4AI, do you have hands-on experience with this GraphReasoning methodology and with the Microsoft GraphRAG solution? I am experimenting with integrating AI into my second brain and I started with GraphRAG, but maybe this MIT solution works better.

attilalukacs

Can you do a review of the intersection of symbolic logic and LLMs? I haven't seen much (or any) work in this area myself and was wondering if you had found anything during your literature surveys/reviews?

s.m.mustafaakailvi

I have a big question here. I'm working at a medical facility, and the chief scientist is dismissing my idea of creating graphs because he thinks that chaining many instances of the same LLM cannot generate a better outcome than a single LLM in terms of quality. I disagree. Is there any hard evidence that graphs are more efficient in terms of outcome? I'm talking about quantity here.

MGeeify

An automated paper mill of junk ideas. Maybe interesting from the tooling point of view. Boring people, bereft of their own ideas, should not be given any resources.

pensiveintrovert