Summarize research papers with Gemma

See Google's Gemma 3 in action as it summarizes and analyzes popular research papers. This video highlights Gemma's ability to handle long contexts, extract key information, and provide translations, streamlining the research process and making scientific insights more accessible.

Speaker: Tatiana Matejovicova
Products Mentioned: Gemma
Comments

What is the rationale behind setting the temperature to 0 here?

EDIT: OK, I did some digging. The most likely reason they set the temperature to "0" is simply reproducibility for the video (they can test it once in advance and only have to record once). While zero temperature makes the model deterministic and decreases hallucinations, it also usually decreases performance. The recommendations for low hallucinations and good performance are temperature values around 0.1 to 0.5 (Cao et al., 2023; Zhang et al., 2024). But according to them, the connections between temperature, performance, and hallucinations are not only model-dependent but also task-dependent. Interestingly, for some models, a very low temperature (0 or 0.1) even increased the hallucination rate compared to moderately low values (such as 0.3 or 0.5).
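To make the determinism point concrete, here is a minimal sketch of temperature-scaled sampling (not Gemma's actual sampler; the function name and greedy-at-zero convention are illustrative assumptions, matching how most inference stacks treat temperature 0):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Sample a token index from logits after temperature scaling.

    temperature == 0 is treated as greedy (argmax) decoding, which is
    what makes generation deterministic and reproducible.
    """
    if temperature == 0:
        # Greedy decoding: always pick the highest-logit token.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]
print(sample_with_temperature(logits, 0))   # always index 0 (greedy)
print(sample_with_temperature(logits, 1.0)) # random, weighted by softmax
```

Higher temperatures flatten the softmax distribution, so lower-probability tokens get sampled more often; as temperature approaches 0, the distribution collapses onto the argmax.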

Mede_N

Can it handle math equations and images properly? One important thing lacking in this PDF analyzer: it misses important inferences from images and equations.

BibhatsuKuiri

I wish there were more examples with the 1B model, especially running locally using an MCP server or something

stanislav

Which app are you using in this demo? Is Research assistant something you made in Vertex AI or AI Studio?

kimguerrette

I was working on something very similar! Very cool demo!

lgmuk

So, I assume the summaries were correct? But how can the user verify the information quickly?

Hukkinen

Why would I use Gemma instead of NotebookLM?

Ali-kdx

What is the difference between Gemma and Gemini? That’s unclear

maraisdekker

Interesting. But I'd use NotebookLM for such a task.

TorstenWerner

That time for inference, though. I wonder how this compares to other open-source models. For anyone wondering: this is still faster than OpenAI's fastest model, gpt-4o-mini. OpenAI's inference time is awful.

alexisdamnit

Please raise the rate limits for Gemma 3 on AI Studio, specifically requests per day (RPD).

coding-master-shayan

While on AI Studio, what's stopping us from using large models? Gemma's use-case tutorials had better focus on running them on PCs with smaller compute and GPUs.

philips

I just remembered my English classes' précis writing.
I could finally compare with mine 😂😂😂

diptipriya

Fix long-chat performance in AI Studio first!

bpavuk