Generative AI with Google Cloud: Tuning Foundation Models in Vertex Generative AI Studio

Tuning foundation models can greatly improve their accuracy and task performance. But with so many tuning options, how do you know which one is right for your use case? And what are the potential performance and cost implications?

Join the experts for this session, where we’ll cover:
- The different tuning options currently offered for foundation models on Vertex AI
- How to launch a tuning job with Vertex Generative AI Studio (see the sketch below)
- How to evaluate and compare the results of tuned models

You’ll also have time for live Q&A, so bring your questions!
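
As a rough sketch of launching a tuning job programmatically (complementing the Studio UI flow the session covers), the Vertex AI Python SDK exposed a tune_model method on the PaLM text models. The project ID, bucket path, and model version below are placeholder assumptions, not values from the session:

import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholder project and region (assumptions for illustration).
vertexai.init(project="my-project", location="us-central1")

# Load a base foundation model; the version pin is an assumption.
model = TextGenerationModel.from_pretrained("text-bison@001")

# Launch a supervised tuning job on a JSONL dataset in Cloud Storage.
# The pipeline region and dataset path are hypothetical.
model.tune_model(
    training_data="gs://my-bucket/tuning_data.jsonl",
    train_steps=100,
    tuning_job_location="europe-west4",
    tuned_model_location="us-central1",
)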

Comments

00:06 Discussion of tuning foundation models and their complexities
02:10 Introduction to tuning large language models
06:08 Different models like T5, Bison, Chat Bison, and DSS are available for generative AI.
08:00 Advantage of fine-tuning for model customization and personalization.
11:48 Tuning involves adding adapter layers or optimizing layer activations
13:44 Adapter tuning allows model optimization without additional costs
17:30 Tuning smaller models from a teacher model's rationales
19:04 Tuning foundation models in Vertex Generative AI Studio
22:17 Fine-tuning with reward function and feedback for model refinement
23:50 Developing policy for language model responses
27:11 Adapter tuning is essential for tuning foundational models in Vertex AI Studio.
29:00 RLHF helps optimize model performance with human feedback
32:45 Tuning foundation models involves a pretrained LLM, fine-tuning, and adapter tuning.
34:42 Creating and tuning adapter models for task-specific datasets.
38:32 Key considerations for fine-tuning AI models
40:18 Using text-to-SQL with the Code Bison model to generate queries for BigQuery (see the sketch after this list)
44:10 Model stored in GCS; a parameter decides the path, and input datasets are uploaded for tuning
46:15 Google Cloud stores artifacts related to the model in GCS bucket.
50:07 Fine-tuning and embedding concepts explained
51:53 Feedback for upcoming sessions and recommendations for customization
55:29 The decision to fine-tune or tweak the prompts depends on business stability and specific needs.
57:14 Incremental tuning recommended for new data and huge data volumes.
1:00:55 Tuning foundation models and using XAI with Google Cloud.
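
For the text-to-SQL chapter at 40:18, here is a minimal sketch of what calling the Code Bison model through the Vertex AI Python SDK could look like; the prompt, table schema, and generation parameters are illustrative assumptions rather than the presenters' exact demo:

from vertexai.language_models import CodeGenerationModel

# Load the code generation model; the version pin is an assumption.
code_model = CodeGenerationModel.from_pretrained("code-bison@001")

# Ask for a BigQuery query; `shop.sales` is a hypothetical table.
response = code_model.predict(
    prefix="Write a BigQuery SQL query that returns total units sold "
           "per product last month from the table `shop.sales`.",
    temperature=0.2,
    max_output_tokens=256,
)

# The generated SQL; validate it before running against BigQuery.
print(response.text)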

BillionaireBites

@35:38 The JSON format is confusing. The first example has only 'input_text' and 'output_text', but the second example also has 'context'. What is the right format? Is this intentional?
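
For context on that question: the supervised tuning dataset format documented for Vertex AI text models was JSONL with one example per line, using input_text and output_text. A context-style field appeared in chat-model tuning examples, so the second slide may have shown a chat format (an assumption, not confirmed in the video). A hypothetical text-model dataset might look like:

{"input_text": "Question: how many orders shipped in May?", "output_text": "SELECT COUNT(*) FROM orders WHERE ship_month = 'May'"}
{"input_text": "Question: top five customers by revenue?", "output_text": "SELECT customer, SUM(revenue) FROM orders GROUP BY customer ORDER BY 2 DESC LIMIT 5"}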

sowrabhsanathkumar

Are the slides from this talk available anywhere?

SteigerMiller

I thought about getting a certification from Google, and considered Microsoft, but I chose IBM because the future isn't one type of machine learning; it's all of them.

WeylandLabs

Very confusing presentation and discussion. They talk about too many things too quickly.

shyamkadari