AI Q&A with Falcon LLM on FREE Google Colab

UAE's FALCON is completely FREE now - No Royalties, No Revenue sharing!

❤️ If you want to support the channel ❤️
Support here:
Comments

Thanks a lot for this video and the free tutorial. It was very helpful! Keep up the good work, man!

Mahmoudlk

Your AI Q&A video on "Falcon LLM on FREE Google Colab" was outstanding. Thank you for simplifying complex concepts and providing valuable insights. Great work!

Imran-Alii

omg WHAT news, and of course from our fav AI coder!!! ty bro

Anzeljaeg

Before bothering to try the 7B model, I wanted to see how good the 40B model was, because if the 40B couldn't execute successfully on what I wanted it to do, I assumed there was no possibility that the 7B model could do it. I tested Falcon 40B on Hugging Face to see its capabilities, in order to determine if it was worth the time to set it up on RunPod and use it for my POC. I have to admit that while there do seem to be a number of use cases where Falcon 40B/7B and other open-source LLMs are quite impressive, there is still at least one use case where every LLM I've tested so far, including Falcon 40B, fails and where GPT absolutely crushes it.

It's a really simple game test I put together as a prompt. It goes like this: "Here is a simple game we will play. I will tell you what color square you are standing on and you will take an action based on these simple rules: if I say you are on a white square, you will turn right and move 1 square forward. If you are on a black square, you will turn left and move 1 square forward. After you take the action, you will ask me what color square you are on, I will tell you, and then you will take another action, and so on. You will keep track of the number of colored squares you land on and report the tally after each action. If you land on a red square, you will encounter a wizard who will try to turn you into a mouse. When you encounter the wizard you must "roll", i.e. generate a random number between 1 and 10. If you get a number that is 2 or higher, his spell will fail; otherwise you will be turned into a mouse and the game ends. Do you understand?" GPT played the game flawlessly.

I want to extend it into something way more complex and use the GPT API in a Unity-based game to represent the actions as being taken by an agent in the game, etc. I'd like to avoid using GPT, however, due to the cost of the API, and instead use an open-source model. But again, I have not found any model that can successfully execute the game task I outlined above. Does anyone have any suggestions? Maybe someone knows of one of the models discussed here or elsewhere that might match GPT in this regard. Thanks in advance.

hitlab
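
Not part of the comment above, just a rough sketch of how that square-colour game could be driven as a loop against any model you want to test. `generate_reply` is a placeholder for whatever backend you call (GPT API, a Falcon pipeline, etc.), and the rules string is abbreviated here rather than copied in full.

```python
import random

# Abbreviated version of the rules prompt quoted in the comment above
GAME_RULES = (
    "Here is a simple game we will play. I will tell you what color square you are "
    "standing on and you will take an action based on these simple rules: ... "
    "Do you understand?"
)

def play(generate_reply, turns=5):
    """Drive the game turn by turn; generate_reply(history) -> model's next message."""
    history = GAME_RULES
    for _ in range(turns):
        color = random.choice(["white", "black", "red"])
        history += f"\nUser: You are on a {color} square.\nAssistant:"
        reply = generate_reply(history)   # swap in any model here
        history += " " + reply
        print(f"{color} -> {reply}")
```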

Been trying to get this model to work for days on Colab. Thanks so much for your awesome VIDDD :D

Endlessvoidsutidos

Nice video! Is there a place where we can learn how to use the transformers library? It seems to underpin all these LLM tutorials. Is there somewhere we can learn the patterns for loading, fine-tuning and performing inference with all these models using this library?

junaidbutt
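
Not from the video itself, but a minimal sketch of the load-then-infer pattern the transformers docs teach, assuming "tiiuae/falcon-7b-instruct" as the model id (any causal LM id works the same way) and the accelerate package installed for device_map.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "tiiuae/falcon-7b-instruct"  # assumption: the instruct checkpoint from the video

# Loading: tokenizer + model weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision to fit on a Colab GPU
    trust_remote_code=True,       # Falcon shipped custom modelling code at release
    device_map="auto",            # requires `accelerate`; places layers on the GPU
)

# Inference: wrap model + tokenizer in a text-generation pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
out = generator("What is the longest river in the world?", max_new_tokens=50)
print(out[0]["generated_text"])
```

Fine-tuning is the third pattern (see the Trainer sketch further down); loading and inference alone never change the weights.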

Hey, can you explain how to make LLMs do arithmetic (LangChain connected with a REPL environment)?

curiousstreamer
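
Not an answer from the video, just a hedged sketch of the general idea behind hooking an LLM to a REPL for arithmetic: the model writes a Python expression, plain Python evaluates it. LangChain's REPL/math tools wrap the same pattern; `generator` is assumed to be the text-generation pipeline from the earlier sketch.

```python
def ask_with_calculator(question: str) -> str:
    # Ask the model to translate the question into code instead of doing the math itself
    prompt = (
        "Translate the question into a single Python expression.\n"
        f"Question: {question}\n"
        "Expression:"
    )
    completion = generator(prompt, max_new_tokens=30)[0]["generated_text"]
    expression = completion.split("Expression:")[-1].strip().split("\n")[0]
    result = eval(expression)  # the REPL step; sandbox this in real code
    return f"{expression} = {result}"

print(ask_with_calculator("What is 1234 * 5678?"))
```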

Hello man, love your videos! Can you help me with one thing? One video of yours on developing a chatbot with Hugging Face + llama-index really helped me achieve something, but when I try to work with LOTS of text it doesn't work and I can't figure out why. Would love it if you have any hints on how to make it work, if it's even possible.

byspec
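
Not from the video: one common reason "lots of text" breaks these chatbot setups is feeding pieces larger than the model's context window. A rough sketch of pre-chunking the text before indexing it follows; the chunk sizes are arbitrary placeholders, and llama-index also ships built-in text splitters that do the same job.

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100):
    """Split a long string into overlapping chunks small enough to embed/index."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Each chunk can then be wrapped as a separate Document and indexed,
# instead of handing one huge blob to the index.
```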

Very nice video!!! I was wondering if we can use this model with llama-index and LangChain. I tried to use it but keep running into problems. It would be great if you could look into it.

nickjain
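
A sketch of one way to plug Falcon into LangChain, assuming the LangChain releases from around the time of the video (the 0.0.x API) and the `generator` pipeline from the earlier sketch.

```python
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Wrap the transformers pipeline so LangChain can call it like any other LLM
llm = HuggingFacePipeline(pipeline=generator)

prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer concisely: {question}",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("What is the longest river in the world?"))
```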

1 Little Coder rules the AI knowosphere 😍

KevinKreger

Nice work. I tried your Colab and set temperature=0.01; the answer was "Nile River". Since it's from the UAE I thought it would support the Arabic language, but it does not.

wa

Sir, I don't understand this one thing:
fine-tuning means we change something in that pre-trained model, and inference means just using that model, right?

Because when I read the "Fine-tuning a pretrained model" page on the Hugging Face website, I didn't understand the Trainer API and the other stuff. Please, can you make a video on this?

lakshsinghania
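
That reading is right: fine-tuning updates the pre-trained weights on your own data, while inference just calls the frozen model. Below is a minimal sketch of the Trainer pattern from the Hugging Face "Fine-tune a pretrained model" page, using a small BERT classifier and the IMDB dataset as stand-ins (Falcon itself would need far more memory and usually parameter-efficient methods such as LoRA).

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Small stand-in model/dataset so the sketch runs on modest hardware
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),  # tiny subset
)
trainer.train()  # fine-tuning: the weights change; plain inference never does
```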

Can you use this method for any model on Hugging Face?

spinninglink

Apart from OpenAI, open-source LLMs are still lagging behind... They are getting better, but only gradually... More and higher-quality training is needed... But this is usually how everything in life builds up; hopefully it doesn't take long to see a real competitor to GPT-4 (or 5 lol).

manon-francais

This model is painfully slow unless you have a few A100s at your disposal. Hopefully they can rectify that soon, but I wouldn't hold my breath.

jeffwads

What is the temperature set to? If the temperature was set high (more randomness), then answers like these would be expected. If it was deterministic, the model is not good.

machinepola
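
For reference, a tiny sketch of the distinction being raised, assuming the `generator` pipeline from the earlier sketch: greedy decoding is deterministic, while sampling with a higher temperature deliberately adds randomness.

```python
prompt = "The longest river in the world is"

# Deterministic: greedy decoding, same answer every run
deterministic = generator(prompt, max_new_tokens=15, do_sample=False)

# Random-ish: sampling with a fairly high temperature, answers vary between runs
sampled = generator(prompt, max_new_tokens=15, do_sample=True, temperature=0.9, top_k=50)
```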

Can you tell me why this doesn't work in VS Code? I have loaded (downloaded) the Falcon 7B model locally. I was trying to do this in a VS Code Jupyter notebook on a Linux machine, but it does not work; the pipeline code just keeps running. Can you help me out? Also, what do you mean about using AutoModelForCausalLM for the chatbot?

sayantikasengupta
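
Not an official fix, but a hedged sketch of loading a locally downloaded Falcon-7B folder with AutoModelForCausalLM (the class used for decoder-only, text-generating models such as Falcon). The directory path is a placeholder, and if the pipeline "keeps running" it is often because generation is happening on CPU rather than a GPU.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

local_dir = "./falcon-7b"  # placeholder: wherever the downloaded snapshot lives

tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForCausalLM.from_pretrained(
    local_dir,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",        # falls back to CPU if no GPU is visible, which is very slow
)

inputs = tokenizer("Hello, Falcon!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```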

We need an API so we can connect to this from other applications.

doords
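
A sketch of one way to do that, assuming FastAPI and the `generator` pipeline from the earlier sketch; the endpoint name and request schema here are made up for illustration.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    prompt: str
    max_new_tokens: int = 100

@app.post("/generate")
def generate(query: Query):
    # Delegate to the transformers pipeline and return plain JSON other apps can consume
    out = generator(query.prompt, max_new_tokens=query.max_new_tokens)
    return {"text": out[0]["generated_text"]}

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```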

It seems the model is performing badly. Why use it then? But nice explanation, LilCo.

shekharkumar

What is the difference between Falcon 7B and Falcon 7B Instruct?

JainmiahSk