All publications

Halloween Stories via Streamlit, Langchain, Python, and LocalAI (or OpenAI) with Text to Speech!

Mistral 7B LLM AI Leaderboard: Nvidia RTX A4500 GPU 20GB: Where does prosumer/enterprise land?

LocalAI LLM Tuning: WTH is Flash Attention? What are the effects on memory and performance? Llama3.2

Mistral 7B LLM AI Leaderboard: Unboxing an Nvidia RTX 4090 Windforce 24GB: Can it break 100 TPS?

Mistral 7B LLM AI Leaderboard: The King of the Leaderboard? Nvidia RTX 3090 Vision 24GB throw down!

Mistral 7B LLM AI Leaderboard: Unboxing an Nvidia RTX 4070Ti Super 16GB and giving it a run!

4070Ti Super 16GB vs Mistral 7B 0.3 FP16 AI/LLM Leaderboard

Mistral 7B LLM AI Leaderboard: GPU Contender Nvidia RTX 4060Ti 16GB

Live Testing - Mistral Small Instruct 2409 vs Mistral Large Instruct 2407

Mistral 7B LLM AI Leaderboard: GPU Contender Nvidia Tesla M40 24GB

Mistral 7B LLM AI Leaderboard: GPU Contender Nvidia GTX 1660

Mistral 7B LLM AI Leaderboard: Rules of Engagement and first GPU contender Nvidia Quadro P2000

Mistral 7B LLM AI Leaderboard: Baseline Testing Q3, Q4, Q5, Q6, Q8, and FP16 CPU Inference i9-9820X

Mistral 7B LLM AI Leaderboard: Baseline Testing Q3 CPU Inference i9-9820X

LocalAI LLM Testing: Part 2 Network Distributed Inference Llama 3.1 405B Q2 in the Lab!

LocalAI LLM Testing: Distributed Inference on a network? Llama 3.1 70B on Multi GPUs/Multiple Nodes

LocalAI LLM Testing: Llama 3.1 8B Q8 Showdown - M40 24GB vs 4060Ti 16GB vs A4500 20GB vs 3090 24GB

LocalAI LLM Testing: How many 16GB 4060Ti's does it take to run Llama 3 70B Q4?

LocalAI LLM Testing: Can 6 Nvidia A4500's Take on the WizardLM 2 8x22b?

What's on the Robotf-AI Workbench Today?

LocalAI LLM Testing: i9 CPU vs Tesla M40 vs 4060Ti vs A4500

LocalAI Testing: Viewer Question LLM context size, & quant testing with 2x 4060 Ti's 16GB VRAM

LocalAI LLM Single vs Multi GPU Testing: scaling to 6x 4060Ti 16GB GPUs