All Publications

AI Server Thread Inference CPU Speed Impact - Threadripper vs EPYC

3090 vs 4090 Local AI Server LLM Inference Speed Comparison on Ollama

4090 Local AI Server Benchmarks

NVIDIA Nemotron 70b Local AI Testing - The BEST Open Source LLM?

INSANE Homelab Storage Server

Mini Home Server Flash NAS 1 year review - LincStation N1

Ollama WEBUI Home Server AI Tools - Setup Self Hosted AI Vision + AI Web Search

Llama 3.2 3b Review Self Hosted AI Testing on Ollama - Open Source LLM Review

QWEN 2.5 72b Benchmarked - World's Best Open Source AI Model?

How To Fix Bent CPU Socket Pins Easy!

Local AI Models on Quadro P2000 - Homelab testing Gemma AI, Qwen2, Smollm, Phi 3.5, Llama 3.1

Homelab AI Server Multi GPU Benchmarks - Dual 4090s + 1070ti added in (CRAZY Results!)

Homelab AI Server Multi GPU Benchmarks - Multiple 3090s and 3060ti mixed PCIe VRAM Performance

Reflection 70b AI Model Update. Is it Broken? What is going on here?

REFLECTION Llama 3.1 70b Tested on Ollama Home AI Server - Best AI LLM?

AMD Threadripper 7995WX vs Google Chrome TABS!

Proxmox AI Homelab all-in-one Home Server - Docker Dockge LXC with GPU Passthrough

7995WX Threadripper Build - World's Fastest PC?

JBODs go BLINK in the Homelab #digitalspaceport

GROK 2 vs. LLAMA 3.1 - Cloud vs Home Server AI Testing

Homelab Networking Winning

Is VENTOY Safe for YOUR Home server?

Ollama AI Home Server ULTIMATE Setup Guide

INSANE Ollama AI Home Server - Quad 3090 Hardware Build, Costs, Tips and Tricks