The HARD Truth About Hosting Your Own LLMs
Cole Medin
1 year ago - 14:43

host ALL your AI locally
NetworkChuck
1 year ago - 24:20

The Best Self-Hosted AI Tools You Can Actually Run in Your Home Lab
VirtualizationHowto
1 month ago - 15:23

What is Ollama? Running Local LLMs Made Simple
IBM Technology
8 months ago - 7:14

All You Need To Know About Running LLMs Locally
bycloud
1 year ago - 10:30

The Ultimate Guide to Local AI and AI Agents (The Future is Here)
Cole Medin
6 months ago - 2:38:37

Run Local LLMs on Hardware from $50 to $50,000 - We Test and Compare!
Dave's Garage
1 year ago - 15:05

Host a Private AI Server at Home with Proxmox Ollama and OpenWebUI
VirtualizationHowto
7 months ago - 19:16

THIS is the REAL DEAL 🤯 for local LLMs
Alex Ziskind
3 months ago - 11:03

Micro Center A.I. Tips | How to Set Up A Local A.I. LLM
Micro Center
1 year ago - 0:59

I’m changing how I use AI (Open WebUI + LiteLLM)
NetworkChuck
9 months ago - 24:28

Learn Ollama in 15 Minutes - Run LLM Models Locally for FREE
Tech With Tim
11 months ago - 14:02

Local AI has a Secret Weakness
NetworkChuck
8 months ago - 1:06

Run Deepseek R1 at Home on Hardware from $250 to $25,000: From Installation to Questions
Dave's Garage
10 months ago - 12:42

100% Local NotebookLM Clone Built on Ollama, n8n + Supabase #n8n #supabase #notebooklm #ollama #rag
The AI Automators
5 months ago - 0:42

Self-Hosting LLMs: Architect's Guide to When & How
InfoQ
8 months ago - 39:49

How I Saved $700/Month With Self Hosting
Hasan Aboul Hasan
7 months ago - 6:11

Replace Your Expensive Cloud Tools With These (Self-Hostable) Alternatives
Simon Høiberg
9 months ago - 7:23

#3-Deployment Of Huggingface OpenSource LLM Models In AWS Sagemakers With Endpoints
Krish Naik
1 year ago - 22:32

OpenAI's nightmare: Deepseek R1 on a Raspberry Pi
Jeff Geerling
10 months ago - 4:18

Deploy ANY Open-Source LLM with Ollama on an AWS EC2 + GPU in 10 Min (Llama-3.1, Gemma-2 etc.)
Developers Digest
1 year ago - 9:57

Never Install DeepSeek r1 Locally before Watching This!
Aivoxy
10 months ago - 0:28

How to Host Your Own LLM on a Budget: DigitalOcean Guide
STARTUP HAKK
1 year ago - 0:54

M4 Mac mini as a Home Server
SPACE DESIGN WAREHOUSE
11 months ago - 7:22

How to run LLMs locally [beginner-friendly]
IndividualKex
11 months ago - 0:59

How To Run Private & Uncensored LLMs Offline | Dolphin Llama 3
Global Science Network
10 months ago - 14:31

Build a private, self-hosted LLM server with Proxmox, PCIe passthrough, Ollama, Open WebUI & NixOS
Tailscale
8 months ago - 36:09

Building and hosting LLM based applications using GCP serverless stack
GDG Cloud Pittsburgh
2 years ago - 1:15:29

OpenLLM: Powerful Yet Efficient LLM Hosting #llm #deployment
Joydeep Bhattacharjee
1 year ago - 0:48

How I Got a 100% Free Lifetime Server (And You Can Too!)
CyberFlow
11 months ago - 4:23

How to build a GPU Server for AI & Deep Learning I Watch the Full Video | TheMVP
mvp insight
1 year ago - 0:48

Cheap mini runs a 70B LLM 🤯
Alex Ziskind
1 year ago - 11:22

How to self-host and hyperscale AI with Nvidia NIM
Fireship
1 year ago - 6:44

GEC Hosting LLM Fellowship Gathering #ClergyAppreication| 10:00 am EST
GOD-ENCOUNTER CHURCH TV
Streamed 3 years ago - 5:42:19

Put Ai Deep Learning Server with 8 x RTX 4090 🔥#ai #deeplearning #ailearning
Hardware Plug
2 years ago - 0:15

Exploring Cost-Effective LLM Solutions
Abhinav Gupta
10 months ago - 0:53

Create Your Own Private Local AI Cloud Stack in Under 20 Minutes
Cole Medin
9 months ago - 16:49

Everything in Ollama is Local, Right?? #llm #localai #ollama
Matt Williams
1 year ago - 0:50

run AI on your laptop....it's PRIVATE!!
NetworkChuck
1 year ago - 0:40