@AZisk

No sponsor on this video, well, except the amazing members of the channel. You can also join here: https://www.youtube.com/channel/UCajiMK_CY9icRhLepS8_3ug/join

@MaxTechOfficial

Congrats on 300k subs!

@tarangsharma2898

My God!! That's some good editing (zooming in and out to focus on the text). Keep it up! Great video!

@mehregankbi

Ever since the model was released, I knew this vid was coming. Tonight I checked your channel for it, couldn't find it, and five mins later it's on my home feed.

@mortysmith2192

The brain bit is the cutest thing I’ve ever seen 😊

@AsisVendrell

Now I'm really excited to make full use of my M1 Max with 64GB of RAM. I chose the 43GB deepseek-r1:70b version, and it's amazing to see it working locally, even though it's notably slower than the web version. This is a fantasy come true!
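
For anyone wanting to try the same setup, in Ollama it should just be a one-liner (assuming the tag hasn't changed):

ollama run deepseek-r1:70b

That pulls the roughly 43GB download I mentioned, so you want plenty of unified memory free before it will run comfortably.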

@youranonymous931

You have one of the best channels on YouTube. I don't usually "join" a channel, but week after week you amaze me.

@chandebrec5856

You are not installing "DeepSeek R1" on all those Macs. You're installing a smaller model, based on a Llama or Qwen model, that is distilled from R1.
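
If I have the tags right, what Ollama actually serves under each name is roughly:

ollama run deepseek-r1:7b    # DeepSeek-R1-Distill-Qwen-7B
ollama run deepseek-r1:70b   # DeepSeek-R1-Distill-Llama-70B
ollama run deepseek-r1:671b  # the actual R1, far too big for any Mac in this video

Only the 671b tag is the real thing.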

@nisarg254

I am from India.
It's 4am here; I saw your video and just installed LM Studio.
Man, thanks for making this video!

@davehachey3888

Hey, thanks. Great introduction to running LLMs on Mac hardware.

@ChristinaWarren

Was literally setting this up earlier today on my M3 Max and was curious about perf on other machine types so this is awesome! Great video as always!

@RTXON-h1h

FINALLY! Incredible video. I love it: AI on Mac, plus a software engineer getting a local AI working with DeepSeek R1 is really worth it. Next time, try to destroy your M4 Max 128GB with the largest DeepSeek R1 model you can fit!

@JohnLark-k9y

Excited to see you use MLX and talk about quantization. As a MacBook Pro M3 Pro owner, I want to look a little into MLX and quantization myself; it would be amazing if you did some videos on those!
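
From what I've read, trying a quantized model on MLX looks as simple as this (assuming the mlx-lm package; the exact mlx-community repo name may differ):

pip install mlx-lm
mlx_lm.generate --model mlx-community/DeepSeek-R1-Distill-Qwen-7B-4bit --prompt "why is the sky blue?"

The 4-bit quantization is what shrinks these models enough to fit in laptop RAM.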

@GetzAI

Definitely worth double! 😃

@andrewmerrin

You are truly doing God's work! I've been thinking about exactly this since the models dropped.

@SamMeechWard

I'm just going to binge-watch all your LLM videos now

@envt

I love your videos, but this one was a bit too fast and chaotic for me. Keep up the good work 😊

@macsoyyo

I've run the 14b version (almost 10GB) with Ollama on a MacBook Pro M2 16GB. It runs slow but OK. Impressive results, to be honest.

@beauslim

You read my mind and made the video I was looking for.

@EliWSeiei

20:48 When analyzing RAM usage and power, I recommend macmon; you can even "brew install macmon". It gives you individual and total power usage, RAM usage, swap, and E- and P-core usage.