LM Studio
@lmstudio-ai.bsky.social
📤 482 · 📥 1 · 📝 10
Download and run local LLMs on your computer 👾
http://lmstudio.ai
Reposted by LM Studio
Docker
5 months ago
🚀 LM Studio now works out of the box with the Docker MCP Toolkit! Skip the messy configs—connect MCP servers in one click to LM Studio. 🛠️ Build agents easily & securely with Docker. 🔗
docs.docker.com/ai/mcp-catal...
#DockerAI
#MCP
#DevTools
#LMStudio
MCP Catalog and Toolkit
Learn about Docker's MCP catalog on Docker Hub
https://docs.docker.com/ai/mcp-catalog-and-toolkit/
Reposted by LM Studio
murat
5 months ago
yuppp :)
github.com/lmstudio-ai/...
lms chat by mayfer · Pull Request #227 · lmstudio-ai/lms
LMS CLI chat mode. Examples:
  lms chat                   start chat REPL with default loaded model
  lms chat modelname         start chat REPL with specific model
  lms chat -p "Your prompt"  print lms response ...
https://github.com/lmstudio-ai/lms/pull/227
Reposted by LM Studio
Federico Viticci
9 months ago
M3 Ultra Mac Studio: "Up to 16.9x faster token generation using an LLM with hundreds of billions of parameters in LM Studio when compared to Mac Studio with M1 Ultra" 😳
www.macstories.net/news/apple-r...
Apple Reveals New Mac Studio Powered by M4 Max and M3 Ultra
Today, Apple revealed the new Mac Studio featuring both M3 Ultra and M4 Max options. It’s an odd assortment on its face, so let’s take a closer look at what’s going on. As with the original Mac Studio...
https://www.macstories.net/news/apple-reveals-new-mac-studio-powered-by-m4-max-and-m3-ultra/
Reposted by LM Studio
Oliver H.G. Mason 📉
10 months ago
I was expecting this to take me a couple of hours to set up (with me getting annoyed about undocumented requirements on Github), but no, a few mins including the mini-model download. LM Studio is very beginner friendly. 🥔⚖️⭕⭕
Reposted by LM Studio
Luiz Persechini
10 months ago
And there it is: the DeepSeek AI model running on my MacBook Air M3 16GB at 6 tokens per second, LOCALLY. It's very simple: just download LM Studio, search for the model, download it (5GB), and run it. You have a state-of-the-art AI running 100% offline on your computer. Surreal.
Reposted by LM Studio
Zed
11 months ago
🚀 Zed v0.170 is out! Zed's assistant can now be configured to run models from @lmstudio.
1. Install LM Studio
2. Download models in the app
3. Run the server via `lms server start`
4. Configure LM Studio in Zed's assistant configuration panel
5. Pick your model in Zed's assistant
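On the Zed side, the steps above end up as a small settings fragment. This is a sketch only: the `language_models.lmstudio.api_url` key and the `/api/v0` path are assumptions drawn from Zed's later documentation, and port 1234 is LM Studio's default server port.

```json
{
  "language_models": {
    "lmstudio": {
      "api_url": "http://localhost:1234/api/v0"
    }
  }
}
```

With the server started via `lms server start`, Zed's assistant panel should then list the models LM Studio has loaded.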
Reposted by LM Studio
Florent Daudens
11 months ago
Run DeepSeek R1 locally with
@lmstudio-ai.bsky.social
🥰🥰🥰
Reposted by LM Studio
Giles
11 months ago
Guess who just figured out how to interact with a local
#LLM
model in
#rstats
? 👉This guy!👈 (I did this via
@lmstudio-ai.bsky.social
using
@hadley.nz
's 'ellmer' package. Happy to share how I did it if people are interested).
Reposted by LM Studio
Ben Darfler
11 months ago
Really enjoying local LLMs. LM Studio appears to be the best option right now. Its support for MLX-based models means I can run Llama 3.1 8B with a full 128k context window on my M3 Max MacBook Pro with 36 GB. Great for document chat, Slack synopses, and more. What is everyone else doing?
Reposted by LM Studio
Just John 🇺🇸🇩🇪🇰🇷
11 months ago
VSCode with
#Cline
plugin connected to a local LM Studio running a Qwen 2.5 14B LLM on an M4 Pro... Programming prompt compiled and ran the second time after it self-corrected, all tests pass. Code generation took less than a minute to complete. 😳
@lmstudio-ai.bsky.social
@vscode.dev
Reposted by LM Studio
chantastic
12 months ago
Codestral on LM Studio lowkey slays
@martinctc.bsky.social with a cool blog post showing how to combine local LLM calls with your R code
about 1 year ago
Reposted by LM Studio
Hersh Gupta
about 1 year ago
@lmstudio-ai.bsky.social
detected my vulkan llama.cpp runtime out of the box, absolutely magical piece of software ✨
Reposted by LM Studio
Petr Baudis (pasky)
about 1 year ago
Didn't think I had a chance with a smol 12GB 4070 to run any interesting LLM locally. But Qwen2.5-14B-Instruct-IQ4_XS *slaps*. It's no Claude, but I'm amazed how good it is. (Also shout out to
@lmstudio-ai.bsky.social
- what a super smooth experience.)
📣🔧 Tool Use (beta)
Are you using OpenAI for Tool Use? Want to do the same with Qwen, Llama, or Mistral locally? Try out the new Tool Use beta! Sign up to get the builds here:
forms.gle/FBgAH43GRaR2...
Docs:
lmstudio.ai/docs/advance...
(requires the beta build to work)
about 1 year ago
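For context, tool use against LM Studio's local server uses the familiar OpenAI-style request shape, per the linked docs. Here's a minimal sketch of such a payload; the tool name `get_weather`, its schema, and the model name are illustrative (not from the post), and `http://localhost:1234/v1/chat/completions` is the server's default endpoint.

```python
import json

# OpenAI-style tool definition (illustrative example, not from the post).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# Request body you would POST to the local server's
# /v1/chat/completions endpoint (default http://localhost:1234).
payload = {
    "model": "qwen2.5-7b-instruct",  # any tool-capable local model
    "messages": [
        {"role": "user", "content": "What's the weather in Lisbon?"}
    ],
    "tools": tools,
}

print(json.dumps(payload, indent=2))
```

If the model decides to call the tool, the response carries `tool_calls` entries that your client executes and feeds back as `tool`-role messages, mirroring the OpenAI flow.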
Reposted by LM Studio
Nathan Lambert
about 1 year ago
I'm really happy with Tulu 3 8B. It's nice in the hosted demo (playground.allenai.org), but also running quantized locally on my Mac in LM Studio :D. Feels like a keeper.
Reposted by LM Studio
Joe Harris
about 1 year ago
LM Studio is magic. You’re going to want a very beefy machine though.
Reposted by LM Studio
Bram Zijlstra
about 1 year ago
Never realised how easy it is to run LLMs locally. Thought I'd spend at least half a day to get it up and running, but with LM Studio it took me less than 15 min today. I feel like a grampa saying "I used to spend days to get Caffe running, you don't know how easy it is today!"
#databs
Reposted by LM Studio
Emma (IPG)
about 1 year ago
okay i didn't believe the hype but it's kinda crazy just how good LLMs have gotten over the past few years...
Reposted by LM Studio
Scott Hanselman 🌮
about 1 year ago
WOW check this out it’s
LMStudio.ai
@lmstudio-ai.bsky.social
running ENTIRELY locally on an NPU from Qualcomm
www.tiktok.com/t/ZTYYaN6cn/
Wow at #MsIgnite looking at the FIRST build of a local AI model running entirely in a snapdragon NPU #AI
TikTok video by Scott Hanselman
https://www.tiktok.com/t/ZTYYaN6cn/