
Liberate your OpenClaw
Anthropic is limiting access to Claude models in open agent platforms for Pro/Max subscribers. Don't worry though: there are great open models on Hugging Face to keep your agents running, most of the time at a fraction of the cost.
If you've been cut off and your OpenClaw, Pi, or Open Code agents need resuscitation, you can move them to open models in two ways:
- Use an open model served through Hugging Face Inference Providers.
- Run a fully local open model on your own hardware.
The hosted route is the fastest way back to a capable agent. The local route is the right fit if you want privacy, zero API costs, and full control.
To do so, just tell Claude Code, Cursor, or your favorite agent: "help me move my OpenClaw agents to Hugging Face models", and link this page.
Hugging Face Inference Providers
Hugging Face Inference Providers is an open platform that routes requests to providers serving open models. It's the right choice if you want the best models or you don't have the necessary hardware.
First, you'll need to create a token here. Then you can add that token to OpenClaw like so:
openclaw onboard --auth-choice huggingface-api-key
Paste your Hugging Face token when prompted, and you'll be asked to select a model.
We'd recommend GLM-5 because of its excellent Terminal Bench scores, but there are thousands to choose from here.
You can update your Hugging Face model at any time by entering its repo_id in the OpenClaw config:
{
agents: {
defaults: {
model: {
primary: "huggingface/zai-org/GLM-5:fastest"
}
}
}
}
Note: HF PRO subscribers get $2 of free credits each month, which apply to Inference Providers usage; learn more here.
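Because Inference Providers exposes an OpenAI-compatible API, you can also sanity-check your token and model outside OpenClaw with a plain curl request. A minimal sketch, assuming the standard Hugging Face router endpoint and the same model id as the config above:

```shell
# Sketch: confirm your token can reach the model before wiring up OpenClaw.
# HF_TOKEN is assumed to hold the token you created earlier.
curl -s https://router.huggingface.co/v1/chat/completions \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "zai-org/GLM-5",
        "messages": [{"role": "user", "content": "Reply with one word: ready"}]
      }'
```

If the token and model id are valid, the response is a standard chat-completion JSON object; an error body here usually means a bad token or a model id typo.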
Local Setup
Running models locally gives you full privacy, zero API costs, and the ability to experiment without rate limits.
Install llama.cpp, a fully open source library for low-resource inference.
# on mac or linux
brew install llama.cpp
# on windows
winget install llama.cpp
Start a local server with a built-in web UI:
llama-server -hf unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL
Here, we're using Qwen3.5-35B-A3B, which works great with 32GB of RAM. If you have different requirements, please check out the hardware compatibility for the model you're interested in. There are thousands to choose from.
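You can also pin the port and context size explicitly. A sketch of a fuller invocation, using standard llama.cpp `llama-server` flags; tune the values to your machine:

```shell
# --port: keep this in sync with the base URL you give OpenClaw (8080 is the default)
# -c:     context size in tokens; agent workloads benefit from a long context
# -ngl:   number of model layers to offload to the GPU, if you have one
llama-server -hf unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL \
  --port 8080 -c 16384 -ngl 99
```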
If you load the GGUF in llama.cpp, point OpenClaw at the local server like this:
openclaw onboard --non-interactive \
--auth-choice custom-api-key \
--custom-base-url "http://127.0.0.1:8080/v1" \
--custom-model-id "unsloth-qwen3.5-35b-a3b-gguf" \
--custom-api-key "llama.cpp" \
--secret-input-mode plaintext \
--custom-compatibility openai
Verify the server is running and the model is loaded:
curl http://127.0.0.1:8080/v1/models
Which path should you choose?
Use Hugging Face Inference Providers if you want the quickest path back to a capable OpenClaw agent. Use llama.cpp if you want privacy, full local control, and no API bill.
Either way, you do not need a closed hosted model to get OpenClaw back on its feet!