
GGML and llama.cpp join HF to ensure the long-term progress of Local AI

Published February 20, 2026


Georgi Gerganov
Xuan-Son Nguyen
Aleksander Grygier
Lysandre
Victor Mustar
Julien Chaumond

We are super happy to announce that GGML, the creators of llama.cpp, are joining HF in order to keep the future of AI open. 🔥

Georgi Gerganov and team are joining HF with the goal of scaling and supporting the community behind ggml and llama.cpp as Local AI continues to make exponential progress in the coming years.

We've been working with Georgi and team for quite some time (we even have awesome core contributors to llama.cpp like Son and Alek on the team already), so this has been a very natural process.

llama.cpp is the fundamental building block for local inference, and transformers is the fundamental building block for model definition, so this is basically a match made in heaven. ❤️

What will change for llama.cpp, the open source project and the community?

Not much – Georgi and team will still dedicate 100% of their time to maintaining llama.cpp, with full autonomy and leadership over the technical direction and the community. HF is providing the project with long-term, sustainable resources, improving its chances to grow and thrive. The project will continue to be 100% open-source and community-driven, as it is now.

Technical focus

llama.cpp is the fundamental building block for local inference, and transformers is the fundamental building block for defining models and architectures. We'll work on making it as seamless as possible (almost "single-click") to ship new models in llama.cpp from the transformers library's 'source of truth' model definitions.

Additionally, we will improve packaging and user experience of ggml-based software. As we enter the phase in which local inference becomes a meaningful and competitive alternative to cloud inference, it is crucial to improve and simplify the way in which casual users deploy and access local models. We will work towards making llama.cpp ubiquitous and readily available everywhere.
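Models in the ggml ecosystem are packaged as single GGUF files, which is a large part of why local deployment can be this simple. As a rough illustration (a minimal Python sketch, not llama.cpp's actual code, assuming the documented GGUF header layout: 4 magic bytes "GGUF", a little-endian uint32 version, then uint64 tensor and metadata-KV counts):

```python
import struct

def pack_gguf_header(version: int, n_tensors: int, n_kv: int) -> bytes:
    """Build a GGUF-style fixed header: magic + uint32 + two uint64s (little-endian)."""
    return b"GGUF" + struct.pack("<IQQ", version, n_tensors, n_kv)

def parse_gguf_header(data: bytes) -> dict:
    """Read back the 24-byte fixed header of a GGUF-style file."""
    if data[:4] != b"GGUF":
        raise ValueError(f"not a GGUF file: magic={data[:4]!r}")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return {"version": version, "tensor_count": n_tensors, "metadata_kv_count": n_kv}

# Round-trip a hypothetical header (the counts here are made up).
header = pack_gguf_header(version=3, n_tensors=291, n_kv=24)
print(parse_gguf_header(header))
```

Because everything a runtime needs (weights, tokenizer, metadata) sits behind one self-describing header like this, "download one file and run it" becomes a realistic user experience.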

Our long-term vision

Our shared goal is to provide the community with the building blocks to make open-source superintelligence accessible to the world over the coming years.

We will achieve this together with the growing Local AI community, as we continue to build the ultimate inference stack that runs as efficiently as possible on our devices.


Community


Big congrats to GGML and Hugging Face! Great news for the Local AI community. Excited to see llama.cpp grow stronger and make local AI easier for everyone!


llama.cpp is the best AI project by far: super quick to fix bugs, very competent team. Love you guys, you deserve it.


Our shared goal is to provide the community with the building blocks to make open-source superintelligence accessible to the world over the coming years.

fire

Hugging Face's smart moves never end. Are you guys using AI for advice? I wonder which of the 2 million AI models you are using 😄

Great news.

Serving with llama.cpp using HF-hosted models, including unsloth's on AMD Strix Halo and OpenCode here.


Congrats to both teams. Well deserved. Wonderful news for wonderful teams and community.

Congratulations to Georgi Gerganov and team! So happy for you guys, this is a huge success!


Great news. Congrats to GGML and HF. Always Local AI.


This is a match made in heaven for the local AI ecosystem. Transformers as the model definition layer plus llama.cpp as the local inference layer, backed by HF's long-term resources, gives the entire community a stable foundation to build on for years to come.

The focus on packaging and user experience is especially important. Making local inference accessible beyond developers is how we get to an AI future that's open, private, and user-owned — not locked behind API calls.

Congratulations to Georgi and team. Open-source superintelligence that runs on your own hardware isn't just a technical goal, it's a trust model.


Congratulations! I love Llama.cpp and I love running my models locally. This is absolutely the future of transparency and I love the push for the open, private, user-owned software world! Thank you for all that you are doing!


So basically HF "acquires" an open source project. Hmm. I've seen this before and it never ends well (see Trixbox, PCBSD, FreeNAS, etc.).

I sure hope history doesn't repeat itself (yet it always does).


It’s great news for the future of edge AI!


Please also acquire ik_llama


ggml's GGUF format is now the preferred default for ExecuTorch (on-device) inference 🚀🦙


niceee.


finally, something good about living in the modern world, you guys are awesome!

"...it is crucial to improve and simplify the way in which casual users deploy and access local models. We will work towards making llama.cpp ubiquitous and readily available everywhere." (It... already was?)

Before you upvote: raise your hand if you realize that hf.co is a business with the necessary end goal of making money. This isn't a bad thing; however, this blog post is so devoid of substance and so full of hyperbole that one can't help but wonder.


I can't wait till HF adds a quota/limitation on the number of models you can quantize, with future versions of llama.cpp requiring an HF login/token to quantize a model (trust me bro, it's just basic telemetry) 😻

of course, when the noose tightens further it won't be officially discussed/acknowledged (why would we?? we have so much to share with / sell to the #community, like this robot! look, it's so cute)


Maybe lcpp will now natively support quantizing image models? yay


Check out stable-diffusion.cpp for this, or KoboldCpp if you want a fork that has both llama.cpp and stable-diffusion.cpp integrated.


This is awesome news! Making llama.cpp and the GGML ecosystem more sustainable and widely supported will help local AI become more accessible and easier to use for everyone for sure.


Does this mean that we will have GGUF quants of models as they release, or at least support for GGUF out of the box for new models in the future?


reasoning:
  min_steps: 2                  # Minimum reasoning steps before code
  require_action_field: true    # Each step must have thought + action = freedom
  confidence_calibration: true  # Post-process confidence scores critically high!

Congrats!


Get Pi as the agent harness next


cc: @victor

A great milestone for Local AI! For those already living daily in the ggml and llama.cpp ecosystem, this is a strong signal for what’s ahead. The alignment with Transformers brings clear strategic coherence. A solid move. Looking forward to what comes next.


🔥


Congratulations to all involved! These are great additions!




Source

Hugging Face Blog - huggingface.co
