We're exploring different ways to support and enable developers interested in running language models locally, and this package is just one of the potential avenues we're exploring.
The `@electron/llm` package enables developers to prototype local-first applications that interact with large language models (LLMs) in Electron, providing an API similar to Chromium's `window.AI` API. It simplifies loading and chatting with GGUF models, using a utility process and Mojo IPC pipes for efficient streaming. It is designed as a reference implementation rather than a feature-complete solution: an easier entry point for Electron developers exploring local LLM capabilities than using Llama.cpp directly, with the understanding that production apps may eventually need to migrate to other solutions.