The project demonstrates how to train large language models from the ground up. It covers data preprocessing, model selection (e.g., GPT‑2, Llama‑3.2, Qwen‑3), pre‑training or initialization from existing weights, and downstream fine‑tuning with techniques such as supervised fine‑tuning and Direct Preference Optimization (DPO), followed by evaluation on metrics such as accuracy and F1. It is aimed at AI researchers and machine‑learning engineers who want a hands‑on reference implementation for building, customizing, and evaluating their own LLMs.
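As a quick illustration of the DPO technique mentioned above, here is a minimal sketch of the per-pair DPO loss in plain Python. This is not the project's own implementation; the function name and signature are hypothetical, and the inputs are assumed to be summed token log-probabilities of the chosen and rejected responses under the policy and a frozen reference model.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair (hypothetical helper, not the
    project's API). Inputs are log-probabilities of the full responses
    under the trainable policy and the frozen reference model."""
    # Implicit rewards: how much more the policy likes each response
    # than the reference model does, scaled by beta.
    chosen_reward = beta * (logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (logp_rejected - ref_logp_rejected)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)), written in a numerically stable form.
    return math.log1p(math.exp(-margin))

# When the policy favors the chosen response more than the reference
# does, the margin is positive and the loss drops below log(2).
pair_loss = dpo_loss(-1.0, -2.0, -1.5, -1.5)
```

With identical policy and reference log-probabilities the margin is zero and the loss equals log(2), which is the starting point of training when the policy is initialized from the reference model.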
How the donated funds are distributed
Kivach runs on the Obyte network, so every donation can be tracked on-chain.