ducmanh1312/llm-from-scratch

Jupyter Notebook · No license
The project demonstrates how to train large language models from the ground up: data preprocessing, model selection (e.g., GPT‑2, Llama‑3.2, Qwen‑3), pre‑training or weight initialization, and downstream fine‑tuning with techniques such as supervised fine‑tuning and Direct Preference Optimization (DPO), followed by evaluation on metrics such as accuracy and F1. It is aimed at AI researchers and machine‑learning engineers who want a hands‑on reference implementation for building, customizing, and evaluating their own LLMs.
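To make the DPO step mentioned above concrete, here is a minimal sketch of the per-pair DPO loss. This is an illustration of the standard formulation, not code taken from this repository; the function name and the example log-probabilities are hypothetical.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for a single preference pair.

    Each argument is the summed log-probability of a full response under
    the trainable policy or the frozen reference model; beta scales the
    implicit KL penalty toward the reference.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    margin = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # -log(sigmoid(margin)), written as log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))

# When policy and reference agree exactly (zero margin), the loss is log 2.
print(dpo_loss(-12.0, -15.0, -12.0, -15.0))  # ≈ 0.6931
```

Minimizing this loss pushes the policy to widen its chosen-vs-rejected log-probability margin beyond the reference model's, which is what the preference fine-tuning stage described above aims for.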
