intel/neural-compressor

SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime

Language: Python
Stars: 2266
Forks: 258
License: Apache License 2.0
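
For context on what the toolkit does, the sketch below shows a typical post-training INT8 quantization flow using the Neural Compressor 2.x-style Python API (PostTrainingQuantConfig plus quantization.fit). It is a minimal sketch under stated assumptions: the toy PyTorch model and random calibration data are placeholders not taken from this page, and newer 3.x releases expose a different API surface.

import torch
from torch.utils.data import DataLoader, TensorDataset

from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# Placeholder float model and random calibration data (assumptions for this
# sketch); any traceable torch.nn.Module and calibration dataloader would do.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
)
calib_set = TensorDataset(torch.randn(128, 64), torch.randint(0, 10, (128,)))
calib_loader = DataLoader(calib_set, batch_size=16)

# Static post-training quantization: calibrate activation ranges on a few
# batches, then return a quantized model that can be saved and reloaded.
conf = PostTrainingQuantConfig(approach="static")
q_model = fit(model=model, conf=conf, calib_dataloader=calib_loader)
q_model.save("./quantized_model")

The same fit entry point also accepts TensorFlow and ONNX Runtime models, and the config object selects the quantization approach (static, dynamic, or weight-only).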
Total donated
Undistributed
Recipients

How the donated funds are distributed:
- Support the dependencies
- Support the repos that depend on this repository

Top contributors

chensuyue: 333 contributions
guomingz: 276 contributions
mengniwang95: 247 contributions
xin3he: 237 contributions
PenghuiCheng: 205 contributions
ftian1: 161 contributions
yiliu30: 146 contributions
zehao-intel: 137 contributions
lvliang-intel: 134 contributions
changwangss: 125 contributions

Recent events

Kivach runs on the Obyte network, so all donations can be tracked there.

No events yet