intel/neural-compressor

SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) and sparsity; leading model-compression techniques for TensorFlow, PyTorch, and ONNX Runtime

Language: Python · 2424 stars · 271 forks · Apache License 2.0
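The repository's focus is low-bit quantization: storing model weights in narrow numeric formats such as INT8 to shrink memory and speed up inference. As a minimal, library-independent sketch (this is an illustration of the underlying idea, not neural-compressor's actual API), symmetric per-tensor INT8 quantization maps each float weight to an integer in [-127, 127] through a single scale factor:

```python
# Illustrative symmetric INT8 quantization -- the kind of low-bit
# compression neural-compressor automates. Not the library's API.

def quantize_int8(values):
    """Map floats to INT8 range [-127, 127] with one per-tensor scale."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0
    # Round each value to the nearest representable integer and clamp.
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the quantized integers."""
    return [v * scale for v in q]

weights = [0.5, -1.0, 0.25, 0.75]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored value differs from the original by at most one scale step.
```

Lower-bit formats listed in the description (INT4/FP4/NF4) follow the same quantize/dequantize pattern with fewer representable levels, which is why calibration and accuracy-aware tuning (the library's specialty) matter more as the bit width drops.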

Top contributors

chensuyue: 335 contributions
guomingz: 276 contributions
xin3he: 268 contributions
mengniwang95: 249 contributions
PenghuiCheng: 205 contributions
ftian1: 161 contributions
yiliu30: 155 contributions
zehao-intel: 137 contributions
lvliang-intel: 134 contributions
Kaihui-intel: 130 contributions

Recent events

Kivach works on the Obyte network, so all donations can be tracked.

No events yet