A unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment frameworks such as TensorRT-LLM, TensorRT, and vLLM to improve inference speed.
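To make "quantization" concrete, here is a generic sketch of symmetric int8 weight quantization — the basic idea behind one of the techniques listed above. This is an illustrative, library-independent example, not this package's API; the function names are hypothetical.

```python
# Illustrative sketch of symmetric int8 post-training quantization.
# Not this library's API; a minimal, dependency-free demonstration.

def quantize_int8(weights):
    """Map floats to int8 codes using a single per-tensor scale."""
    # Scale so the largest-magnitude weight maps near +/-127.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [v * scale for v in q]

# Quantize, then reconstruct: small values lose some precision,
# which is the storage/accuracy trade-off quantization makes.
w = [0.5, -1.27, 0.03]
q, s = quantize_int8(w)
restored = dequantize(q, s)
```

Real optimizers apply this per-channel, calibrate scales on sample data, and target hardware int8 kernels; the sketch only shows the core mapping.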
Pre-install review · source, risk, and alternatives
Trust level
92 · High trust
Strong recovered-source and maintenance signals.
Risk decision
Review required
metadata-only
Install readiness
script-backed · copy-only command
SkillTrust only shows install guidance and copy actions; it never executes installs.
Supported tools can change install steps; Universal entries need source review.