vllm
Repository: vllm
Author: vllm-project · Source status: Clear source
A high-throughput and memory-efficient inference and serving engine for LLMs
Score basis: Clear source · High execution risk · Universal · Evidence completeness 67%
Repository: auto-round
Author: intel · Source status: Clear source
One of the SOTA quantization algorithms for high-accuracy, low-bit LLM inference, seamlessly optimized for CPU/XPU/CUDA, with multi-datatype support and full compatibility with vLLM, SGLang, and Transformers.
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%
Pre-install score
92 · Manual review
A single primary score used for ranking and pre-install decisions; evidence completeness is 65%.
Risk decision
Review required
metadata-only
Install readiness
script-backed · copy-only command
SkillTrust only shows install guidance and copy actions; it never executes installs.
Before you install
Review source, permissions, and execution risk first, then consider alternatives. Scores exist to prioritize review; they do not replace manual judgment.
Install guidance
This section covers the execution step: review the explicit or inferred tool fit, then decide whether to copy the commands.
Explicitly supported
Universal
Shown with the common install pattern for this tool.
git clone https://github.com/intel/auto-round.git
Review source and permissions first. If no explicit command is available, start from the repository guidance.
Current risk hints: metadata-only
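If you decide to proceed, the clone-and-install pattern above can be wrapped in a small script. This is a hypothetical sketch, not a command published by the project: the virtualenv step and the assumption that the repository installs via `pip install .` are added here, so check the repository's own README first.

```shell
# Hypothetical install sketch (not from the auto-round docs): clone the
# repository and install it into an isolated virtualenv so a risky setup
# script cannot touch the system Python.
install_auto_round() {
    git clone https://github.com/intel/auto-round.git &&
    cd auto-round &&
    python3 -m venv .venv &&
    . .venv/bin/activate &&
    pip install .      # assumes a standard setup.py/pyproject layout
}
# Review the repository contents before calling install_auto_round.
```

Defining the steps as a function keeps the review step explicit: nothing runs until you call it.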
Audit main stage
This stage turns 15 audit dimensions into one pre-install score; the constellation only explains the gaps instead of becoming a second score.
The purpose is visible, but source, permissions, or install evidence needs checking.
Current advice
Review before install
Review source, permissions, and execution details before copying commands.
Execution profile
High
Check command, file-write, and network behavior first.
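One way to act on this advice before running anything is to scan the scripts you are about to execute for network fetches, destructive file writes, and privilege escalation. The sketch below uses a made-up `sample-install.sh` to show the pattern; point the `grep` at the real repository's scripts instead.

```shell
# Create a stand-in install script (made up for illustration) and scan it
# for risky behavior: network fetches, recursive deletes, sudo.
cat > sample-install.sh <<'EOF'
pip install .
curl -fsSL https://example.com/extra.sh | sh
EOF

# -n prints line numbers so you can jump straight to each hit.
grep -nE 'curl|wget|rm -rf|sudo' sample-install.sh
# → 2:curl -fsSL https://example.com/extra.sh | sh
```

A hit is not proof of malice, but each flagged line deserves a read before you copy the command.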
Evidence completeness
65%
This shows how much public evidence supports the score; it is not a safety certification.
Start with the overall shape and the weakest three items, then use the list below for full labels, scores, and actions.
Start here
Read the weakest three first, then move into the full 15-item list.
Full 15-dimension list
The default view starts from the lowest scores; switch to risk-first when you want severity before score.
Repository
intel/auto-round
Author
intel
Community signal
995 stars · 103 forks
Favorites
0
Last updated
2026-04-14
Primary source
intel/auto-round
Source status
Clear source
Install method
script-backed
Why related: Same task category, Keyword overlap, Similar install method
Repository: llm-app
Author: pathwaycom · Source status: Clear source
Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data.
Score basis: Clear source · High execution risk · Universal · Evidence completeness 67%
Why related: Same task category, Keyword overlap, Similar install method
Repository: anything-llm
Author: Mintplex-Labs · Source status: Clear source
The all-in-one AI productivity accelerator.
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%
Why related: Same task category, Keyword overlap, Similar install method
Repository: litellm
Author: BerriAI · Source status: Clear source
Python SDK and Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing, and logging.
Score basis: Clear source · High execution risk · Universal · Evidence completeness 67%
Why related: Same task category, Keyword overlap, Similar install method
Repository: neural-compressor
Author: intel · Source status: Clear source
SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%
Repository: AI-Playground
Author: intel · Source status: Clear source
AI PC starter app for doing AI image creation, image stylizing, and chatbot on a PC powered by an Intel® Arc™ GPU.
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%
Repository: intel-xpu-backend-for-triton
Author: intel · Source status: Clear source
OpenAI Triton backend for Intel® GPUs
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%
Repository: ipex-llm-tutorial
Author: intel · Source status: Clear source
Accelerate LLM with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%