Repository: LLMs-from-scratch
Author: rasbt · Source status: Clear source
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%
Repository: llm-scaler
Author: intel · Source status: Clear source
GitHub repository intel/llm-scaler; metadata recovered from GitHub Search API.
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%
Pre-install score
87 · Manual review
One primary score used for ranking and pre-install decisions; evidence completeness 65%.
Risk decision
Review required
metadata-only
Install readiness
script-backed · copy-only command
SkillTrust only shows install guidance and copy actions; it never executes installs.
Before you install
Review the source, permissions, and execution risk first, then consider alternatives. Scores are meant to prioritize review; they do not replace manual judgment.
Install guidance
This section covers the execution step: review the explicit or inferred tool fit, then decide whether to copy the commands.
Explicitly supported
Universal
Shown with the common install pattern for this tool.
git clone https://github.com/intel/llm-scaler.git
Review source and permissions first. If no explicit command is available, start from the repository guidance.
Current risk hints: metadata-only
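The copy-only guidance above amounts to a clone-then-review loop: fetch the sources without executing anything, then inspect what an install would do. A minimal sketch (the review commands are illustrative suggestions, not part of SkillTrust's output):

```shell
# Fetch the sources only; a clone executes nothing from the repository.
git clone https://github.com/intel/llm-scaler.git
cd llm-scaler

# Review before installing: read the project's own guidance first,
sed -n '1,40p' README.md

# then flag any shell scripts that download and run remote code.
grep -rn -e 'curl' -e 'wget' --include='*.sh' . || true
```

Only after this review, decide whether to copy the repository's documented install commands.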
Audit main stage
This stage condenses 15 audit dimensions into one pre-install score; the constellation view only explains the gaps rather than acting as a second score.
The purpose is visible, but source, permissions, or install evidence needs checking.
Current advice
Review before install
Review source, permissions, and execution details before copying commands.
Execution profile
High
Check command, file-write, and network behavior first.
Evidence completeness
65%
This shows how much public evidence supports the score; it is not a safety certification.
Start with the overall shape and the weakest three items, then use the list below for full labels, scores, and actions.
Start here
Read the weakest three first, then move into the full 15-item list.
Full 15-dimension list
The default view starts from the lowest scores; switch to risk-first when you want severity before score.
Repository
intel/llm-scaler
Author
intel
Community signal
250 stars · 26 forks
Favorites
0
Last updated
2026-04-14
Primary source
intel/llm-scaler
Source status
Clear source
Install method
script-backed
Related: Same task category...
Why related: Same task category, Keyword overlap, Similar install method
Repository: llm-app
Author: pathwaycom · Source status: Clear source
Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data.
Score basis: Clear source · High execution risk · Universal · Evidence completeness 67%
Related: Same task category...
Why related: Same task category, Keyword overlap, Similar install method
Repository: one-api
Author: songquanpeng · Source status: Clear source
LLM API management & distribution system supporting mainstream models including OpenAI, Azure, Anthropic Claude, Google Gemini, DeepSeek, ByteDance Doubao, ChatGLM, ERNIE Bot, iFlytek Spark, Tongyi Qianwen, 360 Zhinao, and Tencent Hunyuan, with unified API adaptation; usable for key management and redistribution. Ships as a single executable with a Docker image for one-click deployment, ready to use out of the box.
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%
Related: Same task category...
Why related: Same task category, Keyword overlap, Similar install method
Repository: andrej-karpathy-skills
Author: forrestchang · Source status: Clear source
A single CLAUDE.md file to improve Claude Code behavior, derived from Andrej Karpathy's observations on LLM coding pitfalls.
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%
Related: Same task category...
Why related: Same task category, Keyword overlap, Similar install method
Repository: neural-compressor
Author: intel · Source status: Clear source
SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%
Repository: auto-round
Author: intel · Source status: Clear source
One of the SOTA quantization algorithms for high-accuracy low-bit LLM inference, seamlessly optimized for CPU/XPU/CUDA, with multi-datatype support and full compatibility with vLLM, SGLang, and Transformers.
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%
Repository: AI-Playground
Author: intel · Source status: Clear source
AI PC starter app for doing AI image creation, image stylizing, and chatbot on a PC powered by an Intel® Arc™ GPU.
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%
Repository: intel-xpu-backend-for-triton
Author: intel · Source status: Clear source
OpenAI Triton backend for Intel® GPUs
Score basis: Clear source · High execution risk · Universal · Evidence completeness 65%