Agentic AI Engineering
Repository: agentic-ai-engineering
Author: agenticloops-ai · Source status: Clear source
Hands-on tutorials for building AI agents from scratch.
Score basis: Clear source · Low risk signals · Universal
Compare skills
Pick 2–4 skills and compare what really matters: fit, risk, install effort, and community signal.
Comparison matrix
Highlights mark the current best values; tooltips explain the diff and best-value rules.
SAS-v2.1 diff rules / risk tag notes
Start with the matrix. Open this section when you need to understand audit grades, top threats, control gaps, and best-value highlights.
Suggested baseline
Search to add skills, or paste 2–4 comma-separated slugs.
How differences are detected
A row is marked different when selected skills have distinct values. Only-differences mode hides rows that are identical.
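The difference rule above can be sketched in a few lines. This is a hypothetical illustration, not the tool's actual implementation; the function and variable names are invented for the example.

```python
def diff_rows(matrix: dict[str, list[str]], only_differences: bool = False) -> dict[str, bool]:
    """Map each dimension (row) to whether the selected skills differ on it.

    A row counts as "different" when the selected skills' values are not
    all identical; only-differences mode keeps just those rows.
    """
    flags = {dim: len(set(values)) > 1 for dim, values in matrix.items()}
    if only_differences:
        return {dim: True for dim, is_diff in flags.items() if is_diff}
    return flags

# Values taken from the comparison matrix below (evals, Nemotron, DemoGPT):
matrix = {
    "Evidence confidence": ["65%", "65%", "67%"],
    "Trust score": ["76", "82", "82"],
    "Execution risk": ["High", "High", "High"],
}
```

With these values, "Execution risk" is identical across all three skills, so only-differences mode would hide that row while keeping the other two.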
How best values are highlighted
Audit score, evidence confidence, trust score, and community signal prefer higher values; execution risk and install friction prefer lower values.
How to read risk tags
Risk tags come from SAS-v2.1 public-evidence signals and point to command, network, secret, context, or supply-chain items to review before install.
Selected audit signals
evals
Execution risk: High
Threat tags: unexpected code execution, data exfiltration, human approval gap
Control gaps: missing license, broad permissions, shell without guardrails
NVIDIA-Nemotron-3-Super
Execution risk: High
Threat tags: unexpected code execution, data exfiltration, human approval gap
Control gaps: missing license, broad permissions, shell without guardrails
DemoGPT
Execution risk: High
Threat tags: prompt injection, tool poisoning, unexpected code execution
Control gaps: missing license, broad permissions, shell without guardrails
| Dimension | evals | NVIDIA-Nemotron-3-Super | DemoGPT |
|---|---|---|---|
| **SAS-v2.1 pre-install audit** | | | |
| Threat tags | unexpected code execution, data exfiltration, human approval gap | unexpected code execution, data exfiltration, human approval gap | prompt injection, tool poisoning, unexpected code execution |
| Evidence confidence | 65% | 65% | 67% |
| **Source & provenance** | | | |
| Provenance | strands-agents/evals | cobusgreyling/NVIDIA-Nemotron-3-Super | melih-unsal/DemoGPT |
| Freshness | 2026-04-02 | 2026-04-02 | 2026-04-01 |
| **Risk & trust** | | | |
| Trust score | 76 | 82 | 82 |
| **Community** | | | |
| Stars | 101 | 26 | 1.9K |
Repository: splitrail
Author: Piebald-AI · Source status: Clear source
Fast, cross-platform, real-time token usage tracker and cost monitor for Gemini CLI / Claude Code / Codex CLI / Qwen Code / Cline / Roo Code / Kilo Code / GitHub Copilot / OpenCode / Pi Agent / Piebald.
Score basis: Clear source · Risk needs review · Universal
Repository: AgentEval
Author: AgentEvalHQ · Source status: Clear source
AgentEval is the comprehensive .NET toolkit for AI agent evaluation—tool usage validation, RAG quality metrics, stochastic evaluation, and model comparison—built first for Microsoft Agent Framework (MAF) and Microsoft.Ex
Score basis: Clear source · Low risk signals · Universal
Repository: mcp-for-beginners
Author: microsoft · Source status: Clear source
This open-source curriculum introduces the fundamentals of Model Context Protocol (MCP) through real-world, cross-language examples in .NET, Java, TypeScript, JavaScript, Rust and Python.
Score basis: Clear source · Low risk signals · Universal