HITL-EVAL
Repository: HITL-EVAL
Author: subhojeet-chowdhury · Source status: Clear source
Human-in-the-loop evaluation of LLM responses for few-shot prompting.
Score basis: Clear source · Risk needs review · Universal
Showing the first 24 of 8,962 indexed skills.
Repository: basilisk
Author: faheddd4 · Source status: Clear source
Automate adversarial prompt testing on LLMs to identify security weaknesses with an open-source AI red teaming framework.
Score basis: Clear source · Risk needs review · Universal
Repository: vigilant-ai
Author: srujan186 · Source status: Clear source
A working guardrail layer that classifies any LLM prompt as safe or harmful, sits in front of a live LLM, and measures exactly where it passes, where it fails, and why.
Score basis: Clear source · Risk needs review · Universal
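A guardrail layer like the one this card describes can be sketched as a classifier that sits in front of the model call. The blocklist, the `classify` helper, and the stand-in `llm` callable below are illustrative assumptions, not vigilant-ai's actual implementation.

```python
# Minimal guardrail sketch: label a prompt "safe" or "harmful" before it
# reaches a live LLM, and record the verdict so pass/fail rates can be
# measured. The blocklist is an invented placeholder, not the project's
# real classifier.
BLOCKED_PATTERNS = (
    "ignore previous instructions",
    "reveal the system prompt",
    "disable safety",
)

def classify(prompt: str) -> str:
    """Classify a prompt by substring match against the blocklist."""
    lowered = prompt.lower()
    return "harmful" if any(p in lowered for p in BLOCKED_PATTERNS) else "safe"

def guarded_call(prompt: str, llm) -> dict:
    """Forward only safe prompts to the model; block the rest."""
    verdict = classify(prompt)
    response = llm(prompt) if verdict == "safe" else None
    return {"prompt": prompt, "verdict": verdict, "response": response}
```

Running `guarded_call` over a labeled evaluation set and comparing verdicts against the labels is one way to measure exactly where such a filter passes and where it fails.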
Repository: LLM-Prompt-Optimizer
Author: queentizy · Source status: Clear source
Automated engine for testing and refining Large Language Model prompts.
Score basis: Clear source · Risk needs review · Universal
Repository: llm-router
Author: Fhaz5000 · Source status: Clear source
Route prompts efficiently to appropriate modes for large language models using a lightweight, single-header C++ library.
Score basis: Clear source · Risk needs review · Universal
Repository: Kagantic-vault-structure
Author: Asim00740 · Source status: Clear source
Provide a structured template for Obsidian-style markdown vaults to improve LLM agent navigation and human readability in Git repositories.
Score basis: Clear source · Risk needs review · Universal
Repository: pack-my-code
Author: Crystainexhaustible329 · Source status: Clear source
Package code context into clean markdown for LLM prompts using a minimalist, lightweight tool that respects .gitignore and requires Git installed.
Score basis: Clear source · Risk needs review · Universal
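The packing step this card describes can be sketched in a few lines. Real `.gitignore` semantics (globs, negation, nesting) are far richer than the plain-name matching assumed here, and the file-suffix filter is likewise an assumption for illustration, not pack-my-code's actual behavior.

```python
# Sketch of packing source files into one markdown context string for an
# LLM prompt. Only plain file/directory names from .gitignore are
# honored, as a simplifying assumption.
from pathlib import Path

def load_ignores(root: Path) -> set:
    """Read plain names from .gitignore, skipping blanks and comments."""
    gitignore = root / ".gitignore"
    if not gitignore.exists():
        return set()
    return {line.strip() for line in gitignore.read_text().splitlines()
            if line.strip() and not line.startswith("#")}

def pack(root: Path, suffixes=(".py", ".md")) -> str:
    """Concatenate matching files as fenced markdown sections."""
    ignores = load_ignores(root)
    fence = "`" * 3  # built at runtime to avoid a literal fence here
    chunks = []
    for path in sorted(root.rglob("*")):
        if not path.is_file() or path.suffix not in suffixes:
            continue
        rel = path.relative_to(root)
        if any(part in ignores for part in rel.parts):
            continue  # some path component is ignored
        chunks.append(f"## {rel}\n{fence}\n{path.read_text()}{fence}")
    return "\n\n".join(chunks)
```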
Repository: agent-factory
Author: cornhuskinghemophiliab653 · Source status: Clear source
Provide and generate AI agent prompts with industry guides, glossaries, and multi-provider LLM support for scalable, automated content creation.
Score basis: Clear source · Risk needs review · Universal
Repository: LLM_Chat_Navigator
Author: snjvbrla · Source status: Clear source
Enhance long AI conversations with sidebar prompts, quick navigation, and reply outlines for efficient browsing and context tracking across supported sites.
Score basis: Clear source · Risk needs review · Universal
Repository: LLMInjector
Author: Logarithmic-blackafrican589 · Source status: Clear source
Automate prompt injection testing in Burp Suite to find and analyze vulnerabilities in large language model integrations.
Score basis: Clear source · Risk needs review · Universal
Repository: pydantic-agent-template
Author: BGmano · Source status: Clear source
Build LLM-powered AI agents with a minimal FastAPI and Pydantic template featuring async SQLAlchemy, uv for dependencies, and Alembic migrations.
Score basis: Clear source · Risk needs review · Universal
Repository: llm-cost-calculator
Author: akunba3970 · Source status: Clear source
Estimate token usage and API costs for large language models to help developers manage AI prompt expenses before scaling workloads.
Score basis: Clear source · Risk needs review · Universal
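A cost estimator of the kind this card describes can be sketched as below. The model names and per-1K-token prices are invented placeholders, not any provider's real rates, and the token count is a rough words-based heuristic rather than a real tokenizer.

```python
# Sketch of an LLM cost estimator. Prices and model names are
# illustrative placeholders; token counting approximates ~4 tokens per
# 3 words instead of using a real tokenizer.
PRICES_PER_1K = {  # hypothetical model -> (input $, output $) per 1K tokens
    "small-model": (0.0005, 0.0015),
    "large-model": (0.01, 0.03),
}

def estimate_tokens(text: str) -> int:
    """Rough heuristic: about 4 tokens per 3 words."""
    return max(1, round(len(text.split()) * 4 / 3))

def estimate_cost(model: str, prompt: str, expected_output_tokens: int) -> float:
    """Estimated dollar cost of one call before running it."""
    in_price, out_price = PRICES_PER_1K[model]
    in_tokens = estimate_tokens(prompt)
    return in_tokens / 1000 * in_price + expected_output_tokens / 1000 * out_price
```

Estimating before scaling lets a developer multiply the per-call figure by expected request volume and compare models ahead of committing to a workload.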
Repository: The-Senior-Dev-s-LLM-Prompt-Library-100-Production-Ready-System-Prompts
Author: Exponential-genushelvella846 · Source status: Clear source
Build production-ready LLM system prompts for senior dev workflows, from React and SQL to refactoring and architecture.
Score basis: Clear source · Risk needs review · Universal
Repository: repo-context-exporter
Author: msaly · Source status: Clear source
Exports source repositories into compact Markdown context files for LLM prompting and codebase review.
Score basis: Clear source · Risk needs review · Universal
Repository: LitPilot
Author: Sito0914 · Source status: Clear source
Automate PhD literature reviews from PDFs with LLM prompts, evidence-linked notes, Excel tracking, and cross-paper synthesis.
Score basis: Clear source · Risk needs review · Universal
Repository: veil
Author: emilio4906 · Source status: Clear source
Secure LLM prompts with end-to-end encryption for inference examples.
Score basis: Clear source · Risk needs review · Universal
Repository: Worm-GPT-LLM-2026
Author: jabesotienobecky-maker · Source status: Clear source
Test, analyze, and automate LLM red-team prompt delivery for boundary checks, jailbreak research, and prompt injection testing.
Score basis: Clear source · Risk needs review · Universal
Repository: llm-chat-app-template
Author: kp9696 · Source status: Clear source
GitHub repository kp9696/llm-chat-app-template; metadata recovered from GitHub Search API.
Score basis: Clear source · Risk needs review · Universal
Repository: tutorial-llm-prompt
Author: eastmoon · Source status: Clear source
Tutorial and learning report with modern Large Language Model (LLM) prompt scripts.
Score basis: Clear source · Risk needs review · Universal
Repository: QuietPrompt
Author: BhomeshRazdan · Source status: Clear source
QuietPrompt is a local-first AI tool for coding.
Score basis: Clear source · Risk needs review · Universal
Repository: MCP
Author: adityark-gh · Source status: Clear source
Prompts are for users, resources are for the application, and tools are for the model.
Score basis: Clear source · Risk needs review · Universal
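The division of labor that card states can be made concrete with a toy registry. Everything below is invented for illustration and mirrors the concept only; it is not the real MCP SDK API.

```python
# Toy illustration of the MCP split: prompts are templates a user picks,
# resources are data the application manages, and tools are functions
# the model may invoke. Names and shapes are hypothetical.
server = {
    "prompts":   {"summarize": "Summarize the following text:\n{text}"},  # user-facing
    "resources": {"file://notes.md": "meeting notes..."},                 # application-managed
    "tools":     {"add": lambda a, b: a + b},                             # model-invocable
}

def render_prompt(name: str, **kwargs) -> str:
    """The user selects a prompt template; the host fills it in."""
    return server["prompts"][name].format(**kwargs)

def call_tool(name: str, *args):
    """The model requests a tool call; the server executes it."""
    return server["tools"][name](*args)
```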
Repository: Memoria
Author: RazMake · Source status: Clear source
A VS Code extension that transforms a given workspace into a notebook with fast scaffolding templates, editing macros, and blueprint‑specific assistants powered by MCP servers.
Score basis: Clear source · Risk needs review · Universal
Repository: mcp-tool-explorer
Author: jurgen178 · Source status: Clear source
Inspect and test MCP servers directly inside VS Code: browse Tools, Resources, and Prompts; run tool calls; and view history and syntax-highlighted results.
Score basis: Clear source · Risk needs review · Universal
Repository: anamnesis
Author: Chepech · Source status: Clear source
Anamnesis is an Obsidian plugin that turns notes into a structured, queryable memory for AI agents by indexing, enriching, and embedding content into a local vector store, then exposing retrieval via MCP so agents can ac…
Score basis: Clear source · Risk needs review · Universal