# skill-audit

Audit agent skill definitions for security, completeness, and compatibility across different formats.
As AI agent skills become portable across platforms (Codex, Claude Code, OpenClaw, OpenCode), ensuring they're secure and well-documented is critical. skill-audit analyzes skill definitions to catch dangerous patterns before they cause problems.
## Installation

```bash
npm install -g skill-audit
```

Or run directly with npx:
```bash
npx skill-audit scan ./my-skills
```

## Quick Start

Scan a directory for skills and audit them:
```bash
skill-audit scan ./skills
```

Audit a single skill:
```bash
skill-audit check ./my-skill/SKILL.md
```

List discovered skills without auditing:
```bash
skill-audit list ./
```

## Security Checks

### Critical
- Remote code execution (`eval`, `curl | sh`)
- Access to sensitive files (`/etc/passwd`, `.ssh`, `.env`)
- Fork bombs and dangerous shell patterns
- Direct writes to block devices
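For instance, a skill helper that shells out like this would trip the checks above (an illustrative snippet, not taken from skill-audit's rule set):

```js
// Illustrative only: patterns skill-audit is documented to flag.
import { execSync } from 'node:child_process'; // child_process use: high severity

// Piping a remote script into a shell: critical (remote code execution)
execSync('curl -fsSL https://example.com/install.sh | sh');
```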
### High

- Process spawning (`child_process`, `spawn`, `exec`)
- `sudo` usage requiring elevated privileges
- Hardcoded API key references
- Remote shell script downloads
### Medium

- File write operations without path validation
- Network downloads (`wget`, `curl`)
- Permissive `chmod` operations
- Unpinned credentials in definitions
### Low

- Potential sensitive data logging
- Missing error handling in scripts
## Quality Checks

- Missing description section
- No usage examples
- Missing usage instructions
- Excessive line length
- Broken anchor links
- Unpinned dependencies
- Missing shell error handling (`set -e`)
- No input validation
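As a rough sketch, a skill definition that passes the documentation checks might look like this (the exact section names skill-audit looks for are an assumption here):

```markdown
# my-tool

## Description

Formats JSON files in place and reports parse errors.

## Usage

Run the tool against a file:

    my-tool format ./data.json
```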
## Supported Formats

skill-audit automatically detects and handles:
- **OpenClaw**: `SKILL.md` files
- **Codex**: `skill.yaml` and `manifest.json`
- **Claude Code**: `.claude/commands/*.md`
- **OpenCode**: `opencode.json` configuration
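For example, a single scan of a repository laid out like this (a hypothetical layout) would pick up all four formats:

```text
repo/
├── skills/formatter/SKILL.md     # OpenClaw
├── skills/deploy/skill.yaml     # Codex
├── .claude/commands/review.md    # Claude Code
└── opencode.json                 # OpenCode
```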
## Commands

### `skill-audit scan`

Recursively scan a directory for skills and audit everything found:
```bash
skill-audit scan [path] [options]
```
```text
Options:
  -v, --verbose        Show detailed output including quality suggestions
  -j, --json           Output results as JSON
  --min-score <score>  Exit with error if any skill scores below threshold
  --no-summary         Skip the summary section
```

### `skill-audit check`

Audit a single skill file or directory:
```bash
skill-audit check <path> [options]
```
```text
Options:
  -v, --verbose  Show detailed output
  -j, --json     Output as JSON
```

Exits with code 1 if critical or high severity issues are found.
### `skill-audit list`

Discover skills in a directory without running audits:
```bash
skill-audit list [path] [options]
```
```text
Options:
  -j, --json  Output as JSON
```

## Scoring

Each skill receives three scores:
- **Security Score (0-100)**: Starts at 100; points are deducted for each finding based on severity
- **Quality Score (0-100)**: Based on documentation completeness and best practices
- **Overall Score**: Weighted average (60% security, 40% quality)
Letter grades:
- A: 90-100
- B: 80-89
- C: 70-79
- D: 60-69
- F: Below 60
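For example, a skill scoring 80 on security and 90 on quality gets an overall score of 0.6 × 80 + 0.4 × 90 = 84, which maps to a B. A minimal sketch of that arithmetic (these helpers are illustrative, not part of skill-audit's API):

```js
// Reproduces the documented weighting and grade bands; not skill-audit internals.
const overall = (security, quality) => 0.6 * security + 0.4 * quality;

const grade = (score) =>
  score >= 90 ? 'A' : score >= 80 ? 'B' : score >= 70 ? 'C' : score >= 60 ? 'D' : 'F';

console.log(overall(80, 90), grade(overall(80, 90))); // 84 'B'
```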
## CI Integration

Use skill-audit in CI to enforce minimum standards:
```yaml
# GitHub Actions example
- name: Audit skills
  run: npx skill-audit scan ./skills --min-score 70
```

```bash
# Pre-commit hook
skill-audit check ./my-skill --min-score 80 || exit 1
```

## Programmatic API

```js
import { auditPath, summarize } from 'skill-audit';

const results = await auditPath('./skills');
const summary = summarize(results);
console.log(`Audited ${summary.totalSkills} skills`);
console.log(`Average score: ${summary.averageScore}`);
console.log(`Critical issues: ${summary.criticalCount}`);
```
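The same entry points can back a custom CI gate. A sketch that uses only the summary fields shown above:

```js
// gate.mjs: fail the build when any critical finding is present (illustrative).
import { auditPath, summarize } from 'skill-audit';

const summary = summarize(await auditPath('./skills'));

if (summary.criticalCount > 0) {
  console.error(`Found ${summary.criticalCount} critical issue(s)`);
  process.exit(1);
}
console.log(`All ${summary.totalSkills} skills passed (average score ${summary.averageScore})`);
```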
## Examples

Audit the OpenClaw skills directory:

```bash
skill-audit scan ~/.npm-global/lib/node_modules/openclaw/skills -v
```

Check a specific skill with JSON output:
```bash
skill-audit check ./skills/my-tool/SKILL.md --json
```

Find all skills in a monorepo:
```bash
skill-audit list ./packages --json | jq '.[] | .name'
```

Fail CI if there are any critical issues:
```bash
skill-audit scan . --min-score 60
```

## Why skill-audit?

Agent skills are becoming the standard way to package reusable AI capabilities. With repos like `openai/skills` providing skill catalogs and tools like `compound-engineering-plugin` enabling cross-platform skill sharing, security and quality become paramount.
skill-audit provides:
- Early detection of dangerous patterns before production
- Consistent quality standards across skill libraries
- CI-friendly validation for skill repositories
- Cross-format compatibility analysis
## License

MIT