This plugin provides a simple way to chat with AI in JMeter. Feather Wand serves as your intelligent assistant for JMeter test plan development, optimization, and troubleshooting.
🪄 About the name: The name "Feather Wand" was suggested by my children, who were inspired by an episode of the animated show Bluey. In the episode, a simple feather becomes a magical wand that transforms the ordinary into something special (heavy!) - much like how this plugin aims to transform your JMeter experience with a touch of AI magic!
- Chat with AI directly within JMeter using Claude, OpenAI, or Ollama models
- New! Multi-AI CLI Terminal: run Claude Code, OpenAI Codex CLI, Gemini CLI, or OpenCode interactively within JMeter, switching between available CLIs via a dropdown selector, with full awareness of your current test plan structure.
- New! Streaming AI responses: AI responses appear token-by-token in real-time (supports Claude, OpenAI, and Ollama)
- New! Stop button: Cancel an ongoing AI response at any time with the Stop button
- New! Response chime: Audio notification when AI responses complete (configurable)
- Get suggestions for JMeter elements based on your needs
- Ask questions about JMeter functionality and best practices
- Command intellisense with auto-completion for special commands in the chat input box
- Use the `@this` command to get detailed information about the currently selected element
- Use the `@code` command to extract code blocks from AI responses into the JSR223 editor
- Use the `@usage` command to get usage examples for JMeter components
- Use the `@lint` command to automatically rename elements in your test plan for better organization and readability
- Use the `@optimize` command to get optimization recommendations for the currently selected element in your test plan
- Use the `@wrap` command to intelligently group HTTP samplers under Transaction Controllers for better organization and reporting
- Use the right-click context menu to refactor code, format code, and add functions in the JSR223 script editor
- Customize AI behavior through configuration properties
- Switch between Claude, OpenAI, and Ollama models based on your preference or specific needs
- Install the JMeter Plugins Manager.
- Restart JMeter.
- Launch the Plugins Manager.
- Search for `feather wand` under the `Available Plugins` tab.
- Select it and click the `Apply Changes and Restart JMeter` button.
- Download the latest release JAR file from the Releases page.
- Place the JAR file in the `lib/ext` directory of your JMeter installation.
- Copy the contents of `jmeter-ai-sample.properties` into your `jmeter.properties` file (located in the `bin` directory of your JMeter installation) or into your `user.properties` file.
- Configure your API key(s) for Anthropic and/or OpenAI in the properties file.
- Restart JMeter.
- The Feather Wand plugin will appear as a new component in the right-click menu under "Add" > "Non-Test Elements" > "Feather Wand".
The Feather Wand plugin can be configured through JMeter properties. Copy the contents of the `jmeter-ai-sample.properties` file into your `jmeter.properties` or `user.properties` file and modify the properties as needed.
| Property | Description | Default Value |
|---|---|---|
| `jmeter.ai.streaming.enabled` | Enable real-time streaming of AI responses (token-by-token display) | `true` |
| `jmeter.ai.response.chime` | Play an audio chime when AI responses complete | `false` |
When streaming is enabled (the default), AI responses appear progressively in the chat as they are generated. You can cancel the response at any time using the Stop button that appears next to the Send button. This feature is supported by all three AI services: Claude, OpenAI, and Ollama. If you prefer to receive the complete response at once (non-streaming), set `jmeter.ai.streaming.enabled=false`.
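For example, to receive complete responses at once and hear a chime when each one finishes, you could add the following to your `user.properties` file:

```properties
# Disable token-by-token streaming; show each response in one piece
jmeter.ai.streaming.enabled=false
# Play a chime when an AI response completes
jmeter.ai.response.chime=true
```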
| Property | Description | Default Value |
|---|---|---|
| `anthropic.api.key` | Your Claude API key | Required |
| `claude.default.model` | Default Claude model to use | `claude-sonnet-4-6` |
| `claude.temperature` | Temperature setting (0.0-1.0) | `0.5` |
| `claude.max.tokens` | Maximum tokens for AI responses | `1024` |
| `claude.max.history.size` | Maximum conversation history size | `10` |
| `claude.system.prompt` | System prompt that guides Claude's responses | See sample properties file |
| `anthropic.log.level` | Logging level for Anthropic API requests ("info" or "debug") | Empty (disabled) |
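For example, a minimal Claude configuration in your properties file might look like this (the key below is a placeholder; use your own):

```properties
# Anthropic (Claude) settings
# Placeholder key - replace with your own Anthropic API key
anthropic.api.key=sk-ant-xxxxxxxx
claude.default.model=claude-sonnet-4-6
claude.temperature=0.5
claude.max.tokens=1024
```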
| Property | Description | Default Value |
|---|---|---|
| `openai.api.key` | Your OpenAI API key | Required |
| `openai.default.model` | Default OpenAI model to use | `gpt-4o` |
| `openai.temperature` | Temperature setting (0.0-1.0) | `0.5` |
| `openai.max.tokens` | Maximum tokens for AI responses | `1024` |
| `openai.max.history.size` | Maximum conversation history size | `10` |
| `openai.system.prompt` | System prompt that guides OpenAI's responses | See sample properties file |
| `openai.log.level` | Logging level for OpenAI API requests ("INFO" or "DEBUG") | Empty (disabled) |
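Similarly, a minimal OpenAI configuration (again with a placeholder key):

```properties
# OpenAI settings
# Placeholder key - replace with your own OpenAI API key
openai.api.key=sk-xxxxxxxx
openai.default.model=gpt-4o
openai.max.history.size=10
```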
| Property | Description | Default Value |
|---|---|---|
| `ollama.host` | Ollama server host | `http://localhost` |
| `ollama.port` | Ollama server port | `11434` |
| `ollama.default.model` | Default Ollama model to use | `deepseek-r1:1.5b` |
| `ollama.temperature` | Temperature setting (0.0-1.0) | `0.5` |
| `ollama.max.history.size` | Maximum conversation history size | `10` |
| `ollama.thinking.mode` | Enable extended thinking (ENABLED or DISABLED) | `DISABLED` |
| `ollama.thinking.level` | Thinking depth (LOW, MEDIUM, or HIGH) | `MEDIUM` |
| `ollama.request.timeout.seconds` | HTTP request timeout in seconds (increase for thinking models) | `120` |
| `ollama.system.prompt` | System prompt that guides Ollama's responses | See sample properties file |
⚠️ When `ollama.thinking.mode=ENABLED`, increase `ollama.request.timeout.seconds` to at least `300` to avoid timeout errors during long inference.
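For example, to enable extended thinking with a timeout that accommodates slow inference:

```properties
# Extended thinking needs a longer request timeout
ollama.thinking.mode=ENABLED
ollama.thinking.level=MEDIUM
ollama.request.timeout.seconds=300
```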
| Property | Description | Default Value |
|---|---|---|
| `jmeter.ai.refactoring.enabled` | Enable code refactoring for the JSR223 script editor | `true` |
| `jmeter.ai.service.type` | The AI service to use for code refactoring ("openai" or "anthropic") | `"openai"` |
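For example, to keep refactoring enabled but route it through Claude instead of OpenAI:

```properties
jmeter.ai.refactoring.enabled=true
jmeter.ai.service.type=anthropic
```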
The AI CLI Terminal supports Claude Code, OpenAI Codex CLI, Gemini CLI, and OpenCode. The plugin
automatically detects which CLIs are available on your system's PATH and presents them in a dropdown selector.
Prerequisites:
| CLI | Binary Name | Installation Guide |
|---|---|---|
| Claude Code | `claude` | Claude Code Quickstart |
| OpenAI Codex CLI | `codex` | OpenAI Codex CLI |
| Google Gemini CLI | `gemini` | Google Gemini CLI |
| OpenCode | `opencode` | OpenCode |
| Property | Description | Default Value |
|---|---|---|
| `jmeter.ai.terminal.claudecode.enabled` | Enable the embedded AI CLI Terminal feature (applies to all supported CLIs) | `true` |
| `jmeter.ai.terminal.claudecode.path` | Full path to the claude executable (e.g., `/usr/local/bin/claude` or `C:\...`) | Empty (auto-detect) |
| `jmeter.ai.terminal.claudecode.prompt` | Custom system prompt passed to the CLI (not recommended to change) | See sample properties file |
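For example, to pin the terminal to a specific Claude Code binary instead of relying on `PATH` auto-detection (the path below is illustrative):

```properties
jmeter.ai.terminal.claudecode.enabled=true
# Illustrative path - set only if auto-detection fails
jmeter.ai.terminal.claudecode.path=/usr/local/bin/claude
```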
The system prompt defines how the AI (Claude, OpenAI, or Ollama) responds to your queries. You can customize this in the properties file to focus on specific aspects of JMeter or add your own guidelines.
`claude.system.prompt`, `openai.system.prompt`, and `ollama.system.prompt` can be configured separately in the properties file. The default prompts are designed to provide helpful, JMeter-specific responses tailored to each AI model's capabilities.
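For example, you could override one of the prompts to narrow the assistant's focus (the prompt text below is purely illustrative; the shipped defaults are in `jmeter-ai-sample.properties`):

```properties
# Illustrative override - the default prompt lives in jmeter-ai-sample.properties
claude.system.prompt=You are a JMeter expert assistant. Focus on performance testing best practices and keep answers concise.
```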
Use the @usage command to view detailed token usage information for your AI interactions:
- How to Use:
  - Simply type `@usage` in the chat
  - The command will show usage statistics for either OpenAI or Anthropic, depending on which service you're using
- Information Provided:
  - Overall summary of total conversations and tokens used
  - Detailed breakdown of recent conversations (last 10)
  - Token usage per conversation (input and output tokens)
  - Timestamps and model information
  - Link to official pricing pages for cost information
- Example Output:

  ```
  Total Conversations: 5
  Total Input Tokens: 1500
  Total Output Tokens: 2000
  Total Tokens: 3500

  Conversation 1: 300 input, 400 output tokens
  Conversation 2: 250 input, 350 output tokens
  ...
  ```
- Benefits:
  - Track your API usage and costs
  - Monitor token consumption patterns
  - Identify potential optimization opportunities
  - Keep track of conversation history
Use the @this command in your message to get detailed information about the currently selected element in your test
plan. For example:
- "Tell me about @this element"
- "How can I optimize @this?"
- "What are the best practices for @this?"
Feather Wand will analyze the selected element and provide tailored information and advice.
Use the @optimize command (or simply type "optimize") to get optimization recommendations for the currently selected
element in your test plan. This command will:
- Analyze the selected element's configuration
- Identify potential performance bottlenecks
- Suggest specific, actionable improvements
- Provide best practices for that element type
For example, if you have an HTTP Request sampler selected, the optimization recommendations might include:
- Connection and timeout settings adjustments
- Proper header management
- Efficient parameter handling
- Encoding settings optimization
- Redirect handling improvements
Simply select an element in your test plan and type @optimize or optimize in the chat to receive tailored
optimization recommendations.
Use the @lint command to automatically rename elements in your test plan for better organization and readability:
- How to Use:
  - Type `@lint` in the chat to analyze your test plan structure
  - The AI will suggest better names for elements based on their function and context
  - Review the suggestions and confirm to apply the changes
  - Use the undo/redo buttons to revert or reapply changes if needed
  - e.g. `@lint rename the elements based on the URL` or `@lint rename the elements in pascal case`
- Benefits:
  - Improves test plan readability and maintenance
  - Applies consistent naming conventions across your test plan
  - Helps identify elements with generic or unclear names
  - Makes test plans more understandable for team members
  - Undo the changes if you don't like them
  - Redo them if you do
- Best Practices:
  - Run `@lint` after creating a new test plan to establish good naming from the start
  - Use it before sharing test plans with team members
  - Apply it to imported test plans to make them conform to your naming standards
This feature is particularly valuable for large test plans or when working in teams where consistent naming is essential for collaboration.
Use the @wrap command to intelligently group HTTP samplers under Transaction Controllers for better organization and
reporting:
- How to Use:
  - Select a Thread Group in your test plan
  - Type `@wrap` in the chat
  - The plugin will analyze your HTTP samplers and group similar ones under Transaction Controllers
  - Use the undo button to revert changes if needed
- Benefits:
  - Improves test plan organization and readability
  - Enhances test reports with meaningful transaction metrics
  - Groups related HTTP requests logically
  - Preserves the original order and hierarchy of samplers
  - Maintains all child elements (like assertions and post-processors) with their parent samplers
- How It Works:
  - Analyzes sampler names and paths to identify logical groupings
  - Creates appropriately named Transaction Controllers
  - Moves samplers under their respective Transaction Controllers
  - Preserves the original order and hierarchy
  - Uses pattern matching and structural analysis (not AI) for its grouping logic
This feature is especially useful for imported or recorded test plans that contain many individual HTTP samplers without proper organization.
Feather Wand supports real-time streaming of AI responses across all three supported AI services (Claude, OpenAI, and Ollama). This feature is enabled by default and provides a more responsive chat experience.
- When you send a message, the AI response begins to appear token-by-token in the chat area in real-time
- A Stop button appears next to the Send button while the response is being generated
- You can click Stop at any time to cancel the response mid-stream
- The response is automatically saved to the conversation history once complete
| Control | Description |
|---|---|
| Stop button | Appears during streaming; click to cancel the current response |
Streaming is enabled by default. To disable it, set `jmeter.ai.streaming.enabled=false` in your properties file. When disabled, AI responses will appear all at once after the entire response has been generated (non-streaming mode).
- Faster perceived response time: You see the response as it's being generated rather than waiting for it to complete
- Early feedback: If the response isn't what you expected, you can stop it early without waiting for it to finish
- Improved UX: Provides a more interactive and responsive chat experience
Feather Wand can play an audio chime when AI responses complete, giving you an audible cue so you can work in other windows and know exactly when the AI has finished responding.
- When an AI response (streaming or non-streaming) completes, a WAV chime sound plays.
- The chime plays after the full response is displayed in the chat area.
- If the sound resource cannot be loaded, it falls back to the system beep.
The response chime is disabled by default. To enable it, add `jmeter.ai.response.chime=true` to your properties file.

| Property | Description | Default Value |
|---|---|---|
| `jmeter.ai.response.chime` | Play an audio chime when AI responses finish | `false` |
The chime uses the WAV file bundled at `src/main/resources/org/qainsights/jmeter/ai/sound/jmeter-chime.wav`. An MP3 fallback is also included at the same location.
Feather Wand features a fully embedded interactive AI CLI Terminal using JediTerm. This allows you to interact with multiple AI command-line tools directly from within JMeter, bringing agentic AI workflows into your performance testing environment.
| CLI | Description |
|---|---|
| Claude Code | Anthropic's agentic coding tool for natural language test plan interaction |
| OpenAI Codex CLI | OpenAI's lightweight coding agent for terminal-based development workflows |
| Google Gemini CLI | Google's AI-powered CLI for cloud development and analysis |
| OpenCode | An open-source AI coding agent designed for terminal-based workflows |
The plugin automatically detects which of these CLIs are available on your system's PATH and presents them
in a dropdown selector within the terminal header. Simply select the CLI you'd like to use and start interacting.
- Prerequisites: Install one or more supported CLIs on your system (see the Configuration section for installation links).
- Auto-detection: Feather Wand scans your system's `PATH` on startup and populates the dropdown with detected CLIs.
- CLI Selector: Use the dropdown in the terminal header to switch between available CLIs at any time. Switching will automatically restart the terminal with the newly selected CLI.
- Setup: Make sure to set `jmeter.ai.terminal.claudecode.enabled=true` in your properties file.
- Capabilities:
  - Start, reload, and interact with the JMeter test plan using natural language.
  - The selected CLI automatically receives the full structure/context of the currently open `.jmx` file via a `CLAUDE.md` file written to the test plan directory.
  - You can ask the CLI to run the test plan, parse JTL files, and more.
  - Use the Reload button to refresh the test plan from disk.
  - Use the Ctx button to send the test plan context again.
- Disabling: If you do not want to use this feature, set `jmeter.ai.terminal.claudecode.enabled=false`. The terminal widget will gracefully start a dummy process with an instructional message.
The AI CLI Terminal is built using a clean Adapter Pattern:
- `AiCliAdapter` interface - defines the contract for any AI CLI integration
- `BaseCliAdapter` abstract class - provides common logic (e.g., PATH detection via `findOnPath`)
- Concrete adapters - `ClaudeCodeCliAdapter`, `OpenAiCodexCliAdapter`, `GeminiCliAdapter`, `OpenCodeCliAdapter`

To add a new CLI, simply implement the `AiCliAdapter` interface (or extend `BaseCliAdapter`) and register it in the `detectAvailableClis()` method.
Feather Wand supports Anthropic (Claude), OpenAI, and Ollama APIs. You can configure any combination in your properties file.
- Go to Anthropic API website
- Sign up for an account
- Create a new API key
- Copy the API key and paste it into the `anthropic.api.key` property in your `jmeter.properties` file
- For more information about the API key, visit the API Key documentation
- Go to OpenAI API website
- Sign up for an account
- Create a new API key
- Copy the API key and paste it into the `openai.api.key` property in your `jmeter.properties` file
- For more information about the API key, visit the API Key documentation
- Install Ollama from ollama.com
- Pull a model, e.g. `ollama pull llama3.1` or `ollama pull deepseek-r1:1.5b`
- Set `jmeter.ai.service.type=ollama` in your `jmeter.properties` file
- Configure `ollama.host`, `ollama.port`, and `ollama.default.model` as needed
- No API key required - Ollama runs fully locally
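Putting these steps together, a local Ollama setup might look like this:

```properties
# Route Feather Wand to a locally running Ollama server
jmeter.ai.service.type=ollama
ollama.host=http://localhost
ollama.port=11434
ollama.default.model=llama3.1
```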
Feather Wand automatically filters available models to show only chat-compatible models. By default, it excludes audio, TTS, transcription, and other non-chat models. You can select your preferred model from the dropdown in the UI, or set default models in the properties file:
- For Claude: `claude.default.model` (e.g., `claude-sonnet-4-6`)
- For OpenAI: `openai.default.model` (e.g., `gpt-4o`)
- For Ollama: `ollama.default.model` (e.g., `llama3.1`, `deepseek-r1:1.5b`)
Feather Wand applies intelligent filtering to the available models to ensure you only see relevant chat models in the dropdown:
- OpenAI Models: Filters out audio, TTS, whisper, davinci, search, transcribe, realtime, and instruct models to show only GPT chat models.
- Claude Models: Shows only the latest available Claude models.
This filtering ensures that you only see models that are compatible with the chat interface and appropriate for JMeter-related tasks.
If you encounter any issues or have suggestions for improvement, please open an issue on the GitHub repository.
Please check the roadmap for more details.
While the Feather Wand plugin aims to provide helpful assistance, please keep the following in mind:
- AI Limitations: The AI can make mistakes or provide incorrect information. Always verify critical suggestions before implementing them in production tests.
- Backup Your Test Plans: Always backup your test plans before making significant changes, especially when implementing AI suggestions.
- Test Verification: After making changes based on AI recommendations, thoroughly verify your test plan functionality in a controlled environment before running it against production systems.
- Performance Impact: Some AI-suggested configurations may impact test performance. Monitor resource usage when implementing new configurations.
- Security Considerations: Do not share sensitive information (credentials, proprietary code, etc.) in your conversations with the AI.
- API Costs: Be aware that using the Claude API or OpenAI API incurs costs based on token usage. The plugin is designed to minimize token usage, but excessive use may result in higher costs.
This plugin is provided as a tool to assist JMeter users, but the ultimate responsibility for test plan design, implementation, and execution remains with the user.