AiDA - AI Assistant for IDA Pro


AiDA is a high-performance, AI-powered assistant plugin for IDA Pro (9.0+), written in C++ for maximum speed and stability. It is designed to accelerate the reverse engineering of modern C++ games by bringing large language models from Google (Gemini), OpenAI, and Anthropic directly into the IDA environment.

Features · Installation · Configuration · Usage · Important Note · License · Discord

Features

  • (COMING SOON!) Hybrid Engine Scanning: Combines static pattern scanning (GSpots) and advanced AI analysis to locate critical Unreal Engine globals like GWorld, GNames, and GObjects.
  • In-Depth Function Analysis: Provides a detailed report on a function's purpose, logic, inputs/outputs, and potential game hacking opportunities.
  • Automatic Renaming: Suggests descriptive, context-aware names for functions.
  • Struct Generation: Reconstructs C++ structs from function disassembly, automatically handling padding and member offsets.
  • Hook Generation: Creates C++ MinHook snippets for easy function interception (see the illustrative sketch after this list).
  • Custom Queries: Ask any question about a function and get a direct, technical answer.
  • Multi-Provider Support: Works with Google Gemini, OpenAI (ChatGPT), and Anthropic (Claude) models.
  • Native Performance: Written in C++ for a seamless and fast user experience with no Python dependency.
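
To give a sense of what the Struct Generation and Hook Generation features produce, the sketch below shows a hypothetical reconstructed struct (with explicit padding and member offsets) and a MinHook interception snippet built around it. The struct layout, function name, signature, and offsets are invented placeholders for illustration, not actual plugin output:

    // Illustrative sketch only: hypothetical names, offsets, and signature.
    #include <cstdint>
    #include <MinHook.h>

    // A struct as an AiDA-style reconstruction might present it, padding made explicit.
    struct FPlayerState
    {
        uint8_t pad_0000[0x98]; // 0x0000: unknown members
        float   Health;         // 0x0098
        float   MaxHealth;      // 0x009C
        uint8_t pad_00A0[0x20]; // 0x00A0: unknown members
        int32_t TeamId;         // 0x00C0
    };

    // Hypothetical target: void __fastcall TakeDamage(FPlayerState* self, float amount)
    using TakeDamage_t = void(__fastcall*)(FPlayerState*, float);
    static TakeDamage_t g_OriginalTakeDamage = nullptr;

    static void __fastcall hkTakeDamage(FPlayerState* self, float amount)
    {
        // Inspect or modify the arguments here, then forward to the original.
        g_OriginalTakeDamage(self, amount);
    }

    bool InstallTakeDamageHook(void* targetAddress)
    {
        if (MH_Initialize() != MH_OK)
            return false;
        if (MH_CreateHook(targetAddress, reinterpret_cast<void*>(&hkTakeDamage),
                          reinterpret_cast<void**>(&g_OriginalTakeDamage)) != MH_OK)
            return false;
        return MH_EnableHook(targetAddress) == MH_OK;
    }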

Installation

To install and run AiDA, follow these steps:

Prerequisites

Before installing the AiDA plugin, ensure you have the following essential dependencies:

  1. Microsoft Visual C++ Redistributables: Install the official Microsoft Visual C++ Redistributables. These are crucial for many C++ applications on Windows.

  2. OpenSSL: Install OpenSSL. For Windows, a reliable third-party installer can be found at https://slproweb.com/products/Win32OpenSSL.html.

    • The "Win64 OpenSSL v3.x.x Light" version should typically be sufficient.
    • Please use the installer (.exe). During installation, when prompted "Copy OpenSSL DLLs to:", it is critical to choose:
      • ✅ The Windows system directory (check this one!)
      • 🚫 The OpenSSL binaries (/bin) directory (do not check this one!)

Plugin Installation

Once the prerequisites are met:

  1. Go to the Releases page of this repository.
  2. Download the latest release ZIP file (e.g., AiDA_v1.1.zip).
  3. Extract the archive. You will find an AiDA.dll file.
  4. Copy AiDA.dll into your IDA Pro plugins directory. The path is typically:
    • %APPDATA%\Hex-Rays\IDA Pro\plugins on Windows
    • $HOME/.idapro/plugins on Linux/Mac

MCP Installation

AiDA also supports Model Context Protocol (MCP) integration. This feature is based on the excellent work from ida-pro-mcp by mrexodia.

Prerequisites

Ensure you have Python 3.11 or higher installed on your system.

Installation Steps

  1. Install AiDA via pip:

    pip install git+https://github.com/sigwl/AiDA
  2. Run the installation command to automatically copy the plugin to your IDA Pro plugins directory:

    aida --install
  3. Open IDA Pro, go to Edit → Plugins, and click AiDA-MCP to activate the Model Context Protocol support.

Configuration

  1. The first time you run IDA Pro with the plugin, it will prompt you to open the settings dialog.
  2. You can also access it at any time via the right-click context menu in a disassembly or pseudocode view: AI Assistant > Settings....
  3. In the settings dialog, select your desired AI Provider and enter your API key. The key will be saved locally in your user directory (%APPDATA%\Hex-Rays\IDA Pro\ai_assistant.cfg) and is never transmitted anywhere except to the AI provider's API.

GitHub Copilot Configuration (Special Instructions)

Using GitHub Copilot requires an external proxy server that translates Copilot's API into a standard format.

Step 1: Run the Copilot API Proxy

You must have the copilot-api server running in the background. This server handles authentication with your GitHub account.

  1. Make sure you have Bun installed.
  2. Install Node.js if you don't already have it; it is required to run the Copilot API proxy.
  3. Open a terminal or command prompt and run the following command:
    npx copilot-api@latest start
  4. The first time you run this, it will guide you through a one-time authentication process with GitHub.
  5. Leave this terminal window open. The proxy server must be running for AiDA to use Copilot.

Step 2: Configure AiDA

  1. In IDA, open the AiDA settings (AI Assistant > Settings...).
  2. Set the Provider to Copilot.
  3. Ensure the Proxy Address in the Copilot tab is correct. The default is http://127.0.0.1:4141, which should work if you ran the command above without changes.
  4. Select your desired Copilot model (e.g., claude-sonnet-4).

API Provider Configuration

  • Provider: Choose the AI service you want to use (Gemini, OpenAI, or Anthropic).
  • API Key: Your personal key for the selected provider. This is required for authentication.
  • Model Name: Specify which model to use. More powerful models (like Gemini 2.5 Pro or Claude 4 Opus) provide higher-quality analysis but cost more per use. Lighter models (like Gemini 1.5 Flash or GPT-4o mini) are faster and cheaper.

IMPORTANT: Model Choice Determines Output Quality

The quality of the AI model you select is the single most important factor affecting the accuracy and insightfulness of the results. For critical analysis of complex functions, using a top-tier model is strongly recommended.

For example, a powerful model like Google's Gemini 2.5 Pro will consistently provide more comprehensive and correct analysis than a lighter, faster model like Gemini 1.5 Flash.

Analysis Parameters

  • Max Prompt Tokens: This is a critical setting for managing cost and quality. It limits the total amount of context (your function's code, cross-references, etc.) sent to the AI.

    • Higher Value (e.g., 1,048,576): Provides the AI with more context, leading to more accurate and detailed analysis. This is more expensive and slightly slower.
    • Lower Value (e.g., 32,000): Cheaper and faster, but the AI may miss important details due to the limited context.
  • XRef Context Count: The maximum number of calling functions (callers) and called functions (callees) to include in the prompt. Increasing this gives the AI a better understanding of the function's role.

  • XRef Analysis Depth: How "deep" to go in the call chain when gathering context. A depth of 1 gets direct callers; a depth of 2 gets direct callers and their callers.

    Warning: A depth greater than 3 can cause the context size to grow extremely quickly, because each additional level can multiply the number of included functions (see the sketch after this list). However, a higher value is often necessary for a complete analysis of complex call chains.

  • Code Snippet Lines: The number of lines of decompiled code to include for each cross-reference. A high value (e.g., 60-100) is recommended to give the AI better context.

  • Bulk Processing Delay: A delay (in seconds) between consecutive API calls during automated tasks like the Unreal Scanner. This is a safety feature to prevent you from being rate-limited by the API provider.
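
As a rough feel for how these settings interact, the sketch below computes a worst-case count of cross-reference snippets pulled into the prompt at increasing depth. The numbers used (5 xrefs per level, 60 snippet lines) are assumed values for illustration only; this is not the plugin's actual traversal code:

    // Illustrative arithmetic only: worst-case prompt growth as depth increases.
    #include <cstdio>

    int main()
    {
        const long long xrefsPerLevel = 5;  // assumed "XRef Context Count"
        const long long snippetLines  = 60; // assumed "Code Snippet Lines"

        long long atLevel = 1, total = 0;
        for (int depth = 1; depth <= 4; ++depth)
        {
            atLevel *= xrefsPerLevel;   // up to 5, 25, 125, 625 functions per level
            total   += atLevel;         // cumulative functions included
            std::printf("depth %d: up to %lld functions, ~%lld snippet lines\n",
                        depth, total, total * snippetLines);
        }
        return 0;
    }

Under these assumptions, depth 4 already allows roughly 780 functions and about 46,800 lines of context, which is why the warning above suggests staying at a depth of 3 or below unless the extra context is truly needed.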

Usage

Simply right-click within a disassembly or pseudocode view in IDA to access the AI Assistant context menu. From there, you can select any of the analysis or generation features. All actions can also be found in the main menu under Tools > AI Assistant.

Important Note

Please be aware that AiDA is currently in BETA and is not yet fully stable. You may encounter bugs or unexpected behavior.

If you experience any issues or have bug reports, please open an issue on this repository's GitHub Issues page.

License

This project is licensed under the MIT License. See the LICENSE file for details.
