Description
Is your feature request related to a problem? Please describe.
The AI decompiler in ipsw currently supports a fixed set of LLM providers. While this includes local backends such as Ollama, there is no way to use Codex CLI as a backend, even when it is already installed and available on the system.
This is a problem for users who rely on Codex for local code understanding and transformation: Codex is optimized for code-related tasks and works well in reverse-engineering workflows, but without a supported way to select it for the AI decompiler, local, offline, and tool-specific workflows are limited.
Describe the solution you'd like
I would like ipsw to support Codex CLI as a local AI decompiler backend, similar in spirit to how Ollama is supported.
Ideally, this would mean:
• Adding a codex provider that allows ipsw to invoke Codex CLI locally
• Passing the disassembly and context to Codex and receiving the reconstructed high-level code back
• Allowing users to select Codex as the backend without relying on external APIs
Even a minimal or experimental integration would be useful, as long as Codex can be used as a first-class local backend for the AI decompiler. A rough sketch of what such a provider might look like is included below.
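To make the idea concrete, here is a minimal sketch of a shell-out provider. It assumes Codex CLI can be driven non-interactively (e.g. a `codex exec <prompt>` style invocation); the `Decompile` method, its signature, and the package layout are illustrative assumptions, not ipsw's actual provider API:

```go
// Hypothetical codex provider sketch for ipsw's AI decompiler.
// The provider interface and the `codex exec` invocation are assumptions.
package codex

import (
	"bytes"
	"context"
	"fmt"
	"os/exec"
)

// Client shells out to a locally installed Codex CLI binary.
type Client struct {
	binary string // path to the codex executable, e.g. "codex"
}

// NewClient returns a client that invokes the given binary, defaulting
// to "codex" found via PATH.
func NewClient(binary string) *Client {
	if binary == "" {
		binary = "codex"
	}
	return &Client{binary: binary}
}

// Decompile passes the prompt and disassembly to Codex in a single
// non-interactive run and returns its stdout as the reconstructed code.
func (c *Client) Decompile(ctx context.Context, prompt, asm string) (string, error) {
	input := fmt.Sprintf("%s\n\n```asm\n%s\n```", prompt, asm)
	cmd := exec.CommandContext(ctx, c.binary, "exec", input)
	var out, stderr bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		return "", fmt.Errorf("codex invocation failed: %w: %s", err, stderr.String())
	}
	return out.String(), nil
}
```

Shelling out like this would keep the integration self-contained: no API keys, no network access, and ipsw could gate the provider on the codex binary being present in PATH.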
Describe alternatives you've considered
• Using cloud-based LLM providers (OpenAI, Claude, etc.), which works but requires API keys and external network access
• Using Ollama with local models, which is supported but does not always match Codex’s code-focused behavior
• Running Codex CLI separately and manually pasting assembly output, which works but breaks the ipsw workflow and automation
None of these fully replace direct Codex integration inside ipsw.
Search
- I did search for other open and closed issues before opening this
Code of Conduct
- I agree to follow this project's Code of Conduct
Additional context
Codex CLI already runs locally on macOS and Linux and is designed specifically for understanding and transforming code. Supporting it as a local backend would complement existing providers and give users more flexibility in choosing the best tool for decompilation tasks.