Erin is an OpenAI-based Python function auto-generation tool. By analyzing function names and parameter types, Erin can automatically infer function intent and generate corresponding Python implementation code.
- **Smart Function Generation**: Automatically generates function implementations based on function names and parameter types
- **Dynamic Execution**: Generated functions can be executed immediately
- **Type Inference**: Automatically infers parameter types from argument values
- **Decorator Support**: Supports the `@erin` decorator, which automatically uses function docstrings as context
- **Configurable**: Supports custom OpenAI API endpoints and models
- **Logging**: Complete logging system for debugging and monitoring
```bash
# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install project dependencies
uv sync
```

Or, using pip:

```bash
pip install -e .
```

Erin supports the following environment variables:
| Environment Variable | Description | Required | Default |
|---|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key | Yes | - |
| `OPENAI_BASE_URL` | Custom API endpoint (e.g., for OpenAI-compatible services) | No | OpenAI official endpoint |
| `OPENAI_MODEL` | Model name to use | No | `gpt-4o-mini` |
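As a rough sketch of how a client might consume these variables (the variable names come from the table above; Erin's actual internals may differ):

```python
import os

# For illustration only: supply a placeholder so the lookup below succeeds.
# In real use, OPENAI_API_KEY must be set to a valid key.
os.environ.setdefault("OPENAI_API_KEY", "sk-example")

api_key = os.environ["OPENAI_API_KEY"]                 # required; KeyError if missing
base_url = os.environ.get("OPENAI_BASE_URL")           # None -> official endpoint
model = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")  # falls back to the default
```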
Linux/macOS:

```bash
export OPENAI_API_KEY="your-api-key-here"
export OPENAI_BASE_URL="https://api.openai.com/v1"  # Optional
export OPENAI_MODEL="gpt-4o-mini"                   # Optional
```

Windows (PowerShell):

```powershell
$env:OPENAI_API_KEY="your-api-key-here"
$env:OPENAI_BASE_URL="https://api.openai.com/v1"  # Optional
$env:OPENAI_MODEL="gpt-4o-mini"                   # Optional
```

Windows (CMD):

```cmd
set OPENAI_API_KEY=your-api-key-here
set OPENAI_BASE_URL=https://api.openai.com/v1
set OPENAI_MODEL=gpt-4o-mini
```

```python
import erin

# Call any function name directly; Erin automatically generates an
# implementation based on the function name and arguments
result = erin.calculate_sum(1, 2, 3)
print(result)  # Output: 6

# Calculate average
avg = erin.calculate_average([1, 2, 3, 4, 5])
print(avg)  # Output: 3.0

# Check if even
is_even = erin.is_even(4)
print(is_even)  # Output: True
```

Erin supports using decorators to define functions. The decorator automatically uses the function's docstring as context to help generate more accurate function implementations:
```python
import erin

# Use the @erin decorator
@erin
def calculate_sum(a, b, c):
    """Calculate the sum of three numbers"""
    pass

result = calculate_sum(1, 2, 3)
print(result)  # Output: 6

# Use @erin(name="...") to specify the function name
@erin(name="add_numbers")
def my_function(x, y):
    """Add two numbers together"""
    pass

result = my_function(5, 10)
print(result)  # Output: 15

# You can also use the erin module directly as a decorator
@erin
def reverse_string(s):
    """Reverse a string"""
    pass

reversed_str = reverse_string("hello")
print(reversed_str)  # Output: "olleh"
```

Decorator advantages:

- **Automatic Docstring Usage**: The function's `__doc__` is automatically passed as `optional_context` to the prompt, helping the LLM better understand the function's intent
- **More Accurate Implementations**: With docstrings providing context, the generated implementations usually match expectations more closely
- **Preserved Function Signature**: Uses `functools.update_wrapper` to preserve the original function's metadata
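The metadata-preservation point can be illustrated with a toy decorator. `erin_like` below is a hypothetical stand-in, not Erin's real implementation; it only shows how `functools.update_wrapper` keeps the decorated function's `__name__` and `__doc__` intact:

```python
import functools

def erin_like(fn):
    """Toy sketch of an @erin-style decorator: captures the docstring
    as context and preserves the original function's metadata."""
    context = fn.__doc__  # what would be passed as optional_context

    def wrapper(*args, **kwargs):
        # A real implementation would generate and execute code here;
        # this sketch just records what would be sent to the LLM.
        return {"name": fn.__name__, "context": context, "args": args}

    functools.update_wrapper(wrapper, fn)  # keeps __name__, __doc__, etc.
    return wrapper

@erin_like
def calculate_sum(a, b, c):
    """Calculate the sum of three numbers"""

print(calculate_sum.__name__)  # calculate_sum (not "wrapper")
print(calculate_sum.__doc__)   # Calculate the sum of three numbers
```

Without `update_wrapper`, introspection tools and logs would see `wrapper` instead of the original name.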
- **Function Call**: When you call `erin.function_name(...)` or use a decorator, Erin will:
  - Infer parameter types from argument values
  - Generate a prompt based on the function name (if using a decorator, it will also include the function's docstring as context)
  - Call the OpenAI API to generate function code
  - Dynamically execute the generated code and return the result
- **Type Inference**: Erin automatically infers types from argument values: `1` → `int`, `"hello"` → `str`, `[1, 2, 3]` → `list`, `{"key": "value"}` → `dict`
- **Decorator Pattern**: When using the `@erin` decorator:
  - The function's `__doc__` is automatically extracted and passed as `optional_context` to the prompt
  - You can customize the function name via the `name` parameter (defaults to the decorated function's name)
  - The decorated function preserves the original metadata (via `functools.update_wrapper`)
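The type-inference step above can be sketched in a few lines. This is an illustrative helper (`infer_types` is a hypothetical name), showing how runtime values map to the type names listed:

```python
def infer_types(*args):
    """Map each argument value to its runtime type name,
    mirroring Erin's value-based type inference."""
    return [type(a).__name__ for a in args]

print(infer_types(1, "hello", [1, 2, 3], {"key": "value"}))
# ['int', 'str', 'list', 'dict']
```

Note that this approach only sees the container type, which is exactly why `[1, 2, 3]` becomes `list` rather than `list[int]`.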
```python
import erin

# String operations
reversed_str = erin.reverse_string("hello")
print(reversed_str)  # "olleh"

# List operations
unique_items = erin.remove_duplicates([1, 2, 2, 3, 3, 4])
print(unique_items)  # [1, 2, 3, 4]

# Dictionary operations
merged = erin.merge_dicts({"a": 1}, {"b": 2})
print(merged)  # {"a": 1, "b": 2}

# Math operations
factorial = erin.calculate_factorial(5)
print(factorial)  # 120

# Using a decorator to define functions (recommended approach)
@erin
def find_max_value(numbers):
    """Find the maximum value from a list of numbers"""
    pass

max_val = find_max_value([3, 1, 4, 1, 5, 9, 2, 6])
print(max_val)  # 9

@erin(name="custom_name")
def my_custom_function(data):
    """Process data and return processed results"""
    pass
```

Erin has a complete built-in logging system. To enable logging, configure Python's `logging` module:
```python
import logging

# Configure the logging level
logging.basicConfig(
    level=logging.INFO,  # Or logging.DEBUG for more detailed information
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

# Now you'll see log output when using erin
import erin

result = erin.calculate_sum(1, 2, 3)
```

- **INFO**: Records key operations (function calls, API calls, execution results)
- **DEBUG**: Records detailed information (parameter formatting, prompt content, code generation, etc.)
Example log output:

```text
2024-01-01 12:00:00 - erin - INFO - OpenAI client initialized
2024-01-01 12:00:01 - erin - INFO - Calling function: calculate_sum, arguments: (1, 2, 3)
2024-01-01 12:00:01 - erin - INFO - Calling OpenAI API to generate function code...
2024-01-01 12:00:02 - erin - INFO - Successfully generated function code, code length: 45 characters
2024-01-01 12:00:02 - erin - INFO - Executing generated function...
2024-01-01 12:00:02 - erin - INFO - Function executed successfully, return value: 6
```
- **First Call**: Code is generated anew on each call; the current version does not support caching, so subsequent calls regenerate the implementation
- **API Costs**: Each function call invokes the OpenAI API, so be mindful of API usage costs
- **Security**: Generated code executes in the current Python environment, so ensure function names and parameters come from trusted sources
- **Error Handling**: If the generated code has errors, Erin will raise exceptions and log detailed information
- **Type Inference Limitations**: The current version infers types from argument values; parameterized types (such as `list[int]`) may be inferred simply as `list`
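Since there is no built-in caching, a caller can memoize results for pure functions to avoid repeated API calls. This is a hedged sketch: `fake_generate` stands in for the real API round-trip, and `cached_call` is a hypothetical wrapper, not part of Erin:

```python
import functools

calls = {"count": 0}  # track how many "API calls" were made

def fake_generate(name, args):
    """Stand-in for generating and running code via the OpenAI API."""
    calls["count"] += 1
    return sum(args)  # pretend the generated code computes a sum

@functools.cache
def cached_call(name, *args):
    # Arguments must be hashable for functools.cache to work
    return fake_generate(name, args)

print(cached_call("calculate_sum", 1, 2, 3))  # 6 -- one "API call"
print(cached_call("calculate_sum", 1, 2, 3))  # 6 -- served from the cache
print(calls["count"])  # 1
```

This only makes sense for deterministic functions; anything with side effects or time-dependent results should not be cached this way.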
```text
erin/
├── __init__.py   # Main module, contains LLMCallable class
├── prompt.py     # Prompt formatting module
└── executor.py   # Function executor module
```
```bash
# Using uv
uv run python -m pytest

# Or using pip
pytest
```

```bash
# Using ruff (if configured)
ruff format .
ruff check .
```

This project is licensed under the WTFPL (Do What The F*ck You Want To Public License).
You are free to use, modify, and distribute the code of this project without any restrictions.
Issues and Pull Requests are welcome!