Use Coding Environment with Inference Providers
OpenEnv is an open standard for agentic environments where AI models can interact with tasks like coding, browsing the web, playing games, or trading in a stock market.
In this guide you’ll learn how to pair Hugging Face Inference Providers with the OpenEnv Coding environment to iteratively generate, run, and refine Python programs. You can use this pattern for any code generation task, or try out other environments for tasks like browsing the web.
Why Use an OpenEnv Coding Environment?
The OpenEnv Coding environment is a lightweight HTTP service that executes untrusted Python inside an isolated interpreter. Each call returns stdout, stderr, and exit_code. Coupling that sandbox with a hosted LLM lets you build closed-loop solvers: the model proposes code, OpenEnv Coding executes it, and you feed the results back to the model until the task succeeds.
Project Overview
We will build a coding agent that uses an OpenEnv Coding environment to execute code generated by an LLM. The workflow follows these steps:
- Connect to the OpenEnv Coding environment on the Hub.
- Generate Python code to solve a task with Inference Providers.
- Execute the code in the environment.
- Get feedback on the results from the environment.
- Send the feedback back to the model and repeat the process until the task succeeds.
If you prefer this guide as a complete script, check out the example below.
Step 1: Prepare the Environment
We need to install the OpenEnv library and connect to the coding environment.
Let’s start by installing the OpenEnv library from GitHub.
pip install git+https://github.com/meta-pytorch/OpenEnv.git
With the OpenEnv library installed, we can connect to the coding environment by creating a CodingEnv object.
from envs.coding_env import CodeAction, CodingEnv
env = CodingEnv(base_url="https://huggingface.co/proxy/openenv-coding-env.hf.space")

This example connects to the coding environment hosted on Hugging Face Spaces. This is perfect for this guide, but if you need to run the environment locally, we recommend using CodingEnv.from_hub(...).
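Before wiring in a model, you can sanity-check the connection with a single round trip. The sketch below is optional and uses only calls shown later in this guide: it executes a trivial script and prints the observation fields (stdout, stderr, and exit_code) that the solving loop will rely on.

```python
# Optional sanity check: run a trivial script end to end and inspect the
# observation fields we will rely on throughout this guide.
env.reset()
result = env.step(CodeAction(code='print("hello from the sandbox")'))

obs = result.observation
print("exit_code:", obs.exit_code)  # 0 on success
print("stdout:", obs.stdout)        # the script's output
print("stderr:", obs.stderr)        # empty unless something went wrong
```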
Step 2: Configure the Inference Client
Set up the connection to Inference Providers. You can find your User Access Token in your settings page.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://huggingface.co/proxy/router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

Step 3: Define the Task and Helper Functions
Let’s define a clear coding task with specific success criteria.
SYSTEM_PROMPT = (
"You are an expert Python programmer. Respond with valid Python code that "
"solves the user's task. Always wrap your final answer in a fenced code "
"block starting with ```python. Provide a complete script that can be "
"executed as-is, with no commentary outside the code block."
)
CODING_TASK = (
"Write Python code that prints the sum of squares of the integers from 1 "
"to 100 inclusive. The final line must be exactly `Result: <value>` with "
"the correct number substituted."
)
EXPECTED_SUBSTRING = "Result: 338350"

The environment will execute the code and return the results. We need to define a way to check if the task is solved so that we can stop the loop.
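As a small sketch, that check can live in a helper; the name `is_solved` is our own, not part of OpenEnv. The task counts as solved when the script exits cleanly and the expected line appears in stdout, which is the same condition the main loop below checks inline.

```python
def is_solved(obs) -> bool:
    """Return True when the script exited cleanly and printed the expected line.

    `obs` is the observation returned by env.step(...), exposing
    `exit_code` and `stdout`.
    """
    return obs.exit_code == 0 and EXPECTED_SUBSTRING in obs.stdout
```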
Next, we need to create utility functions to extract Python code and format feedback for the language model.
For simplicity, we’ll use simple string matching and vanilla Python to extract the code and format the feedback. You might choose to use your favorite prompt engineering library to build more sophisticated functions.
import re
def extract_python_code(text: str) -> str:
"""Extract the first Python code block from the model output."""
code_blocks = re.findall(
r"```(?:python)?\s*(.*?)```",
text,
re.IGNORECASE | re.DOTALL,
)
if code_blocks:
return code_blocks[0].strip()
return text.strip()
def format_feedback(
step: int,
stdout: str,
stderr: str,
exit_code: int,
) -> str:
"""Generate feedback text describing the previous execution."""
stdout_display = stdout if stdout.strip() else "<empty>"
stderr_display = stderr if stderr.strip() else "<empty>"
return (
f"Execution feedback for step {step}:\n"
f"exit_code={exit_code}\n"
f"stdout:\n{stdout_display}\n"
f"stderr:\n{stderr_display}\n"
"If the task is not solved, return an improved Python script."
)
def build_initial_prompt(task: str) -> str:
"""Construct the first user prompt for the coding task."""
return (
"You must write Python code to satisfy the following task. "
"When executed, your script should behave exactly as described.\n\n"
f"Task:\n{task}\n\n"
"Reply with the full script in a single ```python code block."
    )

With these helpers in place, we can extract code from model responses and format execution feedback for the language model.
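As a quick illustration, here is what the extractor does to a typical model reply (the backtick fence is built programmatically to keep the example self-contained):

```python
fence = "`" * 3  # models wrap answers in a triple-backtick fence
reply = (
    "Here is the solution:\n"
    f"{fence}python\n"
    "total = sum(i * i for i in range(1, 101))\n"
    'print(f"Result: {total}")\n'
    f"{fence}"
)

print(extract_python_code(reply))
# total = sum(i * i for i in range(1, 101))
# print(f"Result: {total}")
```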
Step 4: Implement the Main Solving Loop
Now, let’s implement the main solving loop. This loop will iteratively generate code, execute it, and format the feedback for the language model.
Let’s define some constants for the loop.
MAX_ATTEMPTS = 5 # how many times to try to solve the task
MODEL = "openai/gpt-oss-120b:novita" # works great for this task
MAX_TOKENS = 2048 # the maximum number of tokens to generate
TEMPERATURE = 0.2 # the temperature to use for the model

Let’s define the initial messages for the language model.
history = [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": build_initial_prompt(CODING_TASK)},
]

We need to reset the environment and get the initial observation.
obs = env.reset().observation
Now, let’s implement the main loop.
for step in range(1, MAX_ATTEMPTS + 1):
# Get model response
response = client.chat.completions.create(
model=MODEL,
messages=history,
        max_tokens=MAX_TOKENS,
        temperature=TEMPERATURE,
)
assistant_message = response.choices[0].message.content.strip()
history.append({"role": "assistant", "content": assistant_message})
# Extract and execute code
code = extract_python_code(assistant_message)
# act in the environment
result = env.step(CodeAction(code=code))
# get the feedback from the environment
obs = result.observation
# Check if task is solved
solved = obs.exit_code == 0 and EXPECTED_SUBSTRING in obs.stdout
if solved:
break
# Provide feedback for next iteration
history.append(
{
"role": "user",
"content": format_feedback(
step,
obs.stdout,
obs.stderr,
obs.exit_code,
),
}
)
Now, let’s print the final result.
print(obs.stdout)

🎉 That’s it! You’ve successfully built a coding agent that uses an OpenEnv Coding environment to execute code generated by an LLM. You can now try it out with your own tasks.
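One last housekeeping step: close the environment when you are done so the underlying session is released. The complete script below does this in a try/finally block.

```python
# Release the environment once the session is over.
env.close()
```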
Complete Working Example
Here’s the complete script that demonstrates all the concepts covered in this guide:
Click to view the full coding_env_inference.py script
#!/usr/bin/env python3
"""Solve a coding task with a hosted LLM via Hugging Face Inference.
This script mirrors ``textarena_wordle_inference.py`` but targets the Coding
environment. It launches the CodingEnv Docker image locally and asks an
OpenAI-compatible model served through Hugging Face's router to iteratively
produce Python code until the task is solved.
Prerequisites
-------------
1. Build the Coding environment Docker image::
docker build \
-f src/envs/coding_env/server/Dockerfile \
-t coding-env:latest .
2. Set your Hugging Face token, or any other API key that is compatible with the OpenAI API:
export HF_TOKEN=your_token_here
export API_KEY=your_api_key_here
3. Run the script::
python examples/coding_env_inference.py
The script keeps sending execution feedback to the model until it prints
``Result: 338350`` or reaches the configured step limit.
"""
from __future__ import annotations
import os
import re
from typing import List, Tuple
from openai import OpenAI
from envs.coding_env import CodeAction, CodingEnv
# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
API_BASE_URL = "https://huggingface.co/proxy/router.huggingface.co/v1"
API_KEY = os.getenv("API_KEY") or os.getenv("HF_TOKEN")
MODEL = "openai/gpt-oss-120b:novita"
MAX_STEPS = 5
VERBOSE = True
CODING_TASK = (
"Write Python code that prints the sum of squares of the integers from 1 "
"to 100 inclusive. The final line must be exactly `Result: <value>` with "
"the correct number substituted."
)
EXPECTED_SUBSTRING = "Result: 338350"
SYSTEM_PROMPT = (
"You are an expert Python programmer. Respond with valid Python code that "
"solves the user's task. Always wrap your final answer in a fenced code "
"block starting with ```python. Provide a complete script that can be "
"executed as-is, with no commentary outside the code block."
)
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def extract_python_code(text: str) -> str:
"""Extract the first Python code block from the model output."""
code_blocks = re.findall(
r"```(?:python)?\s*(.*?)```",
text,
re.IGNORECASE | re.DOTALL,
)
if code_blocks:
return code_blocks[0].strip()
return text.strip()
def format_feedback(
step: int,
stdout: str,
stderr: str,
exit_code: int,
) -> str:
"""Generate feedback text describing the previous execution."""
stdout_display = stdout if stdout.strip() else "<empty>"
stderr_display = stderr if stderr.strip() else "<empty>"
return (
f"Execution feedback for step {step}:\n"
f"exit_code={exit_code}\n"
f"stdout:\n{stdout_display}\n"
f"stderr:\n{stderr_display}\n"
"If the task is not solved, return an improved Python script."
)
def build_initial_prompt(task: str) -> str:
"""Construct the first user prompt for the coding task."""
return (
"You must write Python code to satisfy the following task. "
"When executed, your script should behave exactly as described.\n\n"
f"Task:\n{task}\n\n"
"Reply with the full script in a single ```python code block."
)
# ---------------------------------------------------------------------------
# Gameplay
# ---------------------------------------------------------------------------
def solve_coding_task(
env: CodingEnv,
client: OpenAI,
) -> Tuple[bool, List[str]]:
"""Iteratively ask the model for code until the task is solved."""
history = [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": build_initial_prompt(CODING_TASK)},
]
obs = env.reset().observation
transcripts: List[str] = []
for step in range(1, MAX_STEPS + 1):
response = client.chat.completions.create(
model=MODEL,
messages=history,
max_tokens=2048,
temperature=0.2,
)
assistant_message = response.choices[0].message.content.strip()
history.append({"role": "assistant", "content": assistant_message})
code = extract_python_code(assistant_message)
if VERBOSE:
print(f"\n🛠️ Step {step}: executing model-produced code")
print(code)
result = env.step(CodeAction(code=code))
obs = result.observation
transcripts.append(
(
f"Step {step} | exit_code={obs.exit_code}\n"
f"stdout:\n{obs.stdout}\n"
f"stderr:\n{obs.stderr}\n"
)
)
if VERBOSE:
print(" ▶ exit_code:", obs.exit_code)
if obs.stdout:
print(" ▶ stdout:\n" + obs.stdout)
if obs.stderr:
print(" ▶ stderr:\n" + obs.stderr)
solved = obs.exit_code == 0 and EXPECTED_SUBSTRING in obs.stdout
if solved:
return True, transcripts
history.append(
{
"role": "user",
"content": format_feedback(
step,
obs.stdout,
obs.stderr,
obs.exit_code,
),
}
)
# Keep conversation history compact to avoid exceeding context limits
if len(history) > 20:
history = [history[0]] + history[-19:]
return False, transcripts
# ---------------------------------------------------------------------------
# Entrypoint
# ---------------------------------------------------------------------------
def main() -> None:
if not API_KEY:
raise SystemExit(
"HF_TOKEN (or API_KEY) must be set to query the model."
)
client = OpenAI(base_url=API_BASE_URL, api_key=API_KEY)
env = CodingEnv.from_docker_image(
"coding-env:latest",
ports={8000: 8000},
)
try:
success, transcripts = solve_coding_task(env, client)
finally:
env.close()
print(
"\n✅ Session complete"
if success
else "\n⚠️ Session finished without solving the task"
)
print("--- Execution transcripts ---")
for entry in transcripts:
print(entry)
if __name__ == "__main__":
    main()

Next Steps
Now that you have a working coding agent, here are some ways to extend and improve it:
- Try different models and providers to see how they perform (see the sketch below).
- Try other environments like browsing the web or playing a game.
- Integrate environments with your application.
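For instance, switching models or providers only requires changing the MODEL string. The IDs below are illustrative; check the Inference Providers catalog for what is currently available.

```python
# Illustrative model IDs; consult the Inference Providers catalog for
# currently available models and provider suffixes.
MODEL = "Qwen/Qwen2.5-Coder-32B-Instruct"   # let the router choose a provider
# MODEL = "openai/gpt-oss-120b:novita"      # or pin a specific provider
```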