Together Code Interpreter
Execute LLM-generated code seamlessly with a simple API call.

Fast, secure code execution for LLM-generated code
Together Code Interpreter allows you to execute LLM-generated Python code securely and feed the output back to the LLM for richer responses.
Build agentic workflows
Build coding agents that run code to make decisions by calling APIs, processing data, and performing calculations.
Data analysis & visualization
Analyze datasets to provide on-the-fly insights and visualizations like charts and graphs.
Reinforcement learning
Bring code execution into interactive RL environments with real-time feedback, pass/fail signals, and easy scaling.

"Together Code Interpreter has dramatically accelerated our RL post-training cycles, enabling us to reliably scale to over 100 concurrent coding sandboxes and run thousands of code evaluations per minute. Its reliable and scalable infrastructure has proven invaluable."
- Michael Luo & Sijun Tan, Project leads at Agentica
Configure powerful workflows with our Python library
Seamlessly execute Python code in a sandboxed environment.
Create a session
Create a Together Code Interpreter session that you can use to execute code. Each session is billed at $0.03.
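The sketch below shows one way a first call might look with the Together Python library. It assumes the client reads TOGETHER_API_KEY from the environment and that code_interpreter.run returns the new session's ID and outputs under response.data; treat the exact method and field names as illustrative rather than a definitive reference.

from together import Together

client = Together()  # assumes TOGETHER_API_KEY is set in the environment

# The first run call creates a new sandboxed session and executes the code in it.
response = client.code_interpreter.run(
    code='print("Hello from Together Code Interpreter!")',
    language="python",
)

session_id = response.data.session_id  # keep this to reuse the session later
for output in response.data.outputs:
    print(output.type, output.data)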
Execute Python code in an active session
Execute code in a secure sandboxed environment, install Python libraries, upload files, and run full-fledged data analysis experiments.
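Continuing from the snippet above, the following sketch runs a small analysis step inside the active session. It assumes the sandbox has outbound network access for pip and that pandas may not be preinstalled, so the install step and package name are illustrative.

analysis_code = """
import subprocess, sys

# Install a library inside the sandbox (assumes outbound network access).
subprocess.run([sys.executable, "-m", "pip", "install", "--quiet", "pandas"], check=True)

import pandas as pd

# A tiny in-memory dataset standing in for an uploaded file.
df = pd.DataFrame({"region": ["NA", "EU", "APAC"], "revenue": [120, 95, 143]})
print(df.describe())
print("Top region:", df.loc[df["revenue"].idxmax(), "region"])
"""

response = client.code_interpreter.run(
    code=analysis_code,
    language="python",
    session_id=session_id,  # run inside the session created above
)
for output in response.data.outputs:
    print(output.data)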
Maintain the state between runs
Pass the session_id to reconnect to a previously initialized session; all installed packages, variables, and in-memory state are retained.
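As a brief illustration of state persistence, and again assuming the same session_id keyword as above, a variable defined in one call remains available in the next:

# First call: define a variable inside the existing session.
client.code_interpreter.run(
    code="totals = [1, 2, 3]",
    language="python",
    session_id=session_id,
)

# Second call: the session still remembers `totals`.
response = client.code_interpreter.run(
    code="print(sum(totals))",  # prints 6
    language="python",
    session_id=session_id,
)
print(response.data.outputs[0].data)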