LangChain Integration

Integrate InstaVM with LangChain to build AI applications that can execute Python code securely in the cloud.

Installation

Install the required packages:

pip install instavm langchain langchain-openai

Quick Start

Here's a complete example of integrating InstaVM with LangChain:

from instavm import InstaVM
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain.tools import tool
from langchain.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

@tool
def execute_python_code(code: str) -> str:
    """Execute Python code on InstaVM and return the result."""
    try:
        with InstaVM("your-api-key-here") as vm:
            result = vm.execute(code)
            return str(result)
    except Exception as e:
        return f"Error executing code: {str(e)}"

# Set up the agent
llm = ChatOpenAI(model="gpt-3.5-turbo")
tools = [execute_python_code]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that can execute Python code safely on InstaVM. Always use the execute_python_code tool for code execution."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Use the agent for data science tasks
result = agent_executor.invoke({
    "input": "Create a matplotlib visualization showing a sine wave, install matplotlib if needed"
})
print(result["output"])

# Example: File processing workflow
file_result = agent_executor.invoke({
    "input": "Generate sample CSV data with sales information and create a summary report"
})
print(file_result["output"])

Key Features

  • Tool Integration: InstaVM as a LangChain tool for code execution
  • Agent Workflows: Build complex multi-step workflows with code execution
  • Secure Environment: Execute code safely in isolated cloud containers
  • Error Handling: Robust error handling for production applications

Advanced Usage

Custom Tool Configuration

You can customize the InstaVM tool for specific use cases:

@tool
def execute_data_analysis_code(code: str, dataset_description: str = "") -> str:
    """Execute data analysis code on InstaVM with optional dataset context."""
    try:
        with InstaVM("your-api-key-here") as vm:
            # Import common data science packages up front so the analysis code can use them
            setup_code = """
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
"""
            vm.execute(setup_code)
            result = vm.execute(code)
            return f"Analysis result: {str(result)}"
    except Exception as e:
        return f"Error in data analysis: {str(e)}"

Chain Integration

Integrate InstaVM into a LangChain chain using the runnable pipe syntax, so the model generates code and InstaVM executes it:

from langchain_core.output_parsers import StrOutputParser

# Create a chain that generates code with the LLM and pipes it to InstaVM for execution
code_gen_prompt = ChatPromptTemplate.from_messages([
    ("system", "Write only runnable Python code for the given task, with no explanations or markdown."),
    ("human", "{task}"),
])
code_chain = code_gen_prompt | llm | StrOutputParser() | execute_python_code
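
The chain can then be invoked with a plain task description; whatever code string the model returns is passed straight to the execute_python_code tool. A minimal usage sketch, assuming the model follows the instruction to output only runnable code:

# The model writes the code, InstaVM runs it, and the tool's string result is returned
chain_output = code_chain.invoke({"task": "Compute the first 10 Fibonacci numbers and print them"})
print(chain_output)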

Use Cases

  • Data Science Workflows: Automated data analysis and visualization
  • Code Validation: Test and validate generated code in real-time (see the sketch after this list)
  • Educational Assistants: Interactive coding tutors and learning tools
  • Business Intelligence: Automated report generation with data processing
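
As an illustration of the code-validation use case, the agent_executor from the Quick Start can be asked to run a candidate snippet and report whether it executes cleanly. A sketch; the snippet being checked is hypothetical:

# Validate a generated snippet by actually running it on InstaVM
validation = agent_executor.invoke({
    "input": "Run this code and tell me whether it executes without errors:\n"
             "def add(a, b):\n    return a + b\n\nprint(add(2, 3))"
})
print(validation["output"])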

Authentication

Get your InstaVM API key from the InstaVM Dashboard and pass it to the client in one of two ways:

# Option 1: Direct API key
with InstaVM("your-api-key-here") as vm:
    result = vm.execute("print('hello from InstaVM')")

# Option 2: Environment variable (recommended)
import os

with InstaVM(os.getenv("INSTAVM_API_KEY")) as vm:
    result = vm.execute("print('hello from InstaVM')")
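
With the environment-variable approach, the Quick Start tool can be rewritten so the key is never hard-coded. A minimal sketch, assuming INSTAVM_API_KEY is set in the environment where the agent runs:

import os

from instavm import InstaVM
from langchain.tools import tool

@tool
def execute_python_code(code: str) -> str:
    """Execute Python code on InstaVM and return the result."""
    api_key = os.getenv("INSTAVM_API_KEY")
    if not api_key:
        return "Error: INSTAVM_API_KEY is not set"
    try:
        with InstaVM(api_key) as vm:
            return str(vm.execute(code))
    except Exception as e:
        return f"Error executing code: {str(e)}"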

Error Handling

Implement robust error handling for production use:

@tool
def safe_execute_python_code(code: str) -> str:
    """Safely execute Python code with comprehensive error handling."""
    try:
        with InstaVM("your-api-key-here") as vm:
            result = vm.execute(code)
            if result.success:
                return f"Success: {result.output}"
            else:
                return f"Execution failed: {result.error}"
    except Exception as e:
        return f"System error: {str(e)}"