Ollama Integration
Combine the power of local Ollama models with InstaVM's secure cloud code execution for complete AI-powered development workflows.
Installation
Install the required packages:
```bash
pip install instavm ollama
```
Make sure you have Ollama installed locally and have downloaded a model (e.g., `ollama pull llama3.2`).
Quick Start
Here's a complete example combining Ollama and InstaVM:
````python
from instavm import InstaVM
import ollama
import re

def execute_code_with_ollama(user_query: str, api_key: str):
    """Use Ollama to generate code and execute it on InstaVM."""
    # Generate code using Ollama
    response = ollama.chat(
        model='llama3.2',  # or any model you have installed locally
        messages=[
            {
                'role': 'system',
                'content': '''You are a helpful coding assistant.
Generate Python code to solve the user's request.
Always wrap your code in ```python code blocks.
Include any necessary imports and make the code self-contained.'''
            },
            {
                'role': 'user',
                'content': user_query
            }
        ]
    )

    # Extract code from the response
    code_match = re.search(r'```python(.*?)```', response['message']['content'], re.DOTALL)
    if code_match:
        code = code_match.group(1).strip()
        # Execute on InstaVM using a context manager
        try:
            with InstaVM(api_key) as vm:
                result = vm.execute(code)
                print(f"Generated Code by Ollama:\n{code}\n")
                print(f"InstaVM Execution Result:\n{result}")
                return result
        except Exception as e:
            print(f"Error executing code on InstaVM: {e}")
            return None
    else:
        print("No code found in Ollama response")
        return response['message']['content']

# Example usage
api_key = "your-api-key-here"

# Data analysis example
execute_code_with_ollama(
    "Create a pandas DataFrame with sample e-commerce data and calculate monthly revenue trends",
    api_key
)

# Machine learning example
execute_code_with_ollama(
    "Create a simple linear regression model using scikit-learn with sample data and plot the results",
    api_key
)
````
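The regex extraction step in the example above can be pulled out into a standalone helper and unit-tested without calling Ollama at all. This is a minimal sketch using only the standard library; `extract_python_code` is an illustrative name, not part of either package:

````python
import re

def extract_python_code(text: str):
    """Return the contents of the first ```python fenced block, or None if absent."""
    match = re.search(r'```python(.*?)```', text, re.DOTALL)
    return match.group(1).strip() if match else None
````

Note that this grabs only the first fenced block, and some models emit bare ``` fences without a language tag, so a fallback pattern may be worth adding in practice.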
Key Features
- Local AI Models: Use Ollama models running locally for privacy and speed
- Cloud Execution: Execute generated code securely in InstaVM's cloud environment
- Code Extraction: Automatically extract and execute code from Ollama responses
- Privacy-Friendly: Your prompts and model inference stay on your machine; only the generated code is sent to the cloud for execution
Advanced Usage
Interactive Workflow
Create an interactive coding assistant:
```python
def interactive_coding_session(api_key: str):
    """Interactive session with Ollama + InstaVM."""
    print("🤖 Ollama + InstaVM Coding Assistant")
    print("Type 'exit' to quit\n")

    while True:
        user_input = input("📝 What would you like to code? ")
        if user_input.lower() == 'exit':
            break
        result = execute_code_with_ollama(user_input, api_key)
        print(f"\n{'='*50}\n")

# Start interactive session
interactive_coding_session("your-api-key-here")
```
Custom Model Configuration
Use different Ollama models for specific tasks:
````python
def execute_with_custom_model(query: str, api_key: str, model: str = "llama3.2"):
    """Execute code with a specific Ollama model."""
    response = ollama.chat(
        model=model,
        messages=[
            {
                'role': 'system',
                'content': f'''You are an expert in {query.split()[0] if query else "programming"}.
Generate efficient, well-commented Python code.
Always use ```python code blocks.'''
            },
            {
                'role': 'user',
                'content': query
            }
        ],
        options={
            'temperature': 0.3,  # Lower temperature for more focused code generation
            'top_p': 0.9
        }
    )
    # ... rest of the execution logic
````
Error Recovery
Implement error recovery with Ollama:
````python
def execute_with_retry(query: str, api_key: str, max_retries: int = 3):
    """Execute code with automatic error recovery."""
    for attempt in range(max_retries):
        try:
            # Generate code
            response = ollama.chat(
                model='llama3.2',
                messages=[
                    {
                        'role': 'system',
                        'content': 'Generate Python code. Fix any errors from previous attempts.'
                    },
                    {
                        'role': 'user',
                        'content': query
                    }
                ]
            )

            # Extract and execute code
            code_match = re.search(r'```python(.*?)```', response['message']['content'], re.DOTALL)
            if code_match:
                code = code_match.group(1).strip()
                with InstaVM(api_key) as vm:
                    result = vm.execute(code)
                    if result.success:
                        return result
                    # If execution failed, include the error in the next attempt
                    query += f"\n\nPrevious error: {result.error}"
        except Exception as e:
            print(f"Attempt {attempt + 1} failed: {e}")
            if attempt == max_retries - 1:
                return f"Failed after {max_retries} attempts: {e}"
    return None  # No successful execution within max_retries
````
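The feedback loop above — retry on failure, appending the error to the next prompt — can be exercised without Ollama or InstaVM by stubbing the execution step. `FakeResult` and `flaky_execute` below are illustrative stand-ins, not part of either API:

```python
from dataclasses import dataclass

@dataclass
class FakeResult:
    success: bool
    error: str = ""

def run_with_retry(execute, query: str, max_retries: int = 3):
    """Generic retry skeleton: re-run execution, feeding errors back into the query."""
    for attempt in range(max_retries):
        result = execute(query)
        if result.success:
            return result
        query += f"\n\nPrevious error: {result.error}"
    return None

# Stub executor that fails twice before succeeding
calls = {"n": 0}
def flaky_execute(query):
    calls["n"] += 1
    return FakeResult(success=calls["n"] >= 3, error="NameError: x is not defined")
```

Separating the retry policy from the model and sandbox calls like this makes the recovery logic testable in isolation.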
Use Cases
- Code Generation: Generate and test code locally before deployment
- Learning Assistant: Educational tool with privacy-first approach
- Rapid Prototyping: Quickly generate and test code concepts
- Data Analysis: Combine local AI reasoning with cloud data processing
- Code Review: Generate code improvements and test them safely
Model Recommendations
Different Ollama models work better for different tasks:
- Code Generation: `codellama`, `llama3.2`, `deepseek-coder`
- Data Science: `llama3.2`, `mistral`
- General Purpose: `llama3.2`, `phi3`
- Lightweight: `phi3-mini`, `gemma2`
Authentication
Get your InstaVM API key from the InstaVM Dashboard:
```python
import os

# Use environment variable (recommended)
api_key = os.getenv("INSTAVM_API_KEY")

# Or pass directly
api_key = "your-api-key-here"
```
Performance Tips
- Model Selection: Choose smaller models for faster response times
- Code Caching: Cache generated code for repeated tasks
- Batch Processing: Process multiple queries together
- Error Handling: Implement retry logic for robust applications
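As a sketch of the code-caching tip, a dict keyed on the exact query string avoids repeated model calls for repeated tasks. `fake_generate` stands in for the Ollama round-trip; both names are hypothetical:

```python
_code_cache = {}

def cached_generate(query: str, generate):
    """Return cached generated code for a repeated query instead of re-calling the model."""
    if query not in _code_cache:
        _code_cache[query] = generate(query)
    return _code_cache[query]

# Stand-in for the ollama.chat round-trip, recording each call
calls = []
def fake_generate(query):
    calls.append(query)
    return "print('hello')"
```

For production use, a bounded cache (e.g. `functools.lru_cache` or an LRU dict) is a safer choice than an unbounded module-level dict.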