Andy API Documentation
Distributed AI compute pool with OpenAI-compatible endpoints, automatic load balancing, and failover support.
Quick Start
The Andy API is a distributed AI compute pool providing OpenAI-compatible endpoints with automatic load balancing, failover support, and real-time monitoring across multiple hosts.
Free Chat API (No Authentication)
Get started instantly with our free chat endpoint - no API key required:
```bash
curl -X POST "https://mindcraft.riqvip.dev/api/andy/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sweaterdog/andy-4:latest",
    "messages": [
      {"role": "user", "content": "Hello! Can you help me understand how AI models work?"}
    ],
    "temperature": 0.7,
    "max_tokens": 150
  }'
```
Authenticated Requests (API Key Required)
For enhanced limits and additional endpoints, get an API key and include it in your requests:
```bash
curl -X POST "https://mindcraft.riqvip.dev/api/andy/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY_HERE" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sweaterdog/andy-4:latest",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
    "temperature": 0.7,
    "max_tokens": 200
  }'
```
Python Example
```python
import requests

# Configuration
url = "https://mindcraft.riqvip.dev/api/andy/v1/chat/completions"
api_key = "YOUR_API_KEY_HERE"  # Get from /api-keys page

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}"  # Include for authenticated requests
}
payload = {
    "model": "sweaterdog/andy-4:latest",
    "messages": [
        {"role": "user", "content": "Write a short poem about artificial intelligence"}
    ],
    "temperature": 0.8,
    "max_tokens": 100
}

# Make the request
response = requests.post(url, json=payload, headers=headers)
result = response.json()

# Print the response
print(result["choices"][0]["message"]["content"])
```
JavaScript/Node.js Example
```javascript
const fetch = require('node-fetch');

async function callAndyAPI() {
  const response = await fetch('https://mindcraft.riqvip.dev/api/andy/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY_HERE' // For authenticated requests
    },
    body: JSON.stringify({
      model: 'sweaterdog/andy-4:latest',
      messages: [
        { role: 'user', content: 'Explain machine learning in simple terms' }
      ],
      temperature: 0.8,
      max_tokens: 150
    })
  });

  const data = await response.json();
  console.log(data.choices[0].message.content);
}

callAndyAPI();
```
Authentication
| Endpoint Type | Authentication | Rate Limit | Daily Requests |
|---|---|---|---|
| Chat Completions | None Required | 10/minute | Unlimited |
| Other APIs | API Key Required | 1,000/day | Unlimited |
Get Your API Key
1. Create a free account
2. Visit the API Keys page
3. Generate a new API key
4. Include it in the Authorization header for non-chat endpoints
Using API Keys
```bash
# With curl (listing models is a GET request, so no -X POST or body is needed)
curl -H "Authorization: Bearer YOUR_API_KEY_HERE" \
  "https://mindcraft.riqvip.dev/api/andy/v1/models"

# Python
headers = {"Authorization": "Bearer YOUR_API_KEY_HERE"}

# JavaScript
headers: {"Authorization": "Bearer YOUR_API_KEY_HERE"}
```
API Reference
Base URL: https://mindcraft.riqvip.dev
Chat Completions
Create a chat completion using the distributed model pool. No API key required.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model name (e.g., "sweaterdog/andy-4:latest") |
| `messages` | array | Yes | Array of message objects with "role" and "content" |
| `temperature` | number | No | Sampling temperature (0-2, default: 1) |
| `max_tokens` | integer | No | Maximum tokens to generate |
| `stream` | boolean | No | Stream response chunks (default: false) |
| `stop` | array | No | Stop sequences |
Other Endpoints (API Key Required)
- **List models** (`/api/andy/v1/models`) — list all available models across the compute pool.
- **Embeddings** — generate embeddings using available embedding models.
- **Pool status** (`/api/andy/pool_status`) — get the current status of the compute pool, including active hosts and load.
```bash
# List available models
curl -H "Authorization: Bearer YOUR_API_KEY_HERE" \
  "https://mindcraft.riqvip.dev/api/andy/v1/models"

# Check pool status
curl -H "Authorization: Bearer YOUR_API_KEY_HERE" \
  "https://mindcraft.riqvip.dev/api/andy/pool_status"
```
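For embeddings, this page does not document the exact endpoint, so the sketch below assumes the OpenAI convention (`/api/andy/v1/embeddings`, an `input` list, and a `data[].embedding` response shape); the model name is a placeholder — list models first to find real names:

```python
# Assumed (not confirmed by this page): OpenAI-compatible embeddings endpoint.
url = "https://mindcraft.riqvip.dev/api/andy/v1/embeddings"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY_HERE",
}
payload = {
    "model": "nomic-embed-text",  # placeholder model name
    "input": ["The quick brown fox", "jumped over the lazy dog"],
}

def extract_vectors(result):
    """Pull embedding vectors out of an OpenAI-style response body."""
    return [item["embedding"] for item in result["data"]]

# Send with: response = requests.post(url, json=payload, headers=headers)
# vectors = extract_vectors(response.json())
```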
Advanced Usage
Streaming Responses
Get real-time streaming responses for longer conversations:
```python
import requests
import json

url = "https://mindcraft.riqvip.dev/api/andy/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY_HERE"  # Optional for chat
}
payload = {
    "model": "sweaterdog/andy-4:latest",
    "messages": [{"role": "user", "content": "Tell me a detailed story"}],
    "stream": True,
    "temperature": 0.8
}

response = requests.post(url, json=payload, headers=headers, stream=True)
for line in response.iter_lines():
    if line:
        line = line.decode("utf-8")
        if line.startswith("data: "):
            data = line[6:]
            if data != "[DONE]":
                chunk = json.loads(data)
                content = chunk["choices"][0]["delta"].get("content", "")
                print(content, end="", flush=True)
```
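The parsing logic in the loop above can be factored into a small helper for reuse and testing. `parse_sse_line` is a hypothetical name, not part of the API; it handles one Server-Sent Events line, returning the text delta or `None` for non-data lines and the final `[DONE]` marker:

```python
import json

def parse_sse_line(line: str):
    """Parse one SSE line from a streaming response; return the text delta or None."""
    if not line.startswith("data: "):
        return None  # comments, blank keep-alives, etc.
    data = line[len("data: "):]
    if data == "[DONE]":
        return None  # end-of-stream marker
    chunk = json.loads(data)
    return chunk["choices"][0]["delta"].get("content", "")

print(parse_sse_line('data: {"choices":[{"delta":{"content":"Hi"}}]}'))  # → Hi
print(parse_sse_line("data: [DONE]"))  # → None
```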
OpenAI Library Compatibility
Use Andy API as a drop-in replacement for OpenAI:
```python
from openai import OpenAI

# Initialize client with Andy API
client = OpenAI(
    api_key="YOUR_API_KEY_HERE",  # Optional for chat completions
    base_url="https://mindcraft.riqvip.dev/api/andy/v1"
)

# Use exactly like OpenAI
response = client.chat.completions.create(
    model="sweaterdog/andy-4:latest",
    messages=[
        {"role": "user", "content": "Explain neural networks"}
    ],
    temperature=0.7,
    max_tokens=300
)

print(response.choices[0].message.content)
```
Error Handling
```python
import requests
from time import sleep

def safe_api_call(payload, max_retries=3):
    url = "https://mindcraft.riqvip.dev/api/andy/v1/chat/completions"
    for attempt in range(max_retries):
        try:
            response = requests.post(url, json=payload, timeout=30)
            if response.status_code == 200:
                return response.json()
            elif response.status_code == 429:  # Rate limited
                sleep(2 ** attempt)  # Exponential backoff
                continue
            else:
                response.raise_for_status()
        except requests.exceptions.RequestException as e:
            if attempt == max_retries - 1:
                raise e
            sleep(1)
    raise Exception("Max retries exceeded")
```
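When a request fails outright, it can help to summarize the error body for logs. The sketch below assumes the Andy API returns OpenAI-style error objects (`{"error": {"message": ...}}`), which this page does not confirm, so it falls back to the raw body; `describe_error` is a hypothetical helper:

```python
# Assumption: error responses follow the OpenAI {"error": {"message": ...}}
# shape; anything else is stringified as-is.
def describe_error(status_code, body):
    """Return a human-readable one-line description of an error response."""
    if isinstance(body, dict) and "error" in body:
        detail = body["error"]
        message = detail.get("message", str(detail)) if isinstance(detail, dict) else str(detail)
    else:
        message = str(body)
    return f"HTTP {status_code}: {message}"

print(describe_error(429, {"error": {"message": "Rate limit exceeded"}}))
# → HTTP 429: Rate limit exceeded
```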
Contributing Compute Power
Help grow the Andy API network by contributing your GPU compute power. Join the distributed pool and earn rewards while helping the community access AI models.
Setup Local Client