# Mindcraft CE

Crafting minds for Minecraft with LLMs and Mineflayer!

The experimental version of Mindcraft!
FAQ | Discord Support | Website | Andy API | Video Tutorial | Blog Post | Paper Website | MineCollab
> [!CAUTION]
> Do not connect this bot to public servers with coding enabled. This project allows an LLM to write and execute code on your computer. The code is sandboxed, but still vulnerable to injection attacks. Code writing is disabled by default; you can enable it by setting `allow_insecure_coding` to `true` in `settings.js`. Ye be warned.
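For reference, the relevant toggle looks like this (a minimal fragment of `settings.js`; the surrounding file contains many other options):

```js
// settings.js (fragment): code writing is disabled by default
"allow_insecure_coding": false, // set to true only if you accept the injection risk
```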
## New Experimental Features

Mindcraft CE is the experimental fork of Mindcraft, featuring unique implementations and unmerged PRs from the original repository. Each branch offers distinct features not found in the others.
| Branch | Focus | Status | Key Features |
|---|---|---|---|
| `stable` | Production ready | Stable | Confirmed working snapshot |
| `develop` | Active development | Beta | Upstream + extra/unique content |
| `r0.1` | Complete revamp | Experimental | Ground-up redesign |
| `agent-system` | AI tooling | Experimental | Function calling, RAG, tool-based prompting |
> [!WARNING]
> Some of the new features may not work correctly; proceed at your own risk. If you encounter problems, consider contributing by submitting a pull request to the corresponding branch.
### Revamp 0.1

You can access this on the `r0.1` branch.

A ground-up rework of Mindcraft in its entirety. This will inevitably become the new core architecture of mindcraft-ce, separating out all the current additions.
### Agent System

You can access this on the `agent-system` branch.
#### Function Calling

- `use_function_calling`: new tool-based AI interaction setting in `settings.js`
- Enables structured tool calls instead of text-based commands
- Supported across Claude, GPT, Gemini, Grok, DeepSeek, and Mistral models
#### RAG System (Retrieval-Augmented Generation)

- LanceDB integration: vector database for intelligent context retrieval
- `RAGManager`: new class for handling memory and knowledge retrieval
#### Tool-Based Prompting

- Modular prompt system with separate XML templates: `conversing.xml`, `coding.xml`, `bot_responder.xml`, `image_analysis.xml`, `saving_memory.xml`
- `_default.tools.json`: new tool-based profile configuration
- `_default.commands.json`: legacy command-based system (still supported)
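As a purely hypothetical illustration of what a tool-based profile configuration could contain, an entry in `_default.tools.json` might resemble the following. All field names here are assumptions for illustration only, not the actual schema; consult the file on the `agent-system` branch for the real format.

```json
{
  "tools": [
    {
      "name": "collect_blocks",
      "description": "Collect a number of the given block type.",
      "parameters": {
        "block_type": { "type": "string" },
        "count": { "type": "integer" }
      }
    }
  ]
}
```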
#### Enhanced Vision & Models

- Improved vision request handling across all model providers
- Andy API TTS implementation
#### Other Improvements

- Docker support with improved container configuration
- Multi-agent MineCollab framework
- OpenRouter integration for 100+ models
#### Coming Soon

- Model provider repositories: install and update model providers from external repositories via `model_provider_repositories` in `settings.js`
- Tools repositories: extend bot capabilities with community-created tools via `tools_provider_repositories` in `settings.js`
- Both support auto-install/update and manual management through the Mindserver UI
## Getting Started

### Requirements
- Minecraft Java Edition (up to v1.21.11; v1.21.6 recommended)
- Node.js installed (Node v18 or v20 LTS recommended; Node v24+ may cause issues with native dependencies)
- At least one API key from a supported provider (see supported APIs below). OpenAI is the default.
> [!IMPORTANT]
> If installing Node on Windows, ensure you check "Automatically install the necessary tools".
> If you encounter `npm install` errors on macOS, see the FAQ for troubleshooting native module build issues.
### Install and Run
1. Make sure you have the requirements above.
2. Download the latest release and unzip it, or clone the repository.
3. Rename `keys.example.json` to `keys.json` and fill in your API keys (you only need one). The desired model is set in `andy.json` or other profiles. For other models, refer to the table below.
4. In a terminal/command prompt, run `npm install` from the installed directory.
5. Start a Minecraft world and open it to LAN on localhost port `55916`.
6. Run `node main.js` from the installed directory.
If you encounter issues, check the FAQ or find support on Discord. We are currently not very responsive to GitHub issues. To run tasks, please refer to the MineCollab instructions.
## Configuration

### Model Customization
You can configure project details in `settings.js`. See the file for all options.

You can configure the agent's name, model, and prompts in its profile, like `andy.json`. The model can be specified with the `model` field, with values like `"model": "gemini-3.1-pro"`. You will need the correct API key for the API provider you choose. See all supported APIs below.
#### Supported APIs

| API Name | Config Variable | Docs |
|---|---|---|
| `openai` | `OPENAI_API_KEY` | docs |
| `google` | `GEMINI_API_KEY` | docs |
| `anthropic` | `ANTHROPIC_API_KEY` | docs |
| `xai` | `XAI_API_KEY` | docs |
| `deepseek` | `DEEPSEEK_API_KEY` | docs |
| `ollama` (local) | n/a | docs |
| `qwen` | `QWEN_API_KEY` | Intl. / cn |
| `mistral` | `MISTRAL_API_KEY` | docs |
| `replicate` | `REPLICATE_API_KEY` | docs |
| `groq` (not grok) | `GROQCLOUD_API_KEY` | docs |
| `huggingface` | `HUGGINGFACE_API_KEY` | docs |
| `novita` | `NOVITA_API_KEY` | docs |
| `openrouter` | `OPENROUTER_API_KEY` | docs |
| `glhf` | `GHLF_API_KEY` | docs |
| `hyperbolic` | `HYPERBOLIC_API_KEY` | docs |
| `vllm` | n/a | n/a |
| `cerebras` | `CEREBRAS_API_KEY` | docs |
| `mercury` | `MERCURY_API_KEY` | docs |
| `lmstudio` | n/a | docs |
For more comprehensive model configuration and syntax, see Model Specifications.
For local models, we recommend using LM Studio for the Andy series of models. Ollama breaks current models and should be avoided. Please see our Hugging Face page for more info. For a full breakdown of all Andy models, specs, and VRAM requirements, see the Andy Models page.
## Online Servers

To connect to online servers, your bot will need an official Microsoft/Minecraft account. You can use your own personal one, but you will need a second account if you want to connect alongside the bot and play with it. To connect, update the server connection settings in `settings.js`.
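The connection snippet referenced here did not survive extraction. Assuming the standard Mindcraft connection fields in `settings.js` (the IP address and port below are placeholders, and `"auth"` is an assumption based on the Microsoft account requirement described above), it would look roughly like:

```js
"host": "111.222.333.444", // the server IP or hostname, instead of "localhost"
"port": 25565,             // the server's port
"auth": "microsoft",       // authenticate with your Microsoft account
```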
> [!IMPORTANT]
> The bot's name in the profile json must exactly match the Minecraft profile name! Otherwise the bot will spam talk to itself.
To use different accounts: Mindcraft will connect with the account that the Minecraft launcher is currently using. You can switch accounts in the launcher, then run `node main.js`, then switch back to your main account after the bot has connected.
## Tasks

Tasks automatically start the bot with a prompt and a goal item to acquire or blueprint to construct. To run a simple task that involves collecting 4 `oak_log`s, run:

```bash
node main.js --task_path tasks/basic/single_agent.json --task_id gather_oak_logs
```
Here is an example task json format:
{
"gather_oak_logs": {
"goal": "Collect at least four logs",
"initial_inventory": {
"0": {
"wooden_axe": 1
}
},
"agent_count": 1,
"target": "oak_log",
"number_of_target": 4,
"type": "techtree",
"max_depth": 1,
"depth": 0,
"timeout": 300,
"blocked_actions": {
"0": [],
"1": []
},
"missing_items": [],
"requires_ctable": false
}
}
The `initial_inventory` is what the bot will have at the start of the episode; `target` refers to the target item, and `number_of_target` refers to the number of target items the agent needs to collect to successfully complete the task.
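As an illustrative sketch (not the actual mindcraft-ce evaluation code), the success condition described above amounts to comparing the bot's inventory count of `target` against `number_of_target`:

```javascript
// Illustrative sketch: a task succeeds once the agent holds enough of the target item.
function isTaskComplete(task, inventory) {
  const count = inventory[task.target] ?? 0; // items the agent currently holds
  return count >= task.number_of_target;
}

// Example with the gather_oak_logs task from above:
const task = { target: "oak_log", number_of_target: 4 };
isTaskComplete(task, { oak_log: 3 }); // false
isTaskComplete(task, { oak_log: 4 }); // true
```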
If you want more optimization and automatic launching of the Minecraft world, you will need to follow the MineCollab instructions.
## Docker Container

If you intend to `allow_insecure_coding`, it is a good idea to run the app in a Docker container to reduce the risks of running unknown code. This is strongly recommended before connecting to remote servers, although it still does not guarantee complete safety.
```bash
docker build -t mindcraft . && docker run --rm --add-host=host.docker.internal:host-gateway -p 8080:8080 -p 3000-3003:3000-3003 -e SETTINGS_JSON='{"auto_open_ui":false,"profiles":["./profiles/gemini.json"],"host":"host.docker.internal"}' --volume ./keys.json:/app/keys.json --name mindcraft mindcraft
```
When running in Docker, if you want the bot to join your local Minecraft server, you have to use the special host address `host.docker.internal` to reach your localhost from inside the container. Put this into your `settings.js`:

```js
"host": "host.docker.internal", // instead of "localhost", to join your local minecraft from inside the docker container
```
To connect to an unsupported Minecraft version, you can try using ViaProxy.
## Bot Profiles

Bot profiles are JSON files (such as `andy.json`) that define:

- The backend LLMs the bot uses for talking, coding, and embedding.
- Prompts used to influence the bot's behavior.
- Examples that help the bot perform tasks.
## Model Specifications

LLM models can be specified simply as `"model": "gpt-5.4"`, or more specifically with `"{api}/{model}"`, like `"openrouter/google/gemini-3.1-pro"`. See all supported APIs above.
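A rough sketch of how such a string can be split into provider and model (illustrative only; the real parser may differ):

```javascript
// Split "{api}/{model}" at the first slash; everything after it is the model id,
// which may itself contain slashes (e.g. OpenRouter model paths).
function parseModelString(spec) {
  const i = spec.indexOf("/");
  if (i === -1) return { model: spec }; // plain model name; the api is inferred elsewhere
  return { api: spec.slice(0, i), model: spec.slice(i + 1) };
}

parseModelString("openrouter/google/gemini-3.1-pro");
// { api: "openrouter", model: "google/gemini-3.1-pro" }
```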
The `model` field can be a string or an object. A model object must specify an `api`, and optionally a `model`, `url`, and additional `params`. You can also use different models/providers for chatting, coding, vision, embedding, and voice synthesis. See the example below.
"model": {
"api": "openai",
"model": "gpt-5.2",
"url": "https://api.openai.com/v1/",
"params": {
"max_tokens": 1000,
"temperature": 1
}
},
"code_model": {
"api": "openai",
"model": "gpt-4.1",
"url": "https://api.openai.com/v1/"
},
"vision_model": {
"api": "openai",
"model": "gpt-5.2",
"url": "https://api.openai.com/v1/"
},
"embedding": {
"api": "openai",
"url": "https://api.openai.com/v1/",
"model": "text-embedding-3-large"
},
"speak_model": "openai/tts-1/echo"
`model` is used for chat, `code_model` is used for newAction coding, `vision_model` is used for image interpretation, `embedding` is used to embed text for example selection, and `speak_model` is used for voice synthesis. If the others are not specified, `model` is used for them by default. Not all APIs support embeddings, vision, or voice synthesis.

All APIs have default models and urls, so those fields are optional. The `params` field is optional and can be used to specify additional parameters for the model. It accepts any key-value pairs supported by the API. It is not supported for embedding models.
### Embedding Models

Embedding models are used to embed and efficiently select relevant examples for conversation and coding.
Supported embedding APIs: `openai`, `google`, `replicate`, `huggingface`, `novita`.

If you try to use an unsupported embedding model, it will default to a simple word-overlap method. Expect reduced performance; we recommend using a supported embedding API.
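The word-overlap fallback can be pictured as something like the following (an illustrative sketch, not the actual implementation):

```javascript
// Score two texts by the fraction of unique words they share.
// Crude compared to real embeddings, but needs no API at all.
function wordOverlapScore(a, b) {
  const wordsA = new Set(a.toLowerCase().split(/\s+/).filter(Boolean));
  const wordsB = new Set(b.toLowerCase().split(/\s+/).filter(Boolean));
  let shared = 0;
  for (const w of wordsA) if (wordsB.has(w)) shared++;
  return shared / Math.max(wordsA.size, wordsB.size, 1);
}

wordOverlapScore("collect four oak logs", "collect oak logs"); // 0.75
```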
### Voice Synthesis Models

Voice synthesis models are used to narrate bot responses and are specified with `speak_model`. This field is parsed differently than other models and only supports strings formatted as `"{api}/{model}/{voice}"`, like `"openai/tts-1/echo"`. Only `openai` and `google` are supported for voice synthesis.
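Since the format is fixed at three segments, parsing this field is a straightforward split (an illustrative sketch, not the actual parser):

```javascript
// "{api}/{model}/{voice}" -> its three components, e.g. "openai/tts-1/echo"
function parseSpeakModel(spec) {
  const [api, model, voice] = spec.split("/");
  return { api, model, voice };
}

parseSpeakModel("openai/tts-1/echo");
// { api: "openai", model: "tts-1", voice: "echo" }
```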
### Specifying Profiles via Command Line

By default, the program will use the profiles specified in `settings.js`. You can specify one or more agent profiles using the `--profiles` argument:

```bash
node main.js --profiles ./profiles/andy.json ./profiles/jill.json
```
## Contributing

We welcome contributions to the project! We are generally less responsive to GitHub issues and more responsive to pull requests. Join the Discord for more active support and direction.

While AI-generated code is allowed, please vet it carefully. Submitting heaps of sloppy code and documentation actively harms development.
### Patches

Some of the node modules that we depend on have bugs in them. To add a patch, change your local node module file and run `npx patch-package [package-name]`.
## Development Team
@Sweaterdog | @riqvip | @uukelele | @mrelmida
Also thanks to all the other developers of the Mindcraft project: @MaxRobinsonTheGreat, @kolbytn, @icwhite, @Ninot1Quyi
## Citation

This work is published in the paper Collaborating Action by Action: A Multi-agent LLM Framework for Embodied Reasoning. Please use this citation if you use this project in your research:
```bibtex
@article{mindcraft2025,
  title   = {Collaborating Action by Action: A Multi-agent LLM Framework for Embodied Reasoning},
  author  = {White*, Isadora and Nottingham*, Kolby and Maniar, Ayush and Robinson, Max and Lillemark, Hansen and Maheshwari, Mehul and Qin, Lianhui and Ammanabrolu, Prithviraj},
  journal = {arXiv preprint arXiv:2504.17950},
  year    = {2025},
  url     = {https://arxiv.org/abs/2504.17950},
}
```
## Contributors

Thanks to everyone who has submitted issues on and off GitHub, made suggestions, and generally helped make this a better project.