# Setting up Auto-GPT

## 📋 Requirements
Choose an environment to run Auto-GPT in (pick one):

- Docker (recommended)
- Python 3.10 or later (see the installation instructions for Windows)
- VSCode + devcontainer
## 🗝️ Getting an API key

Get your OpenAI API key from https://platform.openai.com/account/api-keys.
**Attention**
To use the OpenAI API with Auto-GPT, we strongly recommend setting up billing (i.e. a paid account). Free accounts are limited to 3 API calls per minute, which can cause the application to crash.
You can set up a paid account at *Manage account > Billing > Overview*.
**Important**
It's highly recommended that you keep track of your API costs on the Usage page. You can also set limits on how much you spend on the Usage limits page.
## Setting up Auto-GPT

### Set up with Docker
1. Make sure you have Docker installed; see the requirements above.

2. Pull the latest image from Docker Hub:

   ```shell
   docker pull significantgravitas/auto-gpt
   ```

3. Create a folder for Auto-GPT.

4. In the folder, create a file called `docker-compose.yml` with the following contents:

   ```yaml
   version: "3.9"
   services:
     auto-gpt:
       image: significantgravitas/auto-gpt
       depends_on:
         - redis
       env_file:
         - .env
       environment:
         MEMORY_BACKEND: ${MEMORY_BACKEND:-redis}
         REDIS_HOST: ${REDIS_HOST:-redis}
       profiles: ["exclude-from-up"]
       volumes:
         - ./auto_gpt_workspace:/app/auto_gpt_workspace
         - ./data:/app/data
         ## allow auto-gpt to write logs to disk
         - ./logs:/app/logs
         ## uncomment following lines if you have / want to make use of these files
         #- ./azure.yaml:/app/azure.yaml
         #- ./ai_settings.yaml:/app/ai_settings.yaml
     redis:
       image: "redis/redis-stack-server:latest"
   ```

5. Create the necessary configuration files. If needed, you can find templates in the repository.

6. Continue to *Run with Docker* below.
**Note: Docker only supports headless browsing**
Auto-GPT uses a browser in headless mode by default: `HEADLESS_BROWSER=True`. Please do not change this setting in combination with Docker, or Auto-GPT will crash.
### Set up with Git

**Important**
Make sure you have Git installed for your OS.
**Info: Executing commands**
To execute the given commands, open a CMD, Bash, or PowerShell window.
On Windows: press `Win`+`X` and pick *Terminal*, or press `Win`+`R` and enter `cmd`.
1. Clone the repository:

   ```shell
   git clone -b stable https://github.com/Significant-Gravitas/Auto-GPT.git
   ```

2. Navigate to the directory where you downloaded the repository:

   ```shell
   cd Auto-GPT
   ```
### Set up without Git/Docker

**Warning**
We recommend using Git or Docker to make updating easier.

1. Download `Source code (zip)` from the latest stable release.
2. Extract the zip file into a folder.
### Configuration

1. Find the file named `.env.template` in the main `Auto-GPT` folder. This file may be hidden by default in some operating systems due to the dot prefix. To reveal hidden files, follow the instructions for your specific operating system (Windows, macOS).

2. Create a copy of `.env.template` and call it `.env`; if you're already in a command prompt/terminal window:

   ```shell
   cp .env.template .env
   ```

3. Open the `.env` file in a text editor.

4. Find the line that says `OPENAI_API_KEY=`.

5. After the `=`, enter your unique OpenAI API key without any quotes or spaces.

6. Enter any other API keys or tokens for services you would like to use.

   **Note**
   To activate and adjust a setting, remove the `#` prefix.

7. Save and close the `.env` file.
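For illustration, after these steps the relevant lines of your `.env` might look something like this (the key shown is a placeholder, not a real value):

```ini
# Active setting: no leading '#'
OPENAI_API_KEY=<your-openai-api-key>

# Inactive setting: remove the '#' prefix to activate and adjust it
# MEMORY_BACKEND=redis
```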
**Info: Using a GPT Azure instance**
If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and make an Azure configuration file:

- Rename `azure.yaml.template` to `azure.yaml` and provide the relevant `azure_api_base`, `azure_api_version` and all the deployment IDs for the relevant models in the `azure_model_map` section:
  - `fast_llm_model_deployment_id`: your gpt-3.5-turbo or gpt-4 deployment ID
  - `smart_llm_model_deployment_id`: your gpt-4 deployment ID
  - `embedding_model_deployment_id`: your text-embedding-ada-002 v2 deployment ID

Example:

```yaml
# Please specify all of these values as double-quoted strings
# Replace the strings in angled brackets (<>) with your own IDs
azure_model_map:
    fast_llm_model_deployment_id: "<my-fast-llm-deployment-id>"
    ...
```

Details can be found in the openai-python docs, and in the Azure OpenAI docs for the embedding model. If you're on Windows you may need to install an MSVC library.
## Running Auto-GPT

### Run with Docker
The easiest way is to use `docker-compose`. Run the commands below in your Auto-GPT folder.

1. Build the image. If you have pulled the image from Docker Hub, skip this step:

   ```shell
   docker-compose build auto-gpt
   ```

2. Run Auto-GPT:

   ```shell
   docker-compose run --rm auto-gpt
   ```

   By default, this will also start and attach a Redis memory backend. If you do not want this, comment out or remove the `depends_on: - redis` and `redis:` sections from `docker-compose.yml`.
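For reference, a minimal sketch of `docker-compose.yml` without the Redis backend might look like this (assuming you switch `MEMORY_BACKEND` to a non-Redis backend such as `local`; adjust to your setup):

```yaml
version: "3.9"
services:
  auto-gpt:
    image: significantgravitas/auto-gpt
    env_file:
      - .env
    environment:
      MEMORY_BACKEND: ${MEMORY_BACKEND:-local}
    profiles: ["exclude-from-up"]
    volumes:
      - ./auto_gpt_workspace:/app/auto_gpt_workspace
      - ./data:/app/data
      - ./logs:/app/logs
```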
You can pass extra arguments, e.g. running with `--gpt3only` and `--continuous`:

```shell
docker-compose run --rm auto-gpt --gpt3only --continuous
```
If you dare, you can also build and run it with "vanilla" docker commands:

```shell
docker build -t auto-gpt .
# On Windows (CMD), use %cd% instead of $PWD
docker run -it --env-file=.env -v $PWD:/app auto-gpt
docker run -it --env-file=.env -v $PWD:/app --rm auto-gpt --gpt3only --continuous
```
### Run with Dev Container

1. Install the Remote - Containers extension in VS Code.
2. Open the command palette with `F1` and type `Dev Containers: Open Folder in Container`.
3. Run `./run.sh`.
### Run without Docker

Simply run the startup script in your terminal. This will install any necessary Python packages and launch Auto-GPT.

- On Linux/macOS:

  ```shell
  ./run.sh
  ```

- On Windows:

  ```shell
  .\run.bat
  ```

If this gives errors, make sure you have a compatible Python version installed. See also the requirements.
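If you're unsure which Python version you have, a quick check (assuming `python3` is on your PATH; on Windows the command is usually `python` or `py`):

```shell
# Print the installed Python version; Auto-GPT requires Python 3.10 or later.
python3 --version
```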