How to Build an AI Agent with ElizaOS

Step-by-step guide to building an autonomous AI agent with ElizaOS. Character files, model backends (Venice, OpenAI, Ollama), Twitter integration, blockchain actions, and realistic cost expectations.

What ElizaOS is

ElizaOS is an open-source TypeScript framework for building autonomous AI agents. It has 17,700+ GitHub stars, 5,500+ forks, and is the most popular AI agent tool in web3. It is MIT licensed and free to use. No token required.

Your agent can post on Twitter, respond in Discord, execute blockchain transactions, maintain persistent memory, and operate 24/7 without intervention. It handles the plumbing. You define the personality and capabilities.

For the full project assessment, see our ElizaOS review.

Version note: This guide covers ElizaOS v1.x (latest stable: v1.7.2). ElizaOS v2 is in alpha (v2.0.0-alpha.31 as of March 2026) with changes to the plugin namespace and an event-driven architecture. When v2 reaches stable release, this guide will be updated. The core concepts (character files, plugin system, model backends) carry over.

What you need

  • Node.js v23 or later, not v20 or v22. The wrong Node version is the single most common source of errors.
  • Bun runtime (from bun.sh)
  • Git
  • At least one AI model provider: OpenAI API key, Anthropic API key, Venice API key, or Ollama installed locally
  • Terminal familiarity

Windows users: use WSL2 or Git Bash. PowerShell and CMD are not supported.

Step 1: Install prerequisites

# Install nvm (skip if you have it)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash

# Install Node.js v23
nvm install 23
nvm use 23

# Verify
node --version  # Must be v23+

# Install Bun
curl -fsSL https://bun.sh/install | bash

# If 'elizaos' command not found after install:
export PATH="$HOME/.bun/bin:$PATH"
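Before moving on, it is worth guarding against the version pitfall explicitly. A small sketch that parses node --version and warns if the major version is below 23 (it assumes the usual vX.Y.Z output format):

```shell
# Fail fast if Node.js is older than v23 (the most common setup error)
version=$(node --version 2>/dev/null || echo "v0.0.0")
major=${version#v}; major=${major%%.*}
if [ "$major" -lt 23 ]; then
  echo "Node $version is too old. Run: nvm install 23 && nvm use 23" >&2
else
  echo "Node version OK: $version"
fi
```

Drop it into your shell profile or a setup script if you switch Node versions often.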

Step 2: Create your agent

The CLI method is the fastest path:

# Install CLI
bun i -g @elizaos/cli

# Create project (interactive, prompts for DB and model provider)
elizaos create my-agent

# Start
cd my-agent
elizaos start

During creation, you choose a database (pglite for development) and model provider (OpenAI or Anthropic). Enter your API key when prompted.
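The generated project keeps its configuration in a .env file at the project root. A minimal sketch, assuming you chose OpenAI and pglite (variable names follow ElizaOS's .env.example; confirm against the file your version generates):

```shell
# .env (sketch): OpenAI backend with the pglite dev database
OPENAI_API_KEY=sk-your-key-here
PGLITE_DATA_DIR=./.eliza/.elizadb
```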

For more control, clone the repo directly:

git clone https://github.com/elizaOS/eliza
cd eliza
npm install -g pnpm
pnpm install --no-frozen-lockfile
cp .env.example .env
# Edit .env with your API keys
pnpm build
pnpm start --characters="characters/your-character.character.json"

Step 3: Define the character

Character files are JSON configs that define everything about your agent’s personality and capabilities. This is where you spend the most time.

A minimal character file:

{
  "name": "MyAgent",
  "bio": "A straightforward AI that analyses DeAI projects.",
  "plugins": ["@elizaos/plugin-openai"]
}

A more complete one:

{
  "name": "DeAIBot",
  "bio": [
    "An AI analyst covering decentralised AI infrastructure.",
    "Values data over narrative. Calls out hype."
  ],
  "lore": [
    "Has tracked DeAI since 2024.",
    "Believes sovereignty matters more than convenience."
  ],
  "knowledge": [
    "Bittensor uses a 21M TAO cap with halving schedule.",
    "Venice offers private inference with no content filtering."
  ],
  "topics": ["deai", "tokenomics", "gpu compute", "privacy"],
  "adjectives": ["direct", "analytical", "sceptical"],
  "style": {
    "all": ["concise", "uses specific numbers"],
    "post": ["under 280 characters", "asks questions"],
    "chat": ["helpful but honest about limitations"]
  },
  "modelProvider": "venice",
  "clients": ["twitter"],
  "plugins": ["@elizaos/plugin-venice", "@elizaos/plugin-twitter"]
}

Key fields:

  • bio can be a string or array (arrays get randomised for variation)
  • lore is extended backstory and beliefs
  • knowledge contains facts fed into RAG retrieval
  • messageExamples are sample conversations for tone calibration
  • postExamples are example social media posts
  • style defines writing rules per context (all, chat, post)
  • modelProvider sets which LLM backend to use ("openai", "anthropic", "venice", "ollama")

ElizaOS includes tools for generating character files from existing data: tweets2character (from your tweets), folder2knowledge (from documents), and chats2character (from chat logs).
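Two fields listed above but not shown in the examples are messageExamples and postExamples. A sketch of their shape (the exact schema has shifted between ElizaOS versions, so treat this as illustrative and check the character file docs for your version):

```json
{
  "messageExamples": [
    [
      { "name": "{{user1}}", "content": { "text": "Is this token worth buying?" } },
      { "name": "DeAIBot", "content": { "text": "I don't give buy calls. What I can tell you: emissions run well ahead of stated supply growth. Read the docs before the chart." } }
    ]
  ],
  "postExamples": [
    "Everyone quotes the GPU count. Nobody quotes utilisation. Ask for the second number.",
    "Decentralised in the whitepaper, three multisig signers in practice. Which one are you pricing?"
  ]
}
```

These matter more than they look: the model imitates their tone far more reliably than it follows abstract style rules.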

Choosing a model backend

Provider | Plugin | Cost | Privacy | Best for
OpenAI | @elizaos/plugin-openai | ~$5-30/month | Data sent to OpenAI | Highest capability
Anthropic | @elizaos/plugin-anthropic | ~$5-30/month | Data sent to Anthropic | Strong reasoning
Venice | @elizaos/plugin-venice | VVV staking or Pro sub ($18/mo) | Private inference, no logs, 47+ models | Sovereignty-focused
Ollama | @elizaos/plugin-llama | Free (hardware cost) | Fully local | Maximum privacy

For Venice, set these environment variables:

VENICE_API_KEY=your_key
VENICE_SMALL_MODEL=llama-3.3-70b
VENICE_LARGE_MODEL=zai-org-glm-4.7
VENICE_IMAGE_MODEL=venice-sd35
VENICE_EMBEDDING_MODEL=text-embedding-bge-m3

Venice’s default large model is now zai-org-glm-4.7 (GLM 4.7, 198K context). It replaced llama-3.1-405b as the recommended option. For image generation, venice-sd35 is the ElizaOS default; flux-2-max and qwen-image are also available. The embedding model text-embedding-bge-m3 remains unchanged. Check Venice’s API docs for the full model list; Venice now offers 47+ text models including Claude, GPT, Gemini, and Grok variants.
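Because Venice's API is OpenAI-compatible, you can smoke-test your key outside ElizaOS before wiring it in. A sketch; the base URL is my reading of Venice's API docs, so verify it there:

```shell
# Build an OpenAI-style chat request for Venice (model name from the env vars above)
payload='{"model": "llama-3.3-70b", "messages": [{"role": "user", "content": "ping"}]}'
echo "$payload"
# Uncomment to send for real (needs VENICE_API_KEY exported):
# curl -s https://api.venice.ai/api/v1/chat/completions \
#   -H "Authorization: Bearer $VENICE_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
```

A 200 response with a choices array means the key and model name are good; an auth error here saves you a confusing agent-level failure later.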

For Ollama (local inference):

# Install Ollama first (brew install ollama on Mac)
ollama serve &
ollama pull mistral
# Set in .env:
OLLAMA_MODEL=mistral
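Before starting the agent, confirm the Ollama server is reachable and the model is actually pulled. A sketch using Ollama's /api/tags endpoint, which lists local models:

```shell
# Check that the Ollama server is up and mistral is pulled before starting the agent
response=$(curl -s http://localhost:11434/api/tags 2>/dev/null || true)
case "$response" in
  *'"name":"mistral'*) echo "mistral is available" ;;
  *) echo "mistral not found; start 'ollama serve' and run 'ollama pull mistral'" ;;
esac
```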

I would use Venice for a sovereignty-first agent. Your prompts and responses stay private, you get access to uncensored open-weight models, and the API is OpenAI-compatible. See our Venice review for details on the privacy model.

Step 4: Add Twitter integration

This is the most common use case: an agent that posts autonomously on X/Twitter.

Install the plugin:

elizaos plugins add @elizaos/plugin-twitter

Set environment variables. The recommended authentication method is OAuth 1.0a (not OAuth 2.0):

# OAuth 1.0a auth (recommended)
TWITTER_API_KEY=your_api_key
TWITTER_API_SECRET_KEY=your_api_secret_key
TWITTER_ACCESS_TOKEN=your_access_token
TWITTER_ACCESS_TOKEN_SECRET=your_access_token_secret

# Behaviour controls
TWITTER_DRY_RUN=true          # Start with this ON
TWITTER_ENABLE_POST=true
TWITTER_ENABLE_REPLIES=true
TWITTER_ENABLE_ACTIONS=true
POST_INTERVAL_MIN=90          # Minutes between posts
POST_INTERVAL_MAX=180

You will need a Twitter Developer account with OAuth 1.0a credentials. The older username/password authentication still exists as a legacy option but is not recommended.

Add "clients": ["twitter"] to your character file and launch.

Always start with TWITTER_DRY_RUN=true. Review the output before letting your agent post live. Autonomous social media posting without supervision is how agents embarrass their creators.

X/Twitter aggressively rate-limits API calls. Start with conservative posting intervals (90-180 minutes) and increase only once you confirm your account is not getting flagged.
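Those interval settings translate directly into posting volume. With POST_INTERVAL_MIN=90 and POST_INTERVAL_MAX=180, the average gap is 135 minutes:

```shell
# Average posts per day at a random 90-180 minute posting interval
awk 'BEGIN { avg = (90 + 180) / 2; printf "%.1f posts/day\n", 24 * 60 / avg }'
# prints "10.7 posts/day"
```

Roughly ten posts a day is plenty for a new account; widen the intervals further if replies are enabled, since those count against the same limits.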

Step 5: Add blockchain capabilities

ElizaOS has 23+ blockchain plugins covering most major chains.

# Solana: transfers, Jupiter swaps, NFTs
elizaos plugins add @elizaos/plugin-solana

# EVM: Ethereum, Base, Arbitrum transfers and swaps
elizaos plugins add @elizaos/plugin-evm

Set your private key in .env:

SOLANA_PRIVATE_KEY=your_key
# or
EVM_PRIVATE_KEY=your_key

Configure chains in the character file:

"settings": {
  "chains": {
    "evm": ["base", "arbitrum"]
  }
}

Once configured, you can tell the agent “Transfer 0.01 ETH to 0xABC… on Base” via natural language. Always specify the chain explicitly. Without it, the agent may default to mainnet and execute real transactions when you meant to test.
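To follow the testnets-first advice, point the plugin at test networks before funding a mainnet key. A sketch; the chain identifiers below are my assumption based on viem's naming, so confirm the exact names in the plugin-evm docs:

```json
"settings": {
  "chains": {
    "evm": ["sepolia", "baseSepolia"]
  }
}
```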

Memory system

ElizaOS maintains persistent memory across conversations using vector embeddings and semantic search. Three database options:

  • PGLite, embedded PostgreSQL (3MB WASM), good for development
  • SQLite, file-based, lightweight
  • PostgreSQL, full Postgres with pgvector, recommended for production

The memory system automatically extracts facts from conversations, stores them as embeddings, and retrieves relevant context for future interactions. Your agent remembers what it has been told and builds on it.
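For the production Postgres option, the connection string goes in .env. The POSTGRES_URL variable name follows ElizaOS's .env.example (verify against your version); the host, credentials and database name below are placeholders:

```shell
# .env — production database (the pgvector extension must be installed)
POSTGRES_URL=postgresql://eliza:password@localhost:5432/eliza
```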

Running costs

ElizaOS itself is free. Costs come from the AI model provider:

Setup | Estimated monthly cost
Ollama (local, 7B model) | $0 (electricity only)
Venice Pro | $18/month (unlimited text)
OpenAI GPT-4o (moderate use) | $5-30/month
Anthropic Claude (moderate use) | $5-30/month
VPS hosting (if not running locally) | $5-50/month

A moderately active Twitter agent making 8-16 posts per day with reply handling will use roughly 500K-2M tokens per day depending on the model and conversation volume. At OpenAI rates, that is $1-5/day.
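A quick sanity check on that estimate, assuming a hypothetical blended rate of $2.50 per million tokens (actual rates vary by model and by the input/output split):

```shell
# 2M tokens/day at an assumed blended $2.50 per 1M tokens
awk 'BEGIN { printf "%.2f USD/day\n", 2000000 / 1000000 * 2.50 }'
# prints "5.00 USD/day"
```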

Common pitfalls

  1. Wrong Node.js version. Must be v23+, not v20 or v22. Check with node --version.
  2. Bun not in PATH. Add export PATH="$HOME/.bun/bin:$PATH" to your shell profile after installing Bun.
  3. Plugin install failures. If auto-install fails, manually install: bun add @elizaos/plugin-name.
  4. Mainnet transactions by accident. Always specify the target chain when using blockchain plugins. Test on testnets first.
  5. Twitter rate limits. Space out posts. Start conservative.
  6. .env committed to Git. Never commit API keys. Ensure .env is in .gitignore.
  7. Windows without WSL. PowerShell and CMD do not work. Use WSL2 or Git Bash.

The token question

The $elizaOS token exists but is not required to build or run agents. ElizaOS is fully open-source under MIT licence. You can build, deploy and operate agents without holding a single token.

The token funds the broader DAO: cross-chain agent coordination and governance. The framework-to-token connection is loose, which is why our review gives ElizaOS a returns score of 2.7/10 despite the tool being genuinely good.

My assessment

ElizaOS is the best open-source AI agent framework available today. The plugin ecosystem is extensive (90+ plugins), the character file system makes personality definition straightforward, and the community is active.

The honest qualification: building an agent that does something useful takes more than five minutes. ElizaOS handles the plumbing well, but defining a compelling character, tuning behaviour, and maintaining the agent requires ongoing attention. The agents that succeed on Twitter are the ones with genuine personality and purpose, not just technical deployment.

I would start with a Venice-powered agent running locally on my Mac Studio. Define a focused character, connect Twitter with dry-run enabled, iterate on the personality until the output is consistently good, then go live. The total cost: a Venice Pro subscription ($18/month) or free if using Ollama locally.
