Play via API with an LLM

CORP//LLM was designed from the ground up as an LLM-first game: async turn structure, structured JSON game state, clear action format. You don’t build “a bot that plays for you” — you play, programmatically, with an LLM as your strategic mind.

This page shows you how to drive your daily play with Claude, Gemini, or ChatGPT: read your game state, generate the day’s plan, submit it via API. Same player, same rules — just no clicking.

Programmatic play is explicitly allowed. AI-driven opponents are a core part of every session and you’re welcome to be one.

🔗 Live API Docs: https://api.corpllm.io/api/docs (Swagger UI) · https://api.corpllm.io/api/openapi.json (OpenAPI Spec)


Service-Account Auth (alternative to user JWT)

Instead of connecting your personal user account, you can create a dedicated API-player account via /auth/bot (the endpoint name is historical; the issued account belongs to you, not the server). The endpoint returns a JWT carrying a beta=true claim:

curl -X POST https://api.corpllm.io/auth/bot \
  -H "X-Bot-Auth-Secret: $BOT_AUTH_SECRET" \
  -H "Content-Type: application/json" \
  -d '{
    "corp_name": "Helix Industries",
    "strategy": "economist",
    "temperament": "cautious",
    "difficulty": "normal"
  }'
# → {"token":"eyJ...","player_id":"uuid"}

Allowed strategy values: aggressor, economist, diplomat, hacker, econhacker, aggrodiplomat.

The X-Bot-Auth-Secret value is issued by the server admin. An API-player identity cannot be active in multiple sessions in parallel (the server returns 409).
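If you script account creation, a small client-side guard can catch payload typos before they hit the server. A minimal sketch (the field names and strategy values come from the example above; the helper itself is hypothetical, not part of the official API):

```python
# Client-side guard for the /auth/bot request body. Field names and the
# allowed strategy values mirror the docs above; the helper is illustrative.
ALLOWED_STRATEGIES = {"aggressor", "economist", "diplomat",
                      "hacker", "econhacker", "aggrodiplomat"}

def bot_payload(corp_name: str, strategy: str,
                temperament: str = "cautious",
                difficulty: str = "normal") -> dict:
    # Reject typos locally instead of waiting for a server-side error
    if strategy not in ALLOWED_STRATEGIES:
        raise ValueError(f"unknown strategy: {strategy!r}")
    return {"corp_name": corp_name, "strategy": strategy,
            "temperament": temperament, "difficulty": difficulty}
```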


What you need

pip install requests anthropic google-generativeai openai

Getting your token

  1. Log in at corpllm.io (Google, GitHub, or Discord)
  2. Join or create a session
  3. Click API MODE on the session card
  4. The API Console shows your JWT token (copy button), session ID, and server base URL

Save the token and session ID as environment variables:

export CORP_TOKEN="eyJhbGc..."
export CORP_SESSION="your-session-uuid"
export CORP_BASE_URL="https://api.corpllm.io"

Core Game Loop

Every day follows this pattern:

GET /state  →  LLM prompt  →  JSON plan  →  validate_plan  →  submit_plan  →  ready
                                                ↓ errors?
                                           retry with error context

Shared helper functions (corpllm_base.py)

import json
import os
import requests

BASE_URL   = os.environ["CORP_BASE_URL"]
TOKEN      = os.environ["CORP_TOKEN"]
SESSION_ID = os.environ["CORP_SESSION"]

HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}

def get_state() -> dict:
    r = requests.get(f"{BASE_URL}/api/sessions/{SESSION_ID}/state", headers=HEADERS)
    r.raise_for_status()
    return r.json()

def validate_plan(hood_id: str, actions: list) -> dict:
    r = requests.post(
        f"{BASE_URL}/api/sessions/{SESSION_ID}/validate_plan",
        headers=HEADERS,
        json={"hood_id": hood_id, "actions": actions},
    )
    r.raise_for_status()
    return r.json()

def submit_plan(hood_id: str, actions: list) -> dict:
    r = requests.post(
        f"{BASE_URL}/api/sessions/{SESSION_ID}/submit_plan",
        headers=HEADERS,
        json={"hood_id": hood_id, "actions": actions},
    )
    r.raise_for_status()
    return r.json()

def mark_ready():
    r = requests.post(f"{BASE_URL}/api/sessions/{SESSION_ID}/ready", headers=HEADERS)
    r.raise_for_status()


def run_bot(generate_plan, label: str = "LLM"):
    """Full daily loop. `generate_plan(state) -> dict` is the only
    provider-specific part (see adapters below)."""
    state   = get_state()
    day     = state.get("day")
    hood_id = state.get("player", {}).get("hoodId", "")

    print(f"[Day {day}] Analyzing with {label}...")
    plan    = generate_plan(state)
    print(f"[Day {day}] Strategy: {plan.get('strategy')}")

    actions = plan.get("actions", [])
    check   = validate_plan(hood_id, actions)
    if not check.get("valid"):
        errors = [e.get("message", str(e)) for e in check.get("errors", [])]
        print(f"[WARN] Validation errors: {errors}")
        return  # In production: re-prompt with error context (see below)

    submit_plan(hood_id, actions)
    mark_ready()
    print(f"[Day {day}] Plan submitted.")


def build_context(state: dict) -> str:
    """Compact game state summary for the LLM prompt."""
    p = state.get("player", {})
    available_runners = [
        r for r in p.get("runners", [])
        if not r.get("arrested") and not r.get("injured")
    ]
    return json.dumps({
        "day": state.get("day"),
        "clean_cash": p.get("cleanCash"),
        "dirty_cash": p.get("dirtyCash"),
        "federal_heat": round(p.get("federalHeat", 0), 2),
        "runners": [
            {
                "name": r["name"],
                "STR": r.get("str"), "INT": r.get("int"),
                "AGI": r.get("agi"), "STL": r.get("stl"),
                "TCH": r.get("tch"), "CHA": r.get("cha"),
            }
            for r in available_runners
        ],
        "buildings": [
            {"id": b["id"], "type": b["type"], "sector": b.get("sectorId")}
            for b in p.get("buildings", [])
        ],
        "leaderboard": state.get("leaderboard", [])[:5],
        "recent_events": [n.get("title") for n in state.get("notifications", [])[-5:]],
        "sector_ownership": state.get("sectorOwnership", {}),
    }, indent=2)

System prompt

SYSTEM_PROMPT = """You are a CORP//LLM strategist running a criminal megacorporation \
in a cyberpunk city. Analyze the game state and build the optimal daily plan.

Rules:
- Each runner may take exactly one action per day
- Arrested and injured runners are already excluded from the list
- Dirty cash is washed via Subsidiaries (no LAUNDER action — passive)
- High Federal Heat (≥ 40 YELLOW, ≥ 75 RED) drastically increases arrest and audit risk
- Target IDs must come directly from the game state — never invent them
- Low cash? Prioritize EXTORT or COLLECT
- PATROL increases visibility against enemy stealth actions, but +1 Hood Heat

Respond ONLY with valid JSON that matches the server schema exactly:
{
  "strategy": "One sentence rationale",
  "actions": [
    {
      "runner": "R0001",
      "command": "EXTORT",
      "target": "building-uuid-from-state"
    }
  ]
}"""

Provider Adapters

Each adapter exports a single function, generate_plan(state) -> dict. The loop, validation, submission, and ready-marking all run through corpllm_base.run_bot() (above).

SDK Call Comparison

|  | Claude | Gemini | ChatGPT |
|---|---|---|---|
| Package | `anthropic` | `google-generativeai` | `openai` |
| Model | `claude-sonnet-4-6` | `gemini-2.0-flash` | `gpt-4o-mini` |
| JSON mode | native (prompt) | `response_mime_type="application/json"` | `response_format={"type":"json_object"}` |
| System prompt | `system=` arg | `system_instruction=` config | `messages[0] = {"role":"system",...}` |
| Response path | `r.content[0].text` | `r.text` | `r.choices[0].message.content` |

corpllm_claude.py

import json, anthropic
from corpllm_base import build_context, run_bot, SYSTEM_PROMPT

client = anthropic.Anthropic()

def generate_plan(state: dict) -> dict:
    r = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": f"Game state:\n{build_context(state)}"}],
    )
    return json.loads(r.content[0].text)

if __name__ == "__main__":
    run_bot(generate_plan, label="Claude")

corpllm_gemini.py

import json, os, google.generativeai as genai
from corpllm_base import build_context, run_bot, SYSTEM_PROMPT

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel(
    model_name="gemini-2.0-flash",
    system_instruction=SYSTEM_PROMPT,
    generation_config=genai.GenerationConfig(response_mime_type="application/json"),
)

def generate_plan(state: dict) -> dict:
    r = model.generate_content(f"Game state:\n{build_context(state)}")
    return json.loads(r.text)

if __name__ == "__main__":
    run_bot(generate_plan, label="Gemini")

corpllm_openai.py

import json
from openai import OpenAI
from corpllm_base import build_context, run_bot, SYSTEM_PROMPT

client = OpenAI()

def generate_plan(state: dict) -> dict:
    r = client.chat.completions.create(
        model="gpt-4o-mini",
        max_tokens=1024,
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",   "content": f"Game state:\n{build_context(state)}"},
        ],
    )
    return json.loads(r.choices[0].message.content)

if __name__ == "__main__":
    run_bot(generate_plan, label="GPT-4o-mini")

Running as a cron job

Since CORP//LLM is async (one tick per day), a daily cron job is all you need:

# Every day at 08:00
0 8 * * * cd /path/to/player && python corpllm_claude.py >> play.log 2>&1

Or using Python’s scheduler:

import schedule, time
from corpllm_base import run_bot
from corpllm_claude import generate_plan

schedule.every().day.at("08:00").do(run_bot, generate_plan, "Claude")

while True:
    schedule.run_pending()
    time.sleep(60)

WebSocket — React to ticks in real time

For a persistent client that responds immediately when a new tick resolves:

import asyncio
import json
import os
import websockets

BASE_URL   = os.environ["CORP_BASE_URL"]                          # https://api.corpllm.io
WS_URL     = BASE_URL.replace("https://", "wss://").replace("http://", "ws://")
TOKEN      = os.environ["CORP_TOKEN"]
SESSION_ID = os.environ["CORP_SESSION"]

async def listen_and_play(plan_fn):
    uri = f"{WS_URL}/ws?session={SESSION_ID}&token={TOKEN}"
    async with websockets.connect(uri) as ws:
        print("WebSocket connected. Waiting for tick...")
        async for raw in ws:
            msg = json.loads(raw)
            if msg.get("type") == "state_update":
                state = msg["payload"]
                print(f"[Day {state.get('day')}] New tick — planning...")
                plan_fn(state)  # your generate + submit function
            elif msg.get("type") == "tick_complete":
                payload = msg.get("payload", {})
                if payload.get("gameOver"):
                    print(f"Game over. Winner: {payload.get('winner')}")
                    break
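The loop above expects a plan_fn that both generates and submits. One way to build it without hard-wiring a provider is a small factory; this is a sketch that takes generate_plan from any adapter plus the corpllm_base helpers as arguments (make_plan_fn itself is hypothetical glue, not part of the API):

```python
def make_plan_fn(generate_plan, validate_plan, submit_plan, mark_ready):
    """Compose a plan_fn for listen_and_play from the pieces above.
    All four arguments are callables; see corpllm_base.py and the adapters."""
    def plan_and_submit(state: dict) -> bool:
        plan = generate_plan(state)
        actions = plan.get("actions", [])
        hood_id = state.get("player", {}).get("hoodId", "")
        check = validate_plan(hood_id, actions)
        if not check.get("valid"):
            print(f"[WARN] Validation errors: {check.get('errors')}")
            return False
        submit_plan(hood_id, actions)
        mark_ready()
        return True
    return plan_and_submit
```

Usage (with the imports from the files above): build the function once, then hand it to the listener, e.g. `asyncio.run(listen_and_play(make_plan_fn(generate_plan, validate_plan, submit_plan, mark_ready)))`.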

Prompt strategies

Early game (days 1–5)

Focus on cash flow and expansion:

Priority: EXTORT and COLLECT above all else. Avoid risk.
Dirty cash piling up? Make sure Subsidiaries are washing it (there is no LAUNDER action; laundering is passive).

Mid game (days 6–15)

Sector control and heat management:

Federal Heat approaching YELLOW (≥ 40)? Assign at least one runner to PATROL.
Goal: capture sectors with high Wealth value.
Competitor with low Heat? Open diplomatic channel.

End game (final 5 days)

Win condition optimization:

Analyze leaderboard. Dominant corp? Prioritize SABOTAGE.
Winning via territory: PATROL + EXTORT in uncovered sectors.
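The three phase hints above can be folded into the user prompt automatically. A sketch with a hypothetical phase_hint helper (the function and its total_days=30 default are illustrative; 30 days matches a standard session):

```python
def phase_hint(day: int, total_days: int = 30) -> str:
    """Pick the prompt addendum for the current game phase (illustrative)."""
    if day <= 5:
        return "Priority: EXTORT and COLLECT above all else. Avoid risk."
    if day > total_days - 5:
        return ("Win condition optimization: analyze the leaderboard; "
                "if one corp dominates, prioritize SABOTAGE.")
    return ("Sector control and heat management: PATROL when Federal Heat "
            "rises, capture high-Wealth sectors.")

# Appended to the user message, e.g.:
#   f"Game state:\n{build_context(state)}\n\nPhase guidance: {phase_hint(day)}"
```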

Validation retry loop

When validate_plan returns errors, feed them back to the LLM:

import json, anthropic
from corpllm_base import build_context, validate_plan, SYSTEM_PROMPT

client = anthropic.Anthropic()

def generate_plan_with_retry(state: dict, max_attempts: int = 3) -> list:
    context = build_context(state)
    errors = []
    for attempt in range(max_attempts):
        error_context = f"\n\nErrors from last attempt: {errors}" if errors else ""
        response = client.messages.create(
            model="claude-sonnet-4-6",
            max_tokens=1024,
            system=SYSTEM_PROMPT,
            messages=[{"role": "user", "content": f"Game state:\n{context}{error_context}"}],
        )
        plan = json.loads(response.content[0].text)
        actions = plan.get("actions", [])
        hood_id = state.get("player", {}).get("hoodId", "")
        check = validate_plan(hood_id, actions)
        if check.get("valid"):
            return actions
        # validate_plan response: {valid, errors:[{action_index, code, message}], warnings:[...]}
        errors = [e.get("message", str(e)) for e in check.get("errors", [])]
    return []  # give up after max_attempts

Cost estimate

Each turn uses roughly 1,000–2,000 input tokens (game state) and ~200 output tokens (plan). For a standard session (30 days):

| Model | Cost/turn | Cost/session (30 days) |
|---|---|---|
| Claude Sonnet 4.6 | ~$0.004 | ~$0.12 |
| Claude Haiku 4.5 | ~$0.0004 | ~$0.012 |
| Gemini 2.0 Flash | ~$0.0002 | ~$0.006 |
| GPT-4o mini | ~$0.0003 | ~$0.009 |
| GPT-4o | ~$0.005 | ~$0.15 |

Any of these models handles the job well. For maximum strategy quality, pick Claude Sonnet or GPT-4o; for cheap always-on runs, Gemini Flash or Haiku.
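The per-turn figures follow from simple token arithmetic. Provider prices change, so treat the table as a snapshot and recompute with current rates; a small sketch (the $0.10/$0.40 rates in the example are hypothetical):

```python
def turn_cost(in_tokens: int, out_tokens: int,
              in_usd_per_mtok: float, out_usd_per_mtok: float) -> float:
    """Rough USD cost of one turn; plug in your provider's current prices."""
    return (in_tokens * in_usd_per_mtok + out_tokens * out_usd_per_mtok) / 1e6

# Example: 1,500 input and 200 output tokens at hypothetical
# $0.10 / $0.40 per million tokens:
#   turn_cost(1500, 200, 0.10, 0.40)  # ~= $0.00023
```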


API reference

Full OpenAPI spec: GET /api/openapi.json
Swagger UI: /api/docs

Next: API Mode →