LM Studio Setup (Custom / Local)

Use this guide when you want Genie to talk to a local model server or any OpenAI-compatible gateway instead of a hosted provider account.

Genie does not run the model itself. You need a server that already exposes an OpenAI-compatible API.

Typical options:

  • LM Studio with its local server enabled
  • a self-hosted gateway
  • another compatible local or remote endpoint

Open Settings and review the provider configuration for each role:

  • Coding for CODE generation
  • Logistics for ASK coordination, preflight, QUERY, and DATA work

You can point both roles at the same endpoint or split them across different models.

When you choose Custom / Local, Genie exposes these fields for each role:

  • Custom model id
  • Endpoint (OpenAI-compatible)
  • API key if your server requires one
  • Test Connection

Example endpoint shape:

http://127.0.0.1:1234/v1

Setup steps:
  1. Set Coding Provider to Custom / Local.
  2. Enter the model ID your local server expects.
  3. Enter the endpoint base URL.
  4. Add the API key only if your server requires one.
  5. Click Test Connection before saving.
  6. Repeat for Logistics if you want ASK, QUERY, or DATA to use the same local stack.

If Test Connection fails, check these first:

  • the local server is running
  • the endpoint includes the correct /v1 base path if your server expects it
  • the model ID matches a model your server actually exposes
  • the API key is present if your gateway enforces one

Genie is activated but requests still fail

Activation and provider credentials are separate: Genie can be activated while a role's endpoint, model ID, or API key is still wrong.

Check Coding and Logistics separately. Each role can have its own endpoint, model, and key.

This setup is a good fit when you are:

  • keeping model traffic local
  • testing alternative coding models
  • pointing Genie at an internal gateway
  • separating a stronger coding model from a faster planning model