Documentation

Deploy containerized apps with AI — from prompts, code, or Git.

Set up AI providers, manage secrets, and enable cloud deployment targets.

Getting started

Create a workspace and ship your first deployment.

Sign up and create a workspace

#signup
  1. Go to the Sign up page and create your account.
  2. Add your organization name and confirm your email.
  3. Create your first workspace and project.

Deploy from an AI prompt

Generate code and deploy it as a container in one flow.

#prompt-deploy
  1. Open a project and choose "Deploy from Prompt".
  2. Describe the app, runtime, and environment requirements.
  3. Select an AI provider (OpenAI, Anthropic, Google, or Cohere).
  4. Review the generated output and deploy.

Deploy from your local machine

#local-deploy
  1. Choose "Deploy from Local" in your project.
  2. Upload your source folder or drag-and-drop files.
  3. Select build settings and runtime options.
  4. Deploy and open the public URL.

Deploy from a Git repository

#git-deploy
  1. Connect GitHub, GitLab, or Bitbucket.
  2. Select a repository and branch.
  3. Set build commands, env vars, and secrets.
  4. Deploy and monitor build logs.

API User Guide

Comprehensive REST API guide for end users, including authentication, deployment flows, and practical examples.

API overview and base URL

Use `/api` as the base path and choose the auth type based on the endpoint family.

#api-overview
  1. Set your API base URL and auth variables once in your shell.

    Command

    export NEXUS_API_BASE="https://nexusai.run/api"
    export NEXUS_JWT="YOUR_JWT_FROM_LOGIN"
    export NEXUS_TOKEN="nxk_YOUR_ACCESS_TOKEN"
  2. Most product endpoints require a user JWT in the Authorization header.

    Command

    curl -s "$NEXUS_API_BASE/projects" \
      -H "Authorization: Bearer $NEXUS_JWT"
  3. Automation endpoints use access tokens (`nxk_...`) with scoped permissions.

    Command

    curl -s "$NEXUS_API_BASE/gpt/providers" \
      -H "Authorization: Bearer $NEXUS_TOKEN"
  4. A typical successful response has this shape:

    Command

    {
      "success": true,
      "data": { ... }
    }
  5. OpenAPI spec for GPT automation endpoints:

    Command

    curl -s "$NEXUS_API_BASE/openapi.yaml"
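
The envelope shown in step 4 can be unwrapped with a small client-side helper so callers only handle the data payload. This is an illustrative sketch, not an official SDK; the `error` field name on failure responses is an assumption.

```javascript
// Unwrap the { success, data } response envelope described above.
// Assumption: failure bodies carry a human-readable `error` field.
function unwrap(body) {
  if (body && body.success === true) return body.data;
  const msg = body && body.error ? body.error : 'API request failed';
  throw new Error(msg);
}
```

For example, `unwrap({ success: true, data: { id: 1 } })` returns `{ id: 1 }`, while a `success: false` body throws.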

Authenticate and get a JWT

Register or login, then reuse the JWT for project, deployment, secrets, GitHub, and database APIs.

#api-authentication
  1. Log in and capture the token from the response data payload.

    Command

    curl -s -X POST "$NEXUS_API_BASE/auth/login" \
      -H "Content-Type: application/json" \
      -d '{
        "email": "[email protected]",
        "password": "your-password"
      }'
  2. Verify token validity.

    Command

    curl -s "$NEXUS_API_BASE/auth/verify" \
      -H "Authorization: Bearer $NEXUS_JWT"
  3. Use this JavaScript helper for browser or Node fetch calls.

    Command

    const api = async (path, options = {}) => {
      const res = await fetch(`${NEXUS_API_BASE}${path}`, {
        ...options,
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${NEXUS_JWT}`,
          ...(options.headers || {}),
        },
      });
      return res.json();
    };
    
    const projects = await api('/projects');

Access tokens for automation (nxk_)

Create scoped access tokens for machine workflows such as GPT actions and runtime secret retrieval.

#api-access-tokens
  1. Create a token in the app: Settings -> Access Tokens -> Create Token.
  2. Recommended scopes by use case:

    Command

    GPT automation:
      deployments:create, deployments:read, deployments:logs, deployments:delete
    
    Runtime secrets fetch:
      secrets:read (or secrets:read:values if values are required)
  3. Call a GPT endpoint with your access token.

    Command

    curl -s "$NEXUS_API_BASE/gpt/deployments" \
      -H "Authorization: Bearer $NEXUS_TOKEN"
  4. Fetch runtime secrets metadata (without values).

    Command

    curl -s "$NEXUS_API_BASE/secrets/runtime?includeValues=false" \
      -H "Authorization: Bearer $NEXUS_TOKEN"

ChatGPT App OAuth setup (recommended)

Use OAuth for NexusAI as a ChatGPT App so each user connects with tenant-scoped identity.

#api-chatgpt-app-oauth
  1. Set OAuth endpoints in your ChatGPT App configuration.

    Command

    Authorization URL: https://nexusai.run/oauth/authorize
    Token URL: https://nexusai.run/api/oauth/token
    User info URL: https://nexusai.run/api/oauth/me
  2. Configure backend environment variables for your ChatGPT client and callback URL allowlist.

    Command

    OAUTH_CLIENT_ID=<chatgpt_client_id>
    OAUTH_CLIENT_SECRET=<chatgpt_client_secret_optional_for_public_client>
    OAUTH_REDIRECT_URIS=<comma-separated-allowed-callback-urls>
    OAUTH_ISSUER=https://nexusai.run
    OAUTH_SIGNING_SECRET=<strong-random-secret>
  3. For public clients, enable PKCE (S256) and include code_challenge/code_verifier in the OAuth flow.
  4. Smoke-test the OAuth token exchange manually (replace placeholders).

    Command

    curl -s -X POST "https://nexusai.run/api/oauth/token" \
      -H "Content-Type: application/json" \
      -d '{
        "grant_type":"authorization_code",
        "client_id":"<client_id>",
        "code":"<authorization_code>",
        "redirect_uri":"<redirect_uri>",
        "code_verifier":"<pkce_verifier_if_used>"
      }'
  5. Validate the returned access token against `/api/oauth/me` before enabling deployment tools.

    Command

    curl -s "https://nexusai.run/api/oauth/me" \
      -H "Authorization: Bearer <oauth_access_token>"

Projects API quickstart

Create and manage projects, then use the projectId for deployment APIs.

#api-projects
  1. Create a project.

    Command

    curl -s -X POST "$NEXUS_API_BASE/projects" \
      -H "Authorization: Bearer $NEXUS_JWT" \
      -H "Content-Type: application/json" \
      -d '{
        "name": "my-api",
        "description": "Customer API",
        "framework": "Express",
        "vaultEnvironment": "Development",
        "repoUrl": "https://github.com/acme/my-api",
        "gitBranch": "main"
      }'
  2. List projects and inspect one by id.

    Command

    curl -s "$NEXUS_API_BASE/projects" -H "Authorization: Bearer $NEXUS_JWT"
    curl -s "$NEXUS_API_BASE/projects/<projectId>" -H "Authorization: Bearer $NEXUS_JWT"
  3. Update a project.

    Command

    curl -s -X PUT "$NEXUS_API_BASE/projects/<projectId>" \
      -H "Authorization: Bearer $NEXUS_JWT" \
      -H "Content-Type: application/json" \
      -d '{"gitBranch":"main","vaultEnvironment":"Staging"}'

Deployments API (create, monitor, manage, redeploy)

Use simple deploy, full deploy, logs, lifecycle actions, and redeploy templates.

#api-deployments
  1. Create a quick deployment from generated code.

    Command

    curl -s -X POST "$NEXUS_API_BASE/deployments" \
      -H "Authorization: Bearer $NEXUS_JWT" \
      -H "Content-Type: application/json" \
      -d '{
        "projectId":"<projectId>",
        "name":"my-api-quick",
        "code":"console.log(\"Hello NEXUS AI\")",
        "framework":"node",
        "deploymentProvider":"local-docker",
        "envVars":{"NODE_ENV":"production"},
        "healthCheckEnabled":true,
        "healthCheckType":"tcp"
      }'
  2. Create a full deployment from a Git repository.

    Command

    curl -s -X POST "$NEXUS_API_BASE/deployments/full" \
      -H "Authorization: Bearer $NEXUS_JWT" \
      -F "projectId=<projectId>" \
      -F "name=my-api-full" \
      -F "sourceType=repo" \
      -F "repoUrl=https://github.com/acme/my-api" \
      -F "repoBranch=main" \
      -F "deploymentProvider=gcp-cloud-run" \
      -F "region=us-central1" \
      -F "buildCommand=npm run build" \
      -F "startCommand=npm run start" \
      -F "outputDir=dist" \
      -F "healthCheckEnabled=true" \
      -F "healthCheckType=http" \
      -F "healthCheckUrl=/health"
  3. Monitor and control the deployment lifecycle.

    Command

    curl -s "$NEXUS_API_BASE/deployments" -H "Authorization: Bearer $NEXUS_JWT"
    curl -s "$NEXUS_API_BASE/deployments/<deploymentId>" -H "Authorization: Bearer $NEXUS_JWT"
    curl -s "$NEXUS_API_BASE/deployments/<deploymentId>/logs" -H "Authorization: Bearer $NEXUS_JWT"
    curl -s -X POST "$NEXUS_API_BASE/deployments/<deploymentId>/stop" -H "Authorization: Bearer $NEXUS_JWT"
    curl -s -X POST "$NEXUS_API_BASE/deployments/<deploymentId>/start" -H "Authorization: Bearer $NEXUS_JWT"
    curl -s -X DELETE "$NEXUS_API_BASE/deployments/<deploymentId>" -H "Authorization: Bearer $NEXUS_JWT"
  4. Load a previous deployment template for a guided redeploy flow.

    Command

    curl -s "$NEXUS_API_BASE/deployments/<deploymentId>/redeploy-template" \
      -H "Authorization: Bearer $NEXUS_JWT"

GitHub integrations API

Connect the GitHub App, list repos, create bindings, and trigger manual deployments.

#api-github-integrations
  1. Start GitHub App installation (JSON mode returns installUrl).

    Command

    curl -s "$NEXUS_API_BASE/github/install/start?format=json&redirectPath=/github" \
      -H "Authorization: Bearer $NEXUS_JWT"
  2. List repositories available to the installation.

    Command

    curl -s "$NEXUS_API_BASE/github/repos?installationId=<installationId>" \
      -H "Authorization: Bearer $NEXUS_JWT"
  3. Create or update a repo binding.

    Command

    curl -s -X POST "$NEXUS_API_BASE/github/bindings" \
      -H "Authorization: Bearer $NEXUS_JWT" \
      -H "Content-Type: application/json" \
      -d '{
        "installationId":"<installationId>",
        "repoId":"123456789",
        "owner":"acme",
        "name":"my-api",
        "fullName":"acme/my-api",
        "defaultBranch":"main",
        "allowedBranches":["main","prod"],
        "autoDeploy":true,
        "runtimeTarget":"cloudrun",
        "servicePort":3000,
        "buildMode":"auto"
      }'
  4. Trigger a manual deployment and inspect webhook/deployment history.

    Command

    curl -s -X POST "$NEXUS_API_BASE/github/bindings/<bindingId>/deploy" \
      -H "Authorization: Bearer $NEXUS_JWT" \
      -H "Content-Type: application/json" \
      -d '{"branch":"main"}'
    
    curl -s "$NEXUS_API_BASE/github/deployments?page=1&limit=30" -H "Authorization: Bearer $NEXUS_JWT"
    curl -s "$NEXUS_API_BASE/github/webhook-deliveries?page=1&limit=50" -H "Authorization: Bearer $NEXUS_JWT"

Database service API examples

Query and manage deployment-attached databases from the Databases tab APIs.

#api-database-endpoints
  1. List database services and fetch a connection payload.

    Command

    curl -s "$NEXUS_API_BASE/deployments/<deploymentId>/databases" \
      -H "Authorization: Bearer $NEXUS_JWT"
    
    curl -s -X POST "$NEXUS_API_BASE/deployments/<deploymentId>/databases/<serviceId>/connection" \
      -H "Authorization: Bearer $NEXUS_JWT" \
      -H "Content-Type: application/json" \
      -d '{"format":"uri"}'
  2. Browse schema and query data.

    Command

    curl -s "$NEXUS_API_BASE/deployments/<deploymentId>/databases/<serviceId>/browse/schema" \
      -H "Authorization: Bearer $NEXUS_JWT"
    
    curl -s -X POST "$NEXUS_API_BASE/deployments/<deploymentId>/databases/<serviceId>/browse/query" \
      -H "Authorization: Bearer $NEXUS_JWT" \
      -H "Content-Type: application/json" \
      -d '{
        "query":"SELECT * FROM users ORDER BY created_at DESC LIMIT 10",
        "params":[]
      }'
  3. Service maintenance and metrics.

    Command

    curl -s -X POST "$NEXUS_API_BASE/deployments/<deploymentId>/databases/<serviceId>/restart" -H "Authorization: Bearer $NEXUS_JWT"
    curl -s -X POST "$NEXUS_API_BASE/deployments/<deploymentId>/databases/<serviceId>/rotate-credentials" -H "Authorization: Bearer $NEXUS_JWT"
    curl -s "$NEXUS_API_BASE/deployments/<deploymentId>/databases/<serviceId>/metrics" -H "Authorization: Bearer $NEXUS_JWT"

Errors, status codes, and troubleshooting

#api-errors-and-troubleshooting
What do common API status codes mean?

Common status codes:

  200/201/202: request succeeded
  400: invalid input or missing required fields
  401: missing or invalid token
  403: permission/scope blocked, or account setup required
  404: resource not found
  409: duplicate/conflict (for example, an existing resource)
  429: rate limit reached
  500: unexpected server error

How do account setup checks appear in API errors?

Some authenticated endpoints require verified email and payment setup. Example 403 payload codes include:

  code: EMAIL_NOT_VERIFIED
  code: PAYMENT_METHOD_REQUIRED
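
The status codes and 403 setup codes above can be folded into one small client-side handler. This is an illustrative sketch; the action labels are hypothetical names chosen for this example, and only the codes documented above are assumed.

```javascript
// Map a response status (and optional body code) to a suggested next action,
// following the status-code and 403-setup-code lists above.
function classifyApiError(status, body = {}) {
  if (status >= 200 && status < 300) return { ok: true };
  if (status === 403 && body.code === 'EMAIL_NOT_VERIFIED')
    return { ok: false, action: 'verify-email' };
  if (status === 403 && body.code === 'PAYMENT_METHOD_REQUIRED')
    return { ok: false, action: 'add-payment-method' };
  const actions = {
    400: 'fix-request-fields',
    401: 'refresh-or-fix-token',
    403: 'check-role-or-scopes',
    404: 'check-resource-id',
    409: 'resolve-conflict',
    429: 'back-off-and-retry',
    500: 'retry-later',
  };
  return { ok: false, action: actions[status] ?? 'inspect-response' };
}
```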

How should I debug failed requests quickly?

Use this checklist:

  1. Confirm the base URL includes /api.
  2. Confirm the token type matches the endpoint family (JWT vs nxk access token).
  3. Confirm your role or token scopes allow the action.
  4. Validate required fields exactly as expected by the endpoint.
  5. Retry with curl -v to inspect headers and the response body.

Command

curl -v -X POST "$NEXUS_API_BASE/deployments" \
  -H "Authorization: Bearer $NEXUS_JWT" \
  -H "Content-Type: application/json" \
  -d '{"projectId":"...","name":"...","code":"..."}'

NEXUS AI GitHub Deployments

Connect the GitHub App, bind repositories, and monitor automated deployments.

Connect GitHub

Install the NEXUS AI GitHub App and grant repository permissions.

#github-connect
  1. Open GitHub in the sidebar.
  2. Click Connect GitHub to start installation.
  3. On GitHub, select your account or organization and review permissions.
  4. Choose repository access (all repositories or selected repositories only).
  5. Complete installation and return to NEXUS AI.

Select and bind a repository

Create deployment rules for a repository in your tenant.

#github-bind-repo
  1. In GitHub > Repo Bindings, click Load Repositories.
  2. Select a repository and default branch.
  3. Set allowed branches (example: main,prod) and enable Auto Deploy if needed.
  4. Configure build mode, runtime target, service port, and optional subdomain.
  5. Click Create Binding to save the rule.

How auto-deploy works

Push events to allowed branches create deployments automatically.

#github-auto-deploy
  1. A push webhook arrives from GitHub, and signature verification runs first.
  2. NEXUS AI checks the repository binding, branch allowlist, and auto-deploy status.
  3. A deployment record is queued and processed by the build and runtime workers.
  4. The deployment status moves from queued to building to deploying, then to healthy or failed.

Monitor deployments and webhooks

#github-monitoring
  1. Use the Deployments tab to track commit, branch, actor, status, and runtime URL.
  2. Use Webhook Deliveries to inspect GitHub delivery IDs and processing outcomes.
  3. Use Repo Bindings to review or delete repository-to-tenant bindings.
Why do I get "GitHub App slug is not configured"?

The backend is missing GitHub App slug configuration. Fix: set GITHUB_APP_SLUG in backend environment variables (or set GITHUB_APP_NAME so the slug can be derived), then restart the backend.

Why does the GitHub install URL return 404?

This usually means the app slug or account access is wrong. Fix: verify the app exists at https://github.com/apps/<your-app-slug> and confirm you are signed into a GitHub account allowed to install that app.

I selected repositories but did not return to NEXUS AI. What should I check?

Your GitHub App setup callback URL may be incorrect. Fix: set the GitHub App Setup URL to your backend callback endpoint, for example https://nexusai.run/api/github/install/callback. Do not use the OAuth login callback for app installation flow.

Push events are received but deployments do not start. Why?

Most often, one of these conditions failed: repository not bound, branch not allowlisted, or auto-deploy disabled. Fix: review the Repo Binding and Webhook Deliveries tabs to confirm the exact ignore reason.

Role permissions for GitHub deployments

#github-rbac
Who can connect GitHub and manage bindings?

Tenant Owner and Admin roles can install the GitHub App, create bindings, and change deployment rules.

What can Developer members do?

Developers can view deployment status and logs for bound repositories, and can trigger manual redeploy if enabled by your organization policy.

Database service deployments

Deploy MySQL, PostgreSQL, MongoDB, and Redis alongside your app and manage them from Deployment Details.

Deploy with additional database services

#database-additional-services
  1. Open your project and start a deployment (local, prompt, or Git flow).
  2. In the deployment form, go to Additional Services (Optional).
  3. Select one or more services: MySQL, PostgreSQL, MongoDB, or Redis.
  4. Optionally set a custom display name for each selected service.
  5. Deploy your app. NEXUS AI creates the selected services with auto-configured connection details.

Supported service types and default internal ports

#database-supported-types
  1. Use these default internal ports for service-to-service connections:

    Command

    mysql:3306
    postgresql:5432
    mongodb:27017
    redis:6379
  2. Typical in-network hostnames match the service names:

    Command

    MYSQL_HOST=mysql
    POSTGRES_HOST=postgresql
    MONGO_HOST=mongodb
    REDIS_HOST=redis
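
The default hostnames and ports above can be combined into connection strings with a small helper. This is an illustrative sketch; the credential values are placeholders you would pull from the generated connection details, not defaults.

```javascript
// Build an in-network connection URI from the default service hostnames
// and internal ports listed above.
const DEFAULTS = {
  mysql:      { host: 'mysql',      port: 3306,  scheme: 'mysql' },
  postgresql: { host: 'postgresql', port: 5432,  scheme: 'postgresql' },
  mongodb:    { host: 'mongodb',    port: 27017, scheme: 'mongodb' },
  redis:      { host: 'redis',      port: 6379,  scheme: 'redis' },
};

function connectionUri(service, user, password, database = '') {
  const d = DEFAULTS[service];
  if (!d) throw new Error(`Unknown service: ${service}`);
  const auth = user ? `${user}:${password}@` : '';
  return `${d.scheme}://${auth}${d.host}:${d.port}/${database}`;
}
```

For example, `connectionUri('mysql', 'app', 's3cret', 'shop')` yields `mysql://app:s3cret@mysql:3306/shop`.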

Managed provider limitation

#database-managed-provider-limits
  1. Cloud Run, AWS App Runner, and Azure Container Apps deployments do not support Additional Services in the deployment form.
  2. For those providers, use a managed external database and inject credentials using Secrets Vault.
  3. If you need built-in compose-based database services, deploy on the Local Docker provider.

Manage services from Deployment Details

#database-manage-from-deployment
  1. Open a running deployment and switch to the Databases tab.
  2. Review service status and metadata for each database service.
  3. Use refresh to update live service state.
  4. Use service actions to restart services, rotate credentials, and retrieve connection strings.
  5. Use metrics and browser/query tools (if enabled) to inspect schema and run queries.

Database permissions

#database-permissions
  1. Viewing database services requires the database.read permission.
  2. Restarting services or rotating credentials requires the database.manage permission.
  3. Running write queries requires the database.write permission.

Database deployment troubleshooting

#database-troubleshooting
Why does the Databases tab show "No Database Services"?

This deployment was created without Additional Services enabled. Fix: create a new deployment and select MySQL, PostgreSQL, MongoDB, or Redis before deploying.

Why are Additional Services disabled in the deployment form?

Additional Services are disabled for Cloud Run, App Runner, and Container Apps providers. Fix: use a managed external database for those providers, or use Local Docker provider for compose-based sidecar services.

My app cannot connect to the database service. What should I check first?

Most connection issues are host/port mismatches or service startup timing. Fix: use the internal service hostname (for example mysql/postgresql/mongodb/redis) and default internal port, then confirm the deployment status is RUNNING.

What happens when I rotate credentials?

NEXUS AI rotates credentials and recreates the service container with the new values. Fix: update your application credentials immediately and redeploy if your app caches old values.

Secrets and AI

Securely store API keys and connect cloud AI providers.

Manage secrets and environment variables

#secrets
  1. Open the Secrets Vault in your project.
  2. Add API keys, tokens, and runtime variables.
  3. Scope secrets by environment (dev, staging, prod).
  4. Rotate keys and audit usage when needed.

Connect cloud AI providers

Add provider credentials once, then select them per project.

#ai-providers
  1. Create a Vault secret for your provider API key (recommended).
  2. Go to AI Providers and click "Add Provider".
  3. Choose a provider and select your Vault secret (or paste the API key directly).
  4. Pick a model, set max tokens and temperature, then enable and save.
  5. In a project, select the provider when generating code from a prompt.

GPT Store

Deploy and manage containers from ChatGPT using NEXUS AI.

What the GPT can do

#gpt-overview
  1. Deploy containers from Docker images.
  2. List deployments and check status.
  3. Fetch logs and destroy deployments with confirmation.
  4. Auto-select providers based on your plan via /providers.

Generate an access token

#gpt-token
Which scopes are required?

Use deployments:create, deployments:read, deployments:logs, deployments:delete for full GPT actions.

Configure GPT actions

#gpt-actions
  1. Set the action schema URL to https://nexusai.run/openapi.yaml.
  2. Use Bearer auth with your NEXUS AI access token (nxk_...).
  3. Call /providers first to learn which providers are allowed for the plan.
  4. Confirm before deleting deployments.

Example prompts

#gpt-prompts
  1. Try these in ChatGPT:

    Command

    Deploy nginx:latest on port 80 and name it my-nginx.
    Upload this zip and deploy it as zip-app.
    Deploy https://github.com/org/repo.git on main using secret github-token.
    List my deployments and show the status of the most recent one.
    Show the last 50 log lines for my latest deployment.
    Destroy the deployment named my-nginx (confirm before deleting).

Source deployments (Git + Zip)

#gpt-source-deploys
What do I pass for private Git repos?

Use repoSecretName with the Secrets Vault name that stores your repo token.

How do zip uploads work?

Upload a zip archive, then deploy with sourceType=zip and the uploadId returned by the upload call.

Troubleshooting

#gpt-troubleshooting
Why do I get "Invalid token"?

Use an access token that starts with nxk_. Ensure it was created in the same environment (local vs production).

Why do I get "Insufficient scope"?

Add deployments:read (and other GPT scopes) to the access token and try again.

MCP (Model Context Protocol)

Comprehensive MCP documentation for ChatGPT, Claude, and custom clients using OAuth-protected JSON-RPC on NEXUS AI.

MCP endpoint, transport, and discovery

NEXUS AI exposes MCP as an OAuth-protected HTTP JSON-RPC endpoint.

#mcp-overview
  1. Use POST requests to the MCP endpoint with JSON-RPC 2.0 payloads and an OAuth Bearer access token.

    Command

    MCP endpoint: https://api.zollo.live/mcp
    Transport: HTTP JSON-RPC 2.0
    Methods: ping, initialize, tools/list, tools/call
    Notifications: notifications/initialized, notifications/cancelled
    Protocol version: 2026-01-16
  2. Use OAuth/OIDC discovery metadata endpoints during client setup.

    Command

    https://nexusai.run/.well-known/oauth-authorization-server
    https://nexusai.run/.well-known/oauth-protected-resource/mcp
    https://nexusai.run/.well-known/openid-configuration
  3. GET and DELETE on /mcp are intentionally rejected. Use POST only.

    Command

    curl -i -X GET "https://api.zollo.live/mcp"
    curl -i -X DELETE "https://api.zollo.live/mcp"

Connect NEXUS AI MCP in ChatGPT

Use this for no-code setup inside ChatGPT.

#mcp-chatgpt-setup
  1. Open ChatGPT and go to connected apps or MCP servers.
  2. Add the server URL:

    Command

    https://api.zollo.live/mcp
  3. Complete OAuth sign-in and approve the requested scopes.
  4. After the connection succeeds, run a verification prompt in ChatGPT.
  5. Example verification prompt:

    Command

    List my NEXUS AI deployments and show their status.

OAuth flow for MCP clients (PKCE + auth code)

Use standard authorization code flow with PKCE for public clients.

#mcp-oauth-flow
  1. Register an OAuth client (dynamic registration) or use a pre-configured client.

    Command

    curl -s -X POST "https://nexusai.run/register" \
      -H "Content-Type: application/json" \
      -d '{
        "client_name":"my-mcp-client",
        "redirect_uris":["https://example.com/callback"],
        "token_endpoint_auth_method":"none",
        "grant_types":["authorization_code","refresh_token"],
        "response_types":["code"],
        "scope":"deployments:read deployments:create deployments:logs deployments:delete"
      }'
  2. Create an authorization code via /oauth/authorize in a browser flow, then exchange it at /oauth/token with the code_verifier.

    Command

    curl -s -X POST "https://nexusai.run/oauth/token" \
      -H "Content-Type: application/json" \
      -d '{
        "grant_type":"authorization_code",
        "client_id":"<client_id>",
        "code":"<authorization_code>",
        "redirect_uri":"https://example.com/callback",
        "code_verifier":"<pkce_code_verifier>"
      }'
  3. Validate your token and granted scopes before MCP calls.

    Command

    curl -s "https://nexusai.run/oauth/me" \
      -H "Authorization: Bearer <oauth_access_token>"
  4. Use the OAuth access token in the Authorization header for all /mcp requests.

    Command

    Authorization: Bearer <oauth_access_token>

OAuth bridge endpoint (first-party authenticated flows)

NEXUS AI also supports creating auth codes from a logged-in user session.

#mcp-oauth-bridge
  1. If you already have a NEXUS AI JWT session, call /oauth/authorize/code to mint an authorization code directly.

    Command

    curl -s -X POST "https://nexusai.run/oauth/authorize/code" \
      -H "Authorization: Bearer <nexus_jwt>" \
      -H "Content-Type: application/json" \
      -d '{
        "response_type":"code",
        "client_id":"<client_id>",
        "redirect_uri":"https://example.com/callback",
        "scope":"deployments:read deployments:logs",
        "code_challenge":"<pkce_code_challenge>",
        "code_challenge_method":"S256",
        "state":"state-123"
      }'
  2. Use the returned code in the normal /oauth/token exchange.

Dynamic client registration details

DCR endpoint validates redirect URIs and client metadata.

#mcp-dynamic-client-registration
Which endpoint is used for dynamic registration?

Use POST /register (or POST /api/oauth/register).

Command

https://nexusai.run/register
https://nexusai.run/api/oauth/register
What redirect URI rules are enforced?

The following rules are enforced:

  1. At least one redirect URI is required.
  2. A maximum of 10 redirect URIs is allowed.
  3. Each URI must be at most 2000 characters.
  4. URI fragments (#...) are not allowed.
  5. URIs must use https, except localhost/127.0.0.1, which may use http for development.
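
The rules above can be expressed as a validator, useful for checking client metadata before submitting a registration request. This is an illustrative sketch of the documented rules, not the server's actual implementation; the error labels are hypothetical.

```javascript
// Validate a redirect_uris array against the rules listed above.
function validateRedirectUris(uris) {
  if (!Array.isArray(uris) || uris.length === 0)
    return { ok: false, error: 'at_least_one_required' };
  if (uris.length > 10) return { ok: false, error: 'max_10' };
  for (const uri of uris) {
    if (uri.length > 2000) return { ok: false, error: 'too_long' };
    let parsed;
    try {
      parsed = new URL(uri);
    } catch {
      return { ok: false, error: 'invalid_url' };
    }
    if (parsed.hash) return { ok: false, error: 'fragment_not_allowed' };
    const isLocal = ['localhost', '127.0.0.1'].includes(parsed.hostname);
    if (parsed.protocol !== 'https:' && !(isLocal && parsed.protocol === 'http:'))
      return { ok: false, error: 'https_required' };
  }
  return { ok: true };
}
```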

Which token endpoint auth methods are supported?

Supported values are none, client_secret_post, and client_secret_basic. Public MCP clients should usually use none + PKCE.

What scopes can a client request?

Supported scopes are deployments:read, deployments:create, deployments:logs, deployments:delete.

MCP JSON-RPC request examples

Use these examples to test connectivity and tool execution.

#mcp-jsonrpc-calls
  1. Initialize the MCP session.

    Command

    curl -s -X POST "https://api.zollo.live/mcp" \
      -H "Authorization: Bearer <oauth_access_token>" \
      -H "Content-Type: application/json" \
      -d '{
        "jsonrpc":"2.0",
        "id":1,
        "method":"initialize",
        "params":{"protocolVersion":"2025-06-18"}
      }'
  2. List available tools.

    Command

    curl -s -X POST "https://api.zollo.live/mcp" \
      -H "Authorization: Bearer <oauth_access_token>" \
      -H "Content-Type: application/json" \
      -d '{"jsonrpc":"2.0","id":2,"method":"tools/list"}'
  3. Call a tool (example: current user context).

    Command

    curl -s -X POST "https://api.zollo.live/mcp" \
      -H "Authorization: Bearer <oauth_access_token>" \
      -H "Content-Type: application/json" \
      -d '{
        "jsonrpc":"2.0",
        "id":3,
        "method":"tools/call",
        "params":{"name":"nexusai_whoami","arguments":{}}
      }'
  4. Tool results are returned as text in result.content. NEXUS AI includes structured data as JSON text inside that content for compatibility with MCP clients.
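
The request shape and the JSON-in-text result convention above can be captured in two small helpers. This is an illustrative client-side sketch; the content-item shape `{ type: 'text', text: ... }` follows the MCP convention described in step 4.

```javascript
// Build MCP JSON-RPC 2.0 payloads and unwrap JSON text from result.content.
let nextId = 0;

function jsonRpc(method, params) {
  const msg = { jsonrpc: '2.0', id: ++nextId, method };
  if (params !== undefined) msg.params = params;
  return msg;
}

function parseToolResult(result) {
  // Structured data is JSON-encoded inside the first text content item.
  const text = result?.content?.find((c) => c.type === 'text')?.text ?? '';
  try {
    return JSON.parse(text);
  } catch {
    return text; // plain-text result
  }
}
```

For example, `jsonRpc('tools/call', { name: 'nexusai_whoami', arguments: {} })` produces the payload shown in step 3.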

Complete MCP tool catalog

Tool names are underscore-based and case-sensitive.

#mcp-tool-catalog
  1. Identity and discovery tools:

    Command

    nexusai_whoami
    nexusai_projects_list
    nexusai_providers_list
  2. Deployment tools:

    Command

    nexusai_deploy_list
    nexusai_deploy_status
    nexusai_deploy_logs
    nexusai_deploy_create
    nexusai_deploy_source
    nexusai_deploy_redeploy
    nexusai_deploy_rollback
    nexusai_deploy_stop
    nexusai_deploy_start
    nexusai_deploy_delete
    nexusai_deploy_scale
    nexusai_deploy_health
  3. Secrets, domains, and usage tools:

    Command

    nexusai_secrets_list
    nexusai_secrets_create
    nexusai_secrets_update
    nexusai_secrets_delete
    nexusai_domains_add
    nexusai_domains_list
    nexusai_domains_verify
    nexusai_domains_remove
    nexusai_usage_stats

Scope requirements by tool

Each tool checks OAuth scopes at call time.

#mcp-scope-matrix
  1. Use this scope matrix:

    Command

    No scope required:
      nexusai_whoami
    
    deployments:read:
      nexusai_projects_list, nexusai_providers_list, nexusai_deploy_list,
      nexusai_deploy_status, nexusai_deploy_health, nexusai_secrets_list,
      nexusai_domains_list, nexusai_usage_stats
    
    deployments:logs:
      nexusai_deploy_logs
    
    deployments:create:
      nexusai_deploy_create, nexusai_deploy_source, nexusai_deploy_redeploy,
      nexusai_deploy_rollback, nexusai_deploy_stop, nexusai_deploy_start,
      nexusai_deploy_scale, nexusai_secrets_create, nexusai_secrets_update,
      nexusai_domains_add, nexusai_domains_verify
    
    deployments:delete:
      nexusai_deploy_delete, nexusai_secrets_delete, nexusai_domains_remove
  2. If a scope is missing, the tool returns an MCP result with isError=true and a clear missing-scope message.
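
The matrix above can be encoded as a lookup so a client can pre-check a tool call before sending it. This is an illustrative sketch derived directly from the matrix; the return shape is hypothetical.

```javascript
// Required OAuth scope per MCP tool, per the scope matrix above.
// null means no scope is required.
const TOOL_SCOPES = {
  nexusai_whoami: null,
  nexusai_projects_list: 'deployments:read',
  nexusai_providers_list: 'deployments:read',
  nexusai_deploy_list: 'deployments:read',
  nexusai_deploy_status: 'deployments:read',
  nexusai_deploy_health: 'deployments:read',
  nexusai_secrets_list: 'deployments:read',
  nexusai_domains_list: 'deployments:read',
  nexusai_usage_stats: 'deployments:read',
  nexusai_deploy_logs: 'deployments:logs',
  nexusai_deploy_create: 'deployments:create',
  nexusai_deploy_source: 'deployments:create',
  nexusai_deploy_redeploy: 'deployments:create',
  nexusai_deploy_rollback: 'deployments:create',
  nexusai_deploy_stop: 'deployments:create',
  nexusai_deploy_start: 'deployments:create',
  nexusai_deploy_scale: 'deployments:create',
  nexusai_secrets_create: 'deployments:create',
  nexusai_secrets_update: 'deployments:create',
  nexusai_domains_add: 'deployments:create',
  nexusai_domains_verify: 'deployments:create',
  nexusai_deploy_delete: 'deployments:delete',
  nexusai_secrets_delete: 'deployments:delete',
  nexusai_domains_remove: 'deployments:delete',
};

function canCallTool(toolName, grantedScopes) {
  const required = TOOL_SCOPES[toolName];
  if (required === undefined) return { ok: false, reason: 'unknown_tool' };
  if (required === null || grantedScopes.includes(required)) return { ok: true };
  return { ok: false, reason: `missing_scope:${required}` };
}
```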

High-value tool arguments (quick reference)

Most frequent arguments used in production MCP flows.

#mcp-tool-argument-reference
  1. nexusai_deploy_create arguments:

    Command

    required: image, port
    optional: name, environment (DEVELOPMENT|STAGING|PRODUCTION),
    envVars, provider, region, autoDestroyHours, requestId
  2. nexusai_deploy_source arguments:

    Command

    required: repoUrl
    optional: name, environment, repoBranch, repoSecretName, envVars,
    provider, region, autoDestroyHours, requestId, framework, dockerfile,
    buildCommand, startCommand, installCommand, outputDir
  3. nexusai_deploy_redeploy arguments:

    Command

    required: deploymentId
    optional: overrides {
      name, displayName, provider, region, envVars, code, dockerfile, framework,
      autoDestroyHours, healthCheckEnabled, healthCheckType, healthCheckUrl
    }
  4. Other common required IDs:

    Command

    deploy status/logs/start/stop/delete/health: deploymentId
    deploy scale: deploymentId + replicas (1-10)
    secret update/delete: secretId
    domains add: deploymentId + domain
    domains verify/remove: deploymentId + domainId

Self-hosted MCP/OAuth configuration

Set these environment variables for reliable external client integration.

#mcp-self-hosted-config
  1. Core OAuth/MCP variables:

    Command

    MCP_ORIGIN=https://api.example.com
    OAUTH_ISSUER=https://api.example.com
    OAUTH_SIGNING_SECRET=<strong-random-secret>
    
    OAUTH_CLIENT_ID=<optional-static-client-id>
    OAUTH_CLIENT_SECRET=<optional-static-client-secret>
    OAUTH_REDIRECT_URIS=https://chat.openai.com/aip/callback,https://claude.ai/api/mcp/auth_callback
  2. Important behavior:

    Command

    If client secret is not configured, PKCE is required.
    redirect_uri must exactly match allowed client redirect URIs.
    issuer/discovery/token signing must stay consistent across all backend nodes.

MCP troubleshooting playbook

#mcp-troubleshooting
I receive 401 from /mcp with "Unauthorized: Missing or invalid OAuth access token."

Use an OAuth access token from /oauth/token, not a regular app JWT or nxk token. Confirm the request uses Authorization: Bearer <oauth_access_token>.

I get invalid_client or client_secret mismatch during token exchange.+

Check token_endpoint_auth_method for your registered client. If token_endpoint_auth_method is none, do not send client_secret and use PKCE.

Token exchange fails with redirect_uri is not allowed.+

The redirect_uri must exactly match one of the registered redirect_uris for that client.

Token exchange fails with code_verifier errors.+

Use the exact code_verifier pair that generated the code_challenge for that authorization request.

tools/call returns missing scope errors.+

Request the required scopes during authorization and re-run the OAuth flow so the new token includes them.

Tool call returns Unknown tool.+

Use exact tool names from tools/list. NEXUS AI tool names use underscores (for example nexusai_whoami), not dotted names.

Connection works once, then fails in some clients.+

Verify discovery issuer and JWT issuer are consistent. In self-hosted setups, align MCP_ORIGIN, OAUTH_ISSUER, and external domain/proxy configuration.

Access Control (RBAC)

Role-based access control for projects, environments, and deployment providers.

Understanding roles and permissions

NEXUS AI uses role-based access control (RBAC) to manage what users can do within an organization.

#rbac-overview
What roles are available in NEXUS AI?+

NEXUS AI provides six user roles with different permission levels:

• Owner - Full control over the organization, projects, and all settings
• Admin - Similar to Owner; can manage users and most settings
• Deployment Manager - Can manage deployments across all environments
• Member (Developer) - Can create projects and deploy to Development/Staging
• Auditor - Read-only access for compliance and monitoring
• Billing Manager - Manages billing and views usage data

What is the difference between organization roles and project roles?+

Organization roles define what a user can do across the entire organization. Project-level permissions can further restrict or grant access to specific projects. For example, a Member can be granted access to deploy to Production for a specific project through the Team Management page.

Environment permissions by role

Control which environments users can create projects in and deploy to.

#rbac-environments
Which environments can each role access by default?+

Default environment permissions by role:

• Owner / Admin / Super Admin - Development, Staging, Production (full access)
• Deployment Manager - Development, Staging, Production (full access)
• Member (Developer) - Development, Staging only
• Auditor - Read-only (cannot create projects or deploy)
• Billing Manager - Read-only (cannot create projects or deploy)

Members (Developers) cannot create projects in or deploy to Production by default. This protects production environments from unauthorized changes.

How do I grant a Member access to Production?+

To grant Production access to a specific user:

1. Go to Team Management in the sidebar.
2. Find the user and click the actions menu (three dots).
3. Select "Manage Permissions".
4. Check "Production" in the Allowed Environments section.
5. Click "Save Permissions".

This grants Production access only for projects the user is assigned to.

Why can't I create a project in Production as a Member?+

By default, Members can only create projects in Development and Staging environments. This is a security feature to prevent accidental production deployments. If you need Production access, ask your organization Owner or Admin to grant you permission via Team Management.

Deployment provider permissions

Control which cloud providers users can deploy to.

#rbac-providers
Which deployment providers can each role access by default?+

Default provider permissions by role:

• Owner / Admin / Super Admin - Local Docker, GCP Cloud Run, AWS App Runner, Azure Container Apps (all providers)
• Deployment Manager - Local Docker only (can be expanded via Team Management)
• Member (Developer) - Local Docker only
• Auditor - Read-only (cannot deploy)
• Billing Manager - Read-only (cannot deploy)

Cloud providers (GCP Cloud Run, AWS App Runner, Azure Container Apps) require explicit permission for non-admin users.

How do I grant a user access to cloud providers?+

To grant cloud provider access to a specific user:

1. Go to Team Management in the sidebar.
2. Find the user and click the actions menu (three dots).
3. Select "Manage Permissions".
4. Check the desired providers (GCP Cloud Run, AWS App Runner, Azure Container Apps).
5. Click "Save Permissions".

Note: Cloud provider access also depends on your organization's plan tier.

Full permissions matrix

Complete overview of all permissions by role.

#rbac-permissions-matrix
Organization-level permissions+

Permission            | Owner | Admin | Deploy Mgr | Member | Auditor | Billing Mgr
--------------------- | ----- | ----- | ---------- | ------ | ------- | -----------
View Organization     | ✓     | ✓     | ✓          | ✓      | ✓       | ✓
Manage Organization   | ✓     | ✓     | ✗          | ✗      | ✗       | ✗
Manage Users          | ✓     | ✓     | ✗          | ✗      | ✗       | ✗
View Billing          | ✓     | ✓     | ✗          | ✗      | ✗       | ✓
Manage Billing        | ✓     | ✗     | ✗          | ✗      | ✗       | ✓
View Audit Logs       | ✓     | ✓     | ✗          | ✗      | ✓       | ✗
View Usage            | ✓     | ✓     | ✗          | ✗      | ✓       | ✓
Manage Secrets        | ✓     | ✓     | ✓          | ✓      | ✗       | ✗
Manage AI Providers   | ✓     | ✓     | ✗          | ✗      | ✗       | ✗
Manage Custom Domains | ✓     | ✓     | ✓          | ✗      | ✗       | ✗

Project-level permissions+

Permission             | Owner | Admin | Deploy Mgr | Member   | Auditor | Billing Mgr
---------------------- | ----- | ----- | ---------- | -------- | ------- | -----------
Create Projects        | ✓     | ✓     | ✗          | ✓        | ✗       | ✗
View Projects          | All   | All   | Assigned   | Assigned | All     | All
Update Projects        | ✓     | ✓     | ✗          | ✓        | ✗       | ✗
Delete Projects        | ✓     | ✓     | ✗          | ✗        | ✗       | ✗
Manage Project Members | ✓     | ✓     | ✗          | ✗        | ✗       | ✗
View Deployments       | ✓     | ✓     | ✓          | ✓        | ✓       | ✗
Create Deployments     | ✓     | ✓     | ✓          | ✓        | ✗       | ✗
Manage Deployments     | ✓     | ✓     | ✓          | ✓        | ✗       | ✗
View Deployment Logs   | ✓     | ✓     | ✓          | ✓        | ✓       | ✗

Legend:
• "All" = Access to all projects in the organization
• "Assigned" = Only projects where the user is explicitly added as a member

How do project scopes work?+

Project scope determines which projects a user can access:

• All Projects: Owner, Admin, Auditor, and Billing Manager can see all projects in the organization.
• Assigned Projects Only: Members and Deployment Managers can only see projects they have been explicitly added to.

To add a user to a project, go to the project settings and add them as a project member.

Custom domains

Bring your own domain to production deployments.

Add custom domains

#custom-domains
  1. 1Open the Deployment Details page (deployment must be RUNNING).
  2. 2Scroll to Custom Domains and click "Add Domain".
  3. 3Enter your domain (example: app.example.com or example.com).
  4. 4Follow the DNS configuration instructions shown in the UI.
  5. 5For subdomains: add a CNAME record pointing to your deployment subdomain.
  6. 6For apex domains: add both an A record (server IP) and TXT record (verification token).
  7. 7Wait for DNS propagation (often minutes, sometimes longer).
  8. 8Click "Verify DNS" to confirm the domain is configured correctly.
  9. 9SSL certificates are issued automatically after verification.
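
Before clicking "Verify DNS", you can pre-check the records from a terminal. A sketch using `dig`, with `app.example.com` / `example.com` as placeholder domains:

```shell
# Subdomain: confirm the CNAME points at your deployment subdomain.
dig CNAME app.example.com +short

# Apex: confirm the A record and the TXT verification token are visible.
dig A example.com +short
dig TXT example.com +short
```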

Cloud deployment providers

Enable Cloud Run, App Runner, or Container Apps as deployment targets for your NEXUS AI instance.

Enable GCP Cloud Run deployments

Cloud Build builds images, Artifact Registry stores them, and Cloud Run runs them.

#gcp-cloud-run
  1. 1Enable required APIs.

    Command

    gcloud services enable \
      run.googleapis.com \
      cloudbuild.googleapis.com \
      artifactregistry.googleapis.com \
      storage.googleapis.com \
      logging.googleapis.com
  2. 2Create an Artifact Registry Docker repository.

    Command

    gcloud artifacts repositories create nexusai-deployments \
      --repository-format=docker \
      --location=us-central1
  3. 3Create a Cloud Storage bucket for build source archives.

    Command

    gsutil mb -l us-central1 gs://YOUR_UNIQUE_BUILD_BUCKET_NAME
  4. 4Grant required IAM roles.

    Command

    # Replace: PROJECT_ID, BACKEND_SA, PROJECT_NUMBER, BUCKET
    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member=serviceAccount:BACKEND_SA \
      --role=roles/run.admin
    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member=serviceAccount:BACKEND_SA \
      --role=roles/cloudbuild.builds.editor
    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member=serviceAccount:BACKEND_SA \
      --role=roles/logging.viewer
    gsutil iam ch serviceAccount:BACKEND_SA:objectAdmin gs://BUCKET
    
    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member=serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
      --role=roles/artifactregistry.writer
    gsutil iam ch serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com:objectViewer gs://BUCKET
  5. 5Deploy from a project and confirm build and runtime logs appear in the deployment logs view.
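
After granting the roles in step 4, you can confirm what the backend service account actually holds. A sketch using the same PROJECT_ID and BACKEND_SA placeholders:

```shell
# List all role bindings held by the backend service account in this project.
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:BACKEND_SA" \
  --format="table(bindings.role)"
```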

Enable AWS App Runner deployments

CodeBuild builds and pushes to ECR, and App Runner runs the service.

#aws-app-runner
  1. 1Create an ECR repository for deployment images.

    Command

    aws ecr create-repository --repository-name nexusai-apps --region us-east-1
  2. 2Create an S3 bucket for CodeBuild source archives.

    Command

    # In us-east-1, omit --create-bucket-configuration (LocationConstraint is rejected there)
    aws s3api create-bucket --bucket YOUR_UNIQUE_BUCKET_NAME --region us-east-1
  3. 3Create an IAM user/role for the backend and grant permissions for App Runner, CodeBuild, ECR, S3, and CloudWatch Logs (and IAM role creation if you want NEXUS AI to auto-create CodeBuild/App Runner roles).
  4. 4Deploy from a project and confirm build logs and runtime logs appear in the deployment logs view.
  5. 5Custom domains are supported for App Runner deployments via the in-app Custom Domains UI.
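
Step 3 can be done many ways. One coarse-grained sketch using AWS managed policies (the user name is an example; tighten to least privilege for production):

```shell
# Create a backend IAM user and attach broad managed policies for
# App Runner, CodeBuild, ECR, S3, and CloudWatch Logs.
aws iam create-user --user-name nexusai-backend
for policy in AWSAppRunnerFullAccess AWSCodeBuildAdminAccess \
              AmazonEC2ContainerRegistryFullAccess AmazonS3FullAccess \
              CloudWatchLogsFullAccess; do
  aws iam attach-user-policy --user-name nexusai-backend \
    --policy-arn "arn:aws:iam::aws:policy/$policy"
done
```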

Enable Azure Container Apps deployments

ACR Tasks builds and pushes images, and Container Apps runs the service.

#azure-container-apps
  1. 1Login, select a subscription, and register required providers (one-time).

    Command

    az login
    az account set --subscription "YOUR_SUBSCRIPTION_ID"
    
    az provider register --namespace Microsoft.App
    az provider register --namespace Microsoft.ContainerRegistry
    az provider register --namespace Microsoft.OperationalInsights
    az provider register --namespace Microsoft.Storage
  2. 2Create a resource group and service principal with Contributor role.

    Command

    RG_NAME="nexusai-core"
    RG_LOCATION="eastus"
    az group create --name "$RG_NAME" --location "$RG_LOCATION"
    
    SUB_ID=$(az account show --query id -o tsv)
    az ad sp create-for-rbac \
      --name "nexusai-backend" \
      --role Contributor \
      --scopes "/subscriptions/$SUB_ID/resourceGroups/$RG_NAME" \
      --query "{AZURE_CLIENT_ID:appId,AZURE_CLIENT_SECRET:password,AZURE_TENANT_ID:tenant}" \
      -o table
    
    echo "AZURE_SUBSCRIPTION_ID=$SUB_ID"
  3. 3Set Azure environment variables in the backend (.env).

    Command

    AZURE_TENANT_ID=...
    AZURE_CLIENT_ID=...
    AZURE_CLIENT_SECRET=...
    AZURE_SUBSCRIPTION_ID=...
    AZURE_DEFAULT_REGION=eastus
    AZURE_RESOURCE_GROUP_PREFIX=nexusai
    AZURE_ACR_NAME_PREFIX=nexusai
    AZURE_ACR_REPO=nexusai-apps
    AZURE_CONTAINERAPPS_ENV_PREFIX=nexusai
    AZURE_LOG_ANALYTICS_WORKSPACE_PREFIX=nexusai
    AZURE_STORAGE_ACCOUNT_PREFIX=nexusai
    AZURE_STORAGE_CONTAINER=build-sources
  4. 4For a single shared resource group, set AZURE_RESOURCE_GROUP to the resource group name created in step 2 (for example, nexusai-core). For per-project resource groups, the service principal must instead be scoped at the subscription level.
  5. 5Option 2 (per-project RGs): grant Contributor at subscription scope so the backend can create new resource groups.

    Command

    SUB_ID=YOUR_SUBSCRIPTION_ID
    SP_APP_ID=YOUR_SERVICE_PRINCIPAL_APP_ID
    az role assignment create --assignee "$SP_APP_ID" --role Contributor \
      --scope "/subscriptions/$SUB_ID"
  6. 6The backend will auto-create: Resource Group, ACR, Container Apps Environment, Log Analytics workspace, and Storage account.
  7. 7Custom domains are supported for Container Apps deployments via the in-app Custom Domains UI.

Azure required permissions (AuthorizationFailed fix)

Use this when deployments fail with `Microsoft.Resources/subscriptions/resourcegroups/write`.

#azure-required-permissions
  1. 1Set variables for your subscription, resource group, and service principal.

    Command

    SUB_ID="YOUR_SUBSCRIPTION_ID"
    RG_NAME="nexusai-shared"
    SP_APP_ID="YOUR_AZURE_CLIENT_ID"
  2. 2Ensure the resource group exists before assigning RBAC at RG scope.

    Command

    RG_LOCATION="eastus"
    az group show --name "$RG_NAME" --subscription "$SUB_ID" >/dev/null 2>&1 || \
      az group create --name "$RG_NAME" --location "$RG_LOCATION" --subscription "$SUB_ID"
  3. 3Grant Contributor at resource-group scope (recommended for a shared group).

    Command

    az role assignment create \
      --assignee "$SP_APP_ID" \
      --role Contributor \
      --scope "/subscriptions/$SUB_ID/resourceGroups/$RG_NAME"
  4. 4Verify the role assignment exists.

    Command

    az role assignment list \
      --assignee "$SP_APP_ID" \
      --scope "/subscriptions/$SUB_ID/resourceGroups/$RG_NAME" \
      --include-inherited -o table
  5. 5If you use per-project resource groups, grant Contributor at subscription scope instead.

    Command

    az role assignment create \
      --assignee "$SP_APP_ID" \
      --role Contributor \
      --scope "/subscriptions/$SUB_ID"
  6. 6After changing IAM, wait 5 to 10 minutes for propagation, then retry deployment. If your worker process has long-lived tokens, restart backend workers before retry.

Billing

Pricing and billing

#pricing
  1. 1Review plan limits on the Pricing page.
  2. 2Upgrade when you need more deployments or usage.
  3. 3Track spend and alerts in Billing settings.
  4. 4Download invoices and manage payment methods.

FAQ and troubleshooting

Common deployment validation errors, cloud permissions, and how to unblock yourself fast.

Deployment validation FAQ

Dockerfile validation is enforced to keep deployments secure and predictable.

#deployment-validation-faq
Why did my deployment fail with "Dockerfile validation failed"?+

This means your Dockerfile violates a security policy (base image allowlist, blocked ports, or dangerous build patterns). Fix: check your base image prefix, EXPOSE ports, and avoid restricted patterns like piping remote scripts into a shell.

Why did my deployment fail with "Exposing privileged port 25 is not allowed"?+

Issue: Your Dockerfile contains an `EXPOSE 25` instruction. In hardened environments, ports below 1024 are privileged and blocked by NEXUS AI deployments. Solution: Move the service to a non-privileged port (for example `2525` or `587`) and update your app configuration and Dockerfile accordingly.

Command

# Example (Postfix or SMTP-like service)
# Use a non-privileged port and update your service config accordingly.
ENV PORT=2525
EXPOSE 2525
Why did my deployment fail with "Exposing privileged port <port> is not allowed"?+

Ports below 1024 are considered privileged. NEXUS AI blocks privileged ports by default (common exceptions are 80 and 443). Fix: move your app to a non-privileged port like 3000 or 8080 and update your Dockerfile and app to listen on that port.

Why did my deployment fail with "Exposing port 22 is not allowed"?+

Ports associated with remote access protocols (SSH/Telnet/RDP/VNC) are blocked for safety. Fix: do not expose these ports. If you need admin access, use logs/metrics and application-level endpoints instead.

Why is the `postfix:latest` base image being rejected?+

Issue: NEXUS AI enforces a base-image allowlist so builds start from trusted, maintained sources. The `postfix:` prefix is not on the approved list. Solution: Rebuild using an approved base image prefix and install the needed mail utilities through the package manager. Approved prefixes: node:, python:, nginx:, alpine:, ubuntu:, debian:, postgres:, redis:, mongo:, mysql:, httpd:, php:, ruby:, golang:, openjdk:, rust:, gcc:.

Command

# Example approach: start from Debian and install packages
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends postfix && rm -rf /var/lib/apt/lists/*
# Configure Postfix to listen on a non-privileged port (e.g., 2525) in your main.cf
EXPOSE 2525
Why is my base image being rejected with "Base image ... is not allowed"?+

NEXUS AI only allows builds that start from approved base image prefixes. Fix: switch your Dockerfile `FROM` to an approved prefix such as `debian:`, `ubuntu:`, `alpine:`, `node:`, or `python:` and install what you need via the package manager.

Why is "curl | sh" / "wget | sh" blocked?+

Piping a remote script directly into a shell is a common supply-chain attack vector and is blocked. Fix: install packages using the OS package manager, pin versions where possible, and avoid remote script execution in Docker builds.
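
If you must run a vendor script, a safer pattern is to download it, verify a checksum you obtained out-of-band, and only then execute it. A sketch with placeholder URL and checksum:

```shell
# Download, verify against a known-good SHA-256, then run. Fails closed.
curl -fsSLo installer.sh https://example.com/installer.sh
echo "EXPECTED_SHA256  installer.sh" | sha256sum -c - || exit 1
sh installer.sh
```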

Why are privileged container options blocked (`--privileged`, `--cap-add`, host network, Docker socket)?+

These options can allow container escape or host compromise and are blocked by policy. Fix: remove privileged flags and redesign the service to run without elevated host access.

Why did my deployment fail with "Dockerfile is too large" or "too many instructions"?+

Very large Dockerfiles or extremely high instruction counts can be used for resource exhaustion and are blocked. Fix: reduce layers, remove repeated RUN steps, and prefer package manager installs in fewer commands.

Why is access to `/var/run/docker.sock` blocked?+

Mounting the Docker socket into a container can allow full host control and is treated as a container escape risk. Fix: remove Docker socket usage. If you need to build/run containers, use the platform's deployment workflow instead.

Why did my deployment fail with "Dockerfile appears to contain hardcoded secrets"?+

The validator detects secret-like values in Dockerfiles to prevent accidental credential leaks. Fix: remove secrets from Dockerfile and provide them at runtime using Secrets Vault and environment variables.

Troubleshooting guide for deployment failures+

If you receive `DEPLOYMENT_FAILED` after a validation error, use this checklist:

Command

Error Message                         | Meaning                               | Action Item
------------------------------------- | ------------------------------------- | -------------------------------------------
Dockerfile validation failed          | Configuration violates security policy | Check port numbers and base image prefixes
Base image ... is not allowed         | Image source is untrusted              | Switch to an approved prefix (ubuntu/alpine)
Exposing privileged port              | Port is < 1024                         | Move app to a port like 8080/3000/2525
How do I request a new base image or port exception?+

If your project strictly requires a base image or port that is not allowed by default, open a security review ticket with a clear business justification. Email: [email protected]

Ports and service readiness

Most "service won't start" issues come down to the container port and startup behavior.

#ports-and-readiness
Cloud Run says my revision is not ready. What should my app listen on?+

Your container must listen on the same port that Cloud Run routes traffic to. In NEXUS AI, the port is typically derived from your Dockerfile `EXPOSE` instruction (or defaults based on the generated Dockerfile). Fix: ensure your app listens on the exposed port and does not bind to localhost-only. Use 0.0.0.0 where applicable.

Why doesn't setting `ENV PORT=...` always fix Cloud Run?+

Cloud Run provides its own `PORT` environment variable and expects your app to bind to it. NEXUS AI also filters out user-provided `PORT` to avoid conflicts. Fix: update your app to read the `PORT` variable at runtime OR explicitly `EXPOSE` the correct port and configure your server to bind to that port.
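
As a quick illustration, a container entrypoint that binds to whatever port the platform injects (falling back to 8080) could look like this, assuming a Python-based image:

```shell
# Bind to all interfaces on the injected PORT; never bind to 127.0.0.1 only.
exec python3 -m http.server "${PORT:-8080}" --bind 0.0.0.0
```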

Custom domains FAQ

DNS and SSL are the most common sources of custom domain issues.

#custom-domains-faq
DNS looks correct, but verification still fails. What should I check?+

Common causes: DNS propagation delay, wrong record name (host), or a conflicting existing record. Fix: confirm the exact record name/value from the UI and verify it from multiple resolvers.

Command

# Example checks
dig CNAME app.example.com +short
dig TXT example.com +short
dig A example.com +short
Why does Cloudflare proxy (orange cloud) break verification/SSL?+

When Cloudflare proxy is enabled, DNS responses and TLS termination can differ from what NEXUS AI expects for verification and certificate issuance. Fix: disable proxy (set the record to DNS-only) during verification and certificate issuance.

Verification succeeded but SSL is still pending. Why?+

Certificate issuance can lag behind DNS verification, especially right after a DNS change. Fix: wait a few minutes and re-check. If it persists, verify there are no restrictive CAA records blocking Let's Encrypt.

How do I check if CAA records are blocking Let's Encrypt?+

CAA records can restrict which certificate authorities are allowed to issue certificates for your domain. Fix: ensure your CAA records allow Let's Encrypt (or remove restrictive CAA records).

Command

dig CAA example.com +short
Why does my apex domain fail when my DNS provider doesn't support CNAME at root?+

Many DNS providers do not allow CNAME records at the root/apex (`example.com`). Fix: use the platform's apex-domain flow (A + TXT) or use a provider feature like ALIAS/ANAME if supported.

I added the records, but my browser still goes to the old site. Why?+

DNS and browser caches can keep old values temporarily. Fix: verify with `dig` from your terminal, wait for TTL expiry, and try an incognito window or flush DNS cache.

GCP Cloud Run troubleshooting

Most Cloud Run issues are IAM permissions or service accessibility.

#gcp-cloud-run-troubleshooting
I see: Permission "iam.serviceAccounts.get" denied+

This means the service account running NEXUS AI does not have enough IAM permissions in your GCP project (or the service account does not exist). Fix: grant the required roles to the backend service account. Replace `PROJECT_ID` and `BACKEND_SA`.

Command

PROJECT_ID="your-project-id"
BACKEND_SA="your-backend-sa@your-project-id.iam.gserviceaccount.com"

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:$BACKEND_SA" \
  --role="roles/cloudbuild.builds.editor"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:$BACKEND_SA" \
  --role="roles/run.admin"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:$BACKEND_SA" \
  --role="roles/storage.admin"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:$BACKEND_SA" \
  --role="roles/iam.serviceAccountUser"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:$BACKEND_SA" \
  --role="roles/artifactregistry.writer"
How do I view Cloud Build logs for a deployment?+

Use the build ID shown in the deployment logs and fetch logs from Cloud Build.

Command

gcloud builds log BUILD_ID --project=PROJECT_ID
How do I find my GCP project number (for the Cloud Build service account)?+

The default Cloud Build service account is derived from the project number: `PROJECT_NUMBER@cloudbuild.gserviceaccount.com`

Command

gcloud projects describe PROJECT_ID --format="value(projectNumber)"
Cloud Build can't push to Artifact Registry. What should I check?+

Most commonly, the Cloud Build service account is missing Artifact Registry write permission. Fix: grant `roles/artifactregistry.writer` to `PROJECT_NUMBER@cloudbuild.gserviceaccount.com` in the same project/region as your repo.
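
The grant can be sketched as follows (replace PROJECT_ID and PROJECT_NUMBER; this mirrors the IAM step in the Cloud Run setup section):

```shell
# Allow the Cloud Build service account to push images to Artifact Registry.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"
```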

I get "repository not found" or image push errors. What's a common misconfiguration?+

A frequent cause is a mismatch between:

- Artifact Registry repo name/region
- `GCP_ARTIFACT_REGISTRY_HOST` (should be `REGION-docker.pkg.dev`, no `https://`)

Fix: confirm the repo exists and the host matches its region.

Cloud Build fails with "PERMISSION_DENIED". Which identity needs access?+

There are two identities involved:

- The backend runtime service account (creates builds, reads logs, uploads source archives).
- The Cloud Build service account (executes the build and pushes images).

Fix: grant roles to the correct identity. A common miss is Artifact Registry write access for the Cloud Build service account and bucket read access for the build source archive.

I can't reach the public Cloud Run service URL+

Check that your service has a ready revision and that it is publicly invokable (or that you have authentication configured correctly).

Command

gcloud run services describe SERVICE_NAME \
  --region=us-central1 \
  --project=PROJECT_ID \
  --format="value(status.latestReadyRevisionName,status.latestCreatedRevisionName)"

# Check if the service is public (allUsers has roles/run.invoker)
gcloud run services get-iam-policy SERVICE_NAME \
  --region=us-central1 \
  --project=PROJECT_ID
How do I confirm Artifact Registry is set up correctly?+

List repositories in Artifact Registry for your project and confirm the repo/region match your backend configuration.

Command

gcloud artifacts repositories list --project=PROJECT_ID
How do I add a custom domain to a Cloud Run deployment?+

NEXUS AI does not support Cloud Run custom domains in-app yet, but you can configure them directly using Cloud Run domain mapping:

1) Create the mapping.
2) Add the DNS records returned by the mapping status.
3) Wait for the certificate to become ready.

Command

gcloud config set project PROJECT_ID

gcloud run domain-mappings create \
  --service SERVICE_NAME \
  --domain app.example.com \
  --region us-central1

gcloud run domain-mappings describe app.example.com \
  --region us-central1 \
  --format="yaml(status.resourceRecords)"

AWS App Runner troubleshooting

Most App Runner issues are ECR/CodeBuild/IAM configuration or container startup behavior.

#aws-app-runner-troubleshooting
My App Runner deployment fails during build. What should I check first?+

Check that:

- CodeBuild can read the S3 source archive bucket.
- CodeBuild can authenticate to ECR and push images.
- App Runner can pull from ECR (service role permissions).

I see `AccessDeniedException` / `iam:PassRole` errors. What do they mean?+

When AWS needs to attach a role to CodeBuild or App Runner, the caller must have permission to pass that role. Fix: ensure the IAM identity used by the backend has `iam:PassRole` for the specific role ARNs you are using.

How do I verify ECR exists and is accessible?+

List repositories and verify the expected repo is present in the region you configured.

Command

aws ecr describe-repositories --region us-east-1
Where do I look for AWS build and runtime logs?+

Build logs are typically in CodeBuild (CloudWatch Logs). Runtime logs are in App Runner (CloudWatch Logs). Fix: open the related CodeBuild project and App Runner service in AWS Console and inspect CloudWatch logs.
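
From the CLI, you can locate and tail the relevant log groups. The name prefixes below are the typical AWS defaults; adjust the region and project name to your setup:

```shell
# Find CodeBuild and App Runner log groups.
aws logs describe-log-groups --log-group-name-prefix /aws/codebuild --region us-east-1
aws logs describe-log-groups --log-group-name-prefix /aws/apprunner --region us-east-1

# Tail a log group once you know its name (AWS CLI v2).
aws logs tail /aws/codebuild/YOUR_PROJECT --since 1h --region us-east-1
```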

S3 bucket creation fails in us-east-1 with a LocationConstraint error. Why?+

In `us-east-1`, S3 bucket creation differs slightly from other regions and LocationConstraint can cause errors depending on the command used. Fix: for us-east-1, create the bucket without the LocationConstraint field.

Command

aws s3api create-bucket --bucket YOUR_UNIQUE_BUCKET_NAME --region us-east-1

Azure Container Apps / ACR troubleshooting

Most Azure deployment issues are permissions, ACR builds (Tasks), or Log Analytics setup.

#azure-container-apps-troubleshooting
ACR Tasks requests are not permitted for my registry+

Azure is blocking ACR Tasks for your Container Registry, so the platform cannot run the registry-side build step (ACR Tasks). This is not a Dockerfile validation issue; your Azure subscription, tenant, or registry configuration is rejecting task requests. Fix options:

1) Check the registry SKU and upgrade if needed (Standard/Premium recommended).
2) Check for Azure Policy restrictions blocking ACR Tasks in your subscription.
3) If required, open an Azure Support request to allow ACR Tasks for the registry: http://aka.ms/azuresupport

Command

# Check ACR SKU
az acr show -n REGISTRY_NAME --query sku.name -o tsv

# Upgrade SKU (example: Standard)
az acr update -n REGISTRY_NAME --sku Standard
ACR build fails with authorization errors. What should I check?+

Ensure the backend service principal has Contributor on the Resource Group. Confirm `AZURE_TENANT_ID`, `AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, and `AZURE_SUBSCRIPTION_ID` are correct.

Deployment failed with `Microsoft.Resources/subscriptions/resourcegroups/write`. What does this mean?+

The Azure service principal used by NEXUS AI does not have enough RBAC permissions on the target resource group (or subscription). Fix: grant Contributor on the target resource group for shared-RG mode, or Contributor on the subscription for per-project RG mode. Then wait for IAM propagation and retry deployment.

Command

SUB_ID="YOUR_SUBSCRIPTION_ID"
RG_NAME="nexusai-shared"
SP_APP_ID="YOUR_AZURE_CLIENT_ID"
RG_LOCATION="eastus"
az group show --name "$RG_NAME" --subscription "$SUB_ID" >/dev/null 2>&1 || \
  az group create --name "$RG_NAME" --location "$RG_LOCATION" --subscription "$SUB_ID"

az role assignment create \
  --assignee "$SP_APP_ID" \
  --role Contributor \
  --scope "/subscriptions/$SUB_ID/resourceGroups/$RG_NAME"

az role assignment list \
  --assignee "$SP_APP_ID" \
  --scope "/subscriptions/$SUB_ID/resourceGroups/$RG_NAME" \
  --include-inherited -o table
No runtime logs appear in the UI. Why?+

Container Apps logs are streamed via Log Analytics. Fix: verify the Log Analytics workspace exists and is attached to the Container Apps environment.
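
To check from the CLI, a sketch (the environment, resource group, and workspace names are examples following the `nexusai` prefixes from the setup section):

```shell
# Confirm the Container Apps environment has log ingestion configured.
az containerapp env show \
  --name nexusai-env \
  --resource-group nexusai-core \
  --query "properties.appLogsConfiguration" -o json

# Confirm the Log Analytics workspace itself exists.
az monitor log-analytics workspace show \
  --resource-group nexusai-core \
  --workspace-name nexusai-logs -o table
```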

I see image pull errors in Container Apps.+

Confirm the ACR registry exists and the image tag was pushed by the ACR Task. Check that the Container App registry settings reference the correct login server.
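
Both can be verified from the CLI (REGISTRY_NAME is a placeholder; the repository name follows `AZURE_ACR_REPO` from the setup section):

```shell
# Verify the registry exists and note its login server.
az acr show --name REGISTRY_NAME --query loginServer -o tsv

# Verify the expected image tags were pushed by the ACR Task.
az acr repository show-tags --name REGISTRY_NAME --repository nexusai-apps -o table
```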

Secrets and encryption FAQ (self-hosted)

Keys are encrypted at rest. Keep your encryption key stable.

#secrets-and-encryption-faq
I changed `ENCRYPTION_KEY` and now AI providers/secrets fail. What happened?+

Provider API keys stored by NEXUS AI are encrypted using `ENCRYPTION_KEY`. If you change it, previously stored encrypted values can't be decrypted anymore. Fix: restore the original `ENCRYPTION_KEY` OR re-save your provider credentials and secrets after changing it.

Quotas and limits FAQ

Plan limits affect AI requests and deployments.

#quotas-and-limits-faq
Why do I see "AI request quota exceeded for this month"?+

Your organization hit its monthly AI request limit for the current plan tier. Fix: upgrade your plan or wait for the monthly quota reset.

Why can't I deploy more containers?+

Your organization may have reached the concurrent deployment limit (running/building deployments). Fix: stop unused deployments or upgrade your plan limits.

Why did my deployment stop automatically?+

Deployments can auto-stop due to plan max runtime (or an auto-destroy setting) to control cost and resource usage. Fix: increase allowed runtime on your plan (if available) or redeploy when needed.

What's the difference between AI request quota and token usage?+

Request quota limits the number of generation calls you can make. Token usage measures the size of prompts and outputs (and impacts cost). Fix: if you hit request quota, upgrade or wait for reset. If costs are high, reduce max tokens and use smaller models.

External Database Connections

Connect your own Postgres, Supabase, or any external database. Run safe SQL, inspect schema, and let AI fix deployment errors automatically.

How external DB connections work

#ext-db-overview
  1. 1NEXUS AI lets you register any external PostgreSQL database (Supabase, Neon, RDS, self-hosted, etc.) as a DB Source.
  2. 2Your password is encrypted with AES-256-GCM and stored in the Secrets Vault — the raw credential is never persisted in plain text.
  3. 3Every query opens a fresh short-lived connection, runs inside a transaction with session-level timeouts, and disconnects immediately.
  4. 4A SQL safety layer blocks dangerous operations (DROP DATABASE, pg_exec, DELETE without WHERE, multi-statement SQL) before any query reaches your database.
  5. 5All executions are logged to the audit trail with a SHA-256 hash of the SQL — no plain-text query is stored.
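The audit hashing in step 5 can be reproduced locally, for example to correlate an audit entry with a query you ran. A minimal sketch using coreutils `sha256sum` (whether NEXUS AI normalizes whitespace before hashing is an assumption not covered here):

```shell
# Hash a SQL statement the way the audit trail does (illustrative sketch).
sql='SELECT id, email FROM users LIMIT 10'
hash=$(printf '%s' "$sql" | sha256sum | awk '{print $1}')
echo "$hash"   # 64-character hex digest
```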

Connect a Supabase database

Supabase runs on PostgreSQL. Use the Session Pooler for best compatibility with NEXUS AI's short-lived connections.

#ext-db-supabase
  1. 1Step 1 — Find your connection details in Supabase:
     1. Go to supabase.com and open your project.
     2. In the left sidebar, click "Project Settings" (gear icon at the bottom).
     3. Click "Database" in the settings menu.
     4. Scroll down to the "Connection string" section.
     5. Above the URI / PSQL / Golang tabs, find the "Connection pooler" dropdown and switch it from "Direct connection" to "Session pooler".
     6. Copy the Host and Port shown — these differ from the direct connection values.

     Your Session Pooler details will look like:

    Command

    Host:     aws-0-us-east-1.pooler.supabase.com
    Port:     5432
    Database: postgres
    Username: postgres.<your-project-ref>
    Password: <your-database-password>
    SSL mode: require
  2. 2Step 2 — Register the DB Source in NEXUS AI:

    Command

    POST /api/db-sources
    Authorization: Bearer <token>
    
    {
      "name": "my-supabase",
      "host": "aws-0-us-east-1.pooler.supabase.com",
      "port": 5432,
      "dbName": "postgres",
      "username": "postgres.abcdefghijklmnop",
      "password": "<db-password>",
      "sslMode": "require",
      "engine": "POSTGRESQL"
    }
  3. 3Or use the MCP tool from Claude: nexusai_db_source_connect with the same fields.
  4. 4Test the connection: POST /api/db-sources/<id>/test — returns latency in ms on success.

Supabase connection modes explained

#ext-db-pooler-modes
  1. 1Supabase provides three connection modes. Choose based on your use case:

    Command

    Direct connection       port 5432  — long-lived, migrations, admin tasks
    Session pooler          port 5432  — recommended for NEXUS AI
    Transaction pooler      port 6543  — ultra-short-lived (pgBouncer, limited SET support)
  2. 2NEXUS AI recommends the Session Pooler (port 5432). It is compatible with all SQL features including SET commands, while still efficiently managing connections.
  3. 3Important: the Session Pooler username format is postgres.<project-ref> — for example: postgres.abcdefghijklmnop. You can copy this directly from the Supabase connection string shown after switching the pooler dropdown.
  4. 4Avoid the Transaction Pooler (port 6543) unless you have a specific reason — it uses pgBouncer in transaction mode which blocks certain PostgreSQL features.
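A quick pre-flight check for the username format described above can save a failed connection attempt. A sketch (the function name is illustrative, not part of any NEXUS AI tooling):

```shell
# Session Pooler usernames must look like postgres.<project-ref> (sketch).
check_pooler_user() {
  case "$1" in
    postgres.?*) echo "username format OK" ;;
    *)           echo "expected postgres.<project-ref>" ;;
  esac
}
check_pooler_user "postgres.abcdefghijklmnop"   # username format OK
check_pooler_user "postgres"                    # expected postgres.<project-ref>
```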

Run a query against your external DB

#ext-db-query
  1. 1Preview mode runs EXPLAIN and safety analysis without committing any changes:

    Command

    POST /api/db-sources/<id>/query
    
    {
      "sql": "SELECT id, email FROM users LIMIT 10",
      "mode": "preview"
    }
  2. 2Execute mode runs the query for real. DML (INSERT/UPDATE/DELETE) and DDL require confirmed: true:

    Command

    POST /api/db-sources/<id>/query
    
    {
      "sql": "UPDATE users SET status = 'active' WHERE id = 'abc'",
      "mode": "execute",
      "confirmed": true
    }
  3. 3Results are capped at 1000 rows by default. Pass limits.maxRows to override (max 10000).
  4. 4SELECT queries run inside a READ ONLY transaction — they cannot mutate data even if the SQL tries to.
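The row-cap override from step 3 looks like this on the wire. A sketch — the nesting of `maxRows` under a `limits` object follows the `limits.maxRows` path named above, and the commented curl call reuses the `$NEXUS_API_BASE` / `$NEXUS_JWT` variables from the API overview:

```shell
# Request body raising the row cap from the default 1000 to 5000.
body='{"sql": "SELECT * FROM events", "mode": "execute", "limits": {"maxRows": 5000}}'
printf '%s\n' "$body"
# Send it:
# curl -s -X POST "$NEXUS_API_BASE/db-sources/<id>/query" \
#   -H "Authorization: Bearer $NEXUS_JWT" \
#   -H "Content-Type: application/json" -d "$body"
```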

Inspect database schema

#ext-db-schema
  1. 1Fetch the full schema graph (tables, columns, indexes, constraints):

    Command

    GET /api/db-sources/<id>/schema
  2. 2Results are cached in Redis for 5 minutes. Pass ?refresh=true to bust the cache and re-fetch live.
  3. 3Via MCP: use nexusai_db_inspect_schema — Claude will see your full table structure and can write accurate SQL against it.

AI log-to-fix: auto-repair database errors

Paste a deployment log containing database errors. NEXUS AI extracts the error, inspects the live schema, and proposes a DDL fix.

#ext-db-log-to-fix
  1. 1Send the log snippet to the propose-fix endpoint:

    Command

    POST /api/db-sources/<id>/propose-fix
    
    {
      "logSnippet": "ERROR: relation \"profiles\" does not exist\nLINE 1: SELECT * FROM profiles WHERE user_id = $1",
      "deploymentId": "<optional-deployment-id>"
    }
  2. 2NEXUS AI returns a DDL proposal with a risk level (LOW / MEDIUM / HIGH):

    Command

    {
      "proposedSql": "CREATE TABLE profiles (id UUID PRIMARY KEY, user_id UUID NOT NULL, ...);",
      "explanation": "The profiles table is missing. This creates it with a user_id foreign key.",
      "riskLevel": "LOW"
    }
  3. 3Review and apply the fix:

    Command

    POST /api/db-fix-proposals/<proposalId>/apply
    
    { "dbSourceId": "<id>" }
  4. 4Via MCP: nexusai_db_propose_fix → review → nexusai_db_apply_fix.
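In a script, you may want to gate the apply step on the risk level returned in step 2. A minimal `sed`-based sketch (the response shape is the one shown above; a proper JSON parser such as jq is the more robust choice):

```shell
# Decide whether a proposal is safe to auto-apply based on riskLevel (sketch).
response='{"proposedSql": "CREATE TABLE profiles (id UUID PRIMARY KEY);", "riskLevel": "LOW"}'
risk=$(printf '%s' "$response" | sed -n 's/.*"riskLevel" *: *"\([A-Z]*\)".*/\1/p')
if [ "$risk" = "LOW" ]; then
  echo "auto-apply"
else
  echo "manual review required"
fi
```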

MCP tools for Claude and ChatGPT

#ext-db-mcp-tools
  1. 1Eight MCP tools are available once you connect NEXUS AI to Claude or ChatGPT:

    Command

    nexusai_db_source_list      — list all registered DB sources
    nexusai_db_source_connect   — register a new external DB
    nexusai_db_source_delete    — remove a DB source
    nexusai_db_inspect_schema   — get full schema graph
    nexusai_db_query_preview    — dry-run a SQL query
    nexusai_db_query_execute    — run SQL (confirmed=true for writes)
    nexusai_db_propose_fix      — analyze logs → propose DDL
    nexusai_db_apply_fix        — apply a reviewed fix proposal
  2. 2Claude will automatically inspect your schema before writing SQL, so queries match your actual table and column names.

SQL safety rules

#ext-db-safety
  1. 1The following operations are always blocked, regardless of the confirmed flag:

    Command

    DROP DATABASE
    ALTER ROLE / CREATE ROLE
    CREATE EXTENSION
    COPY ... PROGRAM
    pg_read_file, pg_ls_dir, pg_exec
    Multi-statement SQL (more than one statement per request)
    DELETE or UPDATE without a WHERE clause
  2. 2SELECT queries are auto-allowed and run in a READ ONLY transaction.
  3. 3DML (INSERT/UPDATE/DELETE) and DDL (CREATE TABLE, ALTER TABLE, etc.) are allowed but require confirmed: true.
  4. 4DDL that drops or modifies columns is flagged HIGH risk and surfaced in the proposal before apply.
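As an illustration of two of the rules above — multi-statement SQL and DML without WHERE — here is a grep-based approximation. This is not the actual safety service, only a sketch of the documented behavior:

```shell
# Approximate two safety checks: DML without WHERE, and multi-statement SQL.
check_sql() {
  sql=$1
  if printf '%s' "$sql" | grep -qiE '^[[:space:]]*(delete|update) ' \
     && ! printf '%s' "$sql" | grep -qi 'where'; then
    echo "blocked: DELETE/UPDATE without WHERE"
  elif printf '%s' "$sql" | grep -q ';[[:space:]]*[^[:space:]]'; then
    echo "blocked: multi-statement SQL"
  else
    echo "ok"
  fi
}
check_sql "DELETE FROM users"                  # blocked: DELETE/UPDATE without WHERE
check_sql "SELECT 1; DROP TABLE users"         # blocked: multi-statement SQL
check_sql "DELETE FROM users WHERE id = 'a'"   # ok
```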

Troubleshooting external DB connections

#ext-db-troubleshooting
Connection to Supabase times out or is refused.

Most likely cause: wrong port or username format. Fix: in Supabase Project Settings → Database → Connection string, set the "Connection pooler" dropdown to "Session pooler", then use the Host and Port shown there (port 5432) with the username format postgres.<project-ref>.

Queries fail on Supabase with "unsupported startup parameter".

PgBouncer in transaction mode blocks certain SET commands outside a transaction. Fix: NEXUS AI uses SET LOCAL inside the transaction, which pgBouncer allows. If you see this error on a custom query, avoid session-level SET commands in your SQL.

My query was blocked by the SQL safety layer.

The safety service rejected the statement before it reached the database. Fix: check the reasons field in the response. Common causes are: DELETE/UPDATE without WHERE, blocked keywords (DROP DATABASE, pg_exec), or multiple statements separated by semicolons.

Schema introspection shows no tables.

The connected role may lack access to the public schema, or your tables live in a different schema. Fix: grant the connecting role USAGE on the schema and SELECT on its tables, or ensure the tables are in the public schema. Supabase users should connect as the default postgres role or another role with schema access.

The log-to-fix proposal generated invalid SQL.

The AI proposal is validated through the SQL safety layer before being stored. If it fails, the endpoint returns an error instead of a bad proposal. Fix: provide more context in the logSnippet (include the full stack trace and error line). You can also manually write the DDL and execute it via /query with confirmed: true.

NEXUS AI CLI

Deploy, manage, and automate everything from your terminal using the nexus command-line interface.

Installation

#cli-install
  1. 1Install globally from npm:

    Command

    npm install -g nexusapp-cli
  2. 2Or run one-off without installing:

    Command

    npx nexusapp-cli auth login
  3. 3Verify the installation:

    Command

    nexus --version
    nexus --help

Authentication

The CLI uses browser-based login (like GitHub CLI) to obtain a persistent access token.

#cli-auth
  1. 1Log in interactively — opens your browser to complete authentication:

    Command

    nexus auth login
  2. 2Point to a self-hosted instance:

    Command

    nexus auth login --api-url http://localhost:3001 --web-url http://localhost:3002
  3. 3Use a pre-existing nxk_* token directly (useful for CI/CD bootstrap):

    Command

    nexus auth login --token nxk_abc123...
  4. 4Check who you are logged in as:

    Command

    nexus auth whoami
  5. 5Log out and revoke the token:

    Command

    nexus auth logout
  6. 6For CI/CD pipelines, skip interactive login entirely with environment variables:

    Command

    export NEXUSAI_TOKEN=nxk_your_token_here
    export NEXUSAI_API_URL=https://nexusai.run
    nexus deploy list

Deploy from a container image

Use nexus deploy create to deploy any pre-built container image.

#cli-deploy-image
  1. 1Deploy nginx:latest on Docker:

    Command

    nexus deploy create --image nginx:latest --port 80 --name my-site --provider docker
  2. 2Deploy to Google Cloud Run and wait for it to go live:

    Command

    nexus deploy create \
      --image node:20-alpine \
      --port 3000 \
      --name api-prod \
      --provider gcp_cloud_run \
      --env NODE_ENV=production \
      --env PORT=3000 \
      --wait
  3. 3Available providers: docker, gcp_cloud_run, aws_ecs_fargate, azure_container_apps.
  4. 4Images must include a tag and use an allowed base (node:, nginx:, python:, alpine:, ubuntu:, debian:, postgres:, redis:, mongo:, mysql:, httpd:, php:, ruby:, golang:, rust:, gcc:).
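The two rules above can be checked locally before calling the CLI. A sketch that only mirrors the documented rules (the real validation happens server-side):

```shell
# Pre-flight check: image must carry a tag and use an allowed base.
check_image() {
  img=$1
  case "$img" in
    *:*) ;;
    *) echo "missing tag"; return ;;
  esac
  case "$img" in
    node:*|nginx:*|python:*|alpine:*|ubuntu:*|debian:*|postgres:*|redis:*|\
    mongo:*|mysql:*|httpd:*|php:*|ruby:*|golang:*|rust:*|gcc:*)
      echo "allowed" ;;
    *) echo "base not allowed" ;;
  esac
}
check_image nginx          # missing tag
check_image nginx:latest   # allowed
check_image myimage:1.0    # base not allowed
```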

Deploy from a Git repository

Use nexus deploy source to build and deploy directly from a repo — no Dockerfile required.

#cli-deploy-source
  1. 1Deploy a public GitHub repo:

    Command

    nexus deploy source --repo https://github.com/you/app --name my-app --wait
  2. 2Specify branch, framework, and build commands:

    Command

    nexus deploy source \
      --repo https://github.com/you/api \
      --branch main \
      --framework node \
      --install-command "npm ci" \
      --build-command "npm run build" \
      --start-command "node dist/index.js" \
      --environment PRODUCTION \
      --wait
  3. 3Deploy a private repo using a stored secret token:

    Command

    # First store the token as a secret
    nexus secret create --name GITHUB_TOKEN --environment production
    
    # Then deploy referencing that secret
    nexus deploy source \
      --repo https://github.com/you/private-app \
      --repo-secret GITHUB_TOKEN \
      --wait
  4. 4Auto-destroy a staging environment after 4 hours:

    Command

    nexus deploy source \
      --repo https://github.com/you/app \
      --branch feature/new-ui \
      --environment STAGING \
      --auto-destroy 4 \
      --wait

Manage deployments

All commands accept a deployment name or UUID interchangeably.

#cli-deploy-manage
  1. 1List all deployments (shows name and ID):

    Command

    nexus deploy list
    nexus deploy list --status RUNNING
    nexus deploy list --project <project-id>
  2. 2Get full details for a deployment:

    Command

    nexus deploy get my-app
  3. 3Watch live status updates every 3 seconds:

    Command

    nexus deploy status my-app --watch
  4. 4Stream logs in real time:

    Command

    nexus deploy logs my-app --follow
    nexus deploy logs my-app --type build --lines 200
  5. 5Stop, start, or delete:

    Command

    nexus deploy stop my-app
    nexus deploy start my-app
    nexus deploy delete my-app --yes
  6. 6Scale replicas up or down (1–10):

    Command

    nexus deploy scale my-app 3
  7. 7Redeploy with the same config (triggers a fresh build for source deployments):

    Command

    nexus deploy redeploy my-app --wait
  8. 8Roll back to the previous version:

    Command

    nexus deploy rollback my-app
    # Roll back to a specific prior deployment:
    nexus deploy rollback my-app --target <old-deployment-id>

Secrets management

#cli-secrets
  1. 1List all secrets (values are never shown):

    Command

    nexus secret list
    nexus secret list --environment production
  2. 2Create a secret (prompted securely if --value is omitted):

    Command

    nexus secret create --name DATABASE_URL --environment production
    # or inline (not recommended — stays in shell history):
    nexus secret create --name API_KEY --environment staging --value "sk-abc123"
  3. 3Update a secret value:

    Command

    nexus secret update <secret-id>
  4. 4Delete a secret:

    Command

    nexus secret delete <secret-id> --yes

Project management

#cli-projects
  1. 1List projects:

    Command

    nexus project list
  2. 2Create a project:

    Command

    nexus project create --name "Backend API"
  3. 3Delete a project:

    Command

    nexus project delete <project-id> --yes

Custom domains

#cli-domains
  1. 1Add a custom domain to a deployment:

    Command

    nexus domain add my-app api.example.com
  2. 2The CLI prints the DNS TXT record you need to add for ownership verification. After adding it to your DNS provider, trigger verification:

    Command

    # List domains to get the domain ID
    nexus domain list my-app
    
    # Verify (DNS changes can take up to 48h)
    nexus domain verify my-app <domain-id>
  3. 3Remove a domain:

    Command

    nexus domain remove my-app <domain-id>

CI/CD integration (GitHub Actions)

Use NEXUSAI_TOKEN and NEXUSAI_API_URL environment variables for non-interactive pipeline use.

#cli-cicd
  1. 1Store your token as a GitHub Actions secret (NEXUSAI_TOKEN), then add a deploy step:

    Command

    name: Deploy to NEXUS AI
    on:
      push:
        branches: [main]
    
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
    
          - name: Install NEXUS AI CLI
            run: npm install -g nexusapp-cli
    
          - name: Deploy
            env:
              NEXUSAI_TOKEN: ${{ secrets.NEXUSAI_TOKEN }}
              NEXUSAI_API_URL: https://nexusai.run
            run: |
              nexus deploy source \
                --repo https://github.com/${{ github.repository }} \
                --branch ${{ github.ref_name }} \
                --name my-app \
                --environment PRODUCTION \
                --wait
  2. 2Deploy a Docker image built in the same pipeline:

    Command

          - name: Build and push image
            run: |
              docker build -t myregistry/app:${{ github.sha }} .
              docker push myregistry/app:${{ github.sha }}
    
          - name: Deploy to NEXUS AI
            env:
              NEXUSAI_TOKEN: ${{ secrets.NEXUSAI_TOKEN }}
            run: |
              nexus deploy create \
                --image myregistry/app:${{ github.sha }} \
                --port 8080 \
                --name api-prod \
                --provider gcp_cloud_run \
                --wait

Configuration reference

#cli-config
Where is the config file stored?

After login, credentials are saved to ~/.nexusai/config.json with mode 0600 (owner read/write only). The file contains apiUrl, token, and tokenId.

Command

cat ~/.nexusai/config.json
Which environment variables does the CLI respect?

NEXUSAI_TOKEN overrides the saved token (useful in CI/CD). NEXUSAI_API_URL overrides the saved API base URL. NEXUSAI_WEB_URL overrides the frontend URL used during browser login.

Command

NEXUSAI_TOKEN=nxk_... NEXUSAI_API_URL=https://nexusai.run nexus deploy list
How do I use --json for scripting?

Every read command supports --json to output raw API data, making it easy to pipe into jq or other tools.

Command

nexus deploy list --json | jq '.[] | {name, status, url}'
How do I skip confirmation prompts in scripts?

Pass --yes to any destructive command (stop, delete, rollback, redeploy, domain remove) to skip the interactive confirmation.

Command

nexus deploy delete old-app --yes
nexus deploy stop my-app --yes

Troubleshooting

#cli-troubleshooting
Cannot reach NEXUS AI API

The CLI cannot connect to the API. Check the API URL in your config or set NEXUSAI_API_URL to the correct address.

Command

cat ~/.nexusai/config.json
# Fix:
nexus auth login --api-url http://localhost:3001 --web-url http://localhost:3002
Session expired. Run 'nexus auth login'

Your access token was revoked or expired. Log in again to get a fresh token.

Command

nexus auth logout
nexus auth login
Insufficient scope error

The token was created with the wrong scopes. Log out and back in — the browser auth page now creates tokens with all required scopes automatically.

Command

nexus auth logout
nexus auth login
Deployment not found: "my-app"

The name does not match any deployment in your organization. Run nexus deploy list to see all deployment names and IDs.

Command

nexus deploy list
Security validation failed: Base image not allowed

Images must include a tag (e.g. nginx:latest, not nginx). Allowed base images: node:, python:, nginx:, alpine:, ubuntu:, debian:, postgres:, redis:, mongo:, mysql:, httpd:, php:, ruby:, golang:, rust:, gcc:.

Command

# Wrong:
nexus deploy create --image nginx --port 80
# Correct:
nexus deploy create --image nginx:latest --port 80

Deploy your first container now.

Generate & deploy with AI using cloud-agnostic runtime and enterprise-grade orchestration.

Get started today

Ship faster. Scale effortlessly.

Deploy your first app in minutes. No DevOps team required. Full AI observability from day one.

No credit card required · Free tier available