Native API Integration

Every Genum prompt comes with a ready-to-use HTTP API. You can trigger AI behavior programmatically, using versioned prompts as deterministic, testable logic components.


Authentication

Genum uses Bearer token authentication. API Keys are managed under:

  • Settings → Project → API Keys

These keys are scoped by project and represent access to specific prompt environments and integrations.


API Endpoint

Use the following endpoint to run any versioned prompt:

POST https://api.genum.ai/api/v1/prompts/run

Headers

{
  "Content-Type": "application/json",
  "Authorization": "Bearer YOUR_API_KEY"
}

Request Body

{
  "id": "YOUR_PROMPT_ID",            // Required: Prompt ID
  "question": "Your input text here", // Required: Input to process
  "memoryKey": "optional-key",        // Optional: Context memory
  "productive": true                  // Optional: Use committed version (default: true)
}

Setting productive to true ensures that only committed and tested prompt versions are executed.
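The request above can be sketched with Python's standard library. The endpoint URL and field names come from this documentation; the helper names and placeholder values are illustrative, not part of Genum.

```python
# Minimal sketch of calling the Genum run endpoint with urllib (stdlib only).
# Helper names (build_payload, run_prompt) are illustrative assumptions.
import json
import urllib.request

GENUM_RUN_URL = "https://api.genum.ai/api/v1/prompts/run"


def build_payload(prompt_id, question, memory_key=None, productive=True):
    """Assemble the request body; the optional memoryKey is omitted when unset."""
    payload = {"id": prompt_id, "question": question, "productive": productive}
    if memory_key is not None:
        payload["memoryKey"] = memory_key
    return payload


def run_prompt(api_key, prompt_id, question, **kwargs):
    """POST a prompt run and return the parsed JSON response."""
    body = json.dumps(build_payload(prompt_id, question, **kwargs)).encode()
    req = urllib.request.Request(
        GENUM_RUN_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because productive defaults to true, omitting it runs the committed version; pass memory_key only when you want context memory.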


Response Format

{
  "answer": "Generated response",
  "tokens": {
    "prompt": 10,
    "completion": 20,
    "total": 30
  },
  "response_time_ms": 500,
  "chainOfThoughts": "Optional reasoning chain",
  "status": "Optional status (e.g. NOK: error message)"
}
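A response parsed from this format can be inspected as a plain dictionary. The sketch below, with invented sample values, checks the optional status field for an NOK marker before using the answer:

```python
# Sketch of consuming a parsed run response; field names follow the
# documented response format, sample values are invented.
def summarize(resp: dict) -> str:
    """Raise on an NOK status, otherwise format the answer with usage stats."""
    status = resp.get("status", "")
    if status.startswith("NOK"):
        raise RuntimeError(f"Prompt run failed: {status}")
    return (f"{resp['answer']} "
            f"({resp['tokens']['total']} tokens, {resp['response_time_ms']} ms)")
```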

Error Handling

{
  "error": "Error message"
}

Errors may result from:

  • ❌ Invalid API key
  • ❌ Missing or incorrect prompt ID
  • ❌ Exceeded rate limits
  • ❌ Upstream model failure
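When a call fails, urllib raises an HTTPError whose body carries the error object shown above. A small sketch for surfacing that message (the helper name is an assumption; the error field is from the documented format):

```python
# Sketch: extract the documented "error" field from a failed HTTP response,
# falling back to the HTTP reason phrase when the body is not JSON.
import json
import urllib.error


def error_message(exc: urllib.error.HTTPError) -> str:
    """Return the API's error message, or the HTTP reason as a fallback."""
    try:
        payload = json.loads(exc.read().decode())
        return payload.get("error", str(exc.reason))
    except ValueError:
        return str(exc.reason)
```

In practice you would wrap the POST in try/except urllib.error.HTTPError and log or retry based on the extracted message (e.g. backing off on rate-limit errors).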

Why Native API?

  • Use Genum as a stable runtime layer
  • Integrate with CI/CD workflows
  • Reuse your testable prompts in production scenarios
  • Control prompt execution through memory and version flags

Native API makes your prompts portable, auditable, and production-grade.