Native API Integration
Every Genum prompt comes with a ready-to-use HTTP API. You can trigger AI behavior programmatically, using versioned prompts as deterministic, testable logic components.
Authentication
Genum uses Bearer token authentication. API Keys are managed under:
Settings → Project → API Keys
These keys are scoped by project and represent access to specific prompt environments and integrations.
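How you store the key is up to you; a minimal TypeScript sketch might read it from an environment variable and build the Bearer header from it (the GENUM_API_KEY variable name is illustrative, not a Genum convention):

```typescript
// Minimal sketch: build the Authorization header from an environment
// variable. GENUM_API_KEY is an illustrative name, not a Genum convention.
const apiKey = process.env.GENUM_API_KEY;
if (!apiKey) {
  throw new Error("GENUM_API_KEY is not set");
}

const authHeaders = { Authorization: `Bearer ${apiKey}` };
```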

API Methods
- Run Prompt
- Get Single Prompt
- Get All Prompts
Run Prompt
Endpoint
POST https://api.genum.ai/api/v1/prompts/run
Headers
{
  "Authorization": "Bearer YOUR_API_KEY"
}
Request Body
{
  "id": "YOUR_PROMPT_ID", // Required: Prompt ID
  "question": "Your input text here", // Required: Input to process
  "memoryKey": "optional-key", // Optional: Context memory
  "productive": true // Optional: Use committed version (default: true)
}
Setting productive: true ensures that only committed and tested prompt versions are executed.
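As a sketch, the endpoint can be called with any HTTP client. The TypeScript example below uses fetch, assumes a JSON Content-Type header and the GENUM_API_KEY environment variable from the authentication sketch; the prompt ID and question are placeholders.

```typescript
// Sketch of a Run Prompt call using fetch (Node 18+ or browsers).
// YOUR_PROMPT_ID is a placeholder for a real prompt ID.
async function runPrompt(question: string): Promise<string> {
  const response = await fetch("https://api.genum.ai/api/v1/prompts/run", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GENUM_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      id: "YOUR_PROMPT_ID",
      question,
      productive: true, // execute only the committed, tested version
    }),
  });

  if (!response.ok) {
    throw new Error(`Run Prompt failed with HTTP ${response.status}`);
  }

  const data = await response.json();
  return data.answer;
}
```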
Response Format
{
  "answer": "Generated response",
  "tokens": {
    "prompt": 10,
    "completion": 20,
    "total": 30
  },
  "response_time_ms": 500,
  "chainOfThoughts": "Optional reasoning chain",
  "status": "Optional status (e.g. NOK: error message)"
}
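For typed integrations, the response can be modelled roughly as below; treating chainOfThoughts and status as optional follows from their descriptions above, not from a published schema.

```typescript
// Rough typing of the Run Prompt response, derived from the documented fields.
interface RunPromptResponse {
  answer: string;
  tokens: {
    prompt: number;
    completion: number;
    total: number;
  };
  response_time_ms: number;
  chainOfThoughts?: string; // reasoning chain, when available
  status?: string;          // e.g. "NOK: error message"
}
```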
Error Handling
{
  "error": "Error message"
}
Errors may result from:
- ❌ Invalid API key
- ❌ Missing or incorrect prompt ID
- ❌ Exceeded rate limits
- ❌ Upstream model failure
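One way to surface these errors in client code is to read the error field from non-2xx responses, as in the sketch below; the GenumApiError class and parseOrThrow helper are illustrative, not part of any Genum SDK.

```typescript
// Sketch: turn the API's error payload ({ "error": "..." }) into a thrown
// exception instead of a bare HTTP status. Names here are illustrative.
class GenumApiError extends Error {
  constructor(public readonly status: number, message: string) {
    super(message);
  }
}

async function parseOrThrow<T>(response: Response): Promise<T> {
  if (!response.ok) {
    let message = `HTTP ${response.status}`;
    try {
      const body = await response.json();
      if (body?.error) message = body.error;
    } catch {
      // body was not JSON; keep the generic HTTP status message
    }
    throw new GenumApiError(response.status, message);
  }
  return response.json() as Promise<T>;
}
```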
Get Single Prompt
Endpoint
GET https://api.genum.ai/api/v1/prompts/{id}
Headers
{
  "Authorization": "Bearer YOUR_API_KEY"
}
Response
{
  "id": 616,
  "value": "Your prompt content here",
  "languageModelId": 1,
  "name": "New Prompt 1",
  "languageModelConfig": {
    "tools": [],
    "max_tokens": 16384,
    "temperature": 1,
    "response_format": "text"
  },
  "assertionType": "STRICT",
  "assertionValue": "",
  "commited": false,
  "projectId": 63,
  "createdAt": "2025-01-01T00:00:00.000Z",
  "updatedAt": "2025-01-01T00:00:00.000Z",
  "languageModel": {
    "id": 1,
    "name": "gpt-4o",
    "vendor": "OPENAI",
    "promptPrice": 2.5,
    "completionPrice": 10,
    "contextTokensMax": 128000,
    "completionTokensMax": 16384,
    "description": "GPT-4o is ...",
    "createdAt": "2025-01-01T00:00:00.000Z",
    "updatedAt": "2025-01-01T00:00:00.000Z"
  }
}
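A hedged TypeScript sketch for this endpoint, reusing the same environment-variable key; the return type is left loose because the full schema is shown above.

```typescript
// Sketch: fetch a single prompt's metadata by ID. The ID 616 used in the
// sample response above is only illustrative.
async function getPrompt(promptId: number): Promise<unknown> {
  const response = await fetch(
    `https://api.genum.ai/api/v1/prompts/${promptId}`,
    { headers: { Authorization: `Bearer ${process.env.GENUM_API_KEY}` } },
  );

  if (!response.ok) {
    throw new Error(`Failed to fetch prompt ${promptId}: HTTP ${response.status}`);
  }

  return response.json();
}
```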
Get All Prompts
Endpoint
GET https://api.genum.ai/api/v1/prompts
Headers
{
  "Authorization": "Bearer YOUR_API_KEY"
}
Response
{
  "prompts": [
    {
      "id": 1,
      "name": "prompt name",
      "assertionType": "STRICT",
      "languageModelId": 1,
      "createdAt": "2025-01-01T00:00:00.000Z",
      "updatedAt": "2025-01-01T00:00:00.000Z",
      "commited": false,
      "_count": {
        "memories": 2,
        "testCases": 0
      },
      "branches": [
        {
          "promptVersions": []
        }
      ]
    }
  ]
}
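As a final sketch, the listing endpoint can drive simple discovery tooling; the field access below mirrors the sample response and assumes the same environment-variable key.

```typescript
// Sketch: list all prompts visible to the API key and print their names
// together with the number of test cases reported by _count.
async function listPrompts(): Promise<void> {
  const response = await fetch("https://api.genum.ai/api/v1/prompts", {
    headers: { Authorization: `Bearer ${process.env.GENUM_API_KEY}` },
  });

  if (!response.ok) {
    throw new Error(`Failed to list prompts: HTTP ${response.status}`);
  }

  const { prompts } = await response.json();
  for (const prompt of prompts) {
    console.log(`${prompt.id}: ${prompt.name} (${prompt._count.testCases} test cases)`);
  }
}
```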
Why Native API?
- Use Genum as a stable runtime layer
- Integrate with CI/CD workflows
- Reuse your testable prompts in production scenarios
- Control prompt execution through memory and version flags
- Retrieve prompt metadata for integration and monitoring
- List all prompts for discovery and management
Native API makes your prompts portable, auditable, and production-grade.