Integration Concepts

Genum is designed for deep integration — whether you're orchestrating workflows, enforcing CI/CD policies, or delivering production-grade prompts across AI vendors.

Integration Architecture

Unified Prompt Execution via Genum

Every prompt in Genum is automatically exposed as an API endpoint. This allows external systems to:

  • Read versioned prompt specifications
  • Execute prompts over Genum’s secure, vendor-abstracted runtime
  • Validate outputs using approved models and configurations
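
For example, a minimal execution call might look like the sketch below. The base URL, endpoint path, auth scheme, and payload shape are assumptions for illustration; consult the Genum API reference for the actual contract.

```typescript
// Hypothetical sketch of executing a Genum prompt over REST.
// Base URL, path, auth header, and payload shape are assumed, not documented.

const GENUM_API = "https://api.genum.example/v1"; // placeholder base URL

async function executePrompt(
  promptId: string,
  input: Record<string, unknown>,
  opts: { version?: string } = {},
) {
  const query = opts.version ? `?version=${encodeURIComponent(opts.version)}` : "";
  const res = await fetch(`${GENUM_API}/prompts/${promptId}/execute${query}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.GENUM_API_KEY}`, // assumed bearer auth
    },
    body: JSON.stringify({ input }),
  });
  if (!res.ok) throw new Error(`Genum execution failed: ${res.status}`);
  return res.json(); // output produced by the approved model and configuration
}
```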

You can use Genum either as:

  • A headless delivery system to inject validated prompts into third-party platforms
  • A centralized execution layer that handles routing, logging, scoring, and fallback

Integration Modes

Genum supports two primary integration models:

1. API-based Integration

  • Use RESTful endpoints to execute prompts
  • Fetch prompt metadata and commit history
  • Choose prompt versions (latest, last_committed)
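
A sketch of those three operations, under the same assumed API shape as above; the metadata and commit-history paths and the version query parameter are illustrative:

```typescript
const GENUM_API = "https://api.genum.example/v1"; // same placeholder as above
const headers = { Authorization: `Bearer ${process.env.GENUM_API_KEY}` };

// Fetch prompt metadata and commit history (hypothetical paths).
const meta = await fetch(`${GENUM_API}/prompts/my-prompt`, { headers }).then((r) => r.json());
const commits = await fetch(`${GENUM_API}/prompts/my-prompt/commits`, { headers }).then((r) => r.json());

// Execute an explicit version: "latest" (newest draft) or
// "last_committed" (most recent committed state).
await fetch(`${GENUM_API}/prompts/my-prompt/execute?version=last_committed`, {
  method: "POST",
  headers: { ...headers, "Content-Type": "application/json" },
  body: JSON.stringify({ input: { question: "What changed?" } }),
});
```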

2. Custom Node Integration

  • Available for tools like:
    • n8n
    • more to come
  • Nodes automatically fetch and run prompts with memory keys and input values (see the sketch after this list)
  • Ideal for low-code orchestration pipelines
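
To make the data flow concrete, here is a hypothetical sketch of the parameters such a node might hand to Genum; the interface and field names are illustrative, not the node's actual schema:

```typescript
// Illustrative shape of an orchestration-node call into Genum.
// All field names here are assumptions, not the real node schema.

interface GenumNodeParams {
  promptId: string;                // which prompt the node fetches and runs
  version: "latest" | "last_committed";
  memoryKey?: string;              // keeps multi-turn context per conversation
  input: Record<string, unknown>;  // values mapped in from upstream workflow items
}

const params: GenumNodeParams = {
  promptId: "support-triage",
  version: "last_committed",
  memoryKey: "session-42",
  input: { message: "My last invoice looks wrong" },
};
```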


CI/CD-Aligned Prompt Delivery

With versioning built in, Genum enables:

  • Executing committed versions of prompts
  • Locking prompts to stable builds
  • Preventing accidental drift or overwrite of test-passed logic

This makes prompt execution deterministic, traceable, and audit-ready — just like traditional software.
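
One way to enforce this in a pipeline is to fail the deploy when any prompt reference is not pinned to a committed version. The manifest shape and guard below are a hypothetical sketch, not a built-in Genum feature.

```typescript
// Hypothetical deploy-time guard: reject manifests that reference
// uncommitted ("latest") prompt versions in production.

interface PromptRef {
  id: string;
  version: "latest" | "last_committed";
}

function assertPinned(manifest: PromptRef[]): void {
  const drifting = manifest.filter((p) => p.version !== "last_committed");
  if (drifting.length > 0) {
    throw new Error(
      `Unpinned prompts in production manifest: ${drifting.map((p) => p.id).join(", ")}`,
    );
  }
}

// Example: this would throw because "summarize-ticket" floats on "latest".
assertPinned([
  { id: "support-triage", version: "last_committed" },
  { id: "summarize-ticket", version: "latest" },
]);
```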


Vendor Abstraction Layer

Genum handles prompt execution across multiple LLM providers:

  • OpenAI/*
  • Gemini/*
  • more to come

You don’t need to change your integration logic per vendor: Genum verifies vendor compatibility, and intelligent routing and failover are on the way.
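
In practice, that means the calling code stays identical no matter which vendor's model a prompt is configured to use, as in this sketch (reusing the hypothetical executePrompt helper from the first snippet):

```typescript
// Same call shape regardless of the vendor configured inside Genum.
// executePrompt is the hypothetical helper sketched earlier.

await executePrompt("support-triage", { message: "Hello" }); // prompt configured with an OpenAI/* model
await executePrompt("summarize-ticket", { text: "..." });    // prompt configured with a Gemini/* model
```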


Whether you're integrating for runtime execution, automated pipelines, or model governance, Genum is the backbone of production-grade prompt delivery.