Core is responsible for private model execution. When a request reaches your environment through Gateway, Core handles the actual inference work using local model runtimes and customer-owned systems.
This matters because it keeps the intelligence layer within infrastructure under your control. Rather than depending entirely on public third-party AI providers, SecludedAI Core makes local execution a first-class part of the platform.
Core is designed to be flexible enough for chat, structured generation, and future private AI workflows.
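The routing described above — requests arriving via Gateway, with Core dispatching each workload type to a locally hosted runtime — can be sketched as follows. This is a minimal illustration, not the actual SecludedAI API: the `Core`, `InferenceRequest`, and `register_runtime` names and the string-based workload types are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class InferenceRequest:
    """A request as it might arrive from Gateway (hypothetical shape)."""
    workload: str  # e.g. "chat" or "structured"
    prompt: str


class Core:
    """Hypothetical sketch of a Core-style dispatcher that maps workload
    types to locally hosted, customer-owned model runtimes."""

    def __init__(self) -> None:
        self._runtimes: Dict[str, Callable[[str], str]] = {}

    def register_runtime(self, workload: str, runner: Callable[[str], str]) -> None:
        # Each workload type (chat, structured generation, ...) gets its
        # own local runtime; registration keeps the design extensible.
        self._runtimes[workload] = runner

    def execute(self, request: InferenceRequest) -> str:
        # Inference runs entirely against local runtimes; nothing in this
        # path depends on a public third-party provider.
        runner = self._runtimes.get(request.workload)
        if runner is None:
            raise ValueError(f"no local runtime for workload {request.workload!r}")
        return runner(request.prompt)


# Usage: register a stand-in local runtime and execute a chat request.
core = Core()
core.register_runtime("chat", lambda prompt: f"local-echo: {prompt}")
result = core.execute(InferenceRequest(workload="chat", prompt="hello"))
print(result)  # → local-echo: hello
```

The per-workload registry is what keeps the intelligence layer extensible: adding a new private AI workflow means registering another local runtime, without changing the dispatch path.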