PRIVATE INFERENCE ENGINE

Where Your AI Actually Runs

SecludedAI Core is the private execution engine that runs the model workloads powering your environment. It lives on infrastructure you control and serves as the intelligence layer behind Gateway.

[Image: computer hardware prepared for deployment]

Private Model Execution

Core is responsible for private model execution. When a request reaches your environment through Gateway, Core handles the actual inference work using local model runtimes and customer-owned systems.
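To make the flow concrete, here is a minimal sketch of how a Gateway-side component might hand a request to Core's local runtime. The endpoint path, port, and payload shape are assumptions modeled on common OpenAI-compatible local runtimes; SecludedAI Core's actual API is not specified in this document and may differ.

```python
import json

# Assumed loopback endpoint for the local runtime behind Gateway.
# The path and port are illustrative, not SecludedAI's documented API.
CORE_ENDPOINT = "http://127.0.0.1:8080/v1/chat/completions"

def build_core_request(user_message: str, model: str = "local-model"):
    """Build the (url, body) pair Gateway would forward to Core.

    The payload follows the widely used chat-completion shape; the
    model name "local-model" is a placeholder.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,
    }
    return CORE_ENDPOINT, json.dumps(payload)

url, body = build_core_request("Summarize this quarter's incident reports.")
```

The key design point the sketch illustrates is that the URL never leaves the loopback interface: inference stays on hardware the customer controls, and only Gateway mediates access to it.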

This matters because it keeps the intelligence layer within infrastructure under your control. Rather than depending entirely on public third-party AI providers, SecludedAI Core makes local execution a first-class part of the platform.

Core is designed to be flexible enough for chat, structured generation, and future private AI workflows.

Key Points

  • Private model execution
  • Customer-owned infrastructure
  • No public-facing exposure required
  • Supports chat and structured generation
  • Built to work behind Gateway

SecludedAI Core FAQ

What is SecludedAI Core?

SecludedAI Core is the private inference engine in the platform. It runs the actual model workloads on infrastructure the customer controls.

Why does Core matter so much?

Core is what turns SecludedAI into a private AI system instead of just another hosted interface. It is the layer where the intelligence actually runs, which is why keeping it customer-owned is so important to the platform philosophy.

Does Core need to be exposed to the internet?

No. Core is designed to remain inside the customer environment and work behind Gateway. It is not intended to be the public-facing part of the system.
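One way an operator might sanity-check this property is to verify that Core's configured bind address is loopback or private rather than publicly routable. This check is an illustration of the deployment posture, not a feature of the product:

```python
import ipaddress

def is_private_bind(addr: str) -> bool:
    """Return True if addr is a loopback or RFC 1918 private address.

    Illustrative helper: a Core instance bound to such an address is
    not directly reachable from the public internet.
    """
    ip = ipaddress.ip_address(addr)
    return ip.is_loopback or ip.is_private

# Loopback and private-range addresses pass; a public address fails.
```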

What kinds of AI tasks can Core support?

Core is designed to support chat-style use cases, structured generation, and future private AI workflows. The exact capabilities depend on the customer's deployed model environment and configuration.
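As a hedged sketch of what "structured generation" means in practice, the snippet below parses a model reply as JSON and enforces a small expected schema. The field names and validation approach are invented for illustration; this document does not specify Core's actual structured-output mechanism.

```python
import json

# Hypothetical schema for a structured reply: each field name maps to
# the Python type the caller expects. Field names are illustrative.
EXPECTED_FIELDS = {"title": str, "priority": int}

def parse_structured_reply(raw: str) -> dict:
    """Parse a model reply and enforce the expected field types."""
    data = json.loads(raw)
    for field, ftype in EXPECTED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"field {field!r} missing or not {ftype.__name__}")
    return data

# A well-formed structured reply passes validation unchanged.
reply = '{"title": "Rotate API keys", "priority": 1}'
parsed = parse_structured_reply(reply)
```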

Can Core run different local model runtimes?

That is part of the platform direction. The system is being positioned to support flexible model execution rather than locking the customer into a single runtime or provider.

Does SecludedAI Core require powerful hardware?

The hardware requirements depend on the intended use case, model size, and performance expectations. Some users may start with suitable existing hardware, while others may prefer a guided build or a bundled private AI appliance.