Platform
What's under the hood
Thinklio is built as organisational infrastructure, not a demo framework. Here is what that means in practice.
Section 01
Governance is the product
Every tool execution, delegation, and sensitive operation is evaluated against a layered policy engine before it runs. The evaluation considers the agent's own configuration, any assignment-level restrictions set when the agent was deployed to a channel, the trust level of the requesting user, the permissions of their team, and any organisation-wide rules that apply. The outcome is one of three decisions: allow, deny, or require approval from a designated human reviewer.
Trust levels provide graduated permissions across a spectrum of risk. A read-only trust level grants access to retrieval tools and informational operations. A low-risk write level permits actions with limited blast radius — creating drafts, updating personal records, sending notifications to the requesting user. A high-risk write level requires explicit per-operation approval before any action is taken. Each level is configurable per agent, per assignment, and per context, giving administrators precise control without requiring a new agent for every permission variant.
Cost controls operate at three levels: organisation, team, and agent. Budgets are enforced pre-execution — the agent checks its remaining budget before committing to a step, not after. Cost attribution is tracked at step level, so you can see not just what a workflow cost in total, but which tool calls, which model inferences, and which delegations drove that cost. There are no surprises at the end of the month.
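The pre-execution budget check described above can be sketched as follows. All type and function names here are illustrative assumptions, not the platform's actual API; the point is that every level of the hierarchy is checked before a step commits, and attribution is recorded per step.

```go
package main

import (
	"errors"
	"fmt"
)

// Budget tracks remaining spend at one level of the hierarchy
// (organisation, team, or agent). Illustrative names only.
type Budget struct {
	Name      string
	Remaining float64
}

// StepCost is the estimated cost of the step about to run.
type StepCost struct {
	Step     string
	Estimate float64
}

var ErrBudgetExceeded = errors.New("budget exceeded")

// checkBudgets enforces all levels *before* the step commits,
// mirroring the pre-execution enforcement described above.
func checkBudgets(budgets []Budget, cost StepCost) error {
	for _, b := range budgets {
		if cost.Estimate > b.Remaining {
			return fmt.Errorf("%w: step %q needs %.2f but %s has %.2f left",
				ErrBudgetExceeded, cost.Step, cost.Estimate, b.Name, b.Remaining)
		}
	}
	return nil
}

// debit records step-level attribution after a successful check.
func debit(budgets []Budget, cost StepCost) {
	for i := range budgets {
		budgets[i].Remaining -= cost.Estimate
	}
}

func main() {
	budgets := []Budget{
		{Name: "organisation", Remaining: 500},
		{Name: "team", Remaining: 50},
		{Name: "agent", Remaining: 10},
	}
	step := StepCost{Step: "model-inference", Estimate: 4}
	if err := checkBudgets(budgets, step); err != nil {
		fmt.Println("denied:", err)
		return
	}
	debit(budgets, step)
	fmt.Printf("step %q allowed; agent budget now %.2f\n", step.Step, budgets[2].Remaining)
}
```

Because the check runs before the step rather than after, an agent that would exceed any level of the hierarchy stops cleanly instead of overspending and reconciling later.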
Allow
Action is within policy boundaries and trust level.
Require approval
Action is possible but requires explicit human sign-off.
Deny
Action falls outside the agent's permitted scope.
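A minimal sketch of how the layered evaluation could produce these three decisions, assuming hypothetical names throughout (the platform's real policy engine considers more inputs than this — assignment restrictions, team permissions, and context — which are elided here for brevity):

```go
package main

import "fmt"

// Decision is the outcome of policy evaluation.
type Decision int

const (
	Allow Decision = iota
	RequireApproval
	Deny
)

func (d Decision) String() string {
	return [...]string{"allow", "require-approval", "deny"}[d]
}

// TrustLevel models the graduated permission spectrum described above.
type TrustLevel int

const (
	ReadOnly TrustLevel = iota
	LowRiskWrite
	HighRiskWrite
)

// Action is a tool execution about to run.
type Action struct {
	Name     string
	Mutates  bool // does the action write anywhere?
	HighRisk bool // large blast radius?
}

// evaluate sketches the layered decision: organisation-wide denials are
// checked first, then the trust level gates write operations.
func evaluate(a Action, trust TrustLevel, orgDenied map[string]bool) Decision {
	if orgDenied[a.Name] {
		return Deny // organisation-wide rules override everything
	}
	if !a.Mutates {
		return Allow // retrieval and informational operations
	}
	switch trust {
	case ReadOnly:
		return Deny // read-only trust cannot write
	case LowRiskWrite:
		if a.HighRisk {
			return Deny // outside this level's blast radius
		}
		return Allow
	default: // HighRiskWrite: explicit per-operation human sign-off
		return RequireApproval
	}
}

func main() {
	denied := map[string]bool{"export-customer-data": true}
	fmt.Println(evaluate(Action{Name: "search-docs"}, ReadOnly, denied))
	fmt.Println(evaluate(Action{Name: "create-draft", Mutates: true}, LowRiskWrite, denied))
	fmt.Println(evaluate(Action{Name: "delete-records", Mutates: true, HighRisk: true}, HighRiskWrite, denied))
	fmt.Println(evaluate(Action{Name: "export-customer-data"}, HighRiskWrite, denied))
}
```

Note that the organisation-wide denial is checked before anything else, so no trust level can override it — the same ordering the knowledge-layer precedence below relies on.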
Section 02
Four layers of knowledge
| Layer | Owner | Grows from | Visible to |
|---|---|---|---|
| Agent | Agent creator | Configuration and learning | All users of this agent |
| Organisation | Org admins | Curated policies and procedures | All org members |
| Team | Team (collective) | Team interactions | Team members only |
| User | Individual | Personal interactions | Private to that user |
The layers form a precedence hierarchy. Organisation-level policies and knowledge override everything beneath them. If your organisation has defined a policy that prohibits an agent from disclosing certain information, that constraint applies regardless of what the agent itself has been configured to do, what the team has set, or what a user requests. This is intentional: the org layer is the governance layer, and its authority is absolute within the platform.
User-layer knowledge is strictly private. No administrator, team member, or other agent can inspect what a user has shared in their personal interactions. The user layer exists to make agents genuinely useful at an individual level — to remember preferences, working styles, and context — without requiring that privacy be sacrificed for personalisation. Knowledge stored at the user layer is also portable: users can export it or delete it independently of any agent configuration.
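The precedence hierarchy can be sketched as a simple layered lookup. The source only guarantees that the organisation layer overrides everything beneath it; the ordering of the remaining layers below is an assumption for illustration, as are all names:

```go
package main

import "fmt"

// Layer identifies one of the four knowledge layers.
type Layer int

const (
	Org Layer = iota
	Agent
	Team
	User
)

// precedence is assumed for illustration: only the organisation layer's
// supremacy is guaranteed by the platform's documented behaviour.
var precedence = [...]Layer{Org, Agent, Team, User}

// resolve returns the first value found walking layers in precedence
// order, so an org-level policy always wins over lower-layer settings.
func resolve(key string, layers map[Layer]map[string]string) (string, bool) {
	for _, l := range precedence {
		if v, ok := layers[l][key]; ok {
			return v, true
		}
	}
	return "", false
}

func main() {
	layers := map[Layer]map[string]string{
		Org:  {"disclose-salaries": "forbidden"},
		User: {"disclose-salaries": "allowed", "tone": "informal"},
	}
	v, _ := resolve("disclose-salaries", layers)
	fmt.Println(v) // org layer wins regardless of the user's setting
	v, _ = resolve("tone", layers)
	fmt.Println(v) // only the user layer sets this, so it applies
}
```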
Section 03
Durable execution from first principles
Every agent interaction runs inside a durable execution harness that tracks each step independently. The agent assembles context, reasons about the request, takes action — calling tools, delegating to other agents, or dispatching longer-running tasks — observes the results, and responds. That cycle is not a single function call. It is a structured loop with discrete, inspectable stages.
Each step has its own lifecycle. If the agent service restarts mid-execution, it picks up from the last completed step, not from the beginning. If a tool call fails, the agent retries that specific step without re-running everything before it. If a budget limit is hit, the agent stops cleanly and tells the user why. Partial progress is preserved; nothing is silently discarded.
This is not a wrapper around an LLM API call. It is a state machine that treats every operation as a recoverable, auditable unit of work. The execution record persists after the interaction completes, giving you a full trace of exactly what happened, in what order, and at what cost.
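The resume-from-last-completed-step behaviour can be sketched as below. This is a simplified model under assumed names — the real harness persists step state durably rather than holding it in memory — but it shows the key property: completed work is skipped, and a failure retries only the failing step.

```go
package main

import "fmt"

// StepState records the lifecycle of one step.
type StepState int

const (
	Pending StepState = iota
	Completed
	Failed
)

// Step is one recoverable, auditable unit of work in the loop.
type Step struct {
	Name  string
	State StepState
	Run   func() error
}

// runFrom resumes from the first step that has not completed, so a
// restart never re-runs work that already finished.
func runFrom(steps []Step) error {
	for i := range steps {
		if steps[i].State == Completed {
			continue // durable state: skip work already done
		}
		if err := steps[i].Run(); err != nil {
			steps[i].State = Failed // retryable in place; prior steps untouched
			return fmt.Errorf("step %q failed: %w", steps[i].Name, err)
		}
		steps[i].State = Completed
	}
	return nil
}

func main() {
	executed := 0
	steps := []Step{
		{Name: "assemble-context", State: Completed}, // finished before a restart
		{Name: "call-tool", Run: func() error { executed++; return nil }},
		{Name: "respond", Run: func() error { executed++; return nil }},
	}
	if err := runFrom(steps); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("resumed: %d steps executed, 1 skipped\n", executed)
}
```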
The practical consequence is that agents behave predictably under conditions that break naive implementations: network partitions, deployment rollouts, long-running tasks that span multiple sessions. Production reliability is not retrofitted — it is the starting assumption.
Section 04
Channel-agnostic by design
Agents are defined independently of the channels through which they are accessed. An agent does not know whether the request arrived via Telegram, a web chat widget, or a direct API call — it receives a normalised event from the platform and processes it using the same execution harness regardless of origin. This means you define an agent once and deploy it across multiple channels without duplicating configuration or diverging behaviour.
Channel assignment is where the connection between agent and delivery mechanism is made. Assignments carry their own configuration: override instructions specific to that channel, trust level applied to incoming users, and any channel-specific restrictions layered on top of the agent's base policy. This design keeps the agent definition stable while giving operators the flexibility to tune behaviour per deployment surface.
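The normalisation step can be sketched as follows, with a Telegram-style update as the example origin. The struct shapes and field names are assumptions for illustration; the point is that agent logic consumes one event type and never branches on the channel, while the assignment supplies the per-channel configuration.

```go
package main

import "fmt"

// Event is a hypothetical normalised shape; the agent receives this
// regardless of which channel produced it.
type Event struct {
	Channel string // origin, retained for auditing only
	UserID  string
	Text    string
}

// Assignment carries per-channel configuration layered on top of a
// stable agent definition.
type Assignment struct {
	Channel              string
	OverrideInstructions string
	TrustLevel           string
}

// normaliseTelegram translates a channel-native payload into the common
// Event; other channels would map their own payloads to the same struct.
func normaliseTelegram(chatID int64, text string) Event {
	return Event{
		Channel: "telegram",
		UserID:  fmt.Sprintf("tg:%d", chatID),
		Text:    text,
	}
}

// handle is one execution path for every channel: it reads only the
// normalised event and the assignment's configuration.
func handle(e Event, a Assignment) string {
	return fmt.Sprintf("[trust=%s] processing %q from %s", a.TrustLevel, e.Text, e.UserID)
}

func main() {
	e := normaliseTelegram(42, "summarise today's tickets")
	a := Assignment{Channel: "telegram", TrustLevel: "read-only"}
	fmt.Println(handle(e, a))
}
```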
Three API surfaces
Channel API
Connects external messaging platforms and delivery surfaces to the platform, translating channel-native events into the normalised format agents consume.
Platform API
Exposes agent management, assignment configuration, knowledge operations, and audit log access for programmatic administration and integration.
Integration API
Enables agents to call external services as tools — databases, internal APIs, third-party SaaS — with outbound requests subject to the same policy evaluation as any other agent action.
Launch channels: Telegram, web chat, HTTP API.
Section 05
Agents that work together
An agent can call another agent as a tool. The calling agent passes a task to the sub-agent, waits for the result, and incorporates it into its own reasoning. From the platform's perspective, the sub-agent call is an auditable step with its own trace, its own cost attribution, and its own policy evaluation — not a black box handed off to another process.
Delegation always narrows permissions, never widens them. A sub-agent inherits the intersection of the calling agent's permissions and its own configured permissions. There is no mechanism for a calling agent to grant a sub-agent capabilities it does not already hold independently. This prevents privilege escalation through composition and means you can reason about a composed workflow without needing to trace every possible delegation path. Cycle detection is built in — agents cannot form delegation loops. Cost from sub-agent calls rolls up to the calling agent's budget, so a composed workflow is always bounded by the top-level budget constraint.
Agent Studio is the visual workspace for building and inspecting composed workflows. You can see which agents a given agent delegates to, inspect the permission boundaries at each delegation point, and run composed workflows step by step during development to verify behaviour before deploying to a live channel.
Section 06
Where it runs
The backend is written in Go: statically compiled, low-overhead, suited to the concurrency patterns that durable execution demands. Persistent state lives in PostgreSQL — with pgvector for semantic search across knowledge layers — and Redis handles ephemeral coordination, job queuing, and rate-limit counters. All services are stateless at the application level, which means horizontal scaling is straightforward and deployments do not require coordinated restarts.
Infrastructure runs on Hetzner Cloud in the EU, giving the platform a clear data residency story for European customers and GDPR-ready data handling from the ground up. Workloads are containerised with Docker, and the deployment model keeps the platform surface area small: no serverless cold-start latency, no opaque managed services between the execution harness and the database that holds execution state.
Backend
Go
Database
PostgreSQL + pgvector
Cache / Queue
Redis
Region
Hetzner Cloud EU
Go deeper
Download detailed documents covering Thinklio's architecture, deployment patterns, and the business case for governed autonomous operations.
Technical Overview
Architecture, execution model, security, and deployment for technical evaluators.
Download PDF
Management Overview
Organisational leverage, governance, pricing, and the business case for autonomous operations.
Download PDF
Deployment Scenarios
Three illustrative scenarios showing how agents work at individual, team, and organisational scale.
Download PDF
Ready to see it in action?
The best way to understand the platform is to put an agent through its paces. Get in touch and we'll walk you through a live demonstration.