
Apollo-1 Agents

Complete steerability, unparalleled control

Transformer-based LLMs don't make good agents

Traditional language models rely on transformer architectures, which excel at pattern recognition and language generation. They are particularly effective for a variety of tasks such as text generation, translation, and summarization.

However, in agentic use cases, where the model needs to perform actions, make decisions, or interact with tools, transformer-based LLMs face significant challenges.


That’s where Apollo-1 comes in

Transparency

A transparent, white-box model

Controllability

An advanced instruction environment

Predictability

A structured reasoning process

Tool-use

100% successful tool-use

Fine-tuning

Fine-tuning via human feedback

Better suited for agents

The neuro-symbolic approach bridges the gap between the flexibility of neural networks and the precision of symbolic logic. By merging neural networks with symbolic reasoning, Apollo-1 offers several key advantages.
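
As a rough illustration of what this combination can look like (a minimal sketch under assumed names, not Apollo-1's actual implementation), a neural component can be limited to extracting symbolic facts from free text, while explicit, inspectable logic makes the decision:

```python
from dataclasses import dataclass

# Hypothetical sketch of a neuro-symbolic decision step.
# The neural component only converts free text into symbols;
# the symbolic component makes the actual decision with explicit logic.

@dataclass
class SymbolicFacts:
    intent: str          # e.g. "refund_request"
    amount: float        # e.g. 120.0
    customer_tier: str   # e.g. "gold"

def neural_parse(utterance: str) -> SymbolicFacts:
    """Stand-in for a neural extractor that maps text to symbols."""
    # In a real system this would call a trained model; here it is stubbed.
    return SymbolicFacts(intent="refund_request", amount=120.0, customer_tier="gold")

def symbolic_decide(facts: SymbolicFacts) -> str:
    """Deterministic, inspectable rules applied to the extracted symbols."""
    if facts.intent == "refund_request":
        if facts.amount <= 200 or facts.customer_tier == "gold":
            return "approve_refund"
        return "escalate_to_human"
    return "continue_dialogue"

decision = symbolic_decide(neural_parse("I'd like my $120 back, please."))
print(decision)  # approve_refund
```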

Explainability

Symbolic reasoning provides insight into the model's decision-making process. Users can trace how the model arrives at specific responses, enhancing trust and accountability.

Modularity

Components can be modified or replaced without affecting the entire system. This flexibility facilitates updates, customization, and scalability.

Rule integration

Explicit rules can be incorporated directly into the reasoning process. This ensures compliance with regulations, policies, and domain-specific requirements.

Data efficiency

Symbolic components reduce the need for extensive datasets. By leveraging existing knowledge bases and rules, Apollo-1 performs effectively even with limited training data.


A higher level of understanding & precision

At the heart of Apollo-1’s capabilities lies the Structured Interaction State. This state is a symbolic, parameterized representation of each interaction, created by collecting and organizing sensory data.

By converting unstructured inputs into a structured format, Apollo-1 achieves a higher level of understanding and precision in its responses.

Human & machine readable

The structured representation is readable by both humans and software, facilitating transparency and ease of interpretation.

Captures contextual nuances

It includes intents, entities, sentiments, and contextual information essential for accurate processing.

Enables advanced reasoning

This structured state is the basis upon which rules, policies, and reasoning processes are applied, as sketched below.
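
As a minimal sketch of what such a state could look like (the field names below are illustrative assumptions, not Apollo-1's actual schema), the same structured record can be read by a person and serialized for software:

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical sketch of a structured interaction state.
# Field names are illustrative; they are not Apollo-1's actual schema.

@dataclass
class InteractionState:
    utterance: str                                   # raw user input
    intent: str                                      # e.g. "book_flight"
    entities: dict = field(default_factory=dict)     # e.g. {"destination": "Paris"}
    sentiment: str = "neutral"                       # coarse sentiment label
    context: dict = field(default_factory=dict)      # prior turns, user profile, etc.

state = InteractionState(
    utterance="I need a flight to Paris next Friday, and I'm in a hurry.",
    intent="book_flight",
    entities={"destination": "Paris", "date": "next Friday"},
    sentiment="urgent",
    context={"loyalty_tier": "silver"},
)

# Machine readable: the same state serializes to JSON for tools and audits.
print(json.dumps(asdict(state), indent=2))
```
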
Feature | Transformer-based Agents | Apollo-based Agents
01 Transparency | Black-box model with opaque reasoning | White-box model with transparent reasoning processes
02 Controllability | System prompts are the primary control mechanism | Control levers are available throughout the system
03 Predictability | Outcomes are often unpredictable | Outcomes are nearly deterministic
04 Tool use | Struggles with consistently generating structured outputs (e.g. JSON) | Tool-native with 100% success rate
05 Fine-tuning | Fine-tuning for agents is challenging | Supports continuous fine-tuning, even on fine-tuned models
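
To illustrate the tool-use row: the difference is between free text that must be parsed into JSON after the fact and a call that is emitted as structured data from the start. The sketch below is hypothetical; the tool name, schema, and validation step are assumptions rather than a real Apollo-1 API:

```python
from dataclasses import dataclass

# Hypothetical sketch of a tool call emitted as structured data rather than
# free text that has to be parsed (and may fail to parse) into JSON.

@dataclass
class ToolCall:
    tool: str
    arguments: dict

def validate(call: ToolCall, schema: dict) -> None:
    """Minimal check that required argument names are present."""
    missing = [name for name in schema["required"] if name not in call.arguments]
    if missing:
        raise ValueError(f"missing arguments: {missing}")

weather_schema = {"required": ["city", "unit"]}

call = ToolCall(tool="get_weather", arguments={"city": "Berlin", "unit": "celsius"})
validate(call, weather_schema)   # passes: the call is well-formed by construction
print(call.tool, call.arguments)
```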

A new level of safety & accuracy

Combining generative and rule-based approaches minimizes the risks typically associated with AI outputs.

Content moderation

Adheres to predefined rules to avoid generating inappropriate or harmful content.

Information accuracy

Provides consistent and accurate information, reducing misinformation.

Auditability

The reasoning process can be reviewed and audited, meeting compliance and governance standards; a brief sketch of such a trace follows below.
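
As a final illustrative sketch (the rule names and policy below are assumptions, not Apollo-1's actual rule set), an auditable process can record every rule that fires, so the path from input to output can be reviewed afterwards:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an auditable decision trace: every rule that fires
# is recorded, so the path to the final output can be reviewed later.

@dataclass
class AuditTrail:
    steps: list = field(default_factory=list)

    def record(self, rule: str, outcome: str) -> None:
        self.steps.append({"rule": rule, "outcome": outcome})

def moderate(reply: str, trail: AuditTrail) -> str:
    banned_terms = {"confidential"}   # illustrative policy, not a real rule set
    if any(term in reply.lower() for term in banned_terms):
        trail.record("content_policy.banned_terms", "blocked")
        return "I can't share that information."
    trail.record("content_policy.banned_terms", "passed")
    return reply

trail = AuditTrail()
answer = moderate("Here is the confidential report.", trail)
print(answer)
print(trail.steps)   # the reviewable record of which rules fired and why
```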