Tako AI v1.5: Your New Okta AI Sidekick

When we launched v1.0 and v1.3, we pioneered Multi-Agent Orchestration: a “committee” of specialized agents (Planner, SQL Analyst, API Runner) working together. This approach let us tackle complex identity tasks by splitting them across specialized roles.
Read more about the previous version here

But AI moves fast, so we needed to adapt.

As models like Gemini 2.5 Flash and Claude 4.5 Haiku became dramatically faster and cheaper, we saw an opportunity to evolve. We realized that passing context between five different agents sometimes introduced noise and latency. The industry trend is shifting from rigid “Agent Workflows” to adaptive agentic processes.

So, we evolved. Tako AI v1.5.0-beta collapses the committee into a single, powerful Autonomous Engineer.

We’ve rebuilt the core engine around the ReAct (Reason and Act) pattern. Instead of passing tasks around, Tako now acts like a seasoned developer: it thinks about a problem, writes the code to solve it, tests it in a sandbox, and (here’s the kicker) fixes its own mistakes if something goes wrong, then produces a final script that generates the answer to your query.
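The reason-act-observe cycle described above can be sketched in a few lines. This is a minimal, self-contained illustration, not Tako’s actual engine: `llm_reason` and `run_in_sandbox` are hypothetical stand-ins for the real model call and sandbox executor.

```python
import contextlib
import io

def llm_reason(goal, history):
    """Stand-in for an LLM call: returns a thought plus either code or a final answer."""
    if any(step["ok"] for step in history):
        return {"thought": "The last run succeeded; summarize it.", "final": history[-1]["output"]}
    return {"thought": "No result yet; draft a script.", "code": "print(len(users))"}

def run_in_sandbox(code):
    """Stand-in for sandboxed execution: returns the output or the error message."""
    users = ["alice", "bob", "carol"]  # pretend query result
    try:
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {"users": users})
        return {"ok": True, "output": buf.getvalue().strip()}
    except Exception as e:
        return {"ok": False, "output": f"{type(e).__name__}: {e}"}

def react_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        decision = llm_reason(goal, history)       # Reason
        if "final" in decision:
            return decision["final"]
        result = run_in_sandbox(decision["code"])  # Act
        history.append(result)                     # Observe: feed the result back
    return None

print(react_agent("How many users are active?"))  # → 3
```

The key property is the feedback edge: every execution result, success or failure, flows back into the next reasoning step.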

Tako AI ReAct architecture

Breaking the “Pick Two” Rule

You know the old project management saying: Good, Fast, Cheap—pick two.

In the world of AI, this has been a painful reality. You usually have two choices:

  1. The “Smart but Slow/Expensive” Route: Use massive reasoning models (like OpenAI o1 or Claude Opus) that cost a fortune and take forever to reply.
  2. The “Fast but Risky” Route: Use cheaper, faster models and cross your fingers that they don’t hallucinate or write broken code.

With Tako v1.5, our Autonomous AI Agent for Okta, you get all three.

Here’s how we broke the rule: Tako can now use smaller models (like Gemini 2.5 Flash, Claude 4.5 Haiku, or GPT-4.1) that are cheap and fast. And because the new ReAct engine validates every step by actually running the code it writes, the agent’s self-correction loop delivers the “good” (accuracy) without needing slow, expensive reasoning models.

The result? Enterprise-grade reliability at a fraction of the cost.

It Reads the Manual So You Don’t Have To

One of the biggest challenges with AI is “hallucination”—when the AI makes up an API parameter that doesn’t exist.

Tako v1.5 solves this with Context Engineering. Before it writes a single line of code, it pulls in the relevant slices of the local Okta API JSON document (covering 107+ endpoints) and the database schema (if DB sync is used). It knows exactly which endpoints exist, what parameters they accept, and what your data looks like.

It’s like having an engineer who has the documentation open on one screen while they code on the other.
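That “relevant slices” step can be sketched as a simple filter over an OpenAPI-style document. The tiny spec and the keyword-overlap heuristic below are illustrative assumptions, not Okta’s actual API document or Tako’s actual retrieval logic:

```python
# Hypothetical miniature API spec; a real one would be the full Okta OpenAPI JSON.
API_SPEC = {
    "/api/v1/users":  {"method": "GET", "params": ["filter", "search", "limit"],
                       "summary": "List users"},
    "/api/v1/groups": {"method": "GET", "params": ["q", "limit"],
                       "summary": "List groups"},
    "/api/v1/apps":   {"method": "GET", "params": ["q", "limit"],
                       "summary": "List applications"},
}

def relevant_slices(query, spec):
    """Keep only the endpoints whose summary shares a word with the query."""
    words = set(query.lower().split())
    return {path: info for path, info in spec.items()
            if words & set(info["summary"].lower().split())}

context = relevant_slices("show me all users created this week", API_SPEC)
print(sorted(context))  # → ['/api/v1/users']
```

Only the matching slice goes into the model’s context, so the agent sees real parameter names instead of guessing them.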

The “Self-Healing” Autonomous AI Agent for Okta

The coolest part of v1.5 isn’t just that it writes code—it’s that it debugs it.

We’ve all been there: you write a SQL query with a typo in a column name, or a Python script that references a key that doesn’t exist. Usually, that means stopping, reading the error log, tweaking the code, and trying again.

Tako does that loop for you, automatically, behind the scenes.

  1. Drafts Code: It writes the script based on the docs.
  2. Security Scan: A built-in shield checks the code for unsafe operations before it runs.
  3. Executes: It runs in a secure sandbox.
  4. Fixes: If it hits an error (like a SQL syntax error or a KeyError), it sees the error, reasons about the fix (e.g., “I used the wrong column name—let me check the schema”), and rewrites the code.

You don’t see the failure; you just see the final, polished result. It’s self-healing code generation, and it feels like magic.
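The draft-scan-execute-fix loop above can be condensed into a runnable sketch. The “fix” step here is a hard-coded column-name repair so the example is self-contained; in Tako the rewrite would come from the model, conditioned on the error text:

```python
import sqlite3

def fix_code(sql, error, schema_cols):
    """Stand-in for the LLM repair step: swap an unknown column for a real one."""
    for col in schema_cols:
        if col not in sql:
            bad = error.split(":")[-1].strip()  # e.g. "no such column: username"
            return sql.replace(bad, col)
    return sql

def run_with_self_healing(sql, max_attempts=3):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (login TEXT, status TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 'ACTIVE')")
    for _ in range(max_attempts):
        try:
            return db.execute(sql).fetchall()       # Execute
        except sqlite3.OperationalError as e:       # Observe the error
            sql = fix_code(sql, str(e), ["login"])  # Fix, then retry
    raise RuntimeError("could not repair query")

# The first draft references a column that doesn't exist; the loop repairs it.
print(run_with_self_healing("SELECT username FROM users"))  # → [('alice',)]
```

The caller never sees the failed attempt, only the repaired result, which is the “self-healing” effect described above.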

See the Agent Think in Real-Time

One of the most fascinating parts of v1.5 is watching it work. The Live Progress UI doesn’t just show a loading spinner—it shows you exactly what Tako is thinking.

You’ll see it reason through your query: “I need user data from the last 90 days… that’s in the database… I’ll write a SQL query first.” Then it drafts the code, runs the security scan, executes it in the sandbox, and—if something breaks—you’ll see it pause, analyze the error, and rewrite the code with a fix.

It’s surprisingly human-like. You start to see the LLM’s “thought process” unfold step-by-step, and it builds confidence that this isn’t just a black box—it’s a reasoning system you can trust.
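One plausible way a UI like this receives its updates is as a stream of structured progress events emitted by the agent as it works. The event shape and phase names below are illustrative, not Tako’s actual wire format:

```python
from dataclasses import dataclass

@dataclass
class ProgressEvent:
    phase: str   # e.g. "reasoning", "drafting", "scanning", "executing", "fixing"
    detail: str  # human-readable thought shown in the UI

def agent_steps():
    """Stand-in agent run that yields events instead of returning one opaque result."""
    yield ProgressEvent("reasoning", "User data from the last 90 days is in the database")
    yield ProgressEvent("drafting", "Writing a SQL query against the users table")
    yield ProgressEvent("scanning", "No unsafe operations detected")
    yield ProgressEvent("executing", "Query ran successfully in the sandbox")

# A UI would render these incrementally instead of showing a spinner.
for event in agent_steps():
    print(f"[{event.phase}] {event.detail}")
```

Streaming intermediate events instead of a single final payload is what turns a black box into a visible chain of reasoning.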

Safe by Design

We know that giving an AI access to your identity infrastructure sounds scary. That’s why we built v1.5 to be Paranoid by Default.

  • Read-Only: It defaults to read-only permissions.
  • Sandboxed: Every script runs in an isolated container that can’t touch your local system.
  • Zero Cloud Dependencies: Your data stays in your environment. We don’t train on it, and we don’t store it.
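A pre-execution shield like the one described above can be approximated with a static walk over the script’s syntax tree. The blocklists here are a rough illustration of the idea, not Tako’s actual security policy:

```python
import ast

# Illustrative policy: names and modules a generated script may not touch.
BLOCKED_CALLS = {"open", "exec", "eval", "__import__"}
BLOCKED_MODULES = {"os", "subprocess", "socket", "shutil"}

def scan_script(source):
    """Return a list of policy violations found in the script, without running it."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = ([a.name for a in node.names] if isinstance(node, ast.Import)
                     else [node.module])
            violations += [m for m in names if m and m.split(".")[0] in BLOCKED_MODULES]
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BLOCKED_CALLS:
                violations.append(node.func.id)
    return violations

print(scan_script("import subprocess\nsubprocess.run(['ls'])"))  # → ['subprocess']
print(scan_script("total = sum(range(10))"))                     # → []
```

Static checks like this run before the sandbox, so clearly unsafe code is rejected without ever executing.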

Smarter Data Access: The Best of Both Worlds

In v1.3, Tako could access both APIs and the database—but it had to commit to a plan upfront. If the plan was wrong or a step failed, the entire query would stall.

v1.5 fixes that.

Now, Tako makes smarter decisions while it’s working. If your database has the data needed to answer your query, it writes SQL and fetches it instantly. If the answer isn’t in the database, it iterates through the right API endpoints instead. And if the answer requires both? It pulls what it can from the database, then adds the missing pieces from live API calls—all in one seamless execution.

And here’s the best part: the database sync is completely optional. If you don’t want to sync your Okta data locally, Tako works perfectly fine with just live API calls. The database is there for speed (avoiding rate limits on large queries), but it’s not required.

  • Want real-time data without any setup? API-only mode.
  • Have 50,000 users and need fast analytics? Optional database sync for performance.

Tako adapts to your infrastructure, not the other way around.
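The routing decision described in this section reduces to a simple split: datasets covered by the optional sync come from SQL, everything else comes from live API calls, and a mixed query uses both. The table and endpoint names below are hypothetical:

```python
# Hypothetical inventory of what the optional local sync holds.
SYNCED_TABLES = {"users", "groups"}

def plan_data_access(needed):
    """Split the required datasets between the local DB and live API calls."""
    from_db = [d for d in needed if d in SYNCED_TABLES]
    from_api = [d for d in needed if d not in SYNCED_TABLES]
    return {"sql": from_db, "api": from_api}

print(plan_data_access(["users"]))                 # → {'sql': ['users'], 'api': []}
print(plan_data_access(["system_logs"]))           # → {'sql': [], 'api': ['system_logs']}
print(plan_data_access(["users", "system_logs"]))  # → {'sql': ['users'], 'api': ['system_logs']}
```

With no sync configured, `SYNCED_TABLES` is simply empty and every query routes to the API, which is the “API-only mode” above.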

“Why Not Just Use the Okta MCP Server?”

We get this question a lot. If you’re already using the Okta MCP Server in tools like Claude Desktop or Cursor, that’s great (we built that too!). But Tako v1.5 solves a different problem.

MCP is powerful for developers who live in their IDE. It gives you real-time Okta data right where you code. But it requires:

  • Per-User Setup: Every developer needs to configure MCP on their local machine or IDE.
  • Technical Expertise: Users need to know how to structure prompts, interpret raw API responses, and handle errors.
  • Scale Limitations: When you’re dealing with tens of thousands of users, groups, or applications, MCP tools can struggle with context limits and output formatting.

Tako v1.5 is designed for teams, not just developers.

  • Centralized Deployment: One Docker container serves your entire team—no per-user configuration.
  • Built for Non-Technical Users: IT managers, security analysts, and help desk staff can ask questions in plain English and get structured, actionable results (formatted tables, CSV exports, visual dashboards).
  • Handles Enterprise Scale: Whether you’re querying 50 users or 50,000, Tako’s iterative engine processes massive datasets without hitting token limits or losing context.
  • Full Control: You control the UI, the output format, the security policies, and the deployment environment. It’s not just an API—it’s a complete identity operations platform.
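The enterprise-scale point above hinges on one technique: generated code paginates through the API and aggregates locally, so only a small summary (not 50,000 raw records) ever reaches the model’s context. `fetch_page` below is a stand-in for a paginated Okta users call:

```python
def fetch_page(cursor):
    """Stand-in for a paginated Okta /users call: returns (records, next_cursor)."""
    pages = {
        0: ([{"status": "ACTIVE"}, {"status": "SUSPENDED"}], 1),
        1: ([{"status": "ACTIVE"}, {"status": "ACTIVE"}], 2),
        2: ([{"status": "DEPROVISIONED"}], None),  # None = last page
    }
    return pages[cursor]

def count_by_status():
    """Walk every page, aggregate per page, and keep only the running totals."""
    counts, cursor = {}, 0
    while cursor is not None:
        users, cursor = fetch_page(cursor)
        for u in users:  # each page is discarded after it is counted
            counts[u["status"]] = counts.get(u["status"], 0) + 1
    return counts

print(count_by_status())  # → {'ACTIVE': 3, 'SUSPENDED': 1, 'DEPROVISIONED': 1}
```

The memory and context footprint stays constant no matter how many pages there are, which is why token limits stop being the bottleneck.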

Think of it this way: MCP is like having a power tool in your workshop. Tako is like having a dedicated engineer on your team who knows how to use that tool and delivers the finished product.

If you need ad-hoc queries while coding, use MCP. If you need a reliable, self-service platform for your entire organization, use Tako.

Ready to Try It?

Tako AI v1.5.0-beta is available now. It’s faster, smarter, and cheaper to run than ever before.

  • Runs Everywhere: Docker support for AMD64 and ARM64 (Apple Silicon/AWS Graviton).
  • Open Source: Inspect the code, contribute, or fork it on GitHub.

We’d love to hear your feedback. Join the discussion on GitHub or reach out to us directly.

Get Started with Docker in 10 Minutes →

Get Started with Tako AI

Ready to experience the future of identity management? Tako AI is available now with our signature 10-minute Docker deployment.

Resources:

Connect with the Team:

  • General Support: support@fctr.io
  • Direct Development: dan@fctr.io
  • Feature Requests: GitHub Issues

© 2025 Fctr. Built with ❤️ for the identity management community.
