I still believe AI-native systems will replace traditional ERPs. That conviction hasn't changed. What changed was my understanding of sequencing.
Kacti AI started as a semantic execution platform. The idea: replace rigid ERP architectures with a compiler-driven system that could transform business intent into executable workflows. You describe how your operations work, and the platform compiles that description into live, deterministic logic.
Our semantic model works. We built Business Operations as our first live module with a design partner, a small e-commerce brand running print-on-demand workflows that Shopify couldn't model natively. Our analyzer interpreted operational signals, reasoned about transitions, and recommended when the business should shift from on-demand production to pre-printing inventory.
That module validated a lot: the semantic layer holds up in real conditions, and it can reason about structural changes to business logic, not just static rules. Architecturally, it proved the approach is viable.
But strategically, something else surfaced.
The problem wasn't big enough.
Business Operations reduced friction. It solved real pain. But workarounds existed. The urgency wasn't strong enough to drive aggressive adoption. And to solve larger commerce problems, we'd need deep integration with existing ERP systems or build full infrastructure ourselves, both of which dramatically increase complexity for an early-stage company.
The real issue: sequencing
ERP replacement is a massive milestone. It requires rebuilding complex workflows, handling deep integrations, managing long enterprise sales cycles, and carrying heavy operational overhead. Meanwhile, the AI landscape is moving fast. Startups are winning by focusing narrowly, proving value quickly, and iterating.
I realized that ERP replacement was not our core innovation. Our core innovation is semantic intelligence: the ability to give AI true structural understanding across complex information. ERP was one application of that. And not the right first application.
As an early-stage company, we need earlier wins. Targeting ERP forces us into dependencies and infrastructure we don't control yet. That slows us down.
So the shift wasn't abandoning the vision. It was repositioning the starting point.
Why analysis before execution
Most AI systems today assume the retrieval layer already gathered all relevant information. Then they reason over that subset. But if retrieval is incomplete, reasoning is incomplete. No one tells the language model what it's missing.
This is the gap we're solving.
We're building the AI Analyzer, a system that transforms unstructured information into structured, explainable analysis. Not summaries. Not chat responses. Structured reasoning artifacts: claim maps, evidence graphs, timeline models, gap analysis. Outputs you can inspect, trace, and build upon.
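To make "structured reasoning artifacts" concrete, here is a minimal sketch of what a claim map could look like: claims linked to supporting and contradicting evidence, with explicit gaps the analyzer knows it could not fill. The class names and fields are illustrative assumptions, not Kacti AI's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    doc_id: str
    excerpt: str
    stance: str  # "supports" or "contradicts"

@dataclass
class Claim:
    text: str
    evidence: list[Evidence] = field(default_factory=list)
    gaps: list[str] = field(default_factory=list)  # known-unknowns flagged for review

    def is_contested(self) -> bool:
        # A claim is contested when evidence points in both directions.
        stances = {e.stance for e in self.evidence}
        return "supports" in stances and "contradicts" in stances

claim = Claim(text="The contract was signed on March 3")
claim.evidence.append(Evidence("doc-14", "signature page dated March 3", "supports"))
claim.evidence.append(Evidence("doc-27", "email references unsigned draft on March 5", "contradicts"))
claim.gaps.append("no notarization record located")
```

The point of the artifact is that every field is inspectable and traceable: a reviewer can follow each claim to its source excerpts, and the `gaps` list makes missing information a first-class output rather than a silent omission.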
The insight is this: semantic correctness has to be proven in analysis before it can be trusted in execution. If your system can't reliably structure and reason over complex information, it has no business executing workflows based on that reasoning.
Analysis is the proving ground. Execution is the horizon.
Why legal is the first domain
The move into legal came from personal experience. Working closely with litigation attorneys on a legal matter showed me what case preparation actually looks like from the inside. It is deeply semantic work. Gathering evidence, connecting timelines, identifying contradictions, anticipating counterarguments, constructing a coherent narrative across many documents.
AI tools today help. But they're not enough.
Legal cases are dense networks of documents, events, people, and timelines. Missing one critical fragment can change the entire narrative. The cost of incomplete reasoning is measured in outcomes.
Legal is the right proving ground because it is structurally rich, highly semantic, high-stakes, and demands correctness. It also removes heavy infrastructure dependency. We don't need to replace a massive enterprise system. We can focus directly on document analysis and semantic modeling.
We're working with litigation attorneys as design partners and are advised by paralegals with experience across multiple law firms, which helps us align the system with how case preparation actually works in practice. We've already identified the core pain points: surfacing the evidence that proves or undermines the theory of a case, and ensuring retrieval completeness across all documents.
Where we are now
We are in early design partner stage. We've conducted multiple interviews with litigation attorneys and paralegal advisors to define requirements. We bring firsthand experience from both attorney and client perspectives.
On the technical side, we are developing a novel approach to retrieval that replaces algorithmic search with reasoning-guided navigation over knowledge graphs. Current systems retrieve first, then reason. We're closing that gap so that retrieval itself becomes a reasoning task, and when information reaches the language model, it already carries structured semantic understanding, including awareness of what's missing.
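A toy sketch of the retrieval-as-reasoning idea, under heavy simplifying assumptions (a hand-built triple store and a fixed hop budget stand in for the real system): instead of ranking documents against a query, we walk the graph outward from the entities a question mentions, and any relation we expected but never found is surfaced as a gap rather than silently dropped.

```python
# Toy knowledge graph of (subject, predicate, object) triples.
graph = {
    ("AcmeCo", "signed", "Contract-7"),
    ("Contract-7", "dated", "2021-03-03"),
    ("AcmeCo", "employs", "J. Doe"),
}

def neighbors(entity):
    """All triples touching an entity, in either position."""
    return [(s, p, o) for (s, p, o) in graph if s == entity or o == entity]

def navigate(seeds, expected_relations, max_hops=2):
    """Walk outward from seed entities, collecting facts and noting
    which expected relations were never found (the gaps)."""
    found, frontier, seen = [], list(seeds), set()
    for _ in range(max_hops):
        next_frontier = []
        for entity in frontier:
            if entity in seen:
                continue
            seen.add(entity)
            for s, p, o in neighbors(entity):
                found.append((s, p, o))
                next_frontier.extend([s, o])
        frontier = next_frontier
    present = {p for (_, p, _) in found}
    gaps = [r for r in expected_relations if r not in present]
    return found, gaps

facts, gaps = navigate({"AcmeCo"}, expected_relations=["signed", "dated", "notarized_by"])
# "notarized_by" never appears in the graph, so it comes back as a gap
# the language model can be told about, instead of reasoning as if it saw everything.
```

In a real system the traversal decisions and the list of expected relations would themselves come from reasoning over the case, but even this sketch shows the shape of the output: facts plus an explicit account of what is missing.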
Right now, we're building a narrow first prototype focused on identifying evidence that supports case narratives in litigation and evidence that works against them.
Close collaboration with practicing attorneys keeps us grounded in real workflows as we validate both the architecture and the semantic approach.
What is hard
Ensuring true retrieval completeness across large document sets. Modeling relationships at scale. Maintaining entity and timeline coherence. Transitioning from prototype to production.
We are past the idea stage. We are in the proving stage.
What was hard to let go
This part is personal.
A large portion of my career has been dedicated to ERP systems. My internship involved building ERP modules in manufacturing. I've spent years researching and developing in that space. I hold patents in enterprise systems.
ERP isn't just a market opportunity for me. It's part of my professional identity.
Letting go of ERP-first was difficult because I still believe in the Semantic Execution Platform. I believe our approach, embedding true semantic reasoning at the core, is fundamentally superior to superficial AI wrappers bolted onto legacy architectures.
The tension wasn't belief. It was sequencing.
I had to separate long-term conviction from short-term strategy. What made it clear: if we succeed in legal, if we prove deep semantic reasoning in a high-stakes domain, we build credibility, customers, and resources. That success becomes the foundation for returning to the larger enterprise vision.
Narrowing focus is not shrinking the vision. It's protecting it.
The long view
The path forward is clear:
Semantic analysis → semantic optimization → semantic execution.
We prove structured reasoning works in legal case intelligence. We expand the analyzer to business optimization and structural diagnostics. We activate the execution layer when semantic correctness is commercially validated.
The semantic platform remains the foundation. The analyzer is the bridge. Execution is the horizon.
We are building the engine first.
If you're interested in AI-driven analysis in legal, compliance, or other high-stakes domains, I'd love to connect.
You can grab a time in my calendar here: Talk to founder
