Vivek Khandelwal
CoFounder
Over the last year, I have had 760 conversations in my Claude account. Almost 2 per day. Another 300 in Perplexity. Over a year of asking questions, refining prompts, building context. Between Claude, Perplexity, and WisprFlow, my AI stack feels complete and competent.
Earlier today someone asked me to try Grok. My reaction was almost rude. I straight up refused. Not because Grok is bad. I am happy to try new models, but for personal use I have always defaulted to Claude. Not sure if it's the UX, or because Claude "knows me" now. It has my taste. My preferences. The way I think through problems. Starting over with a new LLM feels like explaining yourself to a stranger when your best friend already gets you. Every founder and operator I have met over the last few months has picked their LLM chat app.
ChatGPT people. Claude people. Grok people.
And none of them want to switch. The inertia to move is massive. Clearly retention and usage charts are through the roof. If that's how I feel about switching chat apps, imagine how enterprises feel about switching AI platforms. The switching cost isn't money. It's accumulated "intelligence". One could argue that most of these AI applications are not intelligent and are likely just thin wrappers. But. The system knows you - and in the enterprise's case, your workflows. It has context you never explicitly provided but that built up over hundreds of interactions.
Now multiply that resistance by a thousand. That's what enterprise AI switching costs look like.
Introducing Enterprise Vendor Lock-In in the AI Era
Organizations tend to believe they are not locked in because Salesforce lets them bring their own LLM key. ServiceNow too. HubSpot. Bring your own foundation model. Closed source. Open source. SLM. LLM. Anything.
"We control the model," they say. "We can switch anytime."
No. You can't.
The foundation model is just the execution layer. Your enterprise intelligence lives in the vendor's orchestration - the layer that decides what information matters when, how documents relate to each other, how chunking is done, which policies override others, how to reason across your specific business context. Reliability of the output aside, the orchestration layer is, simply put, not yours. Sure, a vendor might let you bring in your metadata, your SOPs, maybe even your chunking preferences and more.
You uploaded SOPs, customer histories, institutional knowledge. The vendor transformed this into "understanding". That understanding is encoded in their proprietary semantic layer. Their retrieval logic. Their orchestration engine.
You kept the engine. They kept the car. Make no mistake - you can't just pull the engine out and fit it into another body.
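To make that concrete, here is a minimal sketch of what the split looks like in a generic RAG-style setup. Everything in it is hypothetical - VendorOrchestrator, call_model, the naive keyword retrieval - none of it is any vendor's real API. The point is where each piece lives, not how it's implemented.

```python
# Illustrative sketch only: all names and logic are hypothetical, not a real vendor API.

def call_model(prompt: str) -> str:
    """Stand-in for the foundation model you 'own' via a bring-your-own-key setup."""
    return f"<model answer based on: {prompt[:60]}...>"

class VendorOrchestrator:
    """Everything in this class is the vendor's: chunking rules, retrieval logic,
    policy ordering, prompt assembly. Swapping the model below changes nothing
    about where that accumulated intelligence lives."""

    def __init__(self, documents: dict[str, str], policies: list[str]):
        # Proprietary chunking: how your documents get split and indexed.
        self.chunks = {
            doc_id: [text[i:i + 200] for i in range(0, len(text), 200)]
            for doc_id, text in documents.items()
        }
        # Proprietary policy resolution: which rules override which.
        self.policies = sorted(policies, key=len, reverse=True)

    def retrieve(self, question: str) -> list[str]:
        # Proprietary retrieval: naive keyword overlap here, learned ranking in practice.
        terms = set(question.lower().split())
        return [
            chunk
            for doc_chunks in self.chunks.values()
            for chunk in doc_chunks
            if terms & set(chunk.lower().split())
        ][:3]

    def answer(self, question: str) -> str:
        # Prompt assembly is where the built-up "understanding" actually gets applied.
        context = "\n".join(self.retrieve(question))
        rules = "\n".join(self.policies)
        prompt = f"Rules:\n{rules}\n\nContext:\n{context}\n\nQuestion: {question}"
        return call_model(prompt)  # the only line that is actually "yours"

orchestrator = VendorOrchestrator(
    documents={"sop-001": "Refunds over $500 require manager approval within 48 hours."},
    policies=["EU customers follow GDPR retention rules", "Enterprise tier gets priority routing"],
)
print(orchestrator.answer("Do refunds over $500 need manager approval?"))
```

Swap call_model for any other provider and every line above it stays exactly where it was: in the vendor's code.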
No Such Thing as Migrating Intelligence
Here's the real test: If your vendor raised prices 40% next year, could you actually leave?
Not theoretically. Actually.
In 2021 - Moving from HubSpot to Salesforce required budget approval, moving data, replicating workflows. You could get all of that done. Brute-force your way through it. Salesforce would even help you move in - buy out contracts, provide onboarding support, and more.
In 2025 - Moving from HubSpot to Salesforce isn't like switching from Claude to ChatGPT. It's worse. It's switching when your entire company has built workflows around that intelligence. When your sales team's muscle memory depends on it. When your service processes are wired into it.
Once your organization starts using it? The inertia becomes insurmountable. You're not locked into the LLM. You're locked into the orchestration layer that makes the LLM useful for your specific business.
What Actually Matters
Intelligence isn't the model you call. It's the structure that tells the model what to retrieve, how to reason, what context matters. If that structure lives in vendor-proprietary format, you don't own your intelligence. You rent it.
And unlike my Claude account - which costs me $20 a month - your enterprise is paying hundreds of thousands. Millions. With switching costs that make my resistance to trying Grok look trivial.
The question isn't whether to use AI. It's whether you'll own the intelligence you build, or spend the next decade paying rent to whoever got there first.
The Litmus Test and the Question to Ask Your AI Vendor
The test is simple: Can you export not just your data but the intelligence layer - the learned relationships, orchestration logic, and reasoning patterns - and actually use it elsewhere?
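For concreteness, here is a hypothetical checklist - a sketch, not any vendor's actual export schema - of what a "yes" would have to include before the intelligence is genuinely portable:

```python
# Hypothetical checklist, not a real export format: what a genuinely portable
# "intelligence export" would need to contain to be usable on another platform.
portable_intelligence_export = {
    "raw_data": True,               # your documents and records (most vendors do give you this)
    "chunking_config": False,       # how documents were split and indexed
    "embeddings_or_semantic_index": False,  # the semantic layer built from your data
    "relationship_graph": False,    # learned links between entities, documents, and workflows
    "retrieval_and_ranking_logic": False,   # what gets pulled into context, and in what order
    "policy_resolution_rules": False,       # which rules override which, for your business
    "prompt_and_routing_templates": False,  # how context is assembled before the model call
}

def can_actually_leave(export: dict[str, bool]) -> bool:
    # If any layer stays behind, the intelligence stays behind with it.
    return all(export.values())

print(can_actually_leave(portable_intelligence_export))  # False
```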
If the answer is no, you're building equity in someone else's platform.