Creating a Chatbot That Doesn’t Suck

Back in 2017, at the peak of the Alexa craze, I dove headfirst into conversational AI alongside a team of professors and PhDs at the University of Michigan. We thought we’d cracked the code on messy human language, convinced we had created the secret sauce that would finally make chatbots stop sucking. 

Back then, I even wrote an article explaining why: human language is messy, complicated, constantly evolving. Teaching AI to understand slang, sarcasm, and weird grammar felt nearly impossible. One misstep in understanding casual terms (like thinking “cheddar” was something you’d put on a sandwich, not your bank account) and the whole thing fell apart. The result? Users quickly learned that typing out their question usually ended in frustration. And no one has patience for frustration.

We had actually solved this problem way back then, but consumer sentiment didn’t budge: chatbots still sucked. We’d only solved half the puzzle.

Seven years later, computers that understand messy human language are the norm, expected even. But ask anyone off the street today if they love chatbots, and odds are you’ll get an eye-roll, a groan, or some colorful language. Why? Because typing into a chatbot still feels riskier than just clicking a couple of buttons. We trust buttons. We know exactly what we’ll get. It’s predictable, even if it’s annoying.

The barrier here isn’t tech anymore; it’s trust.

We don’t mind conversational interfaces. We’ve all gotten comfortable asking Siri for the weather or Alexa to play our favorite podcast. But with anything remotely complex, like shopping, customer support, or finances, users bail. Why type out your question and risk confusion when a GUI gives you certainty, even if it means navigating through five extra clicks?

That’s where predictive AI changes the game entirely. It’s not about the chat; it’s about anticipating the chat. If an AI agent already knows what you’re likely to ask next, it significantly lowers that perceived risk. It stops feeling like you’re rolling dice on whether the chatbot “gets” you. Instead, it feels like genuine help, right when you need it.

Think about it: you’re browsing shoes, and before you even articulate your confusion, the chatbot prompts you, “Not sure which size to choose? Here’s how they run.” Suddenly, you’re not risking disappointment; you’re accepting helpful guidance.
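The pattern above can be sketched as a simple rule: watch for a hesitation signal (say, dwell time on a control without a selection) and surface a context-specific prompt before the user has to ask. This is a minimal illustration under assumed names and thresholds, not alby's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical hesitation signal: what the shopper is focused on, and for how long.
@dataclass
class HesitationSignal:
    element: str          # e.g. "size_selector", "checkout_button"
    dwell_seconds: float  # time focused without acting

# Assumed mapping from UI element to a proactive prompt.
PROACTIVE_PROMPTS = {
    "size_selector": "Not sure which size to choose? Here's how they run.",
    "checkout_button": "Questions about shipping or returns? Ask me anything.",
}

DWELL_THRESHOLD_SECONDS = 8.0  # assumed threshold; in practice, tuned per page

def proactive_prompt(signal: HesitationSignal) -> Optional[str]:
    """Return a proactive prompt if the shopper seems stuck, else None."""
    if signal.dwell_seconds < DWELL_THRESHOLD_SECONDS:
        return None  # no real hesitation yet; stay quiet
    return PROACTIVE_PROMPTS.get(signal.element)
```

So `proactive_prompt(HesitationSignal("size_selector", 12.0))` yields the sizing nudge, while a quick glance or an unrecognized element yields nothing — the point being that silence is the default, and the prompt only appears where uncertainty is detected.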

Here at alby, we applied this first on Product Detail Pages, but the power of prediction isn’t limited to one spot. Shoppers (and really, all users) now expect proactive help everywhere. Predictive AI meets people exactly where they hesitate, exactly where uncertainty creeps in.

Trust is built in these tiny interactions. Get a few right, and suddenly, the user sees your chatbot as reliable, helpful, even delightful.

This isn’t about bringing chatbots back from the dead. It’s about rethinking their entire existence. It’s about realizing that to truly “get” humans, technology has to do more than just understand what we say—it needs to anticipate our next question, our next hesitation, our next need.

We believe we’ve made a chatbot that doesn’t suck. Something that allows you to have a real conversation with your customers, with an underlying understanding of who they are and what they need. If you want to try it for yourself, head to alby.com.

See what Agent OS can do for your teams.