AI-Powered Features Reality

What Most Teams Get Wrong About "AI-Powered" Features


"We want to add AI to our product." We hear it in almost every discovery call now. But when we dig deeper, most teams don't actually want AI. They want faster search. Or smarter automation. Or better data visibility. Problems that have had good solutions for years. We've built real ML into three products. Here's when it was worth the complexity, and when simpler engineering was actually better.

Why Everyone Wants AI Now

It's 2024. Every client meeting includes the question: "Can we add AI to this?" Not "should we," but "can we?" As if the ability to train a model on their data is a feature in itself.

The reason is simple: AI has become real. GPT-4 is actually good at things. LLMs can process natural language. ML models can find patterns in data. For the first time, the promise of AI isn't hype — it's tangible. Clients see that and want in.

But there's a gap between "AI is now possible" and "AI is the right solution to your problem." Most teams haven't thought about what that gap is.

What They Actually Need

When a client says "we want to add AI," what they usually mean:

"We want search to be smarter." "Currently users have to know exactly what they're searching for. We want the system to understand intent." → Actually a full-text search problem with better indexing, not ML.
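What "smarter search" usually means in practice is tokenization plus prefix matching, which needs no model at all. A minimal sketch of that idea, using an inverted index over invented document names:

```python
import re
from collections import defaultdict

# Toy "smarter search" without ML: a tokenized inverted index with
# prefix matching. Document ids and text are invented for illustration.
docs = {
    "reset-guide": "How to reset a forgotten password",
    "billing-faq": "Invoices, refunds and payment methods",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for token in re.findall(r"[a-z]+", text.lower()):
        index[token].add(doc_id)

def search(prefix):
    """Return doc ids containing any word that starts with `prefix`."""
    hits = set()
    for token, ids in index.items():
        if token.startswith(prefix):
            hits |= ids
    return sorted(hits)

print(search("passw"))  # ['reset-guide']
```

In a real system you'd reach for your database's built-in full-text search (Postgres `tsvector`, SQLite FTS5, Elasticsearch) rather than hand-rolling the index, but the mechanism is the same: indexing, not inference.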

"We want recommendations." "Show each user content relevant to them." → Might be collaborative filtering, might be simpler: just show what similar users engaged with (no ML needed). Or use embeddings (light ML) instead of training a full recommendation engine.
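The embeddings route is sketched below: precomputed vectors plus cosine similarity, with no training in your own stack. The item names and vector values here are toy data; in practice the vectors would come from an embedding API or a pretrained model.

```python
from math import sqrt

# "Light ML" recommendations: rank items by cosine similarity of
# precomputed embedding vectors. Vectors below are invented toy values.
item_vectors = {
    "intro-to-sql": [0.9, 0.1, 0.0],
    "advanced-sql": [0.8, 0.2, 0.1],
    "yoga-basics": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def recommend(item, k=1):
    """Rank the other items by embedding similarity to `item`."""
    scores = {
        other: cosine(item_vectors[item], vec)
        for other, vec in item_vectors.items()
        if other != item
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("intro-to-sql"))  # ['advanced-sql']
```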

"We want to detect anomalies." "Alert us when something goes wrong before the user notices." → Might be statistical thresholds (no ML). Might be simple anomaly detection (decision trees, not neural networks). Rarely needs deep learning.
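A statistical threshold can be as little as a few lines: flag any reading more than three standard deviations from the recent baseline. The telemetry values below are invented:

```python
from statistics import mean, stdev

# Anomaly detection with a plain statistical threshold -- no ML,
# no training. Readings are made-up telemetry values.
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 25.0]

baseline = readings[:-1]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomaly(value, z=3.0):
    """Flag values more than z standard deviations from the baseline mean."""
    return abs(value - mu) > z * sigma

print(is_anomaly(readings[-1]))  # True  (25.0 is far outside 3 sigma)
print(is_anomaly(10.0))          # False
```

Real deployments would use a rolling window rather than a fixed baseline, but the point stands: this catches most "something broke" cases without a model.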

"We want to automate decisions." "Approve this order automatically, reject that one." → Might be rule-based logic (faster). Might need classification (ML). Rarely needs the complexity of a production ML pipeline.
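Rule-based decisioning can be an ordered list of conditions where the first match wins. The fields and thresholds in this sketch are hypothetical:

```python
# Rule-based order decisions: ordered rules, first match wins.
# Field names and thresholds are invented for illustration.
RULES = [
    (lambda o: o["customer_flagged"],  "reject"),
    (lambda o: o["amount"] > 10_000,   "review"),
    (lambda o: o["amount"] <= 10_000,  "approve"),
]

def decide(order):
    for condition, outcome in RULES:
        if condition(order):
            return outcome
    return "review"  # safe default when no rule matches

print(decide({"amount": 250, "customer_flagged": False}))     # approve
print(decide({"amount": 50_000, "customer_flagged": False}))  # review
```

A side benefit over a classifier: every decision can be traced to the exact rule that fired, which matters later when explainability comes up.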

None of these necessarily requires AI. They require good engineering.

Three Real Examples

1. The Recall Management System — We Didn't Use ML

Client: "We want to predict which customers are affected by a recall based on their purchase history." This sounds like a classification problem. We solved it without ML: plain SQL queries over purchase history, a rules engine, and some manual mapping. It was 80% as good as an ML model and required no ML infrastructure, no model training, and no "maybe it works" uncertainty. Sometimes a good database query beats a neural network.
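The core of that approach is just a join between purchase history and the recalled lots. A sketch of the idea with an invented schema and toy data (the real system's tables were, of course, different):

```python
import sqlite3

# Recall matching as a join, not a classifier. Schema and rows are
# invented to illustrate the shape of the query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE purchases (customer_id TEXT, sku TEXT, lot TEXT);
    CREATE TABLE recalled_lots (sku TEXT, lot TEXT);
    INSERT INTO purchases VALUES
        ('c1', 'widget-9', 'LOT-22A'),
        ('c2', 'widget-9', 'LOT-23B'),
        ('c3', 'gadget-4', 'LOT-22A');
    INSERT INTO recalled_lots VALUES ('widget-9', 'LOT-22A');
""")

affected = conn.execute("""
    SELECT DISTINCT p.customer_id
    FROM purchases p
    JOIN recalled_lots r ON p.sku = r.sku AND p.lot = r.lot
""").fetchall()
print(affected)  # [('c1',)]
```

Deterministic, auditable, and done in an afternoon: the answer is exact for every customer whose lot is on the recall list, which is most of what the ML model would have guessed at.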

2. The Eflex Energy System — We Used ML (Predictive Analytics)

Client: "Predict which of our 1000+ distributed renewable energy assets will fail in the next 30 days so we can schedule maintenance." This is a legitimate prediction problem. We trained models on 3 years of historical device data and identified failure patterns in voltage sag, temperature, and efficiency degradation. The models caught failures 2-3 weeks before they happened. The ROI was clear: predictive maintenance costs less than emergency repair. ML was the right tool here.

3. The Early Idea That Became Nothing

Client: "Build a system that understands emails and routes them to the right department automatically." This sounded like NLP. We evaluated it. The client had ~50 email types. Could be handled with 20 regex patterns and a rule engine. We built that instead. If they ever got to 500+ types with subtle distinctions, we'd revisit ML. But at the current scale, rules won.
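The shape of that rules-first router is a handful of regexes with a human-review fallback. The patterns and department names below are illustrative, not the client's real ones:

```python
import re

# Rules-first email routing: a few regex patterns, first match wins,
# and a "triage" fallback for anything unrecognized. All patterns and
# department names are invented for illustration.
ROUTES = [
    (re.compile(r"\b(invoice|refund|payment)\b", re.I), "billing"),
    (re.compile(r"\b(password|login|2fa)\b", re.I),     "support"),
    (re.compile(r"\b(recall|defect)\b", re.I),          "quality"),
]

def route(subject, body):
    text = f"{subject}\n{body}"
    for pattern, department in ROUTES:
        if pattern.search(text):
            return department
    return "triage"  # humans handle anything the rules don't recognize

print(route("Refund request", "I was charged twice"))  # billing
print(route("Hello", "Just saying hi"))                # triage
```

The fallback is the important part: rules that punt to a human when unsure fail safely, which a misfiring classifier does not.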

When ML Actually Makes Sense

We've learned to ask specific questions before recommending ML:

1. Is the pattern too complex for rules? Can you describe the rule in code? If yes, don't use ML. If it's "predict if a device will fail" — not "device will fail if voltage > X and temp > Y" — then maybe ML.

2. Do you have data? ML needs training data. If you're starting from scratch, you don't have it. Eflex had 3 years of device telemetry. That's why ML worked. A new business with no historical data shouldn't train a model.

3. Is the cost of getting it wrong manageable? If an ML model mispredicts, what happens? If it's a recommendation (show wrong article), fine. If it's medical (wrong diagnosis), not fine. ML introduces a type of failure that rules don't: misclassification. That's okay for some problems, not others.

4. Can you measure success? You need a clear metric. "This model is 92% accurate" means what? For Eflex, we could measure: "percentage of failures predicted 2+ weeks early." Clear. For vague goals like "smarter," ML is a bad fit.

5. Will you maintain it? ML models degrade. Data distributions change. The model trained on 2022 data performs worse on 2024 data. You need to retrain, monitor, alert. That's ongoing work. If a client isn't ready for that, don't build the model.
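The maintenance question above doesn't have to mean heavyweight MLOps on day one. A minimal drift check, sketched here with invented training statistics and a made-up tolerance, compares live feature values against what the model saw at training time:

```python
from statistics import mean

# Minimal drift monitoring: alert when the live mean of a feature
# drifts too far from its training-time mean. The recorded mean and
# tolerance are invented; real setups use proper drift tests.
TRAINING_MEAN = 48.0   # hypothetical value recorded at training time
TOLERANCE = 0.15       # alert if the live mean drifts more than 15%

def drift_alert(recent_values):
    """True when recent data has drifted past tolerance from training data."""
    live_mean = mean(recent_values)
    return abs(live_mean - TRAINING_MEAN) / TRAINING_MEAN > TOLERANCE

print(drift_alert([47.0, 49.5, 48.2]))  # False -- close to training data
print(drift_alert([60.0, 62.5, 58.9]))  # True  -- distribution has shifted
```

Even this crude check turns "the model quietly got worse" into an alert someone sees, which is the minimum bar before shipping a model to production.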

When to Skip ML Entirely

Most of the time, a combination of good engineering, smart data modeling, and simple algorithms beats ML. Skip the model when:

  • You can describe the rule: "Users in premium tier get feature X" doesn't need ML. Code it.
  • You don't have enough data: ML needs volume. If you're predicting something rare (fraud in a small business), you don't have enough examples to train on.
  • Latency matters: ML inference adds latency. If the user is waiting, a simpler algorithm is better.
  • Explainability matters: You need to explain decisions. "The model said yes" isn't an explanation. Rules are explainable. Most ML models are not.
  • You can't measure success: If you don't know if it's working, don't build it.

The hard truth: if a client wants "AI" more than they want the problem solved, they're not ready for ML. Real ML projects start with a specific problem, not with "add AI."

Should You Add AI to Your Product?

We help teams figure out if ML is actually the solution to their problem, or if simpler engineering is better.

Talk about your AI idea