Lessons from building things.
Not a blog. Not thought leadership. Just honest notes from real projects — what we got right, what we'd do differently, and what we think about now.
When AI agents run the build — and when they need overriding.
Agentic AI workflows changed how we work in 2025. Not in the way most people talk about — not "AI writes code while engineers watch." The real shift was in where human judgment got more important, not less. When an agent can independently plan, execute, and iterate across a multi-step task, your job as an engineer stops being about writing code and starts being about knowing when the agent is wrong.
We've now run agentic pipelines using Claude Code and MCP-connected workflows across three projects. The pattern that emerges: agents are fast and mostly correct on well-defined subproblems. They're confidently wrong on anything that requires business context, compliance judgment, or understanding of what the user actually meant. You can't tell the difference from the output alone — which is why senior review isn't optional. It's the whole point.
We rebuilt a hospital portal, and the hardest part wasn't HIPAA compliance.
When we took on a patient portal for one of America's largest pediatric health systems in 2018, we thought the challenge was technical. HIPAA, Sitecore, 100k+ user migration, legacy data. But three months in, we realised the real problem was people — specifically, the 14 internal stakeholders with 14 different definitions of "done."
What we learned: every enterprise healthcare project is really two projects. The software project, and the internal alignment project. You cannot succeed at the first without running the second in parallel. We now build stakeholder alignment workshops into our discovery phase by default — not as a nice-to-have, but as a delivery dependency.
The first time AI actually saved a project — and what we got wrong the time before.
In 2023 we tried to use AI to accelerate the Harbour Vessel project. It didn't work the way we expected. In 2024 we tried again on the Recall Management system. It did. Here's the actual difference — and it's not which model we used.
We co-founded a product and it taught us more about client work than any client project did.
Maanavar was our own edtech platform. We built it during COVID, tested it with 500 students, and eventually wound it down. The experience of being both the client and the agency — at the same time — permanently changed how we run engagements.
Why we stopped using "launch date" as a success metric.
Launch day is a vanity milestone. The Oliver Brown POS went live on schedule — and then spent three weeks in intensive post-launch tuning. That tuning is what actually made the product work. Here's what we measure instead now.
What most teams get wrong about "AI-powered" features — and the three that actually work.
Every client wants AI in the product now. Most of the time, what they actually want is faster search, smarter automation, or better data visibility — problems that have had good solutions for years. We've built real ML into three products. Here's when it was worth it and when it wasn't.
How we shipped EduHire in three weeks: a real SaaS MVP timeline.
Real case study: teacher-school matching platform with video interviews, payments, and authentication — shipped in 21 days. Week-by-week breakdown of discovery, architecture lock, core build, testing, and launch. What actually shipped vs. what waited for v1.1.
AI automation for business: real examples and ROI.
Invoice processing. Order consolidation. Support automation. Reporting. Real client examples from 8 years of automation projects: what makes automation work, what fails, the actual cost savings, and typical 3-month payback timelines — plus how to avoid the common mistakes.
Best LMS platforms for schools in India: what we actually learned.
We built Maanavar EdTech during COVID. Here's what we learned: why global LMS platforms fail in Indian classrooms, what features actually matter (offline access, regional languages, teacher-friendly setup, assessment), which platforms work, and when to build custom instead of buying.
We write when we have something worth saying.
No newsletter. No content calendar. Notes get added after major project milestones — when the lesson is real, not manufactured.