DoseSpot, an Interra Health solution, is a certified ePrescribing platform trusted by over 300 healthcare clients, from digital health firms and telehealth providers to EHR companies and dental service organizations. As the company modernizes its core ePrescribing experience, its engineering and QA teams are building new features rapidly—and automated testing needs to keep pace.
DoseSpot had invested in AI coding tools, but early adoption was inconsistent. Developers had access to GitHub Copilot, but lacked structured workflows to apply it effectively to testing work. Without a repeatable methodology, AI-assisted test creation remained dependent on individual experimentation rather than becoming a team capability.
The urgency was concrete: DoseSpot’s QA team needed to automate a backlog of 150+ manual test cases for its new platform experience before a legacy testing tool was decommissioned. The team had identified two parallel workflows—UI automation and API automation—each requiring migration to new tooling. With limited QA resources and a launch date approaching, the team needed a structured, repeatable approach that would let any team member transform manual tests into automated tests independently.
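The manual-to-automated transformation described above can be sketched in miniature: each manual test step becomes one assertion in an automated test. The function under test and the validation rule here are invented for illustration and are not DoseSpot’s actual code.

```python
# Hypothetical sketch of the manual-to-automated pattern: one manual
# test step becomes one automated assertion. "validate_sig" and its
# rule are invented examples, not DoseSpot's real validation logic.

def validate_sig(sig: str) -> bool:
    """Example rule: directions ('sig') must be non-empty and at most 140 chars."""
    return 0 < len(sig.strip()) <= 140

# Manual step "submit a blank sig; confirm the form rejects it" becomes:
def test_blank_sig_rejected():
    assert validate_sig("   ") is False

# Manual step "submit typical directions; confirm acceptance" becomes:
def test_typical_sig_accepted():
    assert validate_sig("Take 1 tablet by mouth daily") is True

if __name__ == "__main__":
    test_blank_sig_rejected()
    test_typical_sig_accepted()
    print("all checks passed")
```

Once a handful of tests follow this shape, an AI coding assistant can be prompted with the pattern plus the remaining manual steps, which is what makes the backlog work repeatable rather than bespoke.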
DevClarity first configured AI coding tools and set up the foundational infrastructure for AI-assisted test development. Next came AI coding training—two half-day sessions with hands-on work inside DoseSpot’s codebase, plus a 90-minute deep dive on AI-first testing and QA workflows.
From there, DevClarity worked directly with DoseSpot’s QA engineers in hands-on sessions to build out a complete testing toolkit.
Throughout the engagement, the team built strong review habits—catching AI over-eagerness early and maintaining quality oversight at every step. As the team learned the patterns, one insight stood out:
“That information should be put in the Copilot instructions folder so that no one ever has to have this conversation with AI again.”
— Lead QA Engineer
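The practice the quote describes maps to GitHub Copilot’s repository custom instructions, a markdown file Copilot reads on every request. The entries below are invented examples of the kind of testing conventions a team might record there, not DoseSpot’s actual standards.

```markdown
<!-- .github/copilot-instructions.md — example entries only -->
# Test automation conventions

- Name UI test files `*.spec.ts` and API test files `test_*.py`.
- Never hard-code patient data in tests; use the shared fixtures module.
- Every API test must assert on both the response status and the body,
  never the status alone.
```

Committing conventions like these to version control means the correction happens once, in the repository, instead of once per conversation per engineer.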
DoseSpot’s QA team saw measurable performance gains: a 50% velocity increase on repetitive tasks and a 27% increase in overall weekly output. Time spent coding with AI rose 32.3%, with many developers now using AI tools daily for more than half their coding time. The team established consistent AI-assisted testing practices across both UI and API workflows.
Beyond the metrics, the engagement produced durable workflow changes.
When asked which tool change had the greatest impact, developers responded: “Using our AI coding tool in agent mode has been a game changer.”
What DoseSpot received: what once required manual test creation for every new feature now follows a systematic AI-assisted pattern. What began as a project to automate a 150-test backlog became a repeatable methodology the entire QA team can own—with every workflow, prompt template, and coding standard committed to version control and ready to scale.
Learn how DoseSpot streamlined QA and boosted weekly output 27% with AI-first testing workflows.
Talk to DevClarity →