How DoseSpot Achieved 27% Faster Velocity and Streamlined QA with AI-First Testing

+50%
velocity on repetitive tasks
+27%
overall weekly output
32.3%
AI coding time increase
Michelle Kohut
Chief Technology Officer · DoseSpot

DoseSpot, an Interra Health solution, is a certified ePrescribing platform trusted by over 300 healthcare clients, from digital health firms and telehealth providers to EHR companies and dental service organizations. As the company modernizes its core ePrescribing experience, its engineering and QA teams are building new features rapidly—and automated testing needs to keep pace.

Challenge

DoseSpot had invested in AI coding tools, but early adoption was inconsistent. Developers had access to GitHub Copilot, but lacked structured workflows to apply it effectively to testing work. Without a repeatable methodology, AI-assisted test creation remained dependent on individual experimentation rather than becoming a team capability.

The urgency came from a concrete deadline: DoseSpot’s QA team needed to automate a backlog of 150+ manual test cases for their new platform experience before a legacy tool was decommissioned. The team had identified two parallel workflows—UI automation and API automation—each requiring migration to new tooling. With limited QA resources and a launch date approaching, the team needed a structured, repeatable approach that would enable any team member to transform manual tests into automated tests independently.

Solution

DevClarity first configured AI coding tools and set up the foundational infrastructure for AI-assisted test development. Next came AI coding training—two half-day sessions with hands-on work inside DoseSpot’s codebase, plus a 90-minute deep dive on AI-first testing and QA workflows.

From there, DevClarity worked directly with DoseSpot’s QA engineers over hands-on sessions to build a complete testing toolkit. The engagement produced:

  • Four-step testing workflow. A systematic process (plan → setup infrastructure → manage page objects → write tests) that turns manual testing steps into automated Playwright tests.
  • Project context in version control. Configuration files encoding DoseSpot’s coding standards, folder structure, naming conventions, and business domain context—so every AI interaction follows project standards out of the box.
  • Reusable prompt library. A collection of prompt templates committed directly to the repository, giving the team a consistent starting point for test creation.
  • Plan-First Pattern. Using Copilot’s Plan mode before code generation to validate AI direction upfront—catching hallucinations and incorrect implementations before they reach the codebase.
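To illustrate the “project context in version control” pattern, a Copilot instructions file for a Playwright test suite might look like the following. This is a hypothetical sketch—the file contents, folder names, and conventions shown are illustrative assumptions, not DoseSpot’s actual configuration:

```markdown
<!-- .github/copilot-instructions.md (illustrative sketch) -->
# Test suite conventions

- All UI tests use Playwright with TypeScript.
- Follow the four-step workflow: plan, set up infrastructure,
  manage page objects, write tests.
- Page objects live in `tests/pages/`, one class per screen.
- Specs live in `tests/specs/` and import page objects rather
  than using raw selectors.
- Prefer `getByRole`/`getByLabel` locators over CSS selectors.
- Never hard-code patient or prescription data; use fixtures.
```

Because GitHub Copilot reads repository-level instruction files like this automatically, conventions captured here apply to every AI interaction without being restated in each prompt.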

Throughout the engagement, the team built strong review habits—catching AI over-eagerness early and maintaining quality oversight at every step. As the team learned the patterns, one insight stood out:

“That information should be put in the Copilot instructions folder so that no one ever has to have this conversation with AI again.”

— Lead QA Engineer

Results

DoseSpot’s QA team saw measurable performance gains: +50% velocity increase on repetitive tasks and +27% increase in overall weekly output. AI coding time jumped 32.3%, with many developers now using AI tools daily for more than half their coding time. The team established consistent AI-assisted testing practices across both UI and API workflows.

Beyond the metrics, the engagement produced durable workflow changes:

  • Structured, repeatable methodology. The four-step testing workflow is now self-sustaining—any team member can create tests independently without relying on a single expert.
  • Knowledge embedded in version control. The configuration files and prompt templates live in the team’s repos, so AI-assisted patterns scale with the codebase.

When asked which tool change had the greatest impact, developers responded: “Using our AI coding tool in agent mode has been a game changer.”

What DoseSpot received: What once required manual test creation for every new feature now follows a systematic AI-assisted pattern. What began as a project to automate a 150-test backlog became a repeatable methodology the entire QA team can own—with every workflow, prompt template, and coding standard committed to version control and ready to scale.


Double your dev team's output with AI

Learn how DoseSpot streamlined QA and achieved 27% faster velocity with AI-first testing workflows.

Talk to DevClarity →