Are you getting the most out of AI coding tools?

AI coding tools like Windsurf, Cursor, and GitHub Copilot are changing the world of software development, increasing developer productivity seemingly overnight and enabling experienced engineers to do the work of full teams.

We've all heard these stories of other companies' successes, but how do we know if WE are getting the impact that AI coding tools have promised? How do we know if we're using these tools to their fullest potential?

At DevClarity, we work with engineering teams every day to systematically leverage AI coding tools, and clear patterns have started to emerge in the process.

Below are our best questions for evaluating whether your team is getting the most out of AI coding tools.

Adoption

What percentage of your team has licenses for AI coding tools?

Ideally, this should be 100%.

What percentage of your team are daily active users of your AI coding tools?

A common path we see: 0-25% in the early days; roughly 50% after an AI adoption initiative and license provisioning; and 80%+ with dedicated training, which is where we recommend targeting.

Cursor and Windsurf Team plans provide admin dashboards with these analytics.

What percentage of your code is written by AI?

Rather than a specific target number, the value here is in observing trends over time and trends by users. For example, adoption from junior developers typically occurs early in the AI adoption process. In contrast, an increase in AI code written by senior developers is a strong indication of later-stage adoption and is usually more impactful.

That being said, in our experience, leading teams see 30%+ of new code being written by AI. Windsurf cites 40-60% of newly committed code being generated by AI for their customers.

Project Rules

Have you written company-specific prompts that are automatically passed to your team's AI coding tools to improve AI output?

In Windsurf, these are called "workspace rules." In Cursor, "project rules." In GitHub Copilot, "custom instructions" and "prompt files." Providing this context is critical to the quality of AI output and reduces the friction your developers feel with AI coding tools.
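
As an illustration, here is a minimal sketch of a Cursor project rule, stored as a file under `.cursor/rules/`. The conventions listed are placeholders; substitute your own:

```
---
description: Conventions for our TypeScript services
globs: services/**/*.ts
alwaysApply: false
---

- Use the internal logger module instead of console.log.
- Public functions require JSDoc comments.
- Prefer async/await over raw promise chains.
- New endpoints need an integration test under tests/integration/.
```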

What percentage of code repositories have these prompts / project rules set up?

This is often the lowest-hanging fruit. The difference in AI coding output between a company with 0% project-rules coverage and one with 100% is vast. It's all about providing the proper context.
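
One quick way to measure coverage is to scan your local checkouts for the rules files each tool reads. A minimal sketch in Python; the paths are common defaults, so verify them against your tools' documentation:

```python
from pathlib import Path

# Default locations for common AI coding tools' rules files; adjust
# for your own tools and conventions.
RULE_PATHS = [
    ".cursor/rules",                    # Cursor project rules
    ".windsurf/rules",                  # Windsurf workspace rules
    ".github/copilot-instructions.md",  # GitHub Copilot custom instructions
]

def has_rules(repo: Path) -> bool:
    """True if the repo contains any recognized rules file or directory."""
    return any((repo / p).exists() for p in RULE_PATHS)

def report(repos_root: str) -> None:
    """Print project-rules coverage for every git repo under repos_root."""
    root = Path(repos_root).expanduser()
    repos = [d for d in root.iterdir() if (d / ".git").exists()]
    covered = sum(1 for r in repos if has_rules(r))
    pct = 100 * covered / len(repos) if repos else 0
    print(f"{covered}/{len(repos)} repos have project rules ({pct:.0f}%)")
    for r in repos:
        if not has_rules(r):
            print(f"  missing: {r.name}")

if __name__ == "__main__":
    report("~/code")  # hypothetical path to your local checkouts
```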

Is each developer maintaining their own set of project rules, or have the rules been committed to your code repositories?

  • When were they last updated?
  • Who owns them?

These questions are about organizational context management. Having a clear process & toolset shared by the entire team significantly improves AI coding outcomes. This is something that DevClarity helps with.

Outcomes

Very simply, are you getting more done?

Developer productivity metrics are a notoriously controversial topic. How you measure productivity may vary by company, but as a senior leader, you should know whether you're getting more done.

Trends in velocity and cycle time can be good indicators, especially at the individual level. But beware: story point estimates and ticket sizes may shift as AI adoption progresses. The only "metric" that truly matters is delivering the outcomes the business expects (see below).

Are you able to start on projects sooner than expected because you've finished up other work?

This is more qualitative but comes directly from a PE-backed CTO we recently talked with. He made increasing developer output with AI his #1 board-level initiative. He said his team has seen 30-50% productivity gains, both self-reported and measured, and a noticeable improvement in pushing key initiatives across the finish line.

SDLC

Are requirements improving?

Your team should combine product requirements docs with codebase context in their editors to create codebase-aware developer tickets. This improves requirements before any work begins.
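
For example, a developer might paste a requirements doc into their editor's chat alongside the relevant code and prompt something like the following (a hypothetical sketch; the directory name is illustrative):

```
Here is the product requirements doc for the new billing feature.
Using the code in services/billing and our existing API patterns:
1. Break this into implementation tickets.
2. For each ticket, list the files and modules likely to change.
3. Flag any requirements that conflict with current behavior.
```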

Is your test coverage increasing with AI tools?

Test coverage sets the foundation for quality. It prepares you for the increase in velocity that will come with AI-enabled development.

Luckily, testing is an area where AI can make a material impact when leveraged well. In a world where AI can rapidly analyze & generate code, test coverage should be 85%+.
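
One way to hold that line once you reach it is to fail the build below a threshold. For a Python codebase using pytest-cov, the gate is a single flag; swap in the equivalent for your stack:

```
# Fail CI if total test coverage drops below 85%
pytest --cov=src --cov-fail-under=85
```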

Are developers still manually fixing papercuts, such as simple syntax errors, typos, and grammar issues?

Follow-up: Do you have an asynchronous coding agent available for your developers?

Asynchronous coding agents, like Devin and OpenAI's Codex, are perfect for handling these kinds of tasks. We routinely use them at DevClarity to knock out low-value work in the background while we focus on the highest-priority efforts, and they are improving in sophistication daily.

Are your developers still manually writing all PR descriptions?

AI is faster and generally much more comprehensive at writing PR descriptions. Developers can then review, edit, and submit in a fraction of the time.
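
Many tools now build this in, but the mechanics are simple enough to wire yourself. A minimal sketch using the OpenAI Python SDK; the model name and prompt are placeholders:

```python
import subprocess

from openai import OpenAI  # third-party: pip install openai

def draft_pr_description(base_branch: str = "main") -> str:
    """Draft a PR description from the diff against the base branch."""
    diff = subprocess.run(
        ["git", "diff", f"{base_branch}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use your preferred model
        messages=[
            {"role": "system", "content": (
                "Write a concise PR description with a summary, "
                "a list of changes, and testing notes."
            )},
            {"role": "user", "content": diff[:100_000]},  # crude truncation
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_pr_description())
```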

Does AI perform the first code review on all PRs?

AI is faster and generally much more comprehensive at this as well. AI can take the first pass at reviewing a PR; the developer can then scan, dive in where necessary, and accept or request changes in a fraction of the time.
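
Purpose-built review bots can handle this out of the box. If you wire your own, the plumbing beyond the model call is a single GitHub API request; a sketch, where `body` holds review text drafted by a model as in the previous example:

```python
import os

import requests  # third-party: pip install requests

def post_first_pass_review(owner: str, repo: str,
                           pr_number: int, body: str) -> None:
    """Post an AI-generated summary as a non-blocking PR review."""
    # GitHub REST API: POST /repos/{owner}/{repo}/pulls/{pull_number}/reviews
    resp = requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/reviews",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"body": body, "event": "COMMENT"},  # COMMENT doesn't block merge
        timeout=30,
    )
    resp.raise_for_status()
```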

Code Readiness

Do you have a documentation process that is self-improving with AI?

AI can easily update documentation as new code is written or refactors are performed. This creates better context for developers and AI alike.

Is language & framework documentation available to developers inside their IDE? Is documentation available to AI agents?

In addition to project rules, developers and AI tools should have access to official documentation of the languages & frameworks used in your applications.

Do you have clear commenting standards established to improve code readiness for AI?

Comments are rich context for AI coding tools and agents. Having established commenting standards improves AI outputs.
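
For example, a standard might require every public function to document its purpose, arguments, and side effects. A hypothetical Python illustration:

```python
def apply_credit(account_id: str, amount_cents: int) -> None:
    """Apply a promotional credit to a customer account.

    Args:
        account_id: Internal account identifier (not the public-facing ID).
        amount_cents: Credit amount in cents; must be positive.

    Side effects:
        Writes a ledger entry and emits a `credit.applied` event.
    """
    ...
```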

Have you established your desired patterns & anti-patterns?

Similar to hiring new developers, the likelihood of success with AI agents increases when you can clearly explain the patterns (and anti-patterns) to follow while writing code. Something as simple as "all database access goes through the repository layer; raw SQL in route handlers is an anti-pattern" gives an agent a concrete rule to follow.

Can you onboard a new developer in less than 30 days?

AI coding tools can rapidly scan, document, and explain codebases. Developer ramp-up time should be faster than ever.

Conclusion

AI coding tools are powerful, but only as powerful as the processes that surround them. By measuring adoption, enforcing project rules, tracking outcomes, optimizing your SDLC, and ensuring code readiness, you can turn AI experiments into sustainable productivity gains.

DevClarity helps engineering teams get the most out of AI coding tools in 30 days. If you're interested in learning more, reach out today.