AI is Reshaping Hardware Verification: Insights from Siemens’ Vision

Discover how AI is transforming hardware verification in 2025, as discussed by Abhi Kolpekwar (VP at Siemens) on "The DV Digest" podcast. Explore the evolving role of verification engineers, the impact of artificial intelligence on EDA tools, and the real-world shift from hype to practical results in chip design and testing.

I recently came across an insightful episode from “The DV Digest” podcast featuring Abhi Kolpekwar, VP of Siemens’ verification division and former Cadence EDA leader. The discussion tackles questions I’ve been wrestling with in my previous posts: How is AI actually changing our field, what’s realistic versus hype, and what does the future look like for verification engineers?

Listen to the full conversation on YouTube: AI is Changing Verification – The DV Digest

The Productivity Problem Nobody Talks About

One of the key points that validates something I wrote about in my 2024 verification state analysis is this: we’ve been optimizing the wrong metrics. For years, the EDA industry has focused on making simulation engines faster and faster. But as Kolpekwar points out, raw tool speed is no longer the bottleneck; end-user productivity is.

This matters because it directly connects to the employment trends I discussed in my AI employment study analysis. If AI tools are going to impact verification engineering jobs, it won’t be by making simulations 10% faster—it’ll be by automating the tedious debugging and analysis work that currently requires human engineers.

Siemens’ Approach: QuestaOne Platform

Siemens is launching QuestaOne, a unified verification platform built around three core principles:

1. Faster Engines – Still important, but not the only focus
2. Faster Engineers – AI-assisted workflows for debugging and analysis
3. Fewer Workloads – Integration and automation to reduce repetitive tasks

What caught my attention is their approach to AI integration. Rather than building a monolithic AI solution, they’re creating an LLM-agnostic, open platform. This addresses one of my biggest concerns from my post on why AI struggles with hardware testing: vendor lock-in and lack of transparency.

The open approach matters because:

  • Different companies have different requirements for model selection
  • Privacy and security needs vary widely across organizations
  • The AI landscape is evolving rapidly—betting on one vendor would be risky
  • Engineers need to understand and trust the AI’s reasoning

Where AI Actually Helps Today (And Where It Doesn’t)

The podcast features real-world results from major companies like MediaTek, ARM, Microsoft, and Rambus. The pattern aligns with my thesis that AI works best as an augmentation tool, not an automation replacement:

AI excels at accelerating repetitive verification tasks:

  • Debugging failures and analyzing waveforms
  • Test creation and coverage analysis
  • Documentation and code explanation

AI still struggles with:

  • Verification strategies requiring domain expertise
  • Understanding complex system-level interactions
  • Making trustworthy decisions without human validation
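The “repetitive tasks” bucket above is where even simple automation pays off before any LLM gets involved. As an illustrative sketch (not a Siemens tool, and the log format is made up), here’s how regression failures can be clustered by a normalized signature so an engineer, or an AI assistant, only reviews one representative per cluster:

```python
import re
from collections import defaultdict

def signature(log_line: str) -> str:
    """Normalize a failure message into a signature by masking
    run-specific details (hex values, then remaining numbers)."""
    sig = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", log_line)
    sig = re.sub(r"\d+", "<N>", sig)
    return sig.strip()

def cluster_failures(log_lines):
    """Group raw failure lines under their normalized signature,
    so one representative per cluster is enough for triage."""
    clusters = defaultdict(list)
    for line in log_lines:
        clusters[signature(line)].append(line)
    return clusters

# Hypothetical regression output for illustration
failures = [
    "UVM_ERROR @ 1200ns: scoreboard mismatch, exp 0xDEAD got 0xBEEF",
    "UVM_ERROR @ 3400ns: scoreboard mismatch, exp 0x1234 got 0x5678",
    "UVM_ERROR @ 500ns: timeout waiting for resp on channel 2",
]
clusters = cluster_failures(failures)
print(len(clusters))  # 2 distinct failure signatures
```

The same normalization step is also a cheap way to shrink what gets fed to an LLM for debug analysis: one representative line per cluster instead of thousands of near-duplicates.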

This is exactly the augmentation versus automation distinction I made in my employment study post. When AI enhances human capabilities rather than replacing them entirely, we see productivity gains without massive job displacement. When companies try to use AI for full automation, they run into the fundamental limitations I outlined in my “Why AI Struggles” article.

The Natural Language Trap

One interesting discussion point was whether natural language will become the primary interface for verification tools. Kolpekwar’s perspective is refreshingly pragmatic: natural language is great for high-level intent, but technical abstractions like RTL and SystemVerilog aren’t going anywhere.

When I’m debugging a complex UVM testbench with race conditions and coverage holes, I don’t want to describe my needs in English paragraphs. I need precise control over:

  • Exactly which signals to trace
  • Specific timing relationships to examine
  • Detailed coverage bin analysis
  • Exact constraint modifications to make
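To make the point concrete: a question like “which data transitions violate setup time before a clock edge?” is trivial to state precisely in code and clumsy to state unambiguously in prose. This is a toy, hypothetical check (signal times and the setup window are invented for illustration, and real tools work from waveform databases, not hand-typed lists):

```python
def setup_violations(clk_edges, data_changes, setup_ns):
    """Return (data_change, clk_edge) pairs where a data change
    lands inside the setup window before a clock edge."""
    violations = []
    for t_data in data_changes:
        for t_clk in clk_edges:
            # Violation: data changed within setup_ns before the edge
            if 0 <= t_clk - t_data < setup_ns:
                violations.append((t_data, t_clk))
    return violations

# Illustrative value-change times in nanoseconds
clk_edges = [10, 20, 30]
data_changes = [8, 14, 29]
result = setup_violations(clk_edges, data_changes, setup_ns=3)
print(result)  # two violations: (8, 10) and (29, 30)
```

The precision lives in the parameters: which edges, which window, which signals. That specificity is exactly what gets lost when the interface is an English paragraph.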

Agentic Workflows: Promise and Risk

The most forward-looking part of the discussion focused on agentic AI workflows – systems where AI agents can:

  1. Break down complex verification tasks into subtasks
  2. Execute those tasks with some autonomy
  3. Coordinate with other agents and human engineers
  4. Learn and adapt based on results
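The four steps above can be sketched as a control flow. This is a toy illustration, not any vendor’s agent framework: the hardcoded plan table stands in for an LLM planner, and the sign-off gate marks where a human engineer stays in the loop:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    done: bool = False
    result: str = ""

def decompose(goal: str) -> list:
    """Stand-in for an LLM planner: split a verification goal into
    subtasks. A real agent would call a model; this is hardcoded."""
    plans = {
        "close coverage on fifo": [
            Task("identify coverage holes"),
            Task("generate directed tests for holes"),
            Task("run regression and re-measure"),
        ]
    }
    return plans.get(goal, [Task(goal)])

def execute(task, needs_human_signoff=("generate directed tests for holes",)):
    """Execute a subtask with limited autonomy: designated steps
    stop and wait for an engineer instead of completing on their own."""
    if task.name in needs_human_signoff:
        task.result = "PENDING_REVIEW"  # hand off to the engineer
    else:
        task.result = "DONE"            # agent completes autonomously
    task.done = task.result == "DONE"
    return task

tasks = [execute(t) for t in decompose("close coverage on fifo")]
```

The interesting design decision is the sign-off list: which steps an agent may finish alone, and which must block on a human, is exactly the kind of policy the “verification scientist” role would own.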

This shifts the verification engineer’s role from operating tools to designing and managing AI-driven verification flows. Kolpekwar calls this the move toward “verification scientists” – engineers who creatively design verification strategies while AI handles execution details.


Here’s where I see both promise and risk:

The Promise:
This could address the productivity bottleneck without completely automating away junior engineers. Instead of junior engineers manually running regressions and analyzing basic coverage, they’d be designing and supervising AI agents that do this work. The job becomes more strategic and less tedious.

The Risk:
This creates exactly the employment gap I warned about in my AI employment study analysis. Junior engineers who would have learned verification fundamentals through tedious manual work may never develop that foundation. We risk creating a generation of “verification scientists” who can prompt AI agents but don’t understand what those agents are actually doing.

The solution isn’t to avoid AI tools. Instead, we need to rethink verification education and onboarding: junior engineers need structured ways to build fundamental understanding even as AI handles more execution work.

Real-World Results: Separating Signal from Noise

MediaTek, ARM, Microsoft, and Rambus are seeing 20-40% productivity gains in debugging and coverage analysis.

These aren’t theoretical benefits; they’re deployed solutions at leading semiconductor companies. But notice what’s being measured: productivity improvements and debugging speedups, not “AI replaced X verification engineers” or “full automation of verification.”

The Critical Questions Nobody’s Asking

While the Siemens vision is impressive, there are some hard questions I wish the podcast had explored more deeply:

1. What happens to verification career progression?
If AI automates the tasks that train junior engineers, how do people develop expertise? The “verification scientist” role sounds great for experienced engineers who already understand the fundamentals, but how do you become one?

2. How do we validate AI-assisted verification?
In hardware, we verify the verifiers. When AI is helping write tests, analyze coverage, and debug failures, who verifies the AI?

3. How do we make AI verification tools fail safely?
A bad AI suggestion in software development might cost some debugging time. A bad AI suggestion in hardware verification might mean missing a critical bug that costs millions in silicon respins.

Looking Ahead

Full automation in verification is theoretically possible but practically distant. The gap between “AI can help with debugging” and “AI can fully verify a complex SoC design” is enormous. Human knowledge, creativity, and judgment remain critical for building trustworthy verification flows that can scale to modern designs.

But the trajectory Siemens is showing is clear: AI is moving from experimental curiosity to core verification capability.

The challenge is navigating this transition thoughtfully. We need to embrace AI’s genuine productivity benefits while being honest about its limitations. We need to find ways to train the next generation of verification engineers even as entry-level tasks become automated. And we need to ensure that our rush to adopt AI doesn’t compromise the verification quality that keeps the semiconductor industry running.

My Commitment

I’ll continue tracking how AI actually impacts verification work. If you’re using AI verification tools, I want to hear about what’s actually working and what isn’t.

What’s your experience with AI in verification? Drop a comment or reach out! I’m always interested in real-world experiences as we navigate this transition together.
