The Challenge: Manual Regression Configuration Nightmare
In any verification team, managing regression runs efficiently is crucial for maintaining productivity and achieving coverage goals. In our SoC verification environment at Microchip, we faced a significant challenge with regression configuration that was costing the team valuable time and resources.
The Three-File Problem
Our regression setup relied on three separate plaintext configuration files:
- Include file: Listed all test names
- Runs file: Specified test names with the number of runs and RTL/GLS selection
- Ignore file: Override mechanism to exclude specific tests from runs
The manual process of keeping these files aligned was error-prone and time-consuming. Engineers had to:
- Edit multiple files for any test configuration change
- Ensure perfect synchronization between files
- Hope that misalignments wouldn’t cause silent failures
The worst part? Misconfigurations often surfaced as failed overnight regressions, discovered only the next morning, which meant losing an entire night of simulation time and coverage collection. What a mess!
The Solution: PyQt-Based Regression GUI
I developed a Python GUI tool using PyQt that transformed our regression setup process from a manual, error-prone task into an intuitive, visual experience.

Why I Think It Was a Good Solution
1. Automatic Environment Detection
The tool reads project environment variables set by our standard initialization scripts, making it project-agnostic and user-independent. It adapts automatically to whichever project structure the user is working in, and it is replicated for every project automatically, so there is no per-project setup overhead.
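As a sketch, environment detection can be as simple as reading a couple of variables and failing loudly when they are absent. The names `PROJ_ROOT` and `TEST_DIR` below are hypothetical stand-ins for whatever the project initialization scripts actually export:

```python
import os

def detect_project_paths():
    """Read project paths from the shell environment, failing loudly.

    PROJ_ROOT and TEST_DIR are hypothetical names standing in for the
    variables exported by the project initialization scripts.
    """
    try:
        proj_root = os.environ["PROJ_ROOT"]
        test_dir = os.environ["TEST_DIR"]
    except KeyError as missing:
        # Surface a clear, actionable error instead of a bare KeyError.
        raise RuntimeError(
            f"Environment variable {missing} is not set; "
            "run the project initialization script first"
        ) from None
    return proj_root, test_dir
```

Failing at startup with a named variable beats discovering a half-configured run the next morning.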
2. Intelligent Test Discovery
- Scans test directories using pattern matching
- Identifies test files based on naming conventions
- Filters out non-test files and directories
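A minimal sketch of that discovery step, assuming a hypothetical `test_*.sv` naming convention (the real pattern matching follows our own conventions):

```python
import re
from pathlib import Path

# Assumed convention for illustration: tests are files named test_*.sv.
TEST_PATTERN = re.compile(r"^test_\w+\.sv$")

def discover_tests(test_dir):
    """Return sorted test names found in test_dir, skipping non-tests."""
    tests = []
    for path in Path(test_dir).iterdir():
        # Directories and files that don't match the convention are ignored.
        if path.is_file() and TEST_PATTERN.match(path.name):
            tests.append(path.stem)  # e.g. "test_axi" from "test_axi.sv"
    return sorted(tests)
```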
3. Visual Test Configuration
The GUI presents tests as interactive “cubes” in a grid layout, where each cube displays:
- Test name
- Number box for run count configuration
- RTL/GLS checkbox for simulation mode selection
- Visual indicators for current configuration state
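The PyQt widget code is mostly boilerplate; the interesting part is the state each cube carries, which can be sketched as a small model object (a simplified reconstruction, not the tool's actual class):

```python
from dataclasses import dataclass

@dataclass
class TestCube:
    """State backing one test 'cube' in the grid (the model side of the GUI)."""
    name: str          # test name displayed on the cube
    runs: int = 0      # value of the run-count number box
    gls: bool = False  # checkbox state: False = RTL, True = GLS

    @property
    def enabled(self):
        # A cube is visually marked active only when it has runs configured.
        return self.runs > 0
```

The GUI widgets then simply read from and write to these objects, keeping the display and the saved files driven by one source of truth.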
4. File Synchronization Engine
The core parsing engine uses regular expressions combined with a custom parser to:
- Read the three configuration files
- Parse their current state
- Generate properly formatted output using f-strings
- Ensure perfect alignment between all files
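In spirit, the synchronization step regenerates all three files from a single in-memory state, so they can never drift apart. A minimal sketch, where the line formats are illustrative rather than the exact ones our infrastructure expects:

```python
def write_config_files(cubes, include_path, runs_path, ignore_path):
    """Regenerate all three plaintext files from one in-memory state.

    `cubes` is an iterable of (name, runs, gls, ignored) tuples; the
    per-line formats below are illustrative assumptions.
    """
    include_lines, runs_lines, ignore_lines = [], [], []
    for name, runs, gls, ignored in cubes:
        include_lines.append(f"{name}\n")
        mode = "GLS" if gls else "RTL"
        runs_lines.append(f"{name} {runs} {mode}\n")
        if ignored:
            ignore_lines.append(f"{name}\n")
    # Writing all three files in one pass guarantees they stay aligned.
    for path, lines in ((include_path, include_lines),
                        (runs_path, runs_lines),
                        (ignore_path, ignore_lines)):
        with open(path, "w") as fh:
            fh.writelines(lines)
```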
5. State Management
- Save: Directly overwrites the configuration files
- Save As: Creates named configuration sets
- Load: Imports previous configurations through file dialogs
- Auto-save: Persists per-user configuration state
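The auto-save piece can be as simple as one JSON file per user; here is a sketch with an assumed file name and location (the real tool keeps its own state file):

```python
import json
from pathlib import Path

def autosave_state(state, username, base_dir=None):
    """Write per-user GUI state to a JSON file and return its path.

    The file name and location are assumptions for illustration.
    """
    base = Path(base_dir) if base_dir else Path.home()
    path = base / f".regression_gui_{username}.json"
    path.write_text(json.dumps(state, indent=2))
    return path

def load_state(username, base_dir=None):
    """Load a previously saved state, or an empty dict if none exists."""
    base = Path(base_dir) if base_dir else Path.home()
    path = base / f".regression_gui_{username}.json"
    if path.exists():
        return json.loads(path.read_text())
    return {}
```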
6. Safety Features
- Unsaved changes prompt when closing
- Validation that flags tests marked to run but configured with a run count of 0
- Error dialogs for missing environment variables or invalid paths
- Exception handling for file access problems
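The zero-run check reduces to a simple filter over the configured state; a sketch, with a hypothetical `selection` mapping of test name to (marked-to-run, run-count):

```python
def find_zero_run_tests(selection):
    """Return names of tests marked to run but configured with 0 runs.

    `selection` maps test name -> (marked_to_run, run_count); this
    shape is an assumption for illustration.
    """
    return sorted(name for name, (marked, runs) in selection.items()
                  if marked and runs == 0)
```

Any names returned are shown to the user before the regression launches, instead of silently contributing nothing overnight.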
7. Maintenance Tools
A “Clean” button removes duplicate files created by the simulator during test runs, keeping the test directories organized.
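One way to implement such a cleanup, assuming (hypothetically) that the simulator's duplicates carry a numeric suffix such as `test_axi(1).sv` next to the original:

```python
import re
from pathlib import Path

# Assumption for illustration: duplicates look like "name(1).ext".
DUP_PATTERN = re.compile(r"^(?P<base>.+)\(\d+\)(?P<ext>\.\w+)$")

def clean_duplicates(test_dir):
    """Delete duplicate files whose original still exists; return the count."""
    removed = 0
    # Materialize the listing first so deletion doesn't disturb iteration.
    for path in list(Path(test_dir).iterdir()):
        m = DUP_PATTERN.match(path.name)
        # Only remove a duplicate when its original is still present.
        if m and (path.parent / (m["base"] + m["ext"])).exists():
            path.unlink()
            removed += 1
    return removed
```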
File Parsing Strategy
The tool uses a combination of regular expressions and custom parsing logic to handle the plaintext configuration files. The f-string formatting ensures that the output maintains the exact format expected by our regression infrastructure.
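As an illustration of that strategy, parsing one hypothetical line shape of the runs file might look like this (the real format, and therefore the real regex, differs):

```python
import re

# Illustrative line shape: "<test_name> <runs> <RTL|GLS>".
RUNS_LINE = re.compile(r"^(\w+)\s+(\d+)\s+(RTL|GLS)\s*$")

def parse_runs_file(text):
    """Parse runs-file text into {test_name: (runs, mode)}."""
    config = {}
    for lineno, line in enumerate(text.splitlines(), 1):
        if not line.strip() or line.lstrip().startswith("#"):
            continue  # skip blank lines and comments
        m = RUNS_LINE.match(line)
        if not m:
            # Reject malformed lines loudly rather than misconfigure a run.
            raise ValueError(f"line {lineno}: unrecognized entry: {line!r}")
        name, runs, mode = m.groups()
        config[name] = (int(runs), mode)
    return config
```

Round-tripping through a strict parser like this is what lets the tool regenerate all three files with f-strings while guaranteeing the output still matches what the regression infrastructure expects.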
Results and Impact
Quantitative Improvements
- 70% reduction in regression setup time: What previously took 10-15 minutes of careful file editing now takes 2-3 minutes of visual configuration
- 30% improvement in coverage closure: Engineers could now confidently run more comprehensive test combinations
- Zero overnight regression failures due to configuration errors
Qualitative Benefits
- Developer Confidence: Engineers could start regressions and go home, knowing the configuration was correct
- Reduced Learning Curve: New team members could configure regressions immediately without understanding the file formats
- Team Productivity: Less time debugging configuration issues meant more time for actual verification work
Future Enhancements
Based on team feedback, several enhancements were planned or implemented:
Version 2.0 Features
- Plusarg Management: Enable/disable specific plusargs per test by modifying makefiles
- Test Categorization: Group tests by functionality or block ownership
- User Tagging: Allow engineers to tag and filter the tests they own or are currently debugging
- Auto-Documentation: Extract and display test descriptions from .sv file comments
Proposed Features
- Test dependency visualization – show the agents and blocks active in each test
- Coverage trend integration – show the most recent coverage each test achieved
Lessons Learned
- UI Design Matters: The visual representation of tests as “cubes” made the configuration intuitive
- Environment Independence: Reading environment variables made the tool portable across projects
- User Feedback Loop: Early adoption by team members drove feature development