Selenium Community Live - Episode 7

The seventh episode of Selenium Community Live took place on June 19th, 2025, featuring Christian Bromann, creator of WebDriver.IO. He shared his vision of how AI agents and agentic workflows are reshaping the testing industry. As a project council member at the OpenJS Foundation, Christian brought valuable insight into the current state and future potential of AI in test automation.

You can watch the episode on YouTube here: Episode 7 on YouTube

Meet the Speaker:

  1. Christian Bromann

Understanding AI Agents and Agentic Workflows

The past two years have been transformative for AI, starting with ChatGPT’s experimental launch and evolving into sophisticated agentic systems. But what exactly makes something “agentic”?

According to Christian, agentic systems typically contain both non-deterministic and deterministic tasks:

  • Non-deterministic paths: Interactions with Large Language Models (LLMs) that produce varied outputs
  • Deterministic tasks: Code execution that behaves predictably

This distinction is crucial when applying AI to real-world applications.

Two Types of AI Workflows in Testing

1. LLM-Augmented Workflows

  • Use LLMs to analyze datasets with human oversight
  • More deterministic and predictable outputs
  • Limited autonomy but higher reliability for narrow tasks
  • Example: Text processing and data analysis

2. Complete Agent Workflows

  • Agents with reasoning capabilities operating autonomously
  • Less deterministic but capable of handling complex tasks
  • Can work in unpredictable environments
  • Example: V0.dev for coding React applications

Current Industry Landscape

Christian explored several innovative companies leveraging AI for QA automation:

QA Wolf: Combines both workflow types, letting users record sessions for playback while retaining fine-grained control through natural language commands during the recording process.

Momentic.ai: A Y Combinator company building agents that generate entire automation workflows from single prompts, making test authoring remarkably simple.

Browser Use: An open-source browser automation tool that evolved into its own company, enabling API-driven cloud browser automation.

Directus AI: Recently announced as a ChatGPT-like interface with browser capabilities for data extraction and test generation.

Alumnium: Alex’s project integrating AI directly into Selenium, allowing commands like “do” or “check” within existing automation tests.

The Reality Check: Are We There Yet?

Despite the exciting developments, Christian offers a measured perspective: “We are not quite there yet.”

Many AI testing platforms currently produce subpar results, including tests without assertions or unreliable automation scripts. Rather than replacing testers, AI is positioned to enhance existing workflows and capabilities.

Impact on the Testing Pyramid

Christian predicts AI will reshape the traditional testing pyramid:

  • Unit Testing: May decrease in importance as developers write more AI-generated code
  • Integration and End-to-End Testing: Will become more crucial for validating AI-generated code functionality
  • New Testing Categories: AI-driven exploratory testing and intelligent monkey testing will emerge

As Christian notes: “There’s no value in writing unit tests for code generated by AI, and no value in AI generating unit tests for AI-generated code.”

Practical Implementation: A Real-World Example

Christian demonstrated a practical WebDriver.IO implementation that showcases AI-enhanced testing. Instead of writing multiple assertions, testers can use a single AI-powered command to validate application state:

// Instead of the traditional approach of multiple hand-written assertions,
// a single AI-enhanced validation describes the expected state:
await browser.aiValidate("validate that I have three todo items shown on the page that are all grocery items");

This approach works by:

  1. Capturing the application’s accessibility tree and element properties
  2. Passing this state to an LLM with a validation prompt
  3. Receiving a true/false result with explanatory hints

The key insight: Whatever you provide as state to the LLM determines what the AI can reason about.
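The three steps above can be sketched in plain JavaScript. This is a hypothetical illustration of the pattern, not WebDriver.IO's actual implementation: the function names (`snapshotState`, `buildPrompt`, `parseVerdict`) and the JSON reply format are invented for the example, and the LLM call is stubbed out.

```javascript
// Hypothetical sketch of an AI validation helper. It mirrors the three
// steps above: capture state, prompt an LLM, parse a pass/fail verdict.

// Step 1: reduce the page to the state the LLM will reason about.
// Here we fake an accessibility-tree snapshot; a real helper would
// derive it from the live browser session.
function snapshotState(nodes) {
  return nodes.map((n) => `${n.role}: "${n.name}"`).join("\n");
}

// Step 2: combine the captured state with the user's validation prompt.
function buildPrompt(state, instruction) {
  return [
    "You are validating a web page. Page state:",
    state,
    `Check: ${instruction}`,
    'Reply with JSON: {"pass": true|false, "hint": "..."}',
  ].join("\n");
}

// Step 3: parse the model's reply into a boolean plus an explanatory hint.
function parseVerdict(reply) {
  const { pass, hint } = JSON.parse(reply);
  return { pass: Boolean(pass), hint: hint ?? "" };
}

// Example run with a stubbed LLM reply:
const state = snapshotState([
  { role: "listitem", name: "Buy milk" },
  { role: "listitem", name: "Buy eggs" },
  { role: "listitem", name: "Buy bread" },
]);
const prompt = buildPrompt(state, "three grocery todo items are shown");
const verdict = parseVerdict('{"pass": true, "hint": "3 grocery items found"}');
console.log(verdict.pass); // true
```

Note how the snapshot function defines everything the model can see, which is exactly the "key insight" above: the quality of the validation is bounded by the state you choose to expose.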

Building Your Own Browser Agent

For those interested in creating custom browser agents, Christian outlined an architecture involving:

  • Tool-calling agents with access to browser actions (find, click, getText)
  • State observers monitoring browser changes
  • Agent history maintaining context of previous actions
  • Verification commands for state validation
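The architecture above can be condensed into a small tool-calling loop. This is a minimal sketch under stated assumptions: the tools, the planner callback, and the fake page object are all invented for illustration, with a scripted planner standing in for the LLM so the deterministic and non-deterministic parts are visible.

```javascript
// Minimal sketch of a tool-calling browser agent loop (illustrative only).

// Deterministic tools the agent may call. A real agent would drive a
// browser; here they mutate a tiny fake page state so the loop is runnable.
function makeTools(page) {
  return {
    find: (selector) => page.elements[selector] ?? null,
    click: (selector) => { page.clicked.push(selector); return "ok"; },
    getText: (selector) => page.elements[selector]?.text ?? "",
  };
}

// The agent loop: ask a planner (normally an LLM, the non-deterministic
// part) for the next action, execute it deterministically, record it in
// history for context, and stop when the planner says "done".
function runAgent(planner, tools, maxSteps = 10) {
  const history = [];
  for (let i = 0; i < maxSteps; i++) {
    const action = planner(history);
    if (action.tool === "done") return history;
    const result = tools[action.tool](action.arg);
    history.push({ ...action, result });
  }
  return history;
}

// A scripted planner standing in for an LLM:
const page = { elements: { "#login": { text: "Log in" } }, clicked: [] };
const script = [
  { tool: "getText", arg: "#login" },
  { tool: "click", arg: "#login" },
  { tool: "done" },
];
const history = runAgent((h) => script[h.length], makeTools(page));
console.log(history.length); // 2 actions executed before "done"
```

The history array is what gives the agent context across steps; a verification command would simply be another tool whose result the planner can inspect.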

However, he cautions about the trade-off between convenience and accuracy: “The most annoying thing in the world is having flaky tests, so it’s hard to justify using AI when it compromises accuracy.”

Performance Considerations

One significant challenge with current AI automation tools is speed. Christian’s testing revealed:

  • AI-driven workflows taking 13-14 seconds for simple login processes
  • Traditional frameworks completing the same tasks in 1-2 seconds

This performance gap makes real-time AI automation impractical for most testing scenarios, though pre-generated test scripts remain viable.

The Exciting Future: Model Context Protocol (MCP)

Christian’s most compelling vision involves the Model Context Protocol and web components. Instead of LLMs parsing entire DOM structures, imagine if web components could expose their functionality directly to AI agents:

  • A login component exposing a login() method
  • Shopping cart components revealing addItem() and checkout() capabilities
  • Navigation components offering routing functions

This semantic approach would make browser automation more reliable by providing AI with meaningful, component-level context rather than raw HTML structures.
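The component-level idea above can be sketched as a capability registry. This is speculative: MCP itself defines a fuller protocol, and the `registerComponent`/`invoke` API here is invented purely to show how an agent could discover and call semantic capabilities instead of parsing HTML.

```javascript
// Speculative sketch of MCP-style component capabilities (names invented).

const registry = new Map();

// A component registers its capabilities as named functions.
function registerComponent(name, capabilities) {
  registry.set(name, capabilities);
}

// An agent can discover what the page offers...
function listCapabilities() {
  const out = [];
  for (const [component, caps] of registry) {
    for (const cap of Object.keys(caps)) out.push(`${component}.${cap}`);
  }
  return out;
}

// ...and invoke a capability by name, with no DOM parsing involved.
function invoke(component, capability, ...args) {
  return registry.get(component)[capability](...args);
}

// Example: a login component and a shopping cart component.
registerComponent("login", {
  login: (user, pass) => (user && pass ? "logged-in" : "rejected"),
});
registerComponent("cart", {
  addItem: (item) => `added:${item}`,
  checkout: () => "order-placed",
});

console.log(listCapabilities()); // lists "login.login", "cart.addItem", "cart.checkout"
console.log(invoke("cart", "addItem", "milk")); // "added:milk"
```

The point of the design is that the capability list, not the raw DOM, becomes the agent's interface, which is far more stable under markup changes.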

Key Recommendations for the Testing Community

For Automation Engineers:

  • Focus on AI-enhanced tooling (like Cursor) rather than full AI automation services
  • Use AI to generate framework code, then refine manually
  • Avoid over-reliance on current AI automation platforms

For Exploratory Testers:

  • Leverage AI tools that can generate reusable test scripts from manual exploration
  • Develop strong prompt engineering skills (BDD experience is valuable here)
  • Stay informed about evolving AI capabilities

Universal Advice:

  • Don’t fear AI—embrace it as an enhancement tool
  • Experiment with AI integration in current workflows
  • Maintain healthy skepticism about AI automation promises
  • Continue developing traditional testing skills alongside AI capabilities

Community and Open Source

Christian emphasized the WebDriver.IO community’s commitment to supporting contributors financially, noting that the recent VS Code extension was developed by a community member who was compensated for their work. This model demonstrates how open-source projects can sustainably grow while supporting their contributors.

Conclusion

The integration of AI into testing workflows represents an evolution, not a revolution. While current AI automation tools show promise, they’re not yet ready to replace human testers or traditional automation approaches. Instead, the real value lies in AI’s ability to enhance productivity, generate test code more efficiently, and handle exploratory testing scenarios.

As Christian concluded: “Stay close to all developments in the space, try things out, find out how AI can help you be more efficient, and don’t be afraid of it.”

The future of testing will likely involve a thoughtful blend of AI enhancement and human expertise, with new paradigms like the Model Context Protocol potentially revolutionizing how we approach browser automation.


Watch the Recording

Couldn’t join us live? Watch the entire episode here: 📹 Watch the Event Recording on YouTube

Stay tuned for the next episode! Subscribe to the Selenium HQ Official YouTube Channel.