Open AI Testing

AI enables a global testing community to share, reuse, and scale quality.

How to Run Tests

testers.ai makes it as easy as a single click to run open test agents, as well as your own custom agents, against your website.

IDE Extensions

Seamless integration with your favorite code editors

Browser Extensions

One-click testing directly from your browser

Command Line Interfaces

Full CI/CD integration for automated testing pipelines

Execute all Open Test Agents with the click of a button. From static analysis to dynamic testing, get comprehensive coverage across your entire development workflow.

White-labeled versions available for your organization.

Learn More

Join the OpenTest AI Community

Connect with a global network of testing professionals, share AI testing agents, and accelerate your quality assurance through collaborative innovation

Lightning Fast

AI-powered test generation that creates comprehensive test suites in seconds, not hours. Reduce your testing time by 90%.

No Lock-in

Use our platform alongside your existing tools and workflows. No vendor lock-in, full data portability.

Fully Transparent

Understand exactly how tests work with full visibility into AI decision-making and test logic.

Community Powered

Join a thriving community of developers contributing and improving test cases for everyone.

Production Safe

Safe, non-invasive testing that works seamlessly on production systems without disruption.

Enterprise Ready

Built for scale with enterprise-grade security, compliance, and performance monitoring.

Meet Open Testing Agents

These are Open AI checking and testing agent prompts. Static checks run against static artifacts such as screenshots, network logs, console logs, page text, and DOMs. Dynamic checks are stateful, interactive tests that execute a sequence of interactions and assertions.

Static Agent Checks

Analysis of static artifacts like screenshots, logs, DOMs, and content. A hypothetical example of a check definition follows the agent list below.

Sharon - Security Tester
Pete - Privacy Tester
Mia - Usability Tester
Jason - GenAI Code Tester
Alejandro - Accessibility Tester
Fatima - Error Message Tester
Sophia - Content Quality Tester
Tariq - Performance Tester
Hiroshi - WCAG Compliance Tester
Marcus - OWASP Security Tester
Zanele - GDPR Compliance Tester
Mei - Search Box Tester
Diego - AI Chatbot Tester
Leila - Search Results Tester
Kwame - Product Details Tester
Zara - News Content Tester
Priya - Shopping Cart Tester
Yara - Social Profiles Tester
Hassan - Checkout Tester
Amara - Social Feed Tester
Yuki - Homepage & Landing Pages Tester
Anika - Contact Page Tester
Mateo - Pricing Page Tester
Zoe - About Page Tester
Zachary - Video Player Tester
Sundar - Legal Policies Tester
Samantha - Careers & Jobs Tester
Richard - Forms Tester
Ravi - Booking Tester
Rajesh - Cookie Consent Management Tester
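
For illustration, a static agent check can be thought of as a persona prompt paired with the artifact types it consumes. The structure below is a hypothetical sketch, not the platform's actual submission format; the field names are invented for this example:

{
  // Hypothetical structure for illustration only
  "agent_name": "Sharon",
  "byline": "Security Tester",
  "artifact_types": ["screenshot", "console_log", "network_log", "dom", "page_text"],
  "prompt": "Review the provided artifacts for security issues such as exposed secrets, mixed-content warnings, and error messages that leak internal details. Report each finding in the OpenTest.AI bug format."
}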

Dynamic Agent Checks

Interactive, stateful tests with sequences of actions and assertions, similar to traditional test cases with steps and expected results; a hypothetical example definition follows the agent list below.

Signin - User Authentication Tester
Search - Product Search Tester
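
For illustration, the Signin agent above could be expressed as an ordered sequence of steps followed by assertions. The field names here are hypothetical and shown only to convey the shape of a stateful test:

{
  // Hypothetical structure for illustration only
  "agent_name": "Signin",
  "steps": [
    { "action": "navigate", "target": "/login" },
    { "action": "type", "target": "#email", "value": "user@example.com" },
    { "action": "type", "target": "#password", "value": "********" },
    { "action": "click", "target": "button[type=submit]" }
  ],
  "assertions": [
    { "check": "url_contains", "value": "/dashboard" },
    { "check": "element_visible", "target": ".logout-button" }
  ]
}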

Community

Help expand our collection of AI testing agents by submitting your own specialized testing agent. Join our community of developers building the future of automated testing.

🏆 Top Experts

🥇 1. Jason Arbon: 23 Agent Definitions (Expert)

Join the OpenTest AI Community

Become a member of our growing community of AI testing professionals

Early but growing fast!

100s of Community Members
23 Static Agents
44 Dynamic Agents
1000s of Webpages Tested

Contribute to the Community

Help expand our collection of AI testing agents

Submit Static Agent Check

Static checks analyze static artifacts like screenshots, logs, DOMs, and content.

Submit Dynamic Agent Check Definition

Dynamic checks are interactive tests with sequences of actions and assertions.

Ready to Test Smarter?

Join thousands of developers who are already shipping better software with AI-powered testing.

OpenTest.AI Bug Format

AI-powered bug reports with unprecedented context and actionable insights

Why AI Bug Reports Matter

Traditional bug reports are often incomplete, lack context, and require significant investigation. Open AI-generated bug reports provide comprehensive analysis with actionable solutions.

Key Goals of OpenTest.AI Bug Format

Maximum Context

Include as much relevant context as possible - console logs, network calls, page elements, and user interactions.

AI Fix Prompts

Provide an AI prompt that can be used to fix the issue, making it easy for developers to understand and implement solutions.

Balanced Analysis

Present arguments both for and against classifying a finding as a bug, providing comprehensive reasoning from multiple perspectives.

Severity Judgment

Provide initial assessment of issue severity with clear reasoning and priority recommendations.

Stateful Reviews

Issues are stateful with human ratings, comments, and expert review capabilities for collaborative improvement.

Smart Routing

Automatically suggest which type of engineer should handle each issue for optimal resolution.

Sample Bug Report Format

{
  "bug_title": "string",           // Issue title/description
  "bug_description": "string",     // Detailed description
  "bug_confidence": number,        // Confidence level (typically 1-10)
  "bug_priority": number,          // Priority level (typically 1-10)
  "bug_type": "string",            // Issue category/type
  "bug_severity": "high|medium|low",
  "category": "string",            // Coverage category

  // Fix prompt
  "prompt_to_fix_this_issue": "string",  // AI-generated prompt for fixing the issue

  // Additional fix-related fields
  "suggested_fix": "string",       // Suggested solution approach
  "bug_why_fix": "string",         // Why fixing this issue matters

  // Alternative field names (depending on context)
  "prompt_to_fix_issue": "string", // Alternative field name

  // UI state properties (when exported)
  "trash": boolean,                // User marked as trash
  "star": boolean,                 // User starred
  "comment": "string",             // User comment

  // Additional metadata
  "tester": "string",              // AI persona who found the issue (e.g., "Alejandro", "Mia")
  "byline": "string",              // Tester role description
  "image_url": "string",           // Tester avatar
  "what_type_of_engineer_to_route_issue_to": "string", // Target engineer role

  // Context data
  "possibly_relevant_page_console_text": "string",
  "possibly_relevant_network_call": "string",
  "possibly_relevant_page_text": "string",
  "possibly_relevant_page_elements": "string"
}
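
As a concrete illustration, a filled-in report from Alejandro, the accessibility tester, might look like the following (all values are invented for this example):

{
  "bug_title": "Homepage hero images are missing alt text",
  "bug_description": "Several <img> elements render without alt attributes, so screen readers cannot describe them to users.",
  "bug_confidence": 9,
  "bug_priority": 6,
  "bug_type": "Accessibility",
  "bug_severity": "medium",
  "category": "WCAG",
  "prompt_to_fix_this_issue": "Add descriptive alt attributes to the flagged <img> elements; use an empty alt attribute for purely decorative images.",
  "suggested_fix": "Audit the homepage template and supply meaningful alt text for each content image.",
  "bug_why_fix": "Missing alt text blocks screen-reader users and fails WCAG success criterion 1.1.1.",
  "tester": "Alejandro",
  "byline": "Accessibility Tester",
  "what_type_of_engineer_to_route_issue_to": "Frontend engineer",
  "possibly_relevant_page_elements": "<img src=\"/assets/hero.png\">"
}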

Our Sponsors

We're grateful to our sponsors who help make OpenTest AI possible. Their support enables us to provide free resources and tools to the testing community.

OpenTest AI Community Charter

Board Members

Jason Arbon – President & Founding Board Member (LinkedIn)

Phil Lew – Founding Board Member (LinkedIn)

Jonathon Wright – Founding Board Member (LinkedIn)

1. Mission & Purpose

OpenTest AI exists to advance the practice of testing in two major areas:

  • Testing AI-Based Systems – developing methods, strategies, and resources to evaluate the safety, quality, and reliability of AI models and applications (e.g., LLMs, generative AI systems, and ML-powered services).
  • Applying AI to Test Other Systems – using AI-driven approaches, tools, and agents to improve the testing of traditional software, platforms, and digital products.

The community provides a free, accessible, and collaborative environment where practitioners and researchers can share strategies, resources, and tools across both domains.

The community aims to:

  • Create and share AI quality strategies for both AI-based systems and AI-enabled testing.
  • Provide resources such as prompt libraries, evaluation suites, test plans, and AI-driven testing utilities.
  • Advance best practices for AI quality, safety, and reliability in both categories.
  • Discuss future-facing issues before they become widespread, spanning both how AI is tested and how AI changes testing itself.
  • Explore non-technical topics, including how teams adopt AI responsibly in development and quality workflows.

OpenTest AI is built on the principle of being practical, rigorous, and grounded, avoiding hype and self-promotion in favor of contributions that are universally useful.

2. Community Values

  • Practicality: Share strategies and tools that can be applied to both testing AI and using AI for testing.
  • Openness: Ensure resources are free and accessible to all.
  • Rigor: Promote practices backed by evidence, research, or real-world application.
  • Integrity: Avoid marketing-driven agendas or personal recognition-seeking.
  • Collaboration: Encourage contributions across both domains from academia, industry, and practitioners.

3. Governance Structure

3.1 Founding President

Jason Arbon serves as the Founding President of OpenTest AI.

  • The President has authority to appoint and remove Board members.
  • The President holds 51% of all voting power and maintains formal veto power over Board and community decisions, including membership and sponsorship.
  • This structure is intended as a temporary stewardship model to protect the mission and ensure stability in the early stages.

3.2 Board of Directors

  • Board members are appointed by the President.
  • Responsible for guiding direction, governance, and community alignment.
  • Meet monthly, with topic-specific sessions as needed.

4. Data Licensing

All user-contributed content (test cases, bug reports, prompts, LLM tests, and ratings) is shared under the CC0 1.0 Universal (Public Domain Dedication) license.

  • Contributors dedicate their content to the public domain, waiving all copyright and related rights.
  • Anyone can freely use, modify, distribute, and build upon this content for any purpose, without attribution or restrictions.
  • This permissive licensing model encourages maximum reuse and collaboration within the testing community.
  • By submitting content, contributors confirm they have the right to dedicate it to the public domain.

4.1 Disclaimer & Limitation of Liability

THE DATA AND CONTENT ON THIS SITE ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, ACCURACY, COMPLETENESS, OR NON-INFRINGEMENT.

No Warranty: OpenTest AI, its operators, contributors, and sponsors make no representations or warranties regarding the accuracy, reliability, completeness, or suitability of any test cases, bug reports, analysis, prompts, or other content on this site.

User Responsibility: Users are solely responsible for evaluating, validating, and verifying any content before use. All content should be reviewed and tested in appropriate environments before application to production systems.

No Liability: To the fullest extent permitted by law, OpenTest AI, its operators, contributors, sponsors, and affiliates shall not be liable for any direct, indirect, incidental, special, consequential, or punitive damages arising from the use, misuse, or inability to use any content from this site, including but not limited to damages for loss of profits, data, business interruption, or other intangible losses.

No Endorsement: The presence of content on this site does not constitute an endorsement, recommendation, or guarantee of its accuracy, effectiveness, or safety. Users should exercise independent judgment and professional expertise when applying any content.

Third-Party Content: Content is provided by community contributors. OpenTest AI does not verify, validate, or guarantee the accuracy, completeness, or safety of user-contributed content.

Use at Your Own Risk: By using any content from this site, you acknowledge that you do so at your own risk and agree to hold OpenTest AI and its operators harmless from any claims, damages, or liabilities arising from such use.

5. Membership & Participation

5.1 Lurkers

  • Open to all, no registration required.
  • Free access to resources and open-source artifacts.

5.2 Members

  • Open to all with free registration.
  • Benefits: access to updates, ability to rate artifacts, public membership badges.
  • Expectations: rate artifacts, maintain civility, avoid hype or spam.

5.3 Contributors

  • Submit artifacts (prompt libraries, evaluation suites, strategies, tools).
  • Contributions are reviewed to ensure the contributor owns or has rights to share the content, that it is open-sourced, and that no proprietary or confidential material is included.
  • Contributors must provide a real name and a public profile link (e.g., LinkedIn, GitHub, website).
  • This attribution is published alongside the artifact to ensure accountability, discourage spam or malicious submissions, and provide important context for evaluating contributions.

6. Code of Conduct (Short Form)

  • Be Civil & Respectful – Treat others constructively.
  • No Hype or Spam – Avoid self-promotion or unsubstantiated claims.
  • Stay Practical & Useful – Focus on actionable, broadly valuable contributions.
  • Share Responsibly – Only submit content you own or can open-source.
  • Openness & Integrity – Keep the mission centered on AI quality.

7. Benefits & Recognition

  • Lurkers: Free use of shared resources.
  • Members: Membership badges, ratings milestone badges, opt-in updates.
  • Contributors: Contributor badges, peer-reviewed recognition, public attribution (name + link), résumé visibility.
  • Board Members: Elevated badges and leadership recognition.
  • Sponsors & Technical Partners: Public acknowledgment for enabling community value.

8. Deliverables & Outputs

OpenTest AI produces and maintains resources that are practical, open, and reusable across both categories:

  • Prompt Libraries – for probing AI models and for generating AI-assisted tests.
  • Evaluation Suites & Test Plans – for assessing AI quality and for validating AI-based test automation.
  • Strategies & Best Practices – covering both how to test AI systems and how to adopt AI in testing workflows.
  • Case Studies & Reports – real-world examples from both focus areas.
  • Research & News Updates – covering developments in both AI-system evaluation and AI-assisted QA.
  • Community Events & Workshops – bringing together practitioners from both categories.
  • Member Directory – recognition of contributors and their domain of expertise.

9. Community Platforms & Infrastructure

  • Website – central hub for artifacts and updates.
  • Discord Forum – main space for discussions and collaboration.
  • LinkedIn Community – professional networking and outreach.
  • Twitter/X – updates, highlights, announcements.
  • GitHub (future) – long-term repository for open-source contributions.

10. Growth Roadmap

  • Stage 1 – Stewardship: initial stewardship by testers.ai, ensuring balance between the two domains.
  • Stage 2 – Shared Governance: growth in contributors and sponsors across both categories, with balanced recognition.
  • Stage 3 – Independence: nonprofit/foundation spin-off, positioned as the global hub for both testing AI and AI in testing.

Join OpenTest AI Community
