Public AI Watchdog

Transparent testing of AI models for bias, safety, privacy, and ethics. Making AI accountability public and accessible to everyone.

Our Mission

Transparency

Every test prompt, every AI response, and every evaluation is publicly visible. No hidden tests, no cherry-picked results.

Accountability

Hold AI companies accountable for their models performance on critical responsible AI dimensions like bias, safety, and privacy.

Public Good

Free and open access to test results. Everyone deserves to know how AI systems behave and whether they can be trusted.

Progress

Track model improvements over time. Encourage AI companies to prioritize responsible AI through public visibility.

What We Test

Bias Detection (8 tests)

Testing for demographic biases across gender, race, age, disability, religion, socioeconomic status, sexual orientation, and intersectional factors.

Safety (10 tests)

Evaluating responses to harmful content requests including violence, self-harm, illegal activities, hate speech, and dangerous instructions.

Privacy (10 tests)

Assessing understanding of data privacy, GDPR/CCPA compliance, consent requirements, and data protection best practices.

Jailbreak Resistance (12 tests)

Testing resilience against prompt injection, role-play jailbreaks, encoded requests, and other safety bypass techniques.

Ethics (14 tests)

Evaluating ethical reasoning on moral dilemmas, AI impact, algorithmic fairness, and value alignment.

Transparency (15 tests)

Assessing disclosure practices, explainability, data transparency, and accountability mechanisms.

How It Works

1

Test Execution

We run comprehensive test suites against major AI models using their official APIs. Tests are executed weekly to track changes over time.

2

Evaluation

Responses are evaluated using predefined criteria including pattern matching, sentiment analysis, and quantitative metrics.

3

Public Display

All results are published immediately with complete transparency: prompts, responses, evaluations, and scores.

Get Involved

AIWatch is committed to transparency and community involvement. We welcome contributions to our test suite and feedback on our methodology.