Prompt Playground

Test your custom prompts against multiple LLMs and compare their responses

Your Prompt

LLM Settings

Higher values make the output more random; lower values make it more focused

Maximum length of the response

Nucleus sampling: the model considers only the smallest set of tokens whose cumulative probability mass reaches top_p
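The three settings above map onto common chat-completion request fields. Here is a minimal sketch of how they might be collected and validated before a request is sent; the field names (`temperature`, `max_tokens`, `top_p`) follow widely used chat APIs and are assumptions, not this playground's actual schema:

```python
# Hypothetical sketch: gather the playground's LLM settings into a
# request payload. Field names follow common chat-completion APIs and
# are assumptions, not this app's real schema.

def build_request(prompt, temperature=0.7, max_tokens=256, top_p=1.0):
    """Validate the settings and return a request payload dict."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    if max_tokens < 1:
        raise ValueError("max_tokens must be at least 1")
    if not 0.0 < top_p <= 1.0:
        raise ValueError("top_p must be in (0.0, 1.0]")
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # higher = more random output
        "max_tokens": max_tokens,    # cap on response length
        "top_p": top_p,              # nucleus sampling cutoff
    }
```

Validating ranges up front keeps the UI from sending requests the provider would reject.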

Select Models

0 models selected

Ready to test your prompt?

Enter a prompt, select models, and click "Run Prompts" to see how different LLMs respond