Test your custom prompts against multiple LLMs and compare their responses
Higher values make the output more random; lower values make it more focused and deterministic
Maximum length of the response, in tokens
Nucleus sampling: samples only from the smallest set of tokens whose cumulative probability reaches top_p
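These three tooltips describe the standard sampling controls most LLM APIs expose. The sketch below shows how they might be forwarded in a single request, assuming an OpenAI-compatible chat completions endpoint; the URL, model ID, and environment variable are illustrative assumptions, not this tool's actual configuration.

```typescript
// Minimal sketch, assuming an OpenAI-compatible chat completions API.
// The endpoint, model ID, and env var below are illustrative assumptions.
interface SamplingSettings {
  temperature: number; // higher -> more random, lower -> more focused
  max_tokens: number;  // hard cap on the length of the response
  top_p: number;       // nucleus sampling: cumulative probability cutoff
}

async function runPrompt(
  model: string,
  prompt: string,
  settings: SamplingSettings,
): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Assumes a Node-style environment with the key in OPENAI_API_KEY.
      Authorization: `Bearer ${process.env.OPENAI_API_KEY ?? ""}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      temperature: settings.temperature,
      max_tokens: settings.max_tokens,
      top_p: settings.top_p,
    }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Providers generally accept temperature and top_p together but recommend tuning one at a time, since both narrow or widen the same token distribution.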
0 models selected
Enter a prompt, select models, and click "Run Prompts" to see how different LLMs respond
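The run itself is a fan-out: the same prompt and settings go to every selected model, and the responses come back for side-by-side comparison. A minimal sketch of that step, reusing the runPrompt helper from the previous sketch; the model IDs are placeholders for whatever the user has selected, not a real model list.

```typescript
// Sketch: fan one prompt out to all selected models in parallel.
// allSettled keeps partial results if one provider call fails.
async function comparePrompts(
  prompt: string,
  models: string[],
  settings: SamplingSettings,
): Promise<Record<string, string>> {
  const results = await Promise.allSettled(
    models.map((m) => runPrompt(m, prompt, settings)),
  );
  const byModel: Record<string, string> = {};
  models.forEach((m, i) => {
    const r = results[i];
    byModel[m] = r.status === "fulfilled" ? r.value : `Error: ${r.reason}`;
  });
  return byModel;
}

// Example: compare two (placeholder) models on the same prompt.
comparePrompts(
  "Explain nucleus sampling in one sentence.",
  ["model-a", "model-b"],
  { temperature: 0.7, max_tokens: 256, top_p: 0.9 },
).then(console.log);
```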