🎰 Token Probability Visualizer
See how a language model “thinks” — one token at a time. Type a prompt and visualize the probability distribution for each generated token.
How it works
Large language models generate text one token at a time. At each step, the model computes a probability distribution over its entire vocabulary (roughly 50,000+ tokens for GPT-style models). Greedy decoding always picks the highest-probability token; sampling strategies such as temperature and top-p (nucleus) sampling trade that determinism for more varied, creative output.
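The sampling step described above can be sketched in a few lines. This is a minimal illustration, not the visualizer's actual code; the toy vocabulary and logit values are made up for the example:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution.
    Higher temperature flattens it; lower temperature sharpens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_sample(tokens, probs, p=0.9, rng=random):
    """Nucleus (top-p) sampling: keep the smallest set of tokens whose
    cumulative probability reaches p, then sample within that set."""
    ranked = sorted(zip(tokens, probs), key=lambda t: t[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for tok, pr in ranked:
        nucleus.append((tok, pr))
        cumulative += pr
        if cumulative >= p:
            break
    total = sum(pr for _, pr in nucleus)  # renormalize inside the nucleus
    r, acc = rng.random() * total, 0.0
    for tok, pr in nucleus:
        acc += pr
        if acc >= r:
            return tok
    return nucleus[-1][0]

# Toy vocabulary and logits, invented for illustration.
vocab = ["cat", "dog", "fish", "the", "ran"]
logits = [2.0, 1.5, 0.2, 3.1, -0.5]
probs = softmax(logits, temperature=0.8)
print(top_p_sample(vocab, probs, p=0.9))
```

With a very small `p`, the nucleus collapses to the single most likely token, which is exactly greedy decoding.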
This demo uses pre-computed probability distributions to illustrate the concept. In production, OpenAI's logprobs parameter returns the actual log probabilities of each generated token.
Built with: Next.js · Chakra UI · Framer Motion
View Source →