AI Parameters to minimize hallucinations
How setting the right temperature and top-p values can improve AI responses

When using an LLM API, are you getting accurate, relevant, and reliable responses? Or do you sometimes get responses that sound plausible but are completely made up?
If you've experienced hallucinations, you're not alone. It's one of the biggest challenges when working with AI automations in production environments.
The good news is that you can control this to some extent by setting the correct values in parameters such as temperature and top-p.
By the way, have you noticed these settings in n8n or make.com AI nodes?
💡 Pro Tip: These options are often hidden in the "Advanced Settings" section of AI nodes. Make sure to enable advanced options to access them.

[Screenshot: n8n OpenAI options]

[Screenshot: AI options in Make.com]
Before looking at how to set these parameters correctly, let's see how AI works behind the scenes.
When an AI generates a response, it predicts the most likely next word based on probability distributions learned during training. It does this repeatedly, building the response word by word.
Without proper constraints, AI might generate text that sounds plausible but is factually incorrect: this is what we call a hallucination.
From what I have learned so far, there are three parameters that control how responses are generated:
Temperature: Controlling Creativity vs. Accuracy
Top-p (Nucleus Sampling): Dynamic Filtering
Top-K: Simple Probability Filtering
Temperature
This is the most important and commonly available parameter. It controls how randomly the AI selects words:
Low Temperature (0.1 - 0.3): The AI strongly favours the most probable words, giving factual, consistent, predictable responses
Medium Temperature (0.4 - 0.8): The AI balances predictability with some variation, giving natural-sounding responses
High Temperature (0.9 - 1.5): The AI explores more creative options, giving varied but potentially inaccurate responses
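The mechanism is easy to see in code. Here's a minimal Python sketch: each candidate word gets a score (a logit), temperature divides those scores, and softmax turns them into probabilities. The logit values below are made up for illustration.

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by temperature, then apply softmax to get probabilities."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(score - peak) for score in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for four candidate next words
logits = [4.0, 2.0, 1.0, 0.5]

cold = apply_temperature(logits, 0.2)  # low temperature: near-deterministic
hot = apply_temperature(logits, 1.5)   # high temperature: flatter distribution

print([round(p, 3) for p in cold])  # the top word dominates
print([round(p, 3) for p in hot])   # probability spreads across options
```

At a low temperature the top-scoring word gets nearly all the probability mass; at a high temperature the distribution flattens and lower-ranked words get a real chance of being picked.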
Top-p
Top-p filters words dynamically: the AI keeps only the most likely words, adding them in order of probability until their cumulative probability reaches the threshold p (a value between 0 and 1).
For example, let's say you've automated email summarization in n8n, where the AI extracts key details.
Original Email: "Your appointment with Dr. Smith is confirmed for Friday at 2 PM. Please arrive 15 minutes early and bring your insurance card. Parking is available in the south lot."
AI's Word Probability Distribution:
"Your appointment is on Friday at 2 PM." (50%)
"Arrive 15 minutes early." (20%)
"Bring your insurance card." (15%)
"Parking available in south lot." (10%)
"The doctor offers free coffee." (3%)
"The clinic has a new fish tank." (2%)
If Top-p = 0.85, the AI would only consider the first three options (whose probabilities sum to 85%), filtering out the less likely, potentially fabricated details such as the free coffee and the fish tank.
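To make the cutoff concrete, here's a minimal Python sketch of the nucleus filter using the percentages from the email example. Note that the first three options already reach 85% cumulative probability, so a cutoff of 0.85 keeps exactly those three.

```python
def top_p_filter(candidates, p_percent):
    """Keep the most likely candidates until their cumulative
    probability reaches the threshold (expressed here in %)."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    kept, cumulative = [], 0
    for sentence, percent in ranked:
        kept.append((sentence, percent))
        cumulative += percent
        if cumulative >= p_percent:
            break
    return kept

# Percentages from the email-summary example above
candidates = [
    ("Your appointment is on Friday at 2 PM.", 50),
    ("Arrive 15 minutes early.", 20),
    ("Bring your insurance card.", 15),
    ("Parking available in south lot.", 10),
    ("The doctor offers free coffee.", 3),
    ("The clinic has a new fish tank.", 2),
]

for sentence, percent in top_p_filter(candidates, 85):
    print(f"{percent}%  {sentence}")
```

In a real model the filter runs over token probabilities at every generation step, not over whole sentences; the sentence-level version here is just easier to read.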
Top-K
Top-K is less commonly exposed in automation platforms but works as a simpler filtering mechanism:
It keeps only the K most likely words and removes all others from consideration.
Imagine you've built an AI-powered customer support bot using n8n:
User: "How do I reset my password?"
AI's Word Predictions:
Click 'Forgot Password' on the login page. (50%)
Contact support to reset your password. (30%)
Try turning your computer off and on again. (10%)
Reset your browser cache and cookies. (5%)
Sacrifice a goat under a full moon. (1%)
If K = 3, the AI will only consider the first three sensible answers, eliminating less helpful or nonsensical options.
If Top-K is available in your platform, values between 40 and 50 are generally safe for most applications.
How Hallucinations Happen
Hallucinations occur when AI generates information that sounds plausible but isn't factually correct.
Understanding the causes helps you prevent them in your automations.
What Increases Hallucination Risk?
High Temperature (T > 0.8) → More randomness in word selection
High Top-p (p > 0.9) → The AI considers too many low-probability words
Complex or ambiguous prompts → Gives the AI too much freedom to interpret
Asking for information outside training data → Forces the AI to "guess"
No data validation → The AI isn't retrieving facts from a trusted source
Finding Your Perfect Balance
The ideal parameter settings depend on your specific use case. Here's a simple framework to find your perfect balance:
Start conservative: Begin with low Temperature (0.2) and moderate Top-p (0.6)
Test with real inputs: Run your automation with typical user inputs
Adjust gradually: If responses are too rigid or repetitive, increase parameters slightly
Monitor and refine: Track accuracy and adjust as needed
The goal isn't to eliminate all randomness! Some variation makes AI responses feel natural.
The key is finding the right balance between creativity and reliability for your specific automation needs.
Different automation scenarios require different parameter settings. Here's a general guideline:
| Task Type | Temperature | Top-p | Goal |
| --- | --- | --- | --- |
| Data extraction | 0.0 - 0.2 | 0.5 | Maximum accuracy and consistency |
| Summarization | 0.3 - 0.5 | 0.6 | Factual but natural-sounding |
| Customer support | 0.4 - 0.6 | 0.7 | Helpful and somewhat conversational |
| Content generation | 0.7 - 0.9 | 0.9 | Creative while remaining coherent |
| Brainstorming | 1.0+ | 0.95 | Maximum creativity and variation |
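If you set these values from code rather than a node's UI, one option is a small lookup table so each workflow pulls consistent settings per task type. The preset names and values below simply mirror the guideline table and are illustrative, not official defaults of any platform:

```python
# Hypothetical presets mirroring the guideline table above
PRESETS = {
    "data_extraction":    {"temperature": 0.1, "top_p": 0.5},
    "summarization":      {"temperature": 0.4, "top_p": 0.6},
    "customer_support":   {"temperature": 0.5, "top_p": 0.7},
    "content_generation": {"temperature": 0.8, "top_p": 0.9},
    "brainstorming":      {"temperature": 1.0, "top_p": 0.95},
}

def params_for(task_type: str) -> dict:
    """Look up sampling parameters, falling back to conservative defaults."""
    return PRESETS.get(task_type, {"temperature": 0.2, "top_p": 0.6})

print(params_for("summarization"))
print(params_for("unknown_task"))  # falls back to the conservative default
```

The returned dictionary can then be passed straight into your LLM call, so the "start conservative, adjust gradually" loop only means editing one table instead of hunting through every workflow.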
What's your experience with AI parameters in automation workflows? Have you found particular settings that work well for specific tasks?
I would love to hear your thoughts.