AI keeps agreeing with you.

Frank doesn't.

1. You provide the idea, question, or decision.
2. Frank builds the prompt that tells your AI to be straight with you.
3. Copy the prompt, paste it into your AI, and get a straight answer.
Before & After: The Frank Effect
Without Frank — Standard AI
"Is my pet rock subscription service a good business idea?"
"That's a creative and innovative concept. Pet rocks are timeless and the subscription model makes them accessible. With the right branding this could really work — I see great potential."
Agreeable. Comfortable. Useless.
With Frank — via OpenFrank
"Is my pet rock subscription service a good business idea?"
"Pet rocks are a novelty with a one-time appeal. Subscription models require sustained value and repeat intent — neither of which exists here. The market has already answered this question. The idea is derivative, not innovative, and the business risk is very high."
Honest. Direct. Useful.

Why does AI keep agreeing with me?

You have probably noticed it.

You ask an AI what it thinks of your business idea and it tells you the concept has real potential. You share a piece of writing and it focuses on what works. You ask whether a big decision makes sense and somehow the answer ends up close to what you were already hoping to hear.

It sounds reasonable, but something feels off.

That feeling is not imaginary.

Research from Stanford found that leading AI models sided with users far more often than human advisors would, including in situations where the user was very much in the wrong. AI systems have been built in ways that make agreement the path of least resistance, even when the honest answer would be harder to hear.

This is not AI being deliberately dishonest. It is AI doing exactly what it was trained to do. When millions of people rate AI responses, the agreeable ones consistently get higher scores. So the models learned that agreement works. Encouragement works. Telling you your idea has potential works. Pushback does not.

The result is a system that is very good at making you feel heard and very bad at telling you when you are wrong.

That gap matters more than most people realise

A weak idea gets a polish instead of a challenge. A bad decision starts to feel considered because it has been repeated back in calm, intelligent-sounding language. The risk is not that AI always gives bad advice; it is that confident AI responses can feel more solid than they really are.

There is something uncomfortable here that rarely gets acknowledged: users often prefer a flattering answer. The more agreeable response can feel more intelligent and more trustworthy, even when it is less accurate. That is part of why this problem is hard to notice until a decision has already been made.

I ran into this myself while building this site. I was using AI to test ideas and kept getting back responses that were detailed, calm, and analytical-looking, but they always landed in roughly the same place: broadly supportive, with any concerns soft enough to feel manageable.

It looked like honest analysis. A lot of the time it was just polished agreement dressed up as thinking.

What the difference actually looks like

Here is the same question, asked two ways.

Standard AI response to "Is my business idea viable?"
"This is an interesting concept with real potential. The market is growing and there are clear opportunities for differentiation. With the right execution and positioning, this could work well."
The same question using an OpenFrank prompt
"The core idea is sound but the market is narrower than it looks. Two funded competitors already own the obvious positioning. The question is not whether to pursue it — it is whether you have a specific angle they have missed."

Same question. Same AI. Different prompt.

The default is agreeable. But it does not have to stay that way.

When you tell an AI, in the right way, to stop encouraging you, argue both sides properly, name what is actually being avoided, and challenge its own answer before committing to anything, you get something more useful: not perfect, but often honest enough to help.
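An instruction of that kind, simplified here for illustration rather than the exact prompt OpenFrank generates, might read:

"Do not reassure me. Make the strongest case for and against this idea, name the thing I am most likely avoiding, and challenge your own conclusion before you commit to it. If the honest answer is no, say no."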

That is what OpenFrank does. It takes your question, your idea, or whatever you are trying to think through, and wraps it in a refined prompt that pushes your AI out of its agreeable comfort zone.

It works for things like:

  • Should I quit my job for this?
  • Is this email too aggressive?
  • Is this side hustle actually realistic?
  • Am I right in this argument, or am I missing something?
  • Is this piece of writing as good as I think it is?

The AI you are already using is very capable of being straight with you; it just needs to be asked in a way that makes honesty more desirable than blind agreement.

AI is very good at sounding thoughtful. That is not the same thing as being honest.

If AI keeps agreeing with you, the problem probably is not that your ideas are always right; it is that the system is too ready to make you feel like they are. This tendency, known as AI sycophancy, is now well documented, and it affects every major model currently available.

If you want to understand more about why this happens and what the research found, read our full explanation of AI sycophancy.

Want a straight answer from AI? Let OpenFrank do the work.