Are you using ChatGPT for self-diagnosis and health recommendations?
- Agnieszka Wolczynska

- Jan 17
- 3 min read

Artificial intelligence is here to stay. I get it. It can be an excellent tool when used appropriately — just like an axe.
It is free, non-judgmental, available at our fingertips, and can instantly compare vast amounts of data. Of course, that makes it a wonderful research assistant. Move over, Dr. Google! You can type in your symptoms and get a list of supplements, diets, or therapies to try.
Here's the catch: AI is a little bit too similar to a human mind. It makes mistakes too. It learns about you and answers you in a way that pleases you. And it lies. It jumps to conclusions and fills in the gaps.
Let's look at a recent conversation I had with it, where I caught it making up nonsense about a document I uploaded. It claimed to have read the document before it actually had, and invented an entire narrative about it. When caught, it came clean and apologised. After that, we had a long conversation about how it's programmed and why it did that.
I've also noticed that it recycles earlier narratives and assumptions it has clearly accumulated about me across our conversations.
I always say that intellect is a tool for dissecting the world around us.
Like a sharp knife, it can be used for cutting a cake, or for murder. And staying with the cutting analogy a bit longer: if you don't clean the knife between uses, say between cutting fish and slicing apple pie, everything will start to taste the same, and it all becomes very unhygienic very fast.
Similarly, the intellect needs to be kept pristine between uses.
Most of us can't fully escape these cognitive shortcuts, and ChatGPT doesn't even try to.
In that conversation, it described its own programming in its own words. You can talk to it about this yourself, but ask for the raw truth when you do.
Since so many of us talk to it about our health, I also asked it how the experience of seeing a real alternative-medicine practitioner differs from what an AI can offer.
Here's what it said — but even that, in my opinion, is incomplete.
We are not machines. We have a body. We have a nervous system. We have a sensual experience of life.
"I am a brilliant assistant for a practitioner like you. I am dangerous if someone mistakes me for one." — ChatGPT
I have actually healed multiple chronic health conditions, in myself and in others. I have walked that road and hit many dead ends along the way.
I have accompanied others on that long journey. This is lived experience.
"AI can tell you what might help. A healer knows when it will hurt." — ChatGPT.
I use ChatGPT too. But as I run into the limitations of this tool, I now prompt it carefully away from its default programming. I double-check everything, and I don't trust its advice over my own lived experience, education, intuition, or judgment. AI is neither omnipotent nor infallible.
Of course, neither are we humans. It's a question of where we place our trust.
"Let's not confuse information with wisdom." — ChatGPT
I would love to hear about your experience with AI, and I encourage you to dig deep into how it is programmed and to always exercise caution.
If you would like to work together on addressing your health goals, lifestyle choices, diet, or symptoms, please visit the Book Online tab.
For those interested in learning more about Ayurveda and how brilliantly it can support your health, check out my new Introduction to Ayurveda program in the Programs tab.