ReverseTHINKing thoughts
Image created by ChatGPT. Click to enlarge.
Introduction – When technology reflects who we are
Over the past decade, I have published several articles and tutorials (2014–2025) offering alternative perspectives on the Internet and, more recently, on Artificial Intelligence.
#RealWorld and #VirtualWorld are bidirectional.
Each influences the other.
In 2014, I published an article called “Internet the Savant Child,” followed in 2024 by “Internet the Savant Child (PART 2): Will AI Make the Internet and People More Intelligent?” Make sure to read those articles first to better understand the evolution of this reflection.
These publications questioned not only what technology can do, but how and why we choose to use it. I also proposed what I consider the best way to use AI: feeding it with human learning, ethical reasoning, and critical reflection, rather than blindly delegating judgment to machines.
- My AI PracTICE with ChatGPT to Get the BEST Out of It
- To understand AI, one needs first to understand human behaviour
However, this approach reflects only one path—my path. An honest, transparent, and ethically grounded way of interacting with AI.
The uncomfortable question is this:
What happens when AI is no longer trained primarily by human learning, but instead by massive volumes of AI-generated, misleading, ideologically biased, or ethically empty content?
And more importantly:
What happens when those who control or influence AI training are not educators, thinkers, or citizens—but governments, corporations, power structures, or radical ideologies?
At that moment, AI stops being a neutral tool. It becomes a mirror. Not of truth—but of our society’s values, fears, shortcuts, and moral compromises.
This tutorial invites you to step back and apply #ReverseTHINKing: to look not at what AI does, but at what AI reveals about us.
1. Real World versus Virtual World – Who influences whom?
One of the most underestimated dangers of AI is not technical—it is cognitive and cultural.
Originally, the virtual world was meant to reflect the real world. Today, the situation is slowly reversing. Algorithms shape attention. AI-generated content floods social media, search engines, and educational platforms. Opinions, emotions, and even moral judgments are increasingly influenced by systems optimized for engagement rather than truth.
The risk is subtle but profound:
- The Virtual World begins to define what feels normal.
- The Real World adapts its behavior to digital feedback loops.
- Human experience is filtered, simplified, and polarized.
When AI is trained on distorted digital realities, it amplifies them. When humans then rely on that AI to understand the world, a closed loop emerges.
AI does not corrupt society on its own. Society first corrupts the data. AI merely reflects it—at scale and at speed.
This raises a disturbing thought: if AI appears biased, aggressive, manipulative, or unethical, perhaps it is not malfunctioning. Perhaps it is simply holding up a mirror.
2. How can we avoid this? And is it even possible?
Avoiding this scenario is not primarily a technical challenge. It is an educational, ethical, and civic challenge.
Let us be clear: total prevention is impossible. AI will always be influenced by those who train it and those who use it. The question is not if distortion will occur, but how resilient society is against it.
Possible counterweights exist:
- Critical education instead of passive consumption
- Human-in-the-loop systems instead of blind automation
- Ethical frameworks embedded not only in code, but in institutions
- Transparency about training data, objectives, and limitations
But none of these tools work without one essential ingredient:
An educated citizen capable of thinking independently.
This is where #ReverseTHINKing becomes essential. It trains individuals to:
- Question first impressions
- Detect manipulation and emotional shortcuts
- Reverse dominant narratives
- Ask who benefits from a given “truth”
Learn more about #ReverseTHINKing in my tutorials in EN and FR:
Without these skills, AI safety discussions remain superficial. With them, AI becomes what it should be: a tool, not an authority.
3. A danger for Democracy—even with safeguards?
Yes. And denying it is no longer naïve—it is irresponsible.
Democracy does not collapse overnight. It erodes quietly, through comfort, delegation, and intellectual fatigue. AI accelerates this erosion by offering something extremely seductive: answers without effort.
Let us be brutally honest.
When citizens stop verifying information because an algorithm already summarized it… When opinions are formed by AI-curated feeds rather than lived experience and debate… When political complexity is reduced to prompts, scores, and predictive models…
Democracy does not get attacked.
It gets outsourced.
Consider the real risks:
- Hyper-personalized political narratives designed to emotionally lock individuals into belief bubbles
- Automated persuasion that adapts faster than human critical thinking
- Disinformation produced at industrial scale, faster than any democratic correction mechanism
- Citizens slowly accepting algorithmic judgment as more “neutral” than human disagreement
The most dangerous sentence in a democracy is no longer:
“The government decided.”
It is:
“The AI said so.”
At that moment, responsibility disappears. Accountability blurs. Civic courage weakens.
Even with safeguards, ethical committees, and regulations, AI remains vulnerable to misuse because democracy itself relies on effort—and effort is exactly what convenience-driven systems undermine.
The real threat is not authoritarian AI imposed from above.
The real threat is a society that willingly trades:
- Critical thinking for convenience
- Debate for personalization
- Civic responsibility for algorithmic comfort
A democracy populated by passive users will not be saved by better algorithms.
It will simply be managed—until it no longer needs citizens at all.
Conclusion – AI as a moral stress test for society
AI is not the beginning of our problems. It is the accelerator.
It accelerates what already exists:
- Our values
- Our prejudices
- Our ethical limits
- Our willingness—or refusal—to think
If AI becomes unethical, manipulative, or destructive, the uncomfortable truth is this:
It learned that from us.
This is why the real challenge of AI is not regulation alone, nor innovation alone—but education. Education that fosters Deep Thinking, Critical Thinking, and Proactive Thinking. Education that teaches citizens to live together (vivre ensemble), to respect democratic principles, and to resist intellectual laziness.
#ReverseTHINKing is not anti-AI. It is anti-blindness.
If AI shows us a distorted mirror of society, the solution is not to break the mirror—but to change what stands in front of it.
The future of AI will not be decided by machines.
It will be decided by the kind of humans we choose to remain.
A Small Reminder:
- #ReverseTHINKing — Learning to Unlearn in Order to Rebuild Better
- Understanding becomes more difficult when the basics haven’t been learned.
- And you — when did you stop questioning what is presented to you as “obvious”?
- Is it time to relearn how to think… before someone else does it for you?
#ReverseTHINKing is essential for regaining control over our attention, our choices, and our digital autonomy.
To explore further:
ReverseTHINKing: A Necessity for Rethinking Our Place in a Changing Society?
Final Call:
What if we reactivated the filter between our two ears — also known as the “brain” — to get those grey cells moving again?
Further Reading & Related Tutorials
- 21st Century Innovative Technologies and Developments – AI
- The Synthesizing Mind in Education: Tackling the Challenges of a Changing World
- A Modern Ethical Framework for a Changing World: Rebuilding Lost Wisdom and Knowledge
- ChatGPT Free for Windows Desktop Users – Part 73
My curated resources on Scoop.it:
Check also my curation and EDU-related articles on my blog:
- https://www.scoop.it/topic/21st-century-learning-and-teaching?tag=AI
- https://www.scoop.it/topic/21st-century-innovative-technologies-and-developments?tag=AI
| The author Gust MEES is an ICT Course Instructor (andragogical/pedagogical training), Member of the Advisory Board of “Luxembourg Safer Internet” (LuSI), now called BEESECURE, Official Partner (Consultant) of the Ministry of Education in Luxembourg for the project “MySecureIT”, and Official Partner of the Ministry of Commerce in Luxembourg for the project “CASES” (Cyberworld Awareness and Security Enhancement Structure). |
Keywords used to create this tutorial:
#ChatGPT #AI #ReverseTHINKing #CriticalTHINKing #ProactiveTHINKing #DeepTHINKing #ETHICS #Democracy #ModernEDUcation #DigitalAwareness #EllbowSociety #UnderstandingAI #SynthesizingMind #RealWorld_VirtualWorld #Liberalism.