To understand Artificial Intelligence, we must first understand human behaviour, because AI reflects, accelerates, and magnifies the way we think, act, and decide in the digital world.
Image created by ChatGPT. Click to enlarge.
Introduction – A warning we already received
In 2014, long before Artificial Intelligence became a daily topic, I published a tutorial explaining the Internet from a different angle. The core idea was simple, yet disturbing: the Internet is not an autonomous entity. It is a mirror. It grows, learns, mutates, and deforms exclusively through human input. Data, opinions, emotions, images, lies, truths, anger, fear, curiosity, and ignorance – all of it comes from us.
2014 tutorial:
Years later, AI did not change this reality. It amplified it.
If the Internet was a reflection of collective human behaviour, AI is now the accelerator of that reflection. This is why understanding AI without understanding human behaviour is not only incomplete, it is dangerous. The fear of an Orwellian future is not science fiction; it is a logical consequence when technology evolves faster than ethics, education, and responsibility.
2025 tutorial:
This tutorial proposes a clear path: before asking what AI can do, we must ask who we are and how we behave.
1. Understanding WHAT the Internet is
The Internet is often described as a network, a tool, or a digital space. These descriptions are technically correct but intellectually insufficient.
The Internet is a human-generated ecosystem. It has no original thoughts, no intrinsic values, and no moral compass. Every website, comment, algorithmic signal, click, and trend exists because a human being produced or reinforced it.
What makes the Internet powerful is not technology alone, but scale and repetition. When behaviours are repeated millions of times, they become norms. When emotions are amplified by virality, they become perceived truths. When simplification dominates, nuance disappears.
Already in 2014, the warning signs were visible:
- Popularity replaced credibility
- Speed replaced reflection
- Emotion replaced reasoning
The Internet did not create these tendencies. It merely revealed and magnified them.
2. Understanding WHAT AI is – and its consequences through OUR input
Artificial Intelligence does not think. It processes.
AI systems are trained on massive datasets produced by humans. Language models learn from our texts, our questions, our conflicts, our biases, our humour, and our fears. If society is polarized, AI learns polarization. If society is superficial, AI learns superficiality. If society is ethical, AI can learn ethics – but only if ethics are consistently present in the data.
This leads to a fundamental truth that is often ignored:
AI is not becoming dangerous by itself. It becomes dangerous when it optimizes unexamined human behaviour.
The real risk is not AI autonomy, but human abdication of responsibility. When decisions are delegated blindly to systems we do not understand, control is not lost by force, but by comfort.
This is where historical warnings resurface. Orwell did not describe a future ruled by machines, but a society where people stopped questioning power, language, and truth. AI can become the perfect instrument for such a society – not because it wants to, but because we allow it to.
3. HOW to use AI wisely by knowing how it works
Using AI wisely starts with demystification.
AI is neither an oracle nor an enemy. It is a tool shaped by design choices, training data, and usage patterns. Wise usage requires three competencies:
1. Cognitive awareness
Users must understand that AI outputs are probabilistic, not factual guarantees. Confidence in tone does not equal correctness in content.
2. Ethical awareness
Every interaction trains systems indirectly. The way we ask questions, what we normalize, and what we accept as answers matters.
3. Educational responsibility
Teaching people how to use AI without teaching them how to think critically is a systemic failure.
AI literacy must therefore include:
- Understanding limitations
- Detecting bias
- Questioning outputs
- Maintaining human judgment
Without these elements, AI becomes an amplifier of ignorance rather than a catalyst for intelligence.
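The first competency above, understanding that AI outputs are probabilistic, can be illustrated with a toy sketch. This is not a real language model, just weighted random sampling over hypothetical next-token probabilities, but it shows why a confident-sounding answer is not a guaranteed one:

```python
import random

# Toy sketch (not a real LLM): a language model assigns probabilities
# to possible next tokens and samples one. The output is probabilistic,
# not a factual guarantee.
next_token_probs = {
    "Paris": 0.90,   # likely correct continuation
    "Lyon": 0.07,    # plausible but wrong
    "Berlin": 0.03,  # confidently stated, still wrong
}

def sample_next_token(probs, rng):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)
answers = [sample_next_token(next_token_probs, rng) for _ in range(10)]
print(answers)  # mostly the likely token, occasionally a wrong one
```

Every sampled answer is delivered in the same confident tone, yet some are simply the less likely branches of a probability distribution. That is the practical meaning of "confidence in tone does not equal correctness in content."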
4. #ReverseTHINKing – breaking the automation of thought
#ReverseTHINKing is not about rejecting technology. It is about rejecting mental automation.
In a world where algorithms predict preferences, suggest opinions, and optimize attention, the greatest risk is not surveillance, but intellectual passivity.
#ReverseTHINKing invites learners to:
- Question defaults
- Invert assumptions
- Slow down decision-making
- Reintroduce ethics into efficiency
Applied to AI, this means:
- Asking why an answer is given, not only what the answer is
- Exploring what is missing, not only what is present
- Understanding that convenience often hides trade-offs
This approach reconnects technology with civic responsibility and restores the human role as decision-maker, not just consumer.
5. Conclusion – AI will not save or destroy us, but it will reveal us
Artificial Intelligence is a mirror held closer than ever before. It reflects our intelligence, our creativity, our prejudices, and our moral boundaries.
If we fear an Orwellian future, the solution is not to fear AI, but to educate humans. Democracy, freedom, and peaceful coexistence have never depended on tools, but on values.
Understanding AI therefore begins with understanding human behaviour:
- How we communicate
- How we consume information
- How we accept or resist responsibility
The future of AI is not written in code alone. It is written in classrooms, families, institutions, and everyday choices.
The question is no longer what AI will become, but what kind of society we are training it with.
That answer still belongs to us.
💡 A Small Reminder:
- #ReverseTHINKing — Learning to Unlearn in Order to Rebuild Better
- Understanding becomes more difficult when the basics haven’t been learned.
- And you — when did you stop questioning what is presented to you as “obvious”?
- Is it time to relearn how to think… before someone else does it for you?
#ReverseTHINKing is essential for regaining control over our attention, our choices, and our digital autonomy.
To explore further:
🔗 ReverseTHINKing: A Necessity for Rethinking Our Place in a Changing Society?
🧠 Final Call:
What if we reactivated the filter between our two ears — also known as the “brain” — to get those grey cells moving again? 😉
Further Reading & Related Tutorials
My curated resources on Scoop.it:
Check also my curation and EDU-related articles on my blog:
- https://www.scoop.it/topic/21st-century-learning-and-teaching?tag=AI
- https://www.scoop.it/topic/21st-century-innovative-technologies-and-developments?tag=AI
The author, Gust MEES, is an ICT course instructor (andragogical/pedagogical ICT trainer), a member of the Advisory Board of “Luxembourg Safer Internet” (LuSI), now called BEESECURE, an official partner (consultant) of the Ministry of Education in Luxembourg for the project “MySecureIT”, and an official partner of the Ministry of Commerce in Luxembourg for the project “CASES” (Cyberworld Awareness and Security Enhancement Structure).
Keywords used to create this tutorial:
#ChatGPT #AI #ReverseTHINKing #CriticalTHINKing #ProactiveTHINKing #DeepTHINKing #ETHICS #Democracy #ModernEDUcation #DigitalAwareness #EllbowSociety #UnderstandingAI #SynthesizingMind #RealWorld_VirtualWorld #Liberalism