Truths about AI every human must know

Based on years of my research and work, let me share what I have found:
- Artificial Intelligence (AI) is not ‘intelligent’ and Machine Learning has no ‘learning’.
- Most people now equate AI with chatbots (OpenAI's ChatGPT, Gemini, Claude, DeepSeek), and their view of AI is rooted there. But AI is a much, much wider field of science and engineering, of which LLMs are one small part.
- Chatbots are powered by Large Language Models (LLMs), which are built by training them on vast amounts of internet data. They use advanced mathematical and statistical techniques to find patterns in that data, and later use those patterns to respond to users' queries.
- LLMs are not sentient. LLMs have no inherent intelligence. LLMs have no agency. LLMs have no brains. LLMs have no souls. LLMs are pieces of marvellous engineering and technology, and that's it. And ditto for ‘Agents’, ‘Agentic AI’, and protocols like ‘MCP’. No soul, no mind, no agency … only mechanical probabilistic regurgitation. LLMs do not even fact-check their own answers and outputs. LLMs do not even know when they hallucinate or lie, which they often do.
- When humans ask LLMs questions, since the models have been fed the entire internet, they are able to ‘answer’ on a wide range of topics – history, coding, medical science, engineering, personal relationships, movies, personalities … basically anything available on the internet. In fact, they answer by probabilistically eliminating the less probable alternatives, token by token by token. Remember, the next time you ask ChatGPT a question, it is simply stitching together small pieces from its corpus, bit by bit, to make some sense. But it doesn't know what ‘sense’ is. It cannot feel anything. Yet you – the human – start feeling that the LLM feels something. That is anthropocentrism, which leads to anthropomorphism.
- AGI – Artificial General Intelligence – is the type of AI that would have a human-like mind, brain and sentience, so that it could recursively self-improve and recreate copies of itself, theoretically without limit. The fact is: humanity is nowhere near building any AGI. Claims that LLMs will take us to AGI are absolutely fake. They are ‘AI Snake Oil’ peddled by large AI firms to keep the funding pipeline flowing. AGI won't come from LLMs. (I have read a lot of relevant literature on this, including many recent reports, and I find huge assumptions, a lot of fear-mongering, and little real-world proof for anything of that sort.)
- Intelligence is a hugely contested term. Copious scientific literature and research points to the extremely complicated nature of intelligence, especially human intelligence on planet Earth. (Of course, there are other types – octopus intelligence, ant intelligence, whale intelligence – of which we have little subjective clue, if any.) But there is broad agreement on the core idea that intelligence as humans know it must fulfil the 4E cognition standard: embodied, embedded, enacted and extended. LLMs have none of these four. So at the most rudimentary level, if we ever invent any intelligence outside of human bodies, we have to (i) give it a body, (ii) embed it in a lived environment, (iii) have it learn every moment from live interactions (enacted), and (iv) have its impact extend into the environment and vice versa. If you follow what Rich Sutton and David Silver have told us repeatedly, you know this already.
- The reason people are so easily fooled into thinking ChatGPT is intelligent is anthropomorphism and anthropocentrism. They simply see their own mirror image in it, because they are feeding their innermost thoughts into it. ChatGPT just produces probabilistic outputs (based on the corpus it was fed), and humans begin projecting a human image onto it. It is dangerous that the companies owning these chatbots gently keep feeding the AGI idea, further cementing the lie. (Yes, I am aware of the many detailed recent reports about how AI will morph into AGI any time now and take over the world – I have studied those carefully, and I don't believe what they say.)
- The growth of robotics tells us the limits of what we can do about intelligence. Robotics, despite decades of effort, has not reached the stage of creating a generalised, autonomous, thinking machine to date. All we see are pre-trained, domain-specific, narrowly engineered use-cases. A human child of five is more generally capable than the best robot out there. General intelligence is not programmable.
- Why is this so? Because evolution has bestowed on mankind a gift earned through at least 600 million years of painful evolution. Our brains built those networks, those channels, those learning algorithms, bit by bit, by paying a real-world price. The subconscious machinery behind System 1 thinking is a miracle. Moravec's Paradox summarises it beautifully: in AI, the hard things are easy and the easy things are hard. Whatever humans do easily (via subconscious, below-conscious effort) simply cannot be done by machines today, while whatever humans find tough (conscious reasoning, e.g. chess) machines can do easily, because those tasks are neatly programmable. So the next time an AI beats you at chess, it's because chess has neat rules. The real world doesn't. There, you will rule.
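The ‘token by token’ point above can be made concrete with a toy sketch. This is emphatically not a real LLM – the tiny probability table, the vocabulary, and the `generate` function are all invented for illustration – but the mechanism is the same in spirit: generation is just repeated sampling from conditional next-token distributions, with no understanding anywhere in the loop.

```python
import random

# Toy illustration (NOT a real LLM): a hand-made table of next-token
# probabilities. A real model learns billions of such conditional
# distributions from its training corpus, but the generation loop is
# the same idea: draw the next token from a distribution, append, repeat.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "sky": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.5, "ran": 0.5},
    "sky": {"is": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
    "is": {"blue": 1.0},
}

def generate(start: str, max_tokens: int = 5, seed: int = 0) -> str:
    """Generate text token by token by sampling from the table.

    There is no understanding here: each step merely draws one token
    from a conditional probability distribution and appends it.
    """
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:  # no known continuation: stop generating
            break
        choices, weights = zip(*dist.items())
        tokens.append(rng.choices(choices, weights=weights, k=1)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat down" (depends on the seed)
```

Every ‘sentence’ this produces is locally plausible yet produced with zero comprehension – which is exactly the point being made above about LLM outputs.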
Remember, the godfather of AI, Geoffrey Hinton, claimed in 2015 that radiologists would soon be out of work, and Elon Musk promised full self-driving cars were just around the corner. Yet in 2025, we have thousands of radiologists driving to work every morning in ICE vehicles. No one has built ‘true FSD’ to date, because of the irreducibly complex real world out there, which only human minds have mastered navigating properly. Yes, there are narrow use-cases, but a general solution evades us.
So keep your mind open. Be proud of your human heritage. You are a wonderful, generally capable machine created by nature, with a real soul, real feelings and real sentience. You have subjective experiences – qualia. No machine has them. Perhaps no machine ever will – I don't know!
To tell the truth, AI and ML fields have a great future, in narrow domain-specific applications everywhere. And the story has just begun.
Be proud of being a Homo sapiens!
So why did I start with ‘Artificial Intelligence (AI) is not intelligent and Machine Learning has no learning’? Because intelligence is essentially a grounded, extended, real-world phenomenon. And learning is where genuine subjective experience happens. (Attaching an LLM-generated image suitable for this post – when ChatGPT drew this for me, it had no feeling, no sense, no soul, no conscious idea of what it was doing.)


