The field of artificial intelligence has often used what we know about human intelligence as a template for designing its systems. But for the most part, AI has drawn on a view of the mind in social isolation, disconnected from the community of other minds. Yet one of the big insights of modern psychology—for example, in work on embodied cognition—is that thinking doesn’t just take place between an individual’s ears. Human cognition is shaped by social and cultural forces. What might it look like for AI models to take these forces into account?
"Learning, no matter how much, will not teach a person to understand." (Heraclitus, 450 BC)
AI learns the same way animals learn, the same way humans learn -- because we are, in part, AIs. What AI cannot do is understand. To be sure, humans too have been struggling:
"Even though the Logos [the Understanding] always holds true, people fail to comprehend it even after they have been told about it." (Heraclitus)
"In [the Logos] was life, and the life was the light of men. And the light in the darkness shined; and the darkness comprehended it not." (John 1:5)
And we pay dearly for our confusion: “No man chooses evil because it is evil; he only mistakes it for happiness, the good he seeks.” (Mary Wollstonecraft)
The reason we struggle to develop the capacity for understanding is that we don't teach it in schools. We dump knowledge on children, hoping they will somehow figure out what to do with it. A few will discover that capacity in themselves, but most won't, and they will live their lives pretty much as AIs. Still, at least we all have the potential for understanding.
AI, in its current incarnation, may simply lack the hardware. But even that aside, how could we teach AI to understand (or give it the necessary hardware) when we don't know how to awaken this capacity, with any consistency, in our own children, even though our ancestors spent five million years evolving it?
Great essay. I agree with your argument, but I am resigned to the fact that ChatGPTs talking to and learning from each other is just a couple of coffees away for their developers and a few years (or months) of implementation away from bearing fruit. It's bound to happen, as it already has in the examples you describe. The next question we should ask ourselves is whether and how to control and interact with the new artificial civilisation we are creating, and also who "we" is in a world moving in anything but the direction of unity.
Thanks! I agree: I think we're gonna wake up one day to find this is the latest breakthrough. That day doesn't seem super far away. That's an interesting way of putting it, though. I think people are starting to ask at what point we need an ethics of AI that incorporates the AI's well-being... but kinda what you're saying is that soon enough we'll need an ethics of AI that appreciates, like, the separate cultural practices of this potentially separate AI civilization. That thought had never occurred to me. Very interesting! Let me know when you find the answer.
Interesting insight on human development and AI.
"Learning, no matter how much, will not teach a person to understand." (Heraclitus, 450 BC)
AI learns the same way the animals learn, the same way humans learn -- because we are, in part, AIs. What AI cannot do is to understand. To be sure, humans too have been struggling:
"Even though the Logos [the Understanding] always holds true, people fail to comprehend it even after they have been told about it." (Heraclitus)
"In [the Logos] was life, and the life was the light of men. And the light in the darkness shined; and the darkness comprehended it not." (John 1:5)
And we pay dearly for our confusion: “No man chooses evil because it is evil; he only mistakes it for happiness, the good he seeks.” (Mary Wollstonecraft)
The reason, we struggle to develop the capacity for understanding is that we don't teach it in schools. We dump knowledge on children hoping that they will somehow figure out what to do with it. And a few would discover that capacity in them, but most won't, and they will live their lives pretty much as AIs. Still, at least we all have the potential for understanding.
AI, in its current incarnation, may simply lack the hardware. But even that aside, how could we teach AI to understand (or give it the necessary hardware) when we don't know how to awake this capacity, with any consistency, in our own children? Even though human ancestors had spent five million years evolving it.
Great essay. I agree with your argument but I am resigned to the fact that chatGPTs talking to and learning from each other is just a couple of coffees away of their developers and a few years (months) of implementation to bear fruits. It's bound to happen, as it happened already in the examples you describe. The next question we should ask ourselves is whether and how to control and interact with the new artificial civilisation we are creating, and also who is "we" in a world that goes all but in the direction of unity.
Thanks! I agree: I think we're gonna wake up one day to find this is the latest breakthrough. That day doesn't seem like it's super far away. That's an interesting way of putting it, though. I think people are starting to ask questions about what point do we need an ethics of AI that incorporate's the AI's well-being... but kinda what you're saying is that soon enough we'll need to have an ethics of AI that appreciates, like, the separate cultural practices of this potentially separate AI civilization. That thought has never occurred to me. Very interesting! Let me know when you find the answer.