The field of artificial intelligence has often used what we know about human intelligence as a template for designing its systems. But for the most part, AI has drawn on a view of the mind in social isolation, disconnected from the community of other minds. Yet one of the big insights of modern psychology—for example in work on embodied cognition—is that thinking doesn’t just take place between an individual’s ears. Human cognition is shaped by social and cultural forces. What might it look like for AI models to take these forces into account?
Great essay. I agree with your argument, but I am resigned to the fact that ChatGPTs talking to and learning from each other is just a couple of coffees away for their developers, plus a few years (months) of implementation to bear fruit. It's bound to happen, as it already has in the examples you describe. The next question we should ask ourselves is whether and how to control and interact with the new artificial civilisation we are creating, and also who "we" is in a world that is going anywhere but in the direction of unity.
Interesting insight on human development and AI.