Social science can no longer ignore the social actions of intelligent machines

Intelligent machines and AI interfaces are increasingly embedded in a variety of social contexts. In turn, these machines are themselves deeply shaped by the social and cultural environment of their human creators. Milena Tsvetkova calls on social scientists to recognize and engage with the social properties of these new technologies.


At 2:32 PM on May 6, 2010, the Dow Jones Industrial Average fell nearly a thousand points, temporarily wiping $1 trillion from the stock market in the first financial “flash crash.” Early on Sunday morning, September 17, 2023, twenty vehicles passing through the West Campus neighborhood of Austin, Texas, suddenly came to a halt, causing a massive traffic jam. And in the six weeks leading up to the 2024 UK general election, ten accounts posted tweets that garnered 150 million views on X.

The common element in these events is that they all involve intelligent machines – algorithms, bots, or robots. The traders that instantly responded to an erroneous large sell order and caused the flash crash were high-frequency trading algorithms; the vehicles that failed to communicate as they maneuvered around each other and ended up in a deadlock were self-driving; and the X accounts that flooded UK online conversations with hateful and controversial tweets were bots.

These anecdotes remind us that smart machines have infiltrated our daily lives and are insidiously shaping our social reality, sometimes in undesirable ways. Lately, we have been hearing more and more warnings that the sudden rise of superintelligence will bring about the end of humanity as we know it. Much of this fear envisions AI as a single god-like entity – omniscient and omnipotent. But in fact, AI is a crowd – plural, diverse and, let’s be honest, not that intelligent yet.

Social scientists must approach intelligent machines as social actors equal to humans.

In a recent contribution (open access version available here), a team of researchers, including myself, proposed a radical idea: social scientists should approach intelligent machines as social actors equal to humans. We social scientists can adapt and reapply social science theories and empirical research methods to study today’s society of humans and machines. Instead of imagining doomsday scenarios for the distant future, we should work to understand and solve the real social problems we are facing now.

Established theories from social psychology and sociology that describe the relationship between two individuals or two groups – such as outgroup bias, authority bias, and personification – can be adapted to model the relationship between a human and a robot, or between a group of humans and several robots. For example, research has found that when humans interact in groups rather than as individuals, they are more susceptible to the “us versus them” effect and are more likely to compete with and bully robots than other humans.

Similarly, social scientists can adapt and extend methods from network science and the study of complex systems to examine the collective dynamics and patterns that emerge in networks and communities composed of humans and robots or bots. Together with collaborators, I have done this to investigate the frequency and consequences of unplanned interactions between editing bots on Wikipedia. Others have studied the impact of bots on the spread of political misinformation and inflammatory content on social media networks.

Social scientists can adapt and extend methods from network science and the study of complex systems to examine the collective dynamics and patterns that emerge in networks and communities composed of humans and robots or bots.

There is much more work to be done. We social scientists must build a new, incremental and cumulative, theoretically informed and empirically grounded social science of humans and machines. The time for this is now, while existing bots and robots are still relatively simple, because even simple behaviors can produce unintended consequences. I urge social scientists to catch up with recent and ongoing advances in AI, which have already produced algorithms that behave in unexpected and inexplicable ways.

What should be done? First, we need to improve training in computation and computational methods for social scientists. Algorithms, bots, and robots have become an indelible part of social life, and social scientists must be able to speak their “language” to study and understand them. Second, we need new types of interdisciplinary research and researchers. Susan Calvin, Isaac Asimov’s famous robopsychologist, might be an inspiration here: we would also benefit from robo-sociologists, robo-anthropologists, robo-demographers, robo-economists, robo-geographers, robo-historians, and robo-political scientists.

A social science that approaches intelligent machines as autonomous human-like actors will not only improve our understanding of the social world, but also inform AI design and policy. Self-driving vehicles are trained on data from human driving and human traffic, so it is not surprising that they end up in mutual paralysis when they appear in large numbers; training algorithms on data from human–machine and machine–machine interactions will help them integrate better on the road.

A social science that approaches intelligent machines as autonomous human-like actors will not only improve our understanding of the social world, but also inform AI design and policy.

Culture should also play a role in the design of self-driving cars and personal-assistant robots, among many other applications of this technology. People’s perceptions and judgments of autonomous cars depend on their age, environment, and personality traits, as well as their nationality. Machines, too, possess culture: their decision-making and behavior reflect the culture of their designers, and they always operate in a specific cultural context. Social scientists must rise to the occasion to shape and lead the conversation about culture in AI design.

Increasing social connectedness and accelerating developments in AI make the study of social systems of humans and intelligent machines a challenging enterprise. It is, however, an enterprise that will be essential for a better human future. To prevent financial crashes, improve road safety, and reduce political misinformation … SOCIAL SCIENTISTS WANTED.


This post is based on the author’s co-authored article, A New Sociology of Humans and Machines, published in Nature Human Behaviour.

The content created on this blog is for informational purposes only. This article gives the views and opinions of the author and does not reflect the views and opinions of the Impact of Social Science blog, nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image credit: Yutong Liu & Kingston School of Art, Best AI Images, Talking to AI 2.0, (CC-BY 4.0)

