The advancement of technology has been a defining part of human history, especially since the industrial revolution, which altered global economics, politics, society and the natural environment.
Technology’s advancement has mostly been driven not by popular consensus but by economic factors, political considerations and individual ingenuity. Because of this, one does not need to be a Luddite or a Futurist to be unsettled by the changes technology can bring.
ChatGPT, a chatbot that can generate seamless imitations of human writing by analyzing previous forms of writing, took the internet by storm after its launch in November 2022.
According to The Washington Post, some people pointed out that its design could easily be used for cheating, which has likely already happened. According to CBS News, ChatGPT, with its mimicry, could automate writing jobs including copywriting, legal writing, programming and potentially — according to The New Yorker — journalism. According to The Independent, AI has created similar worries for the visual art world.
Beyond creative work, institutions and individuals are debating AI and automation in other arenas, from AI lawyers to robot police dogs and even AI dating. We may be lonely, but AI dates cannot fix alienation.
There is a tendency to see AI as more objective than a person, but these new technologies are neither objective judges of character nor tortured artists. ChatGPT and AI art generators cannot create anything original because they can only plagiarize from existing media, and machines are only as objective as their programmers’ input.
According to The Guardian, in 2016 an AI was used to determine the winners of a women’s beauty contest. The program’s creators fed the machine examples of women they considered beautiful so it had a perspective on whom to pick. However, most of the photos they provided were of white women, so the machine reflected that prejudice when picking the contest’s winners.
With this one example, it’s hard not to imagine the potential consequences of similar prejudices taught to more influential technologies. Of course, with the rise of social media algorithms, we don’t have to imagine it.
During the 2010s, Myanmar’s military targeted the Rohingya people for genocide, killing and expelling many of them. According to Amnesty International, support for the genocide among segments of the country was possible because Meta’s — then Facebook’s — algorithms amplified anti-Rohingya hate and propaganda to the Burmese people. Now, many Rohingya refugees are suing Meta over its culpability.
Meta’s algorithms are still being used to spread propaganda in Myanmar, and also in Ethiopia. American law enforcement has also been adopting AI, including the infamous robot dogs as well as predictive policing algorithms that have been shown to operate through racial biases.
ChatGPT also absorbs the biases of the material it plagiarizes. According to The Intercept, when given prompts on national security issues, it recommended targeting Muslims and torturing people of certain nationalities.
If robots are forced into every facet of our lives and society — which they may be — we should remember that they are not objective soothsayers. They should not be put on a pedestal, looked up to as a mentor or coveted as a false idol.