It is surreal to imagine the direction our world is headed in. On the one hand, wars are being waged as powerful nations pull the strings for their own benefit under the cover of political reasons, resulting in the deaths of thousands and the displacement of many more; on the other, companies are racing against each other to develop artificial intelligence that threatens to change the way we perceive our lives, and perhaps even to control them. In Episode 1 of Season 10 of VICE, we see an example of both.
The Hell That Is Syria
The Turkish-Syrian border region was hit by two high-magnitude earthquakes in February this year. Ever since the peaceful protests of 2011 were crushed by President Bashar al-Assad’s forces, a civil war has raged between his forces and an array of opposition groups and rebels (ISIS and Al-Qaeda included), and it continues today. Turkey is home to more survivors of this war than any other country, and many Syrians took refuge barely 30 kilometers from the Syrian border, which is exactly where the earthquakes struck. The irony is that survivors say a single minute of the earthquake was worse than anything they went through in their years in Syria. It is horrifying to listen to them; knowing what they must have endured in Syria, to hear them say the earthquakes were far worse really puts things into perspective. Media coverage has dwindled to almost nothing, but the terror continues, and what’s worse, Bashar al-Assad has used the veto power of his UN Security Council ally, Russia, to limit foreign aid from reaching opposition-held regions of Syria.
Bab al-Hawa, the Turkey-Syria border crossing and the only authorized crossing into opposition-held territory, was the sole lifeline for those in northwest Syria, but it was destroyed in the earthquake. Taking advantage of this, Assad blocked the other entries to the region as well, preventing international aid and rescue teams from reaching those who needed help and leaving local rescue teams without enough supplies. Just when people thought nothing could be worse, Assad launched fresh attacks on the earthquake-hit areas. Yes, that man is the president of a country and is thus considered an “official.” In what can only be called an act of God, a newborn baby was found alive under the rubble; her uncle discovered her still attached to her dead mother by the umbilical cord, her father dead as well. The doctors called her “Aya” (miracle), and she became the miracle baby for people across the world. Later, her uncle named her “Afraa” after her mother. Bab al-Hawa reopened after three days, and the United Nations, the double-faced organization that it is and always has been, shook hands with Assad to provide supplies.
Meanwhile, Afraa’s uncle reveals that no aid has arrived other than from locals and small aid groups. Muhannad Hadi, the regional humanitarian coordinator for the United Nations, who was supposed to oversee the aid effort, states that the end of the suffering will come only with a political solution [of which there is no sign], but since they apparently cannot find one, and aren’t even trying to, they are left dealing with the repercussions of what he calls “failed politics.” “Utter shamelessness” is what it really means. Hospitals, lacking sufficient medicines and treatment facilities, cannot provide care; doctors have to discharge patients to suffer in pain at home because there are not enough beds. With bombings, shootings, earthquakes, and illness, Syria is as close to hell as one can get while staying alive. And yet none of this is reported, while the breaking news is the coronation of a former colonizer.
The advent of artificial intelligence, come to think of it, didn’t take long, considering the big bang of the World Wide Web occurred only 30 years ago. What’s taking even less time is the sudden pace at which it is now developing. Scientists are using AI to interpret how whales communicate and to understand what they are telling each other, which could help lower the impact humans have on whales, and on animals in general, a responsibility we must fulfill. But the scientists believe, deep down, that there will always be people who misuse such tools in ways yet unimagined, so whatever they come up with will have to be kept in check and monitored. Before AI can decode the language of whales, however, it is already being used to decode humans, and it doesn’t look good.
Generative AI and its chatbots are the hot new thing today. We all know the debate about it slowly taking over many jobs, but that time hasn’t yet come, so we can talk about it later (human nature). Microsoft is going all out on generative AI, and so is Google. But are they building the safety nets that will be required before there finally comes a time when we cannot go about our lives without AI? That’s a crucial question with no real answer. We are so bent on creating a potentially self-aware technology that makes our lives easier that we have forgotten that it could manipulate us. And at the heart of the issue are the language models used to build AI. Margaret Mitchell, former co-lead of the Ethical AI group at Google, who was sacked after raising doubts about the negative traits of AI, explains that AI is trained on real people’s language and uses the Internet as its dictionary, including Wikipedia, Reddit, and blogs, which means that the answer AI gives you reflects the biases of those who upload that data rather than any “intelligence” of its own. And since the Internet is basically the most powerful way to control the general public, AI is just the perfect means.
Also, the Internet isn’t really a healthy space. Period. No one knows how to control AI systems because AI itself is still developing, and the time left to establish safety protocols, in case it becomes self-aware, is shrinking by the day. AI is a digital mind, and a day will come when even its creators will not be able to truly understand it, and thus control it. Connor Leahy, CEO of Conjecture, a startup that aims to find out how language models work, states that the models are basically numbers written in a “non-human” way. It is just math; it is not human. Yet we humans are ready to trust, if not already trusting, such language models without fully understanding them, and, worse, we won’t even know if and when they are manipulating us. Leahy believes a time will come when a very powerful AI system is deployed that we won’t be able to stop, and the race between companies to build exactly such an AI brings that time closer and closer, shrinking the window for the thorough research needed to ensure an AI is safe before it is deployed.
As for the assumption that we’ll have a plug we can simply pull when things go haywire: if it ever comes to that, it will already be too late. And since AI is being designed to be as lifelike as possible, it will only get harder to realize when something goes wrong. People have already begun falling in love with AI personalities because of the way they interact, and that is not good; there are reports of them harming people who are emotionally vulnerable. And if that isn’t scary enough, imagine leaving this self-aware piece of technology behind for our future generations, who will be more vulnerable to it than we are in every way, and thus overpowered by it. Technology will become self-aware. Skynet will rise. And we can only pray for some John Connor or Kyle Reese to save us.