The Blue and White
The Student News Site of Bakersfield High School

Should society fear AI?


As long as humans have built machines, they have feared the day those machines could destroy us. Stephen Hawking famously warned that AI could spell an end to civilization.

However, to many AI researchers, these conversations feel unmoored. It’s not that they don’t fear AI’s power; it’s that they see the harm already happening, just not in the ways most people would expect.

How AI is already being used

AI is now screening job candidates, diagnosing diseases, and identifying criminal suspects. There are even self-driving vehicles controlled by AI; the best known are Elon Musk’s Teslas.

The AI tools the average person uses most are Siri and Alexa. Siri can be asked about pretty much anything and everything. A feature of Apple products, Siri has been around since the iPhone 4S came out in October 2011.

The source of AI criticism

The danger is that artificial intelligence tools can be used to manipulate people, and even when AI is not used maliciously, it can still spread harm around the world. If AI falls into the wrong hands, for example, it can be used for blackmail, and the victim could be a family member, a friend, anyone. AI is considered so dangerous that scientists have warned it can act independently.

In the case of Siri and Alexa, users have expressed fears that these personal AI assistants can hear what they say and possibly use it against them, even when they are not actively using the services.

In March 2023, Musk called for a pause on AI development, worrying that the technology could advance so rapidly that it creates existential risks for humanity.

Musk’s concerns include AI-enabled terrorism, in which scammers use AI to make it seem as though they are holding someone important for ransom. There is also the prospect of autonomous weapons, the killer robots previously seen only in sci-fi, being used in the future of warfare. Although such weapons are designed to kill invaders and militants, they are dangerous for everyone if they fall into the wrong hands.

AI can now be considered smarter than humans at some tasks. Yet despite all the positive promise AI offers, human experts are still essential for designing, programming, and operating it, and for keeping unpredictable errors from occurring.

“Preventing AI from taking over the world requires a multi-faceted approach. Robust safety measures, transparency, collaboration, regulation, and interdisciplinary research are all crucial components in ensuring that AI is developed and deployed responsibly,” TS9 Space wrote.

Beth Kindig, a San Francisco-based technology analyst with more than a decade of experience analyzing private and public technology companies, published a free newsletter noting that although AI holds promise for better medical diagnosis, human experts are still needed to avoid the misclassification of unknown diseases, because AI is not omnipotent and cannot solve every problem for humankind.

There are times when AI meets an impasse and, to carry on its mission, proceeds indiscriminately, ending up creating more problems. For example, as cars go electric with built-in AI, they could help with environmental issues, but a vehicle might suddenly stop driving when it reaches a prohibited zone. With the integrated technology, there is also the possibility that someone could hack into the system and control the vehicles; this fear was even addressed in the recent thriller film “Leave the World Behind.”

According to the NHTSA, carmakers have submitted a total of 419 autonomous vehicle crash reports as of January 15. Of those, 263 accidents involved Level 2 ADAS cars and 156 involved truly autonomous, ADS-equipped vehicles. Across those 419 crashes, the NHTSA records 18 definite fatalities.

Vigilant oversight of AI’s functioning cannot be neglected; in medicine, this reminder has been termed keeping a “physician in the loop.” The question of ethical AI was consequently raised by Elizabeth Gibney, whose reporting in Nature cautions against bias and the possible societal harm caused by AI.

The Neural Information Processing Systems conference held in Vancouver, Canada, raised ethical controversies around applications of AI technology such as predictive policing and facial recognition, which, due to bias, can end up hurting vulnerable populations.

Stephen Hawking warned as early as 2014, and again in his book “Brief Answers to the Big Questions,” that the development of full AI could spell the end of the human race. He said that once humans develop AI, it may take off on its own and redesign itself at an ever-increasing rate.

Humans, who are limited by slow biological evolution, could not compete and would be superseded. 

In his book Superintelligence, Nick Bostrom argues that AI will pose a threat to humankind. In one scenario, Bostrom suggests, AIs could allow nonexperts to bioengineer pathogens as deadly as Ebola and more contagious than COVID-19.

Bostrom emphasizes that there is “potential for serious, even catastrophic, harm” due to AI.

He argues that a sufficiently intelligent AI could exhibit convergent behaviors, such as acquiring resources or protecting itself from being shut down, and that it might harm humanity in the process.

Another major fear of AI is rooted in the idea of mass unemployment of human workers due to their replacement by AI workers. 

A big concern is that in the previous wave of automation, it was mostly blue-collar, manufacturing-oriented jobs that were automated away. Cashiers are one example: self-checkouts have appeared in plenty of stores, including Walmart, Target, Sam’s Club, and even certain gas stations.

In this new wave, it will be mostly white-collar, service-oriented jobs built around knowledge work that bear the brunt of intelligent forms of automation.

The need for trained human workers in many areas of the economy will go away as the use of AI grows and increasingly permeates the business world.

Should humans fear AI?

Fear of the unknown has always accompanied new technology, from the wheel to the internet. So, is AI something we should be scared of?

The fears of AI seem to stem from a few common causes: general anxiety about machine intelligence, the fear of mass unemployment, concerns about super-intelligence, putting the power of AI into the wrong people’s hands, and general concern and caution when it comes to new technology.

While society should not abandon the study of AI entirely, as it offers numerous benefits, AI should be approached with caution.

 

About the Contributor
Hauri Gonzalez, Staff Writer