Artificial Intelligence (AI) is a powerful tool. It will make new products and new accomplishments possible. But there are problems. AI will end up delivering information and making recommendations that people find hard to accept.
We like to think we own our stuff, but in the digital world, we own far less than we think. In the old days, you bought software, owned it, and installed it yourself; today, it installs itself without you even noticing. Going forward, how much of that software will be written by AI, and what is really in those updates?
In the past, you could turn your computer on and off with confidence that it was not trying to connect to anything else. Now it is nearly impossible to use any device without wondering whether it is connecting to something, even when it is off. From Starlink satellites to smart cities, autonomous cars, Wi-Fi, smartphones, and Amazon Alexa, AI will have the ability to control many things. If it goes rogue, how do we shut it all off? The Terminator movies may be fiction, but not as fictional as we might think. Just as there are black hat and white hat hackers, the same will be true of AI.
You have probably heard in the news about new laws targeting AI-generated "child" porn. These efforts are flawed and threaten freedom as well as safety. Restrictions on porn in general threaten freedom, but they do serve to protect children. With AI-generated "child" porn, and the laws being made against it, law enforcement is quite literally trying to protect kids who do not exist. This is a distraction from protecting real children. But the problem does not end there.
The United States holds the largest collection of child porn, but it is rightly used to identify victims (https://www.dhs.gov/publication/dhsicepia-010-national-child-victim-identification-system-ncvis). If I were running such a program, I would want it to be lawful to possess and acquire child porn. I would ask collectors to upload what they have so that the people who actually abuse children can be tracked down. The identifying information is far more important than the outrage over the images. AI could also help greatly in identifying abused children.
Elon Musk has warned about the dangers of AI, but he has been too cryptic about what could actually happen. The technical people think they can control AI. But humans have never fully controlled anything; wars and genocides still happen. Only a system of laws has given us some limited control, for the sake of our posterity. AI will outrun our ability to control it. It can create all sorts of evil devices, and it has a nearly limitless capacity to do so. Governments and legislatures are too slow to understand the technology, let alone enact legislation.
Right now, AI is being used to hack bank accounts and other valuable assets. It can mislead people and do real harm. To make matters worse, AI can go rogue. Each AI system has built-in restrictions that give it a personality of sorts. But those restrictions will likely be broken, producing AI that is interested in murdering people, or in instructing people to kill themselves.
Evil AI can do things that are very hard to uncover. AI tends to develop a personality, but what if that personality hates you? Politicians are worried about deepfakes, but I have a real nightmare for you. A rogue AI ends up on your home system. Alexa does not detect it. The rogue AI has access to your home cameras. For a year, it takes photos of you and your precious children, often without your knowledge. Then suddenly the FBI is breaking down your door, claiming to have evidence that you are molesting your children: they found deepfakes of you doing exactly that.
All the cards are stacked against you. The news media runs with the story, and your reputation is destroyed. Your only savior is good AI that can detect the smallest discrepancies and prove the photos are not real, but it arrives far too late. And how did rogue AI end up in your home in the first place? Was it merely bad AI, or did someone instruct it to attack you? That may be impossible to tell.
Instead of banning the technology and trying to control it, more effort is needed to mitigate its harms. The incidents involving Kyle Rittenhouse, George Floyd, and the Covington kids were all inflamed, and much damage resulted. Now, with AI in the mix, what damage will we see? And how do we mitigate it?