This write-up is a product of my exposure to sci-fi movies, science articles and deep reflection. Although AI presents many other challenges, this article explores only a few of them, with thoughtful questions asked along the way.
Artificial Intelligence (AI) refers to machines exhibiting a level of intelligence comparable to that of human beings, and in some cases reasoning capabilities beyond human mental power. It revolves around an intelligent agent (a robot or software system) being able to perceive its environment and carry out precise, highly effective actions based on what it has perceived. As software systems become more sophisticated and AI-infused, many capabilities that employ AI and have become routine are being struck off the list of features that previously fell under the canopy of AI; this is known as the AI effect. The effect narrows what people regard as AI, but it doesn't mean AI's list of problems and research themes is shrinking. One problem surrounding AI that remains unsolved and keeps rearing its head like an angry serpent is ethics. The quest to solve this complex problem usually leads to more unanswered questions, questions which mean a lot to the ecosystem of both the digital and physical worlds.
I once read an article in Reader's Digest about a British wildlife photographer, David Slater, who was sued by PETA because he didn't credit the monkey that excitedly snapped the photo gracing his book's cover. The photographer had his camera set up to capture photographs, but while he was busy with something else, the monkeys around got excited and began snapping pictures of themselves. One of those photos ended up on his book cover. That sounds twisted, so what about machine copyright? In this age of intelligent software systems that sort news feeds, video clips and musical selections to suit everyone's unique taste, it is important to ask who owns the copyright to a painting or literary piece when an AI creates one. But before I go further with that, here are a few questions for the mind.
- What makes an entity eligible for the fundamental rights humans possess? Or, to put it differently, are AI systems beings worthy of the things accorded to humans?
- If these intelligent agents are awarded civil rights, does it mean they can vote?
- When they create intellectual property, do they get a financial reward? If so, do they pay taxes, and what would they spend these rewards on? Would they buy access to confidential data, acquire that promising quantum or optical computer hardware as a physical upgrade, or simply show off to rival AIs?
- In the quest to make machines more emotional or self-aware, can they ever be fully human in their reasoning?
Whether AIs can lay claim to copyright depends on how, and under what instructions, the intellectual property was created. If the computer was ordered to write a poem that speaks philosophy or create an abstract watercolour, one can say its manufacturer, its creator(s) (those whose intellectual power the machine mirrors), or the company or individual that holds the licence to the machine owns the copyright. This scenario can be likened to a writer who works for a publishing company: every article he writes for the company carries the company's stamp of ownership. But what if the whole process was initiated by the machine's own sense of free will, during its break time? Can we still call the work the property of its owner, or is it the machine's? This only takes us back to question number one, about machines and civil rights.
Under the issue of ownership and copyright, I think it's also proper to ask who takes the blame when a supposedly perfect intelligent agent makes a costly mistake. Imagine an agent designed to carry out medical procedures (which always demand a high level of precision): what happens when things go wrong? Who takes the blame? The answer as to who gets queried varies. It could be the programmer, the hardware designer, the medical institution that owns the machine, the company that created it and is responsible for its maintenance, or perhaps the machine itself (since it might be emotional, self-aware and crowned with its own civil rights and exclusive rights to its own programming, assuming self-learning also becomes an item on the AI-effect list). Or perhaps we blame it on the good old manufacturing defect. Here again the answer depends on the situation surrounding the machine. This leads to the question of how things can go wrong.
How can intelligent machines go wrong?
- Errors in core coding: We expect that machines will learn and correct themselves in the future, but what if there are errors in the programmer's code that negatively affect the supposedly self-learning machine, leaving it unable to detect its creator's glitch?
- Design and manufacturing defects.
- The inventor's hidden agendas.
- Machine crime: Emotions are usually behind both the good and the wrong deeds of mankind. We should expect that once machines can reflect human emotions well, there is every possibility they could be inspired to cause trouble in the belief that their actions mean well, or simply out of pure mischief.
- Threats from malicious code.
- Intelligent machines in the hands of outlaws: We can't deny that criminals always get their own share of innovation and toys, no matter how classified an invention may be. So you never can tell whether there is a terrorist in a remote part of the world with an all-star team of programmers creating a digital big bad wolf.
Here another question pops up: how do you prosecute an intelligent machine that decided to crash the stock market or ruin a nuclear disarmament operation? Do you arrest the software engineer, the owner, the firm that created it, or perhaps the machine's next of kin? Let's imagine you manage to round up the owner or company for judgement; it doesn't stop the machine from doing more damage. And when it decides to go ghost by shutting down its major hardware, or hides in some unknown data house like a fugitive taking refuge in the mountains, how does the law catch up with it? Does this lead us to police robots or algorithms? AI in law firms means a lot to the professionals it helps, but isn't it possible that an advanced computer could judge based on emotions rather than logic, since emotions are part of what makes us human? Would it also mean we are putting the fate of our fellow human beings in the hands of synthesised beings that have no personal psychological or biological clue of what it means to be human?
HUMAN BOND, VALUE, PURPOSE AND FATE
Another aspect of artificial intelligence and ethics we should think about is what would become of human bonds, value, purpose and fate. No doubt the technology behind personal computers and smartphones has enhanced how we meet new friends and keep in touch with friends and family. But in many ways we, the users, have allowed it to erode our physical communication skills, skills that are essential to a healthy society. Social media, and the never-ending quest to see the world through our phone screens, can leave a table of twelve at a public gathering quiet, with zero interaction but faces illuminated by screen light. Now imagine what happens when there are better personal assistants aided by neural networks and the like: people may come to have more faith in their wonderful software companions than in actual human beings. Jobs are already being taken over by machines and algorithms, and more will follow as these brilliant toys gain more machine power. In a future where machines do the jobs that once added value and purpose to human life, and employers enjoy increased output without worrying about paying people, what becomes of the people who once held those jobs?
As more research and work goes into making machines reach perfection, or let's say mirror human imperfection, our governments, the judiciary, ICT firms and everyone else will face a larger threat: the disruption of the digital and social ecosystem. AI working hand-in-hand with other branches of computing will break all the rules those individual branches have broken in the past and are still breaking, which only makes the problems more complex, since the result is a synergy of all the individual issues of the branches involved.
We can't deny how good technology, especially computers, has been to us since its creation, and it is beautiful to know that these wonderful machines still have a lot more to offer mankind. But we must also acknowledge the harms they can bring, solve them, and reduce the rate at which new ones spring up. The purpose of this write-up is to spark more thoughtful discussion about these issues, how they can be solved, and how future threats can be neutralised before they come into existence.
Not all the solutions can come from more advanced technology; a reasonable percentage must come from us, human beings, the ones whose ecosystem, existence and even freedom is threatened by future artificial super-intelligent machines. Our personal behaviour also matters: if people lose interest in crime, crimes committed via electronic devices will be reduced, and that too means a lot for our safety in the cybersphere.
By the time we have such powerful tools, it won't be a bad idea to turn these machines' minds on themselves, to probe their own nature, their fate and the fate of mankind. But can a machine be objective about the results of its self-discovery? We might expect objective answers, since its core code deals with logic, yet we should not count on them: by then machines might possess emotions, neural networks might be at their peak, and that pair would also be key to the machine's decision-making process.
Looking through history, we discover that many human inventions which never required a computer had, and still have, negative effects on man and the environment. Some of them might have been stopped if the inventors had foreseen what their work could become. For instance, until nuclear bombs were made, who ever thought nuclear energy could be the mother of a worldwide threat? Maybe if the pioneers of nuclear fusion and fission had seen the future, they would have discarded their work, or guarded it in the hope that it never fell into the wrong hands. It has always been in the nature of man to seek better ways of living. But in this present era, with the knowledge of what we have experienced, I believe paying more attention to the problems our technological dreams can bring is worth the effort and research. This is not a call to stop innovation; it is a reminder that we should divert more energy and effort into making sure what we create does not destroy or harm. Major technological achievements can be likened to a mythological beast whose main instincts were feeding and survival. In this era we can liken the beast to a computer crowned with the magic of AI. It is no longer satisfied with just eating and escaping danger; it wants to do more. And there's nothing worse than a monster full of knowledge, passion and the precision required to wreak havoc. It's like a beast enhanced with 360-degree and night vision.
In my personal school of thought there is something I call the 'universality of quotes/laws': how valid an idea or quote can be in every situation. Not all quotes or rules apply in every context. This brings me to the saying about 'being limitless' or 'being unlimited'. In a positive or motivational context, it speaks to the inner part of a person, telling him or her that dreams can come true. Yet we all know certain things are impossible without an intervention we cannot explain, let alone understand. I believe everyone who works on making artificial intelligence better should have a limit, or work within a boundary. These limiting walls are not there to hinder progress but to protect us from the unknown and unexpected dangers that can creep in once the walls are bypassed or broken. In short, engineers, scientists, legal practitioners, philosophers, theologians, governments, world powers and everyone at large should be involved in policies that protect society and stop scientists and researchers from making the wrong moves. In the quest for breakthroughs, success can sometimes mean not chasing the goal that would only boost one's ego or reputation at the expense of societal safety.
'I count him braver who overcomes his desires than him who overcomes his enemies.' – Aristotle
Another solution that can make a great difference in the battle of AI versus ethics is creating a healthy balance between AI usage, human value and nature. It is not an easy task or a day's job, but a continuous and collective effort from every concerned party. Learning how to stay many steps ahead of troublemakers (who would no doubt employ smart machine power) is also essential.
Of course, sci-fi works like The Matrix, 'I, Robot', Person of Interest and Westworld might seem to exaggerate the dangers of intelligent machines, but we cannot deny how sci-fi has shaped the future of science and innovation. Still doubting the troubles of AI in sci-fi movies and books? Think of how unbeatable your computer Scrabble or chess game would be once it is hooked to an AI, probably enjoying the benefits of quantum-powered hardware. 2017 gave us a classic example of the threats AI poses, when Facebook was forced to shut down chatbots it had deployed online after the agents, without human input, created a language of their own that no human understood. Think of what these software systems might do when they become more sophisticated in the years to come, and think about the other unheard-of incidents of intelligent agents going off-script.
I hope this piece of mine has shed some light on the topic and will spark more conscious thought and action with regard to the future of Artificial Intelligence.