Artificial Intelligence (AI) is taking the world by storm. As technology permeates the modern world, AI finds more and more applications in our daily lives. Big tech companies such as Google, Microsoft, Amazon, Facebook, and IBM have all developed AI systems for business applications.
What is Artificial Intelligence?
Natural intelligence is the intelligence exhibited by living beings such as animals and plants. Artificial Intelligence, on the other hand, is intelligence displayed by machines. Humans have always been fascinated by the idea of creating intelligent things: from automatons such as Talos in Greek myth, to Yan Shi presenting mechanical men to King Mu of Zhou in ancient China, to Ismail al-Jazari building a programmable orchestra of mechanical musicians in the 12th century CE. And it is not just automatons that have fascinated humans; there have also been many machines built to solve complex problems, from clocks telling the time to calculators doing arithmetic. It wasn't until the 19th century, with Charles Babbage and Ada Lovelace working on programmable mechanical calculating machines, Bernard Bolzano formalizing semantics, and George Boole inventing Boolean algebra, that the modern era of computing and AI began.
AI and Impact
According to Google, AI is defined as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” The goal of any AI is to study its environment and take action to maximize its chances of success. This goal can be precisely defined or deduced by the AI itself. Usually, an algorithm defines how the system should function and learn from data.
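To make “learning from data” concrete, here is a toy sketch (not any particular product's algorithm) of a system that improves with experience: a simple linear model fitted by gradient descent. The function name and data are invented for illustration.

```python
# Toy example: learn the rule y = w * x + b from example (x, y) pairs
# by repeatedly nudging w and b to reduce the mean squared error.

def fit_line(points, lr=0.01, steps=2000):
    """Fit y = w * x + b to (x, y) pairs using gradient descent."""
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated by the hidden rule y = 2x + 1; the learner recovers it
# from examples alone, without being told the rule.
data = [(x, 2 * x + 1) for x in range(6)]
w, b = fit_line(data)
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

The key idea is the same one that underlies far larger systems: define a measure of success (here, low prediction error) and adjust the model to maximize it.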
Today, AI applications are too numerous to list. Some of the most common applications today include:
- Speech recognition
- Natural language processing (NLP)
- Image recognition
- Real-time recommendations
- Virus and spam prevention
- Automated stock trading
- Ride-share services
- Household robots
- Autopilot technologies
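As a small illustration of one item on the list above, spam prevention can be sketched as a word-count classifier that learns from labeled examples. This is a toy naive Bayes model with made-up training messages, not a production filter.

```python
# Toy spam filter: count words in labeled messages, then score new
# messages by how likely their words are under each label.
from collections import Counter
import math

def train(messages):
    """messages: list of (text, label) pairs, label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the label whose word statistics best match the text."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        # Log prior plus log likelihood with add-one smoothing
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

training = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the team", "ham"),
]
counts, totals = train(training)
print(classify("free money prize", counts, totals))     # prints "spam"
print(classify("team meeting monday", counts, totals))  # prints "ham"
```

Real filters use far richer features and models, but the principle is the same: the system's behavior comes from patterns in data rather than hand-written rules for every case.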
The long-term economic effects of AI remain uncertain. There is no agreement on whether its use will lead to long-term unemployment: while automation has eliminated some jobs, it has also created new ones. Most of the AI used today is what's called Narrow AI, Weak AI, or Artificial Narrow Intelligence (ANI): systems built to perform specific tasks.
The second type of AI, still in development, is called Strong AI or Artificial General Intelligence (AGI). These are systems intended to replicate the general problem-solving ability of the human mind, able to tackle many types of problems. Such an AI would not need humans to dictate its tasks; it would pick and choose what to work on.
With AI being used in such a wide variety of fields, how it is used has become a matter of debate. One of the biggest debates concerns privacy and how the data used to train AI is collected. Technology giants such as Facebook, Google, and Twitter have come under pressure over what data they collect on their users and what AI experiments they run on users without their knowledge. With the rise of fake news, there has also been pressure on companies to do more about what users see in their feeds. Because social media algorithms are optimized for engagement, controversial content gets pushed to users more often than “normal” content.
Another area of debate is the use of AI in the security and military sectors. As many organizations adopt facial recognition to identify people, it has been widely reported that many such systems are biased against people of color. This happens because human bias is passed on to the AI through the data used to train it. A further area of contention is the use of AI in conflicts. As robots (drone weapons, etc.) increasingly replace humans in conflicts, questions are being raised about how much autonomy the AI in these machines should have. Although many countries are pushing ahead with their AI weapons programs, there has also been a wide push around the world to work out clear rules for how such weapons should operate.
Other ethical issues include legal liability for self-driving cars involved in accidents. With companies like Tesla launching cars and trucks with autopilot, who is liable when these vehicles crash? Finally, there are debates about what would happen once AI reaches the singularity and becomes self-aware, and what rights it would then have.
With the use of AI rising around the world, there have been increased calls to regulate how it is used. The regulations under debate can be divided into three areas: 1) regulation of autonomous intelligence systems, 2) accountability and responsibility for those systems, and 3) privacy and safety issues. Many countries already have regulations that tackle privacy and safety, with the EU's GDPR and the USA's older COPPA being the most well known.
Regulation in the other two areas is still under discussion, with major players such as the USA, EU, and China setting up internal committees to discuss the regulatory framework, implementation, and impact of AI. Prominent voices such as Stephen Hawking and Elon Musk have already spoken out about the need for regulation and about how human life would be affected if we lost control over an AI.
AI is pervasive in everything we do today, and its influence is only going to grow as computational power increases and algorithms improve. AI has increased efficiency and productivity worldwide, and its automation has made many jobs redundant while creating others. Many studies are underway on how AI is reshaping the world around us, what ethical implications various AI systems carry, and how we can remove human bias from their algorithms.
It is an exciting time to be working in the AI industry. While there are still debates about how AI should be used, there is some agreement that AI development is still in its infancy, and there is a long way to go until we reach the AI singularity and embrace our machine overlords.