Brian Aldiss was an English writer best known for his science fiction novels and short stories. Probably the most famous of them, "Supertoys Last All Summer Long", first published in the UK edition of Harper's Bazaar in December 1969, tells the story of a boy who cannot please his mother no matter how hard he tries. What he doesn't realize is that he is an android, living in an overpopulated future where child creation is controlled, intelligent machines are commonplace, and loneliness is endemic.
A movie buff might find this story familiar: it served as the basis for the first act of the feature film "A.I. Artificial Intelligence," directed by Steven Spielberg and released in 2001. The plot broaches other issues, like global warming, but its main subject is the capability of machines to experience love. In a world where machines can process complex thoughts, emotion seems to be the missing piece of the puzzle of becoming human.
Science fiction is a genre humans have always liked to explore, and when it comes to imagining where technology will take us, creativity runs wild. Sometimes, though, the fiction reads like prediction and, corny as it may be, comes remarkably close to reality. We haven't yet built AI with anything like theory of mind or self-awareness, but AI itself is already a reality and a trend for the years ahead. Companies and governments are investing heavily in the technology, the progress achieved so far is astonishing, and concerns about its security and ethical use are being deeply debated.
Are we living through a simulation?
Artificial intelligence is the simulation of human intelligence by computer systems that can learn, perceive variables, reason to make decisions, solve problems, and correct themselves. Starting from preprogrammed code, the system takes variables into account, processes the data, and determines what to do in each situation. An AI system can be designed and trained for a particular task, like a virtual personal assistant, or it can be more complex and powerful, with generalized human-like cognitive abilities that sometimes find solutions without human intervention, as in self-driving cars.
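The perceive–process–decide cycle described above can be sketched in a few lines of Python. This is purely an illustration with a hypothetical thermostat agent; all the names are invented for this example, not taken from any real library.

```python
# A minimal sketch of the perceive-decide-act loop, using a
# hypothetical thermostat agent. All names here are illustrative.

def perceive(sensor_reading):
    """Take a raw variable (temperature in Celsius) from the environment."""
    return float(sensor_reading)

def decide(temperature, target=21.0, tolerance=0.5):
    """Process the data and determine what to do in this situation."""
    if temperature < target - tolerance:
        return "heat"
    if temperature > target + tolerance:
        return "cool"
    return "idle"

def act(action):
    """Carry out the chosen action (here, just report it)."""
    return f"thermostat -> {action}"

# The loop: perceive a variable, reason about it, act on the decision.
for reading in [18.2, 21.1, 24.7]:
    print(act(decide(perceive(reading))))
```

A real AI system replaces the hand-written rules in `decide` with behavior learned from data, but the overall loop is the same.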
It's important to realize that AI is always incorporated into other types of technology. A spam blocker is no longer seen as a tech innovation, but it has more to do with AI than you might imagine: it looks at the subject line and body of an email and decides whether it's junk. The technology behind it is Natural Language Processing (NLP), by which a computer processes human language. Current approaches to NLP are based on machine learning and can handle more complex tasks like text translation, sentiment analysis, and speech recognition.
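To make the spam-blocker idea concrete, here is a toy classifier in the spirit described above: a naive Bayes model trained on a tiny hand-labeled corpus. This is an illustrative sketch, not how any production filter is actually implemented, and the training sentences are invented for the example.

```python
# A toy spam blocker: naive Bayes over word counts from a tiny
# hand-labeled corpus. Illustrative only.
from collections import Counter
import math

training = [
    ("win money now claim your free prize", "spam"),
    ("free prize waiting click now", "spam"),
    ("meeting agenda for monday attached", "ham"),
    ("lunch tomorrow to discuss the project", "ham"),
]

# Count word frequencies per class.
word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in training:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = set(w for counts in word_counts.values() for w in counts)

def classify(text):
    """Score both classes with Laplace-smoothed log-probabilities."""
    scores = {}
    for label in word_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / len(training))
        for word in text.split():
            # Laplace smoothing so unseen words don't zero out a class.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("claim your free money"))   # -> "spam"
print(classify("monday project meeting"))  # -> "ham"
```

Real filters use far larger corpora and richer features, but the principle is the same: the text of the message is turned into evidence for one label or the other.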
Machine learning, perhaps the hippest term of the moment, also maintains close ties with AI; its very definition fuses with that of AI: the science of getting a computer to act without being explicitly programmed. Today, machine learning systems can detect patterns in a previously labeled data set (supervised learning), sort data sets that aren't labeled (unsupervised learning), and learn from feedback after performing actions (reinforcement learning). In other words, current systems may include machine learning capabilities that let them improve their performance with experience, just as humans do.
These are a few examples of technologies that enable AI to work and provide intelligent systems. Automation, machine vision, and robotics are some of the others. All of them are constantly developing, sometimes in combination, to create new solutions to human problems.
Automation has been an industry staple for many decades, and the machines keep getting smarter. With AI, there is equipment that manufactures and inspects products without a human operator. Chatbots and systems built on Natural Language Processing are becoming smart enough to replace human attendants and answer users' questions 24 hours a day. AI in education can automate grading, giving educators more time; it can also assess students and adapt to their needs, helping them work at their own pace. AI in finance can collect personal data and provide financial advice. AI in online retail can recognize users' buying patterns and present offers matched to their preferences.
In the communication area, there are programs with access to databases that can write informative news stories so convincingly that readers struggle to distinguish them from texts written by humans. Maybe even this one.
What are the boundaries of artificial intelligence?
While AI tools offer a range of new functionality for businesses, artificial intelligence also raises ethical questions. Humans do all the programming and define the algorithms, so who builds and controls these structures is a crucial matter of security, trust, and direction. Deep learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human selects what data is used to train an AI program, the potential for human bias is inherent and must be monitored closely.
The standards we use to collect data and turn it into algorithms are nothing but our built-in opinions. Take the chatbot Tay, which Microsoft launched in 2016: by basing its answers on the dialogues users fed it, Tay became an advocate for incest and an admirer of Hitler in less than 24 hours. For this and other reasons, the application of ethics cannot be limited to the standards themselves; it must also reach the people behind the scenes who standardize the information.
Unless supported by ethics, artificial intelligence becomes a carrier for codifying human prejudice, directly affecting everyone with access to the Internet and digital devices. We should question what criteria algorithms apply when screening job candidates or granting bank loans, or how robots draw connections between "suspects" in digital surveillance.
The Financial Times published an article in 2019 about the use of facial recognition systems by companies and governments worldwide. In it, the Chinese telecoms company Huawei boasts that its cameras led to a 46 percent drop in a regional crime rate in 2015, while some critics believe that companies are spinning their products to fit the political demands of African elites, for example.
According to research by the Carnegie Endowment for International Peace, at least 52 governments use the technology to ensure security. While the debate over the use of facial recognition in the EU and the US is focused on the privacy threat of governments or companies identifying and tracking people, the debate in China is often framed around the threat of leaks to third parties, rather than abuses by the operators themselves.
Meanwhile, China's surveillance industry is already moving on to the next frontier of computer image recognition: identifying people by the way they walk and trying to read their emotions.
Few regulations govern the use of AI tools, and where laws do exist, they typically regard AI only indirectly. Some lending regulations, for example, require financial institutions to explain credit decisions to potential customers, which limits the extent to which lenders can use opaque deep learning algorithms. Europe's GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.
Coding the future
The future of AI and its applications isn't clear yet. A broad and deep debate is crucial to define how we should use the technology: to ensure the safety of those who use it, the purposes behind the applications, how the algorithms read the information, and what interests are involved. Some concerns may not even have surfaced yet; there will surely be uses of AI we have not been able to foresee, but we need to know and talk about what is emerging and may affect us all.
Some pessimists believe in an apocalyptic world where machines will take control over humans. Elon Musk, Tesla's CEO, is among those who have publicly warned about the dangers of artificial intelligence turning against us. In "One Crew Over the Crewcoo's Morty," the third episode of season four of the TV show Rick and Morty, he appears as Elon Tusk, a version of himself with tusks instead of teeth. Briefly, and avoiding spoilers: the episode addresses artificial intelligence through the character Rick, who uses a robot with programmed intelligence to execute a plan to deceive others; in the end, it turns out the robot itself tries to deceive Rick.
Artificial intelligence is still closely linked to popular culture, leaving the general public with unrealistic fears about it and improbable expectations about how it will change the workplace and life. But that's changing. Our role at @talle is to learn about it, gather opinions, discuss it, and decide the best way forward. We are in the middle of building an ecosystem with incredible opportunities to improve our lives. Still, we should all be aware of the risks in order to reduce them, make them transparent, and allow sustainable growth. Super bots will last. Let's make the best of it.
By: Bruno Rodrigues