Can technology be conscious? Since the first artificial intelligence (AI) program was written in 1951, researchers and technology professionals have worked tirelessly to develop highly advanced AI programs. One of the first pioneers of this type of technology was Alan Turing, an English mathematician and computer scientist. Turing understood that as humans, we combine information available to us with reason to make decisions. He theorized that because it was possible for humans to come to a logical conclusion using these methods, it was conceivable that a machine could do the same.
Around the same time, popular culture seized on the rise of AI and robots to create a new class of villain: robots with human intelligence that could sense, feel and connect like humans do, and take over the world. The result was a fear of advanced technology that has persisted in movies, pop culture and books for the past seventy years.
Outside of popular culture, scientists and engineers were actively developing smarter and more advanced AI programs. Because Turing believed early on that AI could be programmed to make decisions, it opened the door for scientists to ask a very critical, if philosophical, question: Could AI ever become advanced enough to become “conscious”? Whether this would be beneficial or dangerous depends largely on individual interpretation, but despite recent headlines, conscious technology isn’t here yet, and it won’t be in our lifetime.
That’s because AI and machine learning (ML) are still in their infancy and there are big strides to be made in terms of optimization and innovation. We have mastered many of the building blocks needed to create advanced AI systems, but we cannot yet build the full sentient being.
Society has many different ways of defining the term ‘conscious’
To really understand what it takes to make technology conscious, it is important to unpack the philosophy behind what Western civilization defines as ‘conscious.’ We must also distinguish between a trained technological device and a legitimately autonomous decision-making machine. We popularly define a sentient being as one that is aware of itself and has freedom of choice and autonomy over its decisions.
In 1950, Turing explored this idea and what it means for something to have consciousness. As a result of his research, he proposed a test to determine whether a machine exhibits human-level intelligence: if a human judge cannot discern whether they are communicating with a machine or with another human, the machine passes.
Sounds simple, right? It’s a bit more complicated than that.
For example, when you speak over the phone or through an online live chat with someone in customer service, such as a bank teller, you can assume from asking questions and interacting with them that they are self-aware. They listen and respond in ways that provide meaningful solutions to your problems. Sometimes these interactions carry emotions such as anger, joy and fear, which the person on the other end can pick up on and react to. If a machine can perform the same functions of listening, responding and detecting emotions in a meaningful way, what does that do to our definition of consciousness?
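To make the emotion-detection part concrete, here is a deliberately crude sketch (the word lists and function name are invented for illustration, not taken from any real product) of how software can flag emotion in text with nothing more than keyword lookup, that is, without any awareness at all:

```python
# Hypothetical sketch: naive keyword-based emotion tagging. The program
# "detects" anger, joy or fear purely by lookup; no understanding is involved.

EMOTION_KEYWORDS = {
    "anger": {"furious", "angry", "unacceptable"},
    "joy": {"thanks", "great", "wonderful"},
    "fear": {"worried", "scared", "afraid"},
}

def detect_emotion(message: str) -> str:
    """Return the first emotion whose keyword appears in the message."""
    words = set(message.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:  # any keyword present in the message
            return emotion
    return "neutral"

print(detect_emotion("This is unacceptable and I am furious"))  # anger
print(detect_emotion("Thanks that was wonderful"))              # joy
print(detect_emotion("My order arrived yesterday"))             # neutral
```

A real system would use trained models and proper tokenization rather than exact word matches, but the underlying point stands: detecting an emotion label is a mechanical operation, not evidence of feeling.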
Sentient technology: mirroring human interaction
AI is tied to human interactions, because humans program the software to perform functions a person would normally do. As a result, they implant some of their own biases into the AI they create (which is a whole different story). Take chatbots, for example. This type of technology reduces the need for human employees to staff call centers, field customer inquiries and route them to the right person. The technology is built to respond and communicate in a conversational format that feels natural to the person on the other end of the line and helps the caller get the answer they need or complete the task at hand.
As AI becomes more and more sophisticated, it will undoubtedly become more complex. That said, just because something can handle complex tasks doesn’t mean it’s conscious. Today, AI can perform a multitude of tasks because it has been trained to do so: talking to us, doing real-time translations, driving autonomous vehicles.
This is possible not because the AI makes its own decisions, but because the machine or software follows the set of rules and codified information that a human has installed. It is also worth pointing out that in most of these situations there is still a human in the loop, and the AI is not acting independently.
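The rule-following point can be sketched in a few lines. The routing keywords and canned replies below are hypothetical, not any vendor's product; the sketch shows a "chatbot" that never decides anything on its own, it only matches input against rules a human wrote down and escalates to a person when no rule applies:

```python
# Hypothetical sketch: a call-center bot that only follows human-written rules.
# Every behavior it exhibits was codified in advance by a person.

ROUTING_RULES = {
    "balance": "Routing you to account services.",
    "fraud": "Routing you to the fraud department.",
    "hours": "Our branches are open 9am to 5pm, Monday through Friday.",
}

def respond(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, reply in ROUTING_RULES.items():
        if keyword in text:
            return reply
    # No rule matched: hand off to the human in the loop.
    return "Transferring you to a human agent."

print(respond("What is my account balance?"))   # Routing you to account services.
print(respond("I think I see fraud on my card"))
print(respond("Can you write me a poem?"))      # Transferring you to a human agent.
```

Modern chatbots replace the keyword table with a trained language model, but the architecture keeps the same shape: human-defined rules and training data in, conditioned responses out.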
Emergent phenomena
AI needs a set of rules to follow and a human to set those rules. An example of how these rules take shape is the idea of ‘emergent phenomena’ in technology, which can be defined as the appearance of something new and unpredictable in the course of a system’s evolution.
This means that even if a machine is not specifically programmed to do something, the training it has received and the broader context in which it operates may allow it to perform certain tasks relatively unsolicited, which is a natural progression in the process of developing AI.
However, this does not mean that the machine is conscious. Rather, it represents the performance of current technological advancements in improving systems to help IT teams minimize the time spent performing tedious tasks that a machine can be trained to perform. It’s all about the degrees of freedom or restrictions that people build into the system. This idea of AI possibly teaching itself to do things is usually where the sensational Hollywood-inspired fear comes from, when we imagine machines taking over the world.
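The "degrees of freedom or restrictions" idea can be made concrete with a small sketch (the action names and function are invented for illustration): the system may combine permitted actions in sequences nobody explicitly scripted, but it can never step outside the envelope a human configured.

```python
# Hypothetical sketch: a task runner whose freedom is bounded by a
# human-configured allowlist. Novel-looking behavior stays inside the
# boundary people built into the system.

ALLOWED_ACTIONS = {"summarize_logs", "restart_service", "open_ticket"}

def execute(plan: list[str]) -> list[str]:
    """Run only the allowed steps in a plan; refuse everything else."""
    results = []
    for action in plan:
        if action in ALLOWED_ACTIONS:
            results.append(f"ran {action}")
        else:
            results.append(f"blocked {action} (not permitted)")
    return results

# This particular plan was never scripted by a human as a whole,
# yet every individual step remains inside the human-defined limits.
print(execute(["summarize_logs", "open_ticket", "delete_database"]))
```

Seemingly "emergent" behavior in such a system is new combinations of sanctioned steps, not the machine granting itself new abilities.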
Will AI ever become aware?
While it’s not fair to put a stake in the ground and say that sentient AI will never be possible, it’s more realistic to think that this kind of technology is hundreds of years away. We’re just at the beginning of what’s possible with AI, and while the idea of sentient AI is intriguing, we need to master the art of walking before we can run.
If and when it happens, it will pose huge philosophical questions for the wider community. If a machine is conscious, do we grant it human rights, or access to a lawyer, as was suggested in the case of Google’s LaMDA? At this point, we still have a long way to go in perfecting general-purpose AI before we can even start thinking about, let alone developing, conscious AI.
As AI and ML continue to evolve, we will certainly be able to improve customer and employee experiences and minimize the time developers spend perfecting the individual building blocks of the technology as the bigger picture begins to come together.
While the idea of a science-fiction AI taking over the world might make a great plot for a movie or podcast series, we can bet that technology will be a friend rather than an enemy. Widespread adoption of AI in everyday life will normalize it, build trust from a human perspective and strip away the layers of fear left over from years of sensationalism around ML.
And while we might think our Siri or Alexa is mad at us, she’s definitely listening, but we can be sure she’s not a sentient being.
Adam Sypniewski is the CTO at Deepgram.