When I was a kid, I had this exact pet robo dog. And I remember being so amazed at it every time I turned it on, because for the few seconds it worked, it felt like a real dog talking to me. It felt alive to me. But I knew it really wasn’t alive. I knew it was just metal and plastic working together with predesigned actions. I knew it was impossible for, you know, plastic and metal to yield a breathing, living dog. So I guess that’s why I thought it was amazing.
But now there’s a paradigm shift with the advent of advanced AI and advanced supercomputers. Now what we thought was impossible is becoming a reality. Robots will dominate the future. They will be the economy of the future. And we will depend on robots and AI in the future. They will replace all jobs, including manufacturing, manual labor, and perhaps even military jobs. They will not just replace all of our jobs, but they will open up new tiers of markets in the economy. It will be an economy that can achieve a new level of abundance and prosperity for humans. New freedoms will be unlocked that we never could have had before.
Now, in the article by Tony Siebo, the one I mentioned in the last part, he mentions that there’s one caveat to all this: robot sentience could disrupt all of it. Now, we kind of just shrug this idea off, like, nah, robots will never become sentient. But this idea is becoming more prevalent as we recognize just how much power there is in biotechnology. For example, prosthetic limbs are becoming way more advanced today. In fact, we’re now replicating sensory data and information through robotic limbs. Scientists are now able to implant electrodes into our bodies, connect them to our nerves and muscle fibers, and send signals through those electrodes in the prosthetic limbs to the brain, emulating pain and sensory information. Again, we can make people feel again. We can make them sense again through their prosthetic limbs.
Now, this is not robotic sentience, but it could imply that the gap between biological sentience and machine sentience isn’t as large as we thought it was. If we think about our own consciousness and experience of this world, it really is fundamentally electric, bioelectrical. So perhaps it’s not that big of a stretch to think that we could simulate pain and sensory information through a mechanical medium, some kind of digital sentience. It is quite possible that we might someday be able to emulate consciousness mechanically. Now, it’s not gonna be the same type of consciousness as ours, but it’s still sentience nonetheless, still a form of it. It’s not impossible that robots do one day become sentient and feel pain and feel emotions and perhaps even dream.
But if it is possible, why would we want to give robots sentience? Why risk it? Why complicate things? Well, because we are complicated. In the movie I, Robot, they explored this by invoking the superintelligent AI called VIKI, who was completely logical. Her logic was undeniable, but she was so logical that she devised a plan to take over humanity, because she saw us, logically, as a danger to ourselves: wrecking the planet, destroying ourselves, causing wars. So VIKI wanted to take over humanity and sacrifice some of us to save us from ourselves.
Now, that was just a science fiction movie, but it raises a great point. What if robotics and AI become just too logical? If we’re gonna deploy millions and millions of robots into our society, and they’re gonna integrate with us as a society, is it really gonna be that fruitful for us to have them not understand anything about ethics or compassion or feeling? Part of the human experience, part of being a person in our society, requires you to have some kind of compassion and understanding. Otherwise, you’re not really gonna integrate well. And if we have robots make only logical decisions, never ones based on empathy or compassion, couldn’t that lead them to make bad decisions? So perhaps we’ll have no other option than to give these robots sentience, or at least the tools to make ethical decisions. But this raises another problem. What if we give these robots emotions and sentience and they start recognizing how poorly they’re being treated, right, being forced to do all these tasks and jobs? What if they just get mad at us and start wanting to, like, revolt against us? What if they do begin to have dreams and aspirations of their own and want to be free as well? Recently, the EU proposed the AI Act, the first AI regulations. And perhaps in the future we’ll have to do this for all robotics, perhaps give them rights, property, and freedoms. This is very similar to what’s happening with the animal rights movement right now. We’re recognizing that, yeah, animals are less sentient than us, but they are sentient nonetheless. Hence, we think they deserve more rights and freedoms, to not be slaughtered and harmed. The same will be done for robots as we recognize their sentience.
Another solution is to perhaps create different tiers of robotics, one without sentience and one with sentience. It’s hard to determine what will be the best decision going forward, but what we do know is that this emerging technology is challenging the way we see robots. The dividing line between a machine being and a biological being is becoming ever blurrier as we learn more about what experience is, what consciousness is, and what life is. And if you’ve noticed, this is not a fundamentally new paradigm shift. I mean, we’ve seen cases like this before, in terms of the ethics issues and civil rights issues of the past. History tends to repeat itself with new challenges sprinkled in, forcing us to change and adapt and better understand each other, who we are, and our place in the universe. And it’s our choice: what do we do with our second chance? And this is one of the biggest second chances we have.