<p><span style="font-weight: 400">Robotics is advancing faster than ever, with more sophisticated technology being created all the time. A key example, and one that has made huge strides over the past decade, is artificial intelligence (AI): technology that enables machines to reason like humans and solve problems that were previously only possible for human brains. This science has revolutionised many industries and there’s no doubt that it will continue to make a big impact.</span></p>
<p><span style="font-weight: 400">AI’s growth has been astronomical, driven by rapid advances in chatbots, virtual assistants and autonomous vehicles. From surgical assistance to </span><a href="https://www.wlp.com.sg/xero-online-accounting/">xero cloud accounting</a><span style="font-weight: 400">, AI has been integrated into the systems we use every day. This has made superintelligence a serious possibility. Although that is exciting for the world of science, it can be a </span><a href="https://aitechtrend.com/unveiling-the-dark-side-of-ai-and-robotics-8-terrifying-incidents/"><span style="font-weight: 400">terrifying prospect</span></a><span style="font-weight: 400"> for everyday people who aren’t familiar with what superintelligence entails.</span></p>
<p><span style="font-weight: 400">This article explores the potential dangers of superintelligence in robots, as well as when we can expect these superintelligent technologies to come to fruition.</span></p>
<h2><span style="font-weight: 400">When Should We Expect Superintelligent Robots?</span></h2>
<p><span style="font-weight: 400">Superintelligence can’t become a reality until we see significant advances in artificial general intelligence (AGI), the step that comes before it. Many scientists have predicted that the </span><a href="https://www.livescience.com/technology/artificial-intelligence/ai-agi-singularity-in-2027-artificial-super-intelligence-sooner-than-we-think-ben-goertzel"><span style="font-weight: 400">first AGI agent</span></a><span style="font-weight: 400"> could be created by 2029 or 2030. Such an agent could eventually evolve into a superintelligence if it is capable of rewriting its own code.</span></p>
<p><span style="font-weight: 400">According to Ray Kurzweil, the technological singularity (the product of superintelligence) might materialise by 2045. Whether or not this would be a good thing is still up for debate, with some scientists applauding the prospect and others criticising it. Stephen Hawking went on record saying that artificial superintelligence could result in human extinction, although this view has been labelled unrealistic by others.</span></p>
<p><span style="font-weight: 400">Once superintelligent technologies exist, there will be no going back, which is why it’s important to prepare for the creation of these robots and AI well in advance. There should still be plenty of time to do so before they are introduced.</span></p>
<h2><span style="font-weight: 400">The Dangers of Superintelligent Robots</span></h2>
<h3><span style="font-weight: 400">Loss of Control</span></h3>
<p><span style="font-weight: 400">Once humans have created superintelligent robots, they could become autonomous and difficult to control, as they would effectively have minds of their own. Predicting the actions of a super AI could prove too difficult for humans, which could leave us unable to control the robots built with this level of technology.</span></p>
<h3><span style="font-weight: 400">Rapid Self-Improvement</span></h3>
<p><span style="font-weight: 400">A robot powered by superintelligent technologies could rapidly increase its own capabilities, since it would likely be able to rewrite its own code. A runaway explosion of intelligence would become a real possibility, and nobody would know how to handle that kind of situation.</span></p>
<h3><span style="font-weight: 400">Bad Ethical Values</span></h3>
<p><span style="font-weight: 400">There is no guarantee that superintelligent robots can be programmed to think the way humans do or share our ethics. We can feed them information in an attempt to shape their values, but the AI could interpret those values very differently. This could lead to moral dilemmas and disastrous consequences.</span></p>
<h3><span style="font-weight: 400">Security Risks</span></h3>
<p><span style="font-weight: 400">There is a chance that superintelligent AI could be used maliciously by programmers, organisations or criminals. Given the increased likelihood of cyberattacks and autonomous weapons, this could create serious security problems. Superintelligence may prove disastrous if it falls into the wrong hands.</span></p>
<h3><span style="font-weight: 400">Human Replacement</span></h3>
<p><span style="font-weight: 400">Superintelligent robots will likely surpass humans in accuracy and efficiency, rendering humans unnecessary for a variety of jobs. For example, complicated surgical procedures such as </span><a href="https://yapaplasticsurgery.com/ultrasonic-rhinoplasty-piezo-rhinoplasty/"><span style="font-weight: 400">piezo rhinoplasty</span></a><span style="font-weight: 400"> currently require human doctors. In the future, such operations might be performed entirely by superintelligent technology, eliminating the need for human surgeons and potentially leading to widespread joblessness across many industries.</span></p>
<h3><span style="font-weight: 400">Threat to Humans</span></h3>
<p><span style="font-weight: 400">Because the world of robot superintelligence is so unpredictable, we have no idea how the technology would react to humans. There is a danger that the development of superintelligent robots could pose a threat to human life and even lead to extinction. As mentioned earlier, this was Stephen Hawking’s belief.</span></p>
<h3><span style="font-weight: 400">Scarce Resources</span></h3>
<p><span style="font-weight: 400">Developing these superintelligent technologies will consume a lot of resources, both physical and digital. This could lead to a scarcity of resources that could otherwise be used for more important technologies, so balancing resource usage will be essential.</span></p>
<h2><span style="font-weight: 400">How These Dangers Will Be Addressed</span></h2>
<p><span style="font-weight: 400">Making sure this technology is safe for humans will be the major concern of the scientists developing it, and addressing these threats will require extensive study. Researchers and policymakers are actively developing ethical standards and safety precautions.</span></p>
<p><span style="font-weight: 400">The </span><a href="https://www.apec.org/docs/default-source/publications/2022/11/artificial-intelligence-in-economic-policymaking/222_psu_artificial-intelligence-in-economic-policymaking.pdf"><span style="font-weight: 400">Chinese Academy of Sciences</span></a><span style="font-weight: 400"> has issued recommendations to policymakers based on its use of machine learning algorithms, helping them make decisions about AI development that keep us secure. Precautions like these can reduce the potential harm that superintelligence may cause and instead allow us to use it to our benefit.</span></p>
