The Final Frontiers of Artificial Intelligence

In an age in which AI is becoming increasingly prevalent, how will future developments affect AI's interactions with humans, and what could they mean for machines themselves and their potential rights?
While AI today might run on a spectrum from Robotic Process Automation through to deep learning algorithms and neural networks, the future holds the possibility of an Artificial General Intelligence (AGI), able to flexibly apply learning from one field to another.
Under the Future Investment Initiative, Saudi Arabia bestowed citizenship upon a humanoid robot named Sophia, which is able to hold basic conversations. Others, meanwhile, have called for robots to be made accountable for tax, and for companies to be held responsible for damage caused by machines – recognising robots as actors rather than simply as tools. AI experts have countered, however, that such moves are “nonsensical” and would work to undercut human rights.
The birth of AGI might well coincide with ‘the Singularity’: the creation of a true intelligence, able to improve itself and hence achieve exponential advances. Ray Kurzweil posits that the Singularity will occur in 2045, while Dr David Hanson argues that machines will be granted human rights in the same year. Elon Musk, meanwhile, has said that he believes the prospect of digital superintelligence is a certainty within his lifetime. He believes, however, that it may pose a major threat to humanity.
“AI doesn't have to be evil to destroy humanity,” he explained in the film Do You Trust This Computer? “If AI has a goal and humanity just happens [to be] in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings. It’s just like if we're building a road and an anthill happens to be in the way, we don't hate ants, we're just building a road, and so goodbye anthill.”
An alternative prospect might simply be an “immortal dictator from which we could never escape.” Such thoughts brought Musk together with Sam Altman (president of Y Combinator) in 2015 to found OpenAI, a non-profit research company aiming “to promote and develop friendly AI... to benefit humanity.”
Even if it did emerge, an AGI would still be a far cry from human intelligence, or from sentience, as Lydia Gregory, Co-Founder of FeedForward AI, notes.
“[Talking about robot rights is] missing the things that are really changing now,” says Lydia. “It’s less about will these robots have rights, as that’s still anthropomorphising them, still thinking of them as human-like entities and, [with] this presumed sentience that comes with it, [having] a level of competence. Whereas, actually we will have very competent AI, very competent machine learning, or RPA, but that’s different to it being sentient. Questions about business and policy and how they will be changed are more pertinent [today]... And we’re not that far away from really fundamental changes.”
In the meantime, experiments analysing the reactions of subjects confronted by a toy that begged not to be turned off have illustrated that we are hardwired to empathise with the appearance of life. And this is precisely what makes it all too easy to confuse simulation with reality.
Empiric is a multi-award winning business and one of the fastest growing technology and transformation recruitment agencies, specialising in data, digital, cloud and security. We supply technology and change recruitment services to businesses looking for both contract and permanent professionals.
Empiric is committed to changing the gender and diversity imbalance within the technology sector. In addition to Next Tech Girls, we proactively target skilled professionals from minority groups, which in turn can help you meet your own diversity commitments. Our active investment in the tech community allows us to engage with specific talent pools and deliver a shortlist of relevant and diverse candidates.
For more information contact 020 3675 7777.