
The OpenAI manifesto – how we can benefit from Artificial Intelligence and live to tell the story
OpenAI posted a historic manifesto today, one that at least partially acknowledges the risks posed by AI. According to OpenAI, it is “highly probable” that within the next decade, AI systems will surpass the level of expertise held by most professionals. While superintelligence can potentially bring significant benefits, it poses greater risks than any other technology mankind has encountered so far. To manage these risks and ensure a more prosperous future, OpenAI says we must exercise caution, similar to how nuclear energy and synthetic biology have been handled in the past. They proposed the following three-pronged approach to managing artificial intelligence.
AI companies must work together
Firstly, to guarantee the secure and seamless integration of superintelligence into society, it is crucial for leading development initiatives to coordinate with one another. OpenAI suggests multiple approaches to achieving this goal. One possibility is for major governments worldwide to establish a project that current initiatives can join. This project could provide guidelines and limitations while closely monitoring AI’s advancement.
Another option is for companies to reach a collective agreement, supported by a proposed organization, to restrict the rate of growth in AI capability at the frontier to a specified limit per year. The collective’s purpose would be similar to OpenAI’s proposed government oversight, except at a private level.
Establish an organization for international oversight
Secondly, OpenAI believes it may eventually be necessary to establish an organization like the IAEA (International Atomic Energy Agency) for superintelligence initiatives. OpenAI recommends that any AI effort exceeding a certain capability threshold, or using measurable resources such as computing power or energy beyond a certain level, be subject to international oversight. That oversight body could inspect systems, mandate audits, test for compliance with safety standards, impose restrictions on the degree of deployment and levels of security, and so on.
As an initial step, OpenAI says companies could voluntarily begin implementing elements of what such an agency might one day require, and as a second step, individual countries could do the same. The agency would focus on reducing existential risks and not issues that could be determined by individual countries, such as defining what an AI should be permitted to say.
We should be technically capable of reining in Artificial Intelligence
Finally, OpenAI notes the obvious – we should possess the technical capability to ensure the safety of a superintelligence. This is the most important criterion, but unfortunately it remains an “open question that needs to be addressed”.
Strong public oversight is required
According to OpenAI, although the three initiatives noted above are critical, the governance and deployment decisions of the most powerful AI systems require strong public oversight. In other words, people around the world should democratically determine the boundaries and defaults for AI systems. However, OpenAI has no proposal for how this could be achieved.
OpenAI doom and gloom but only where it doesn’t hurt our business
OpenAI proposes we should consider why we need AI at all. They then go on to explain why they feel AI is necessary – and why its growth should not be hindered.
OpenAI feels AI has the potential to transform our world into a much better place than we can currently envision. They point out that we are already witnessing the positive impact of this technology in various fields, including education, creativity, and personal productivity, but that we face numerous challenges that “require extensive support to overcome”. OpenAI, of course, insists they can play a pivotal role in this domain.
AI not bad. Government oversight bad.
OpenAI warns that government involvement in the regulation of the development of superintelligence would be a challenging and high-risk endeavor. Their dystopian view is that “to achieve a complete stoppage, we would require a global surveillance regime, which may not even yield the desired outcome. Therefore, it is imperative that we approach this development with utmost care and precision.”
But what about all the lost jobs?
What OpenAI does not address is the risk that most tech companies overlook or ignore – the risk to the economic stability of societies and their citizens when AI eliminates jobs. Will AI further consolidate riches into the hands of an elite few, exacerbating income inequality? If AI further shifts wealth to the elite, how will income be distributed to people who have no way to participate in the generation of that wealth? What will a civilization do if its people have no purpose?