In the face of regulatory concerns in the artificial intelligence (AI) industry, OpenAI has launched a program that will offer $100,000 grants to support research into creating a democratic mechanism for determining the rules AI systems should adhere to.

Creating a Democratic AI Framework

The massive advancements in the field of artificial intelligence, and more specifically generative AI, have caused a stir worldwide due to concerns over the lack of regulation and minimal government oversight.

In an attempt to get stakeholders to consider a regulatory framework for the technology, Elon Musk and many other high-profile figures in the tech industry signed an open letter requesting that a hold be placed on any experiments more advanced than OpenAI’s GPT-4 until rules are put in place.

OpenAI, the creator of ChatGPT, has shown support for the need to control the technology in various ways including launching this program. “Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law,” the company announced.

The company is therefore seeking teams, individuals, or organizations that can develop proofs of concept for a democratic process capable of answering questions about what rules should govern AI systems.

“The governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. This grant represents a step to establish democratic processes for overseeing AGI and, ultimately, superintelligence,” the company added.

With the program, OpenAI aspires to create a process that resembles the ideal of democracy, that is, “a broadly representative group of people exchange opinions, engage in deliberative discussions, and ultimately decide on an outcome via a transparent decision making process.”

OpenAI additionally clarified that the resulting experiments would not serve as final decision makers but would instead inform the development of democratic tools and future decisions about AI governance.

The $100,000 grants will be awarded to teams that present compelling frameworks for answering such questions as whether AI systems should condemn or criticize public figures, how AI should represent disputed views in its outputs, or whether the AI should reflect the ‘median individual’ from the user’s country or from the world.

Upon being awarded the grant, the winning teams are expected to present prototypes developed by engaging at least 500 participants, in keeping with the democratic nature of the process. Additionally, any source code or intellectual property from the prototypes is required to be made publicly available as open source.

Is OpenAI’s Quest for Regulated AI Genuine?

Sam Altman, OpenAI CEO, and Worldcoin co-founder | Photo courtesy of Startup Talky

Critics of Sam Altman and OpenAI claim that their calls for regulation have simply been profit-motivated. There are a few reasons why being involved in creating the regulation for one’s own industry could be massively beneficial.

The first is that complying with regulation typically imposes additional costs and burdens on companies. OpenAI is already the top generative AI company in the world, and new regulation may only entrench its position further by raising barriers that kill off would-be competitors.

Whether or not Altman is genuine, it will be hard to trust any regulatory framework born of this ‘democratized’ project, since OpenAI, which has a motive to push for regulation favorable to itself, is literally paying for it.

This program comes just three days after OpenAI CEO Sam Altman, along with the company’s President Greg Brockman and Chief Scientist Ilya Sutskever, publicly called for an international regulatory body for AI similar to the one regulating nuclear power.

The trio stated that due to the rapid rate at which artificial intelligence is developing, the current regulatory framework cannot effectively control the technology.

“We are likely to eventually need something like an [International Atomic Energy Agency] for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc,” they said.

However, it is unclear exactly what Altman’s standards for regulation are, since he said the AI Act recently proposed by the European Union was ‘overregulating’ and argued that it should be pulled back. He also warned that if OpenAI could not meet the EU regulations, it would cease all operations in the region.
