The Frontier Model Forum, an industry body founded by Anthropic, Google, Microsoft, and OpenAI, has established a $10 million AI Safety Fund to promote responsible artificial intelligence research.
This development is the first major announcement from the four-member group since its establishment in July. The fund will support research on the social impact of large language models, including the responsible development and deployment of such models, as well as risk assessment and mitigation.
Business A.M. gathered that the fund will be administered by a third party, and grants will be awarded through a competitive process.
The Frontier Model Forum is an industry body with a focus on ensuring that advancements in AI technology and research are developed and used in a safe, secure, and human-controlled manner.
According to the Frontier Model Forum, the $10 million AI Safety Fund was established in response to the increasing pace of progress in the AI field. The group explained that while significant advancements have been made, more independent research into AI safety is needed.
“The initial funding commitment for the AI Safety Fund comes from Anthropic, Google, Microsoft, and OpenAI, and the generosity of our philanthropic partners, the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt, and Jaan Tallinn. Together this amounts to over $10 million in initial funding,” the group stated.
The statement from the Frontier Model Forum further noted that the primary focus of the AI Safety Fund will be supporting the development of new methods for evaluating and testing AI models, particularly in areas where they may pose a risk. The goal, it stated, is to improve safety and security standards in the industry, as well as to provide insights that can help governments and civil society understand and address the challenges presented by AI systems.
The Fund will also support the development of best practices for AI safety and security, as well as research into ethical and social issues related to AI.
The Frontier Model Forum explained that in the coming months it will put out a call for proposals, to be administered by Meridian Institute, a non-profit organization that specializes in complex, multi-stakeholder problem-solving. The Fund's advisory committee will be composed of independent AI experts, experts from AI companies, and individuals with experience in grantmaking.
It is hoped that the fund will promote collaboration and information-sharing between the AI industry and the research community, and support the development of new AI safety and security technologies.
Earlier this year, the companies that make up the Frontier Model Forum signed on to a set of voluntary AI commitments at the White House, which included a pledge to encourage third-party discovery and reporting of vulnerabilities in their AI systems.
The AI Safety Fund is intended to support this commitment by providing external researchers with the resources they need to better understand and evaluate frontier systems. The Fund will also help to promote open research and collaboration between the industry and the research community.