
AI Evaluation Initiative Could Boost Commercial Adoption

Anthropic’s new funding program for advanced artificial intelligence (AI) evaluations could accelerate the adoption of AI across various commercial sectors, industry experts say. 

The AI company announced Tuesday it will fund third-party organizations to develop new methods for assessing AI capabilities and risks, addressing a critical gap in the rapidly evolving field.

The initiative seeks to create more robust benchmarks for complex AI applications, potentially unlocking billions of dollars in commercial value. For businesses looking to deploy AI solutions, the lack of comprehensive evaluation tools has been a barrier to widespread adoption.

“We’re seeking evaluations that help us measure the AI Safety Levels (ASLs) defined in our Responsible Scaling Policy,” Anthropic stated in its announcement. These levels determine safety and security requirements for models with specific capabilities.

Checking for Threats

Key focus areas include assessments of AI models’ potential cybersecurity capabilities, such as vulnerability discovery and exploit development. The company also seeks “evaluations that assess two critical capabilities: a) the potential for models to significantly enhance the abilities of non-experts or experts in creating CBRN [chemical, biological, radiological and nuclear] threats, and b) the capacity to design novel, more harmful CBRN threats.”

The impact of this funding program is expected to be particularly significant for complex AI applications. “Straightforward applications like speech recognition already have decent benchmarks, but quantifying a model’s capability in assisting a crime is much more difficult,” Julija Bainiaksina, founder of the AI company MiniMe, told PYMNTS.

Improved benchmarks could address critical challenges in AI adoption for businesses. “The main problems of adapting generative AI at the moment are cost, hallucinations and safety,” Ilia Badeev, head of data science at Trevolution Group, told PYMNTS. “While the first is relatively predictable and controllable, the latter two are a pain and a breaking point for many projects and integrations.”
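
To illustrate why better benchmarks matter for the hallucination problem Badeev describes, consider a minimal, hypothetical Python sketch that scores model answers against a source text and counts claims the source does not support. Everything here (the hallucination_rate function, the naive substring check, the sample data) is an illustrative assumption rather than anything from Anthropic's program; real evaluations would use far stronger entailment checks or expert grading.

    # Hypothetical sketch: measure how often model answers contain claims
    # not supported by a trusted source text. The substring check below is
    # deliberately naive; it only illustrates the shape of such a benchmark.
    def hallucination_rate(answers: list[str], source: str) -> float:
        """Fraction of answers with at least one sentence absent from the source."""
        unsupported = 0
        for answer in answers:
            sentences = [s.strip() for s in answer.split(".") if s.strip()]
            if any(s.lower() not in source.lower() for s in sentences):
                unsupported += 1
        return unsupported / len(answers) if answers else 0.0

    source = "Anthropic announced the program on Tuesday. Proposals are accepted on a rolling basis."
    answers = [
        "Anthropic announced the program on Tuesday",  # supported by the source
        "The program has a budget of $10 billion",     # unsupported: counted as a hallucination
    ]
    print(hallucination_rate(answers, source))  # prints 0.5

A score like this gives a project team a number to track, which is exactly the kind of quantification the new benchmarks aim to make rigorous.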

The initiative comes as major tech companies race to develop increasingly powerful AI models, raising concerns about potential misuse. Anthropic, founded by former OpenAI researchers, has positioned itself as a leader in “responsible” AI development.

“A robust, third-party evaluation ecosystem is essential for assessing AI capabilities and risks,” Anthropic emphasized. The company added that “developing high-quality, safety-relevant evaluations remains challenging, and the demand is outpacing the supply.”

What Makes a Good Evaluation?

Anthropic outlined several principles for good evaluations, including that they should be “sufficiently difficult” and “not in the training data.” The company stressed the importance of domain expertise: “If the evaluation is about expert performance on a particular subject matter (e.g., science), make sure to use subject matter experts to develop or review the evaluation.”
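
For a concrete sense of those principles, here is a minimal Python sketch of an evaluation harness in that spirit: items are expert-written, held out of training data, and graded against an expert reference. All names in it (EvalItem, grade, run_eval, ask_model) are hypothetical illustrations rather than Anthropic's actual tooling, and the exact-match grader is a deliberate simplification.

    # Hypothetical evaluation harness illustrating the stated principles:
    # expert-authored items, held out of training data, graded against
    # an expert-approved reference answer.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class EvalItem:
        prompt: str       # question written and reviewed by a subject matter expert
        reference: str    # expert-approved answer used for grading
        difficulty: str   # items should be "sufficiently difficult" for strong models

    def grade(response: str, reference: str) -> bool:
        """Toy grader: exact match. Real evaluations typically use
        expert rubrics or model-based grading instead."""
        return response.strip().lower() == reference.strip().lower()

    def run_eval(items: list[EvalItem], ask_model: Callable[[str], str]) -> float:
        """Return the fraction of held-out items the model answers correctly."""
        correct = sum(grade(ask_model(item.prompt), item.reference) for item in items)
        return correct / len(items)

    items = [EvalItem("What enzyme unwinds DNA during replication?", "helicase", "expert")]
    print(run_eval(items, ask_model=lambda p: "helicase"))  # prints 1.0 for this stub model

In practice, the hard part is not the harness but the items themselves, which is why Anthropic stresses expert authorship and review.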

The company is accepting proposals through an online application form on a rolling basis. Its internal experts will work closely with selected teams to refine evaluation methods, noting that “refining an evaluation typically requires several iterations.”

Anthropic’s initiative could have far-reaching implications for the commercial AI landscape. By creating more reliable and comprehensive evaluation methods, the program may give businesses the confidence to deploy AI solutions in critical areas such as healthcare, finance and customer service, potentially unlocking productivity gains and new revenue streams across industries.

However, the success of this program will largely depend on the quality and relevance of the evaluations developed. If the new benchmarks fail to capture real-world scenarios adequately or are too narrowly focused, they may provide a false sense of security.

The challenge lies in creating evaluations rigorous enough to ensure safety yet flexible enough to keep pace with rapidly evolving AI capabilities. As the initiative unfolds, monitoring how well the resulting evaluations translate to practical commercial applications will be crucial.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.