fbpx

Intel Joins the MLCommons AI Safety Working Group

Intel joins group of leading AI industry and academia experts to establish standard AI safety benchmarks that will guide responsible AI development.

What’s New: Today, Intel announced it is joining the new MLCommons AI Safety (AIS) working group alongside artificial intelligence experts from industry and academia. As a founding member, Intel will contribute its expertise and knowledge to help create a flexible platform for benchmarks that measure the safety and risk factors of AI tools and models. As testing matures, the standard AI safety benchmarks developed by the working group will become a vital element of our society’s approach to AI deployment and safety.

“Intel is committed to advancing AI responsibly and making it accessible to everyone. We approach safety concerns holistically and develop innovations across hardware and software to enable the ecosystem to build trustworthy AI. Due to the ubiquity and pervasiveness of large language models, it is crucial to work across the ecosystem to address safety concerns in the development and deployment of AI. To this end, we’re pleased to join the industry in defining the new processes, methods and benchmarks to improve AI everywhere.”
–Deepak Patil, Intel corporate vice president and general manager, Data Center AI Solutions

Why It Matters: Responsible training and deployment of large language models (LLMs) and tools are of utmost importance in helping to mitigate the societal risks posed by these powerful technologies. Intel has long recognized the importance of the ethical and human rights implications associated with the development of technology, especially AI.

This working group will provide a safety rating system to evaluate the risk presented from new, fast-evolving AI technologies. Intel’s participation in the AIS working group is the latest commitment in the company’s efforts to responsibly advance AI technologies.

About the AI Safety Working Group: The AI Safety working group is organized by MLCommons with participation from a multidisciplinary group of AI experts. The group will develop a platform and pool of tests from contributors to support AI safety benchmarks for diverse use cases.

About Intel’s Participation: Intel plans to share AI safety findings and best practices and processes for responsible development such as red-teaming and safety tests. The full list of participating members can be found on the MLCommons website.

What’s Next: The initial focus of the working group will be to develop safety benchmarks for LLMs, building on the groundwork of researchers at Stanford University’s Center for Research on Foundation Models and its Holistic Evaluation of Language Models (HELM). Intel will share its rigorous, multidisciplinary review processes used internally to develop AI models and tools with the AIS working group to help establish a common set of best practices and benchmarks to evaluate the safe development and deployment of a generative AI tool leveraging LLMs.

More Context: MLCommons Announces the Formation of AI Safety Working Group

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

1646 N. California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone
Phone: +1 (925) 954-1411
Scroll to Top