
OpenAI Committee to Lead AI Safety and Security Efforts

OpenAI forms an independent committee to oversee AI safety and security processes, with increased transparency and government collaboration.





Microsoft-backed OpenAI announced on Monday that its newly formed Safety and Security Committee will independently oversee the safety and security processes involved in developing and deploying the company's AI models. The move follows the first public release of the committee's recommendations.

OpenAI, the company behind the widely known ChatGPT, established the committee in May to assess and strengthen its existing safety protocols. The launch of ChatGPT in late 2022 ignited widespread interest in AI, generating both excitement and debate over ethical concerns and potential bias in AI systems.

In line with the committee's suggestions, OpenAI is exploring the creation of an "Information Sharing and Analysis Center" for the AI sector, aimed at sharing cybersecurity and threat intelligence within the industry.

The committee will be chaired by Zico Kolter, a professor and director of Carnegie Mellon University's machine learning department and a member of OpenAI's board. Additionally, OpenAI plans to increase transparency about the capabilities and risks of its AI models.

Last month, OpenAI signed a significant agreement with the U.S. government to support research, testing, and evaluation of its AI technologies.
