
Making AI Safer: Texas A&M Joins National Consortium

Members include Amazon, Apple, Adobe, Intel, Google, Meta, Microsoft and OpenAI (creator of ChatGPT), plus Johns Hopkins, MIT and Stanford.
By Texas A&M University Division of Research February 28, 2024

An illustration of a human hand reaching out to a robotic hand.
Texas A&M University is joining more than 200 organizations as an initial member of the new Artificial Intelligence Safety Institute Consortium. Members of the group will work to improve the safety and reliability of artificial intelligence.

Getty Images


Texas A&M University will join more than 200 major corporations, academic institutions, nonprofit groups and federal agencies in a national effort to improve the safety and reliability of artificial intelligence (AI), the Division of Research announced today.

The U.S. Department of Commerce, through its National Institute of Standards and Technology (NIST), selected Texas A&M as an initial member of the new Artificial Intelligence Safety Institute Consortium (AISIC). Members include tech giants Amazon, Apple, Adobe, Intel, Google, Meta and Microsoft, as well as OpenAI (creator of ChatGPT); research universities such as The Johns Hopkins University, the Massachusetts Institute of Technology and Stanford University; and the nonprofit Linux Foundation.

AISIC chooses its members based on their ability to deliver the research and development required to harness the potential of AI, mitigate its most serious risks, protect the public and our planet, reduce market uncertainties and encourage innovation.

“In terms of tools and applications, artificial intelligence is expanding at an astonishing rate,” said Dr. Jack G. Baldauf, vice president for research at Texas A&M. “AI is likely to change every aspect of our society. The benefits are promising, but the risks are daunting. Everything about AI calls for careful study and thorough research. We anticipate making significant contributions to this important body of work.”

Consortium members will develop policies, standards and best practices for the use of AI technologies in five areas: risk management for generative AI; the use and detection of synthetic content; benchmarks and testbeds for evaluating potentially harmful AI capabilities; guidelines for adversarial evaluation; and stress testing AI models that pose potential security risks.

Dr. Nick Duffield, director of the Texas A&M Institute of Data Science (TAMIDS) and holder of the Royce E. Wisenbaker Professorship I in the Department of Electrical and Computer Engineering, will lead the Texas A&M team, which includes researchers from the College of Engineering, the College of Arts and Sciences, the School of Public Health, the School of Architecture, TAMIDS, the Global Cyber Research Institute, High-Performance Research Computing and the Center for Applied Technology. 

“Our researchers look forward to contributing to the development of best practices to support responsible adoption of AI and underpin confidence in innovative products and services enabled by exciting advances in AI,” Duffield said.

Media contact: Dr. Nick Duffield, duffieldng@tamu.edu
