We Asked Four Texas A&M Professors About The Future Of AI. Here’s What They Said.

From engineering to the humanities, experts agree that artificial intelligence technology is here to stay. But how can we maximize its benefits while avoiding ethical pitfalls and unintended consequences?
By Texas A&M University Division of Marketing & Communications | March 2, 2023

Debates about the future of artificial intelligence technology have become more and more prevalent as tech companies debut AI-powered tools that can quickly perform a variety of increasingly complex tasks. (Image: Getty Images)

As the capabilities of artificial intelligence-powered tools like ChatGPT continue to generate headlines and controversy, many are envisioning what a world run by AI could, or should, look like.

While researchers, companies and everyday users continue to explore the applications of these tools, concerns about privacy, plagiarism and the future of work have become increasingly prevalent, fueling an ongoing discussion among developers, policymakers and scholars about how to handle this emerging technology responsibly.

With this in mind, Texas A&M Today asked four faculty experts from a diverse range of disciplines to explain some of the ways AI may change our lives in the years to come:

Martin B. Peterson is the Sue and Harry Bovay Professor of History and Ethics of Professional Engineering in the Department of Philosophy and Humanities.

In March 1811, a group of workers attacked and destroyed textile machinery in factories in the Nottingham area. They became known as Luddites, and we remember them for their opposition to a new technology they worried would make their skills irrelevant. It seems likely that many jobs currently performed by humans will soon be performed by AI. Trucks and taxis will not be driven by humans in the future, and AIs will probably take over some of the tasks professors spend their time on today. (Wouldn’t it be nice if an AI could grade essays? And hold office hours?) The textile workers soon found other jobs that machines could not do, and overall productivity increased.

We should think of AI as an opportunity on a par with the Industrial Revolution. If we play our cards right, society can benefit enormously, but there are risks that need to be carefully monitored. I do not claim to know precisely what those risks are. The Luddites failed to identify the real risks of the Industrial Revolution, such as environmental degradation and wars triggered by the need for oil and other natural resources. I suspect that tech gurus claiming to know exactly what the risks of AI are may make the same mistake: they focus on some aspects but miss others that turn out to be more important in the long run. The best approach might be to adopt a general attitude of caution and awareness. We should carefully monitor each and every AI tool, but we should also welcome technological progress and strive to use AI to improve living conditions for humans and other sentient beings.

Cason Schmit is an assistant professor in the Department of Health Policy and Management. He recently co-authored a paper in Science proposing a governance model for ethical AI development.

Artificial intelligence (AI) has enormous potential to harness the vast amounts of existing data to make new discoveries, accomplish tasks and make decisions, far outpacing human capacity to do so. If carefully crafted and deployed (a big “if”), AI can also help address many of the fallacies and implicit biases that plague human decision-making, ameliorating past harms and creating a more equitable world.

Unfortunately, many of the existing datasets used to train new AI models come with biases pre-baked, leading to AI models that exacerbate, rather than ameliorate, existing biases and related harms. I also worry that the benefits of AI will accrue to those with the most resources, widening the inequity gap within society. AI systems often lack transparency, so it can be difficult even for AI developers to fully understand the impact of their models. Consequently, AI applications require careful monitoring and continuous assessment to ensure that their benefits are not outweighed by potential harms.
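
One concrete form this "careful monitoring" can take is a routine disparity audit: instead of reporting a single aggregate accuracy number, developers compare a model's behavior across demographic groups. The Python sketch below illustrates the idea on entirely synthetic data; the group labels, error pattern and metrics are invented for illustration and are not drawn from Schmit's paper.

```python
import numpy as np

# Synthetic predictions and outcomes for a hypothetical binary classifier,
# with a made-up demographic label attached to each case.
rng = np.random.default_rng(0)
n = 1000
groups = rng.choice(["group_a", "group_b"], size=n, p=[0.7, 0.3])
labels = rng.integers(0, 2, size=n)  # ground-truth outcomes
preds = labels.copy()

# Simulate a model that makes errors more often on the smaller group.
flip = (groups == "group_b") & (rng.random(n) < 0.25)
preds[flip] = 1 - preds[flip]

# The audit itself: report accuracy and positive rate per group,
# not just one aggregate number.
for g in ("group_a", "group_b"):
    mask = groups == g
    accuracy = (preds[mask] == labels[mask]).mean()
    positive_rate = preds[mask].mean()
    print(f"{g}: n={mask.sum()}, accuracy={accuracy:.2f}, "
          f"positive rate={positive_rate:.2f}")
```

Even this toy breakdown surfaces a gap between groups that a single aggregate accuracy score would hide.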

AI will touch every aspect of our world and our lives. Fully understanding the impact of AI requires a transdisciplinary approach: experts in engineering, data science, sociology, economics, public health, ethics and many, many other disciplines will be needed to help guide ethical and beneficial AI.

I worry that existing governance frameworks for AI are simply inadequate to achieve any utopian outcomes without devastating harm occurring first. Traditional laws are often too blunt and too slow to regulate a rapidly evolving technology like AI, while existing industry standards and ethical guidelines lack sufficient enforcement tools to deter harmful AI uses. In Science, my co-authors and I have proposed a new way to govern AI that can rapidly adapt to changing AI technology while enabling effective enforcement. However, our approach will require substantial buy-in from the AI community.

Theodora Chaspari is an assistant professor in the Department of Computer Science and Engineering. Among other topics, her research focuses on data science and machine learning.

Artificial intelligence (AI) is now seamlessly interwoven with our lives. I am most excited about applications of AI in psychological well-being and mental health. Sensing and mobile devices, such as our smartphones and wearable sensors, can unobtrusively collect data about us around the clock. Our phone usage can provide an estimate of the quality and quantity of our social interactions and of our network of friends and family. The microphone of our smartphone can passively capture our speech prosody (qualities like intonation, rhythm and the stressing of certain syllables), which can be indicative of our mood and emotions. The heart-rate sensor on our smartwatch can likewise provide meaningful insights into our stress levels.

Combined with AI technologies, this data can power personalized applications that help us manage our mental health by suggesting tailored interventions at times of greatest need and outside traditional healthcare settings. Such technologies can also give therapists and mental health experts valuable insights into our psychological well-being, allowing them to prescribe more effective therapy strategies tailored to each and every one of us. In this way, AI can help render mental healthcare inexpensive and accessible to all.
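
As a rough illustration of the kind of pipeline described above, the Python sketch below trains a simple stress classifier on synthetic stand-ins for smartwatch and smartphone features. The feature names, the label-generation rule and the model choice are all assumptions made for this example; a real system would rely on validated clinical labels, proper signal processing and far more careful evaluation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical daily features of the kind described above: mean heart rate
# from a smartwatch, pitch variability from passively recorded speech, and
# a count of social interactions inferred from phone usage. All synthetic.
rng = np.random.default_rng(42)
n = 500
heart_rate = rng.normal(72, 8, n)
pitch_variability = rng.normal(30, 6, n)
interactions = rng.poisson(5, n)

# A made-up "high stress" label loosely tied to the features, purely so the
# classifier has something to learn in this toy example.
score = (0.08 * (heart_rate - 72)
         - 0.05 * (pitch_variability - 30)
         - 0.2 * (interactions - 5))
stressed = (score + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([heart_rate, pitch_variability, interactions])
X_train, X_test, y_train, y_test = train_test_split(X, stressed, random_state=0)

# Standardize the features, then fit a simple linear classifier.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

A linear model is used only to keep the sketch short; the broader point is that a handful of passively collected signals can already feed a personalized prediction loop.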

Despite this promise, the endeavor comes with high ethical and societal risks. As engineers and computer scientists working on sensitive human-centered challenges, it is our responsibility to make sure that the technologies we design abide by the values of our society, maximizing benefits and minimizing harms to citizens. For example, we need to ensure that AI technologies provide equitable decisions that are inclusive of all socio-demographic groups, including groups that have not traditionally been considered in research. It is also our responsibility to design AI technologies that minimize the leakage of personally identifiable information. Finally, we should seek input from stakeholders at every stage, from the design to the development and deployment of AI technologies. Engineers and computer scientists should be at the forefront of engaging in public dialogue and raising public awareness, while also supporting mental healthcare experts and policymakers, so that AI technologies can operate within an acceptable societal and legal framework.

Lu Tang is a professor in the Department of Communication and Journalism specializing in health communication.

AI is revolutionizing the way we access information and communicate with each other, and it has the potential to transform healthcare and medicine. It is extremely important to have a society-level conversation about the social and ethical implications of AI, involving all stakeholders. Using medical AI as an example, researchers are increasingly aware of the dangers associated with biases in the medical AI algorithms used for screening and diagnosis. The challenges range from a simple pulse oximeter that produces less accurate readings for patients with darker skin to diagnostic AI that underdiagnoses disease in certain demographic groups, such as minorities, women and patients on public insurance.

Ethicists assert that the development of AI, including medical AI, should be guided by a set of ethical principles, such as beneficence/non-maleficence (using AI to promote the well-being of users and avoid harming them); justice (the development and use of AI should foster equality and avoid bias); responsibility (who is accountable for the outcomes of AI technology, such as when a wrong diagnosis harms a patient); and transparency (AI should be explainable), among others. International and U.S. governmental and non-governmental organizations, such as the World Health Organization (WHO), have issued ethical guidelines for health and medical AI.

In the practice of developing and deploying medical AI, however, it is unclear how these ethical principles have actually been applied. It is extremely important to sustain an ongoing conversation among ethicists, legal experts, AI researchers, medical professionals, medical researchers and patients to ensure that ethical principles guide each step of the process. Finally, most of the ongoing debate about medical AI ethics takes place in high-income countries, but it is imperative to involve stakeholders from middle- and low-income countries, who are drawn into this global trend without being able to voice their concerns, challenges and perspectives.

Media contact: tamunews@tamu.edu
