Excerpted from the Fall 2023 UC San Diego Magazine story, "The Future of AI is Now."
For many people, robots might be the first thing that comes to mind when they hear the term “artificial intelligence.” But the two fields, though often conflated, are distinct disciplines. Currently, most robots that interact with humans outside of research spaces — such as the robot vacuums and drones that have become common household items — are built to perform highly specific duties and largely cannot move beyond a single, specialized function.
With public attention now fixated on ChatGPT and other recent advances in generative AI, some may wonder what this means for the future of robotics. Will we soon be surrounded by artificially intelligent robots that are capable of thinking — and acting — like humans?
At UC San Diego, Laurel Riek, director of the Healthcare Robotics Lab and professor of computer science and engineering with a joint appointment in the Department of Emergency Medicine, has worked at the intersection of AI and robotics for decades. Her areas of research include building robots for health care applications, studying human-robot interaction and exploring the ethical and social implications of technology.
When it comes to developing new AI-enabled technologies, Riek believes that engineers and developers have a responsibility to think through the social issues and potential pitfalls that could arise once those technologies are deployed for public use.
“As researchers, we have ethical principles that guide us when we do these types of technology deployments,” says Riek, who describes the future of AI as nuanced. “We can build anything, but that doesn’t mean we should,” she adds.
When Riek and her students in the Healthcare Robotics Lab develop and build new technologies designed to assist patients and clinicians, she says they remain mindful of the community’s needs, the type of data they’re collecting, how the robots will interact with humans and how to ensure the protection of individual privacy.
With this very deliberate and mindful approach, Riek and her team have leveraged the capabilities of AI to build and program a Cognitively Assistive Robot for Motivation and Neurorehabilitation (CARMEN), a social robot that’s designed to teach cognitive strategies related to memory, attention, organization, problem solving and planning to help people with dementia or mild cognitive impairment. It can learn about the person and personalize its interactions based on the individual’s abilities and goals. Prototypes of CARMEN are currently being used to provide cognitive interventions for individuals affiliated with the George G. Glenner Alzheimer’s Family Centers in San Diego.
Artificially intelligent robots like CARMEN have the potential to improve access and increase independence for individuals with disabilities. Yet Riek says it is important that they be deployed in an ethical manner, mindful of their effects on individuals and communities.
“It’s been exciting to start to think through these questions in a grounded and real-world problem domain,” says Riek. “AI ethics research can sometimes be broad and far-future, but this is a real, true problem that we’re solving.”