Q&A: Northeastern’s Timothy Bickmore on the clinical future of relational agents

September 17, 2012 in Medical Technology

BOSTON – Timothy Bickmore, associate professor at Northeastern University’s College of Computer and Information Science, has been working for the past decade in the area of “relational agents.” He says these artificially intelligent avatars are poised for a promising future in healthcare.

Bickmore describes these relational agents as “computational artifacts designed to build and maintain long-term, social-emotional relationships with their users.”

Sometimes called intelligent virtual assistants (IVAs), they’re not too dissimilar from the “chatbots” appearing more and more in a consumer-focused commercial capacity, deployed by companies such as E*Trade, AT&T and IKEA, and by health insurers such as Aetna.

[See also: At Aetna.com ask 'Ann' anything]

But relational agents, equipped with natural language processing capabilities, are more explicitly designed to maintain persistent contact with their human interlocutors, and are designed and developed to “remember” past interactions with people and build on them in an ongoing relationship.

Bickmore specializes in making these avatars as expressive as possible, in order to improve the emotional verisimilitude of the human/humanoid interaction, fine-tuning “speech, gaze, gesture, intonation and other nonverbal modalities to emulate the experience of human face-to-face conversation.”

In the coming years, Bickmore says relational agents and IVAs have an important role to play when it comes to improving health literacy, driving patient engagement, and maintaining compliance with wellness programs. He spoke to Healthcare IT News about his work with the technology, and the new ways he sees it being deployed.

What is your research background? How did you become interested in relational agents?

[See also: Virtual reality tech projected to grow in healthcare sector]

I did my Ph.D. at the MIT Media Lab some years ago. I was working in a research group that was simulating face-to-face conversation between people as a kind of user interface. We were studying hand gestures, facial displays of emotion, body posture shifts and head nods, and how these are used to convey information in face-to-face conversation.

My dissertation was applying this type of interface to health counseling – studying how doctors talk to patients, how nurses talk to patients and how we can take best practices from face-to-face counseling and build that into automated systems for educating patients and doing longitudinal health behavior change interventions.

For the last 10 years my lab has been doing this: We’ll pick a particular area of healthcare – say, patient education at point of discharge and medication adherence – and we’ll go with our video cameras and record providers talking to patients, come up with models of dialogue and the language they use, both for the therapeutic part of the dialogue and for how they build trust, rapport and therapeutic alliance over time. And then we also characterize their nonverbal behavior.

Our stance is that in order to have a high-fidelity simulation, if the computer is going to do hand gestures, we have to give it hands. And if it’s going to do facial displays of emotion, it has to have a face. They’re basically animated characters you have a conversation with.

How advanced is the technology?

We’ve been doing this for about 10 years, with animated characters in health counseling. And I don’t think anyone was doing it before we started. What’s happened in the last few years is we’re getting more sophisticated with the kinds of tools we use, and dialogue structures. We can build systems more quickly, bring our architectures to new sorts of health problems more quickly.

What are some of the ways you see it being deployed?

My initial interest in this was this notion of therapeutic alliance: the trust and rapport a patient had with a provider to achieve some therapeutic outcomes. It’s been shown in a number of different areas of healthcare to have a significant impact not only on patient satisfaction but on adherence to what the provider has asked them to do and, because of that, health outcomes. My initial interest was emulating behaviors providers do to build trust with their patients. A lot of those are nonverbal.


But since I’ve been doing this, other reasons have emerged for why it’s been particularly effective in healthcare. First of all, everyone knows how to do face-to-face conversation, so we do a lot of our interventions with safety-net hospitals and populations that don’t have high levels of computer and health literacy. We find that they not only find these conversations easy to use, but they’re actually happier with them: they give us higher levels of satisfaction than other patients.

How do you go about designing the agents’ facial features? I’ve seen some virtual avatars that try to be lifelike and expressive but frankly just look creepy. Is it important to avoid the “uncanny valley”?

Of course. First of all, we do lots of user testing with our patient population, with different character designs, to come up with characters that are effective for each demographic. But what we’ve done in our lab is we’ve stuck to relatively simple 2D-looking characters. We don’t try to push into something that’s very anthropomorphic or very human-looking, explicitly because we want to avoid the uncanny valley. We find that the simpler figures are just as effective. And they work well.

Also, I’m interested in implementing theories from communication. I only know how to do eyebrow-raising. I don’t know how to do blushing and stuff like that, so I don’t want to go into that level of detail. I only want to work on the visual cues we know how to control well. That lends itself to a simpler style of interface.

Is the technology being used very much in clinical settings at the moment?

Today, not too widespread. In the years I’ve been doing this we’ve had one product that was licensed to a company for deployment – this is our bedside virtual nurse that we roll out at point of discharge: It spends about half an hour with the patient explaining their discharge instructions to them. We licensed that to a company called EngineeredCare in San Francisco. They’ve had some traction selling this into hospitals. But beyond that I haven’t seen wide commercial adoption yet.

But you see it being used more and more in the near future.

Yes, I think so. What we’ve found is that, especially for patients with, say, low health literacy, there’s a particularly strong market. That’s about one third of U.S. adults. There’s a big market out there for which this is a particularly good communication channel.

Another thing characters are good for is engagement. You might have a longitudinal intervention where you want patients to keep coming back, or a counseling session where you want people to stay on their medications or stick with their appointments. Having a persona that somebody builds a relationship with is a way of achieving that.

I don’t think it’s a panacea. I don’t think everyone would like to use a character for all aspects of their healthcare. But certainly for certain patient populations and for certain types of problems, I think it’s a good solution.

Learn more about Timothy Bickmore and his work with relational agents here.


Article source: http://www.healthcareitnews.com/news/qa-northeasterns-timothy-bickmore-clinical-future-relational-agents
