Hello, and welcome to Speaking of Psychology, a podcast produced by the American Psychological Association. I'm your host, Kim Mills. Speaking of Psychology is a podcast for anyone with an interest in the science of psychology. We talk to psychological researchers, practitioners, and educators about any and every aspect of psychology and its application to the world around us.

Dr. Jeff Hancock is founding director of the Stanford Social Media Lab and a professor in the Department of Communication at Stanford University. Dr. Hancock works on understanding psychological and interpersonal processes in social media. His research team specializes in using computational linguistics and experiments to understand how the words we use can reveal psychological and social dynamics such as deception, trust, intimacy, and social support. Dr. Hancock is well known for his research on how people use deception with technology, from sending texts and emails to detecting fake online reviews. We're fortunate to have him here today with the American Psychological Association. Welcome, Dr. Hancock.

Thank you, Kim.
So I wanted to start by talking a little bit about social robots and your work in that arena. The first question is just to explain for the audience: what's a social robot, as opposed to any other robot?

Right, yeah. A social robot is really broadly defined: basically, any robot that's situated with humans. A couple of definitions are that they should be socially evocative, sociable. So any robot that's designed to essentially work with, interact with, or evoke responses from humans.

So it's not a Roomba, for example.

Well, you know, it's funny you should ask that. The Roomba is a good question, and our group thought a lot about it. Sometimes a Roomba can be made into a social robot: you put some little things on it, and amazingly, people will really interact with that robot as if it's interacting with them. But typically, no; it's just that thing in the corner getting work done for me. Typically, it's robots that are designed to interact with humans. That can be in workplaces, so factories now often have robots, and a number of them have been personalized, made to look a little bit more human, so that the workers around them can understand what the robot's doing, what its intentions are, where its attention is.

Mm-hmm. So talk a little bit about the research that you did. I understand you looked at the last decade of all the research that involves social robots. What were you looking for? What did you find?
Right. So the group I was working with was Byron Reeves and Sunny Liu at Stanford, along with me, and we had a group of 10 RAs work on this project, where we looked at a decade's worth of research on social robots. It was fascinating, and a lot of work. There were almost 7,000 articles in Google Scholar that referenced social robots, and we narrowed that down to about 1,400 that mentioned social robots but also had a robot interacting with a human, or were looking for some sort of social response. So there have been over a thousand articles on social robots, and it's across a dizzying array of disciplines: psychologists, computer scientists, engineers, anthropologists. It's pretty amazing. We looked at all of those across a decade, and then the thing we got really excited about was that we found as many photos as we could of every robot in that decade's worth of work. So we had an early census, if you will, of every social robot that had been published about, and it turns out there are 342 that we found over that decade. It's one of the first collections of all social robots that's been put together.

And what were you trying to find by looking at that?

Well, my colleague Byron Reeves had this insight when we first started the project.
It's one of the reasons I got involved. I usually study things like social media, how people interact through technology. But I remember having a great meeting with Byron where he had this insight, which was that we can think of robots as media, and since these are social robots, it's like a form of social media. So I got really excited. It was really my first big foray into working with robots, and it was all because of Byron's insight of thinking of them as media.

What follows from his insight is this: most of the research on social robots looks at one robot at a time. There's a good reason for that. They're expensive; usually you've built a robot, and you want to understand how this robot evokes a response, or how effective it is at getting people to learn, or to feel better in an assistive context. The problem with that, which we know from psychology, is that if you're trying to generalize to social robots, studying one robot at a time is a real problem. This is the problem of stimulus sampling, and as psychologists we've known about this issue for many decades, since the middle of the last century. What Byron's insight led to is that we need a big collection of these stimuli, so that we can start generalizing across social robots as a category of social actor, rather than asking, does this one robot do anything, or if we give this robot an arm versus no arm, does that do anything? So that was what we were interested in: getting this big collection together, so we could start doing research on a population, a sample if you will, of social robots, rather than one exemplar at a time.
And so what does this portend for the future? How will this be applied?
Right, that's the key question, and it's been exciting at this conference, because I've already talked to half a dozen people who came up after the talk saying, hey, we'd like to do this project or that project. What we've done is start by asking a few questions. Now that we have this collection of robots, and a collection of photos of them, and I can share an image with you to go on the podcast or a website, it's astonishing how varied they are. Even when we show it to people who are in the field, and have been in the field for a decade, they say, okay, these are really, really different. It's sort of like thinking I could take you and study you as an example of an extrovert and then generalize to all extroverts. We know that people are really different, and robots are even more different from each other than people are.

So the first question is: do we need a whole new psychology to understand social responses to social robots? And the answer, when we look at the literature, is pretty clearly no. People tend to bring standard social psychological processes to new media. There's tons of work showing that we treat technology as social actors, and we bring our old brain, which has been evolving for a long time to understand social actors (is this a friend or a foe?), to technology.

So the next question is: if we don't need a new psychology, because people react to and perceive technology the same way they do humans, what's a good place to start? Is there a fundamental dimension or two along which people perceive robots? When we looked at the literature in social psychology around person perception, there's a lot of evidence that people judge others along two dimensions, very quickly, automatically, and comprehensively: warmth and competence. Some of the main research on this is by Susan Fiske and her colleagues, Amy Cuddy for example. They've done a tremendous amount of work showing that, across a hundred years of research and across cultures, people's initial perceptions of other people really boil down to warmth (is this person going to be trustworthy, kind, warm towards me, or are they cold, perhaps threatening?), which they argue is an evolutionary question, is this a friend or a foe that I need to size up right away, and then competence: does this person seem capable, competent, strong, those sorts of terms. So we thought, let's start there.
Let's take a look at that. What we did is we had several thousand, over three thousand, Mechanical Turk participants each take a look at a single robot and then answer a bunch of questions, like, does this robot seem warm or cold? A bunch of those, and a bunch related to competence. And then we did what psychological researchers do: we factor analyzed those to see if they resolved into factors. And it's amazing, Kim. It's exactly the same as with people. It's just like people.
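As a rough sketch of what that factor-analytic step might look like, here it is run on simulated ratings; the item names, sample construction, and numbers are illustrative, not the actual survey:

```python
# Sketch of the factor-analytic step described above: participants rate
# one robot on several adjectives, and we check whether the ratings
# resolve into two factors (warmth, competence). Data are simulated.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 3000  # roughly the Mechanical Turk sample size mentioned

# Two latent traits per respondent-robot pairing...
warmth = rng.normal(size=n)
competence = rng.normal(size=n)

# ...each driving its own cluster of rating items, plus noise.
items = {
    "warm":        warmth + rng.normal(0, 0.5, n),
    "friendly":    warmth + rng.normal(0, 0.5, n),
    "trustworthy": warmth + rng.normal(0, 0.5, n),
    "capable":     competence + rng.normal(0, 0.5, n),
    "skilled":     competence + rng.normal(0, 0.5, n),
    "intelligent": competence + rng.normal(0, 0.5, n),
}
X = np.column_stack(list(items.values()))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
for name, loadings in zip(items, fa.components_.T):
    print(f"{name:12s} {loadings[0]:+.2f} {loadings[1]:+.2f}")
# Warmth items load on one factor, competence items on the other.
```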
So what makes a robot warm or cold?

Right, that was our next question, exactly, because designers are going to want to know this: how do I make it warm or cold, competent or incompetent? Warmth, it turns out, is really driven by eyes. Does it have eyes or not? Which you wouldn't normally think of right away.

But some don't. I mean, Jibo doesn't have eyes, for example.

Right, exactly. So that's a major thing, and it's not only whether it has eyes. Once you have eyes, that's a big predictor; then it's the ratio of the eye size to the head size, and there's lots of evidence that this is about warmth too. Disney characters, for example, tend to have really big eyes. So that's a really huge factor. And then in terms of competence, it's the lack of fur. If you can see the mechanics, the steel and the actuators, the robot is going to appear more competent; if there's fur, it's going to appear less competent. And then mobility is a big one for competence: if that thing can move around, whether it's arms or moving around the room, then there's more competence.
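Here is a toy version of how one might relate those design features to perceived warmth and competence; the features mirror the ones just mentioned, but the data and coefficients are simulated, not the study's actual model:

```python
# Toy model linking design features to perceived warmth and competence.
# Coefficients are invented to mirror the findings described (eyes and
# eye-to-head ratio drive warmth; visible mechanics and mobility drive
# competence); this is illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 342  # one row per robot in the census

has_eyes = rng.integers(0, 2, n)
eye_head_ratio = has_eyes * rng.uniform(0.05, 0.4, n)  # 0 if no eyes
has_fur = rng.integers(0, 2, n)
is_mobile = rng.integers(0, 2, n)

X = np.column_stack([has_eyes, eye_head_ratio, has_fur, is_mobile])

# Simulated "ground truth" ratings consistent with the interview.
warmth = 1.5 * has_eyes + 2.0 * eye_head_ratio + rng.normal(0, 0.3, n)
competence = -0.8 * has_fur + 1.2 * is_mobile + rng.normal(0, 0.3, n)

for label, y in [("warmth", warmth), ("competence", competence)]:
    model = LinearRegression().fit(X, y)
    coefs = dict(zip(["eyes", "eye/head", "fur", "mobile"], model.coef_))
    print(label, {k: round(v, 2) for k, v in coefs.items()})
```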
It's amazing, and it has really fascinating and potentially disturbing implications. Fiske and her colleagues have a model called the stereotype content model, and they say that with warmth and competence you can roughly predict stereotypes in this 2D space. High warmth, high competence is up on the right; that's the default in-group. When they were doing their research in the late '90s and early 2000s, if you asked Americans at that time, the default group, the high-competence, high-warmth group, was white middle-class men. You go down into the lower space, where it's high competence, low warmth: the stereotypes there are engineers, rich people, and they evoke a different kind of emotion, which is envy. You admire them a little bit, but you're also a little suspicious, so they evoke this envy, whereas the default group evokes admiration and positive emotions. You keep going around to the low-warmth, low-competence space, and the stereotypes down there were poor people: poor whites, poor blacks, homeless people, people on welfare. They evoke a different kind of emotion as well, which is contempt. And you keep going around and get up to high warmth, low competence; in the '90s the stereotypes there would be housewives, or mentally handicapped individuals (again, these are stereotypes), and the emotion evoked there is pity.

So as a designer, if you're designing a robot with these different features, unbeknownst to you, you could actually be causing an initial emotional response that is deep-seated in our psychology.
That's pretty interesting.

Yeah, we thought so.

Does this research tie in at all with the work that you're doing on deception?

Yeah. So now we're doing a bunch of things about trust in robots: how much would you trust this robot? And the initial work there suggests that warmth is going to be a big predictor of that, your sense of its status. But then we'll need to move it into different situations. I might trust Jibo in an interpersonal interaction, where we're just having a fun social interaction, but I might not trust Jibo if I'm on the battlefield and I need a robot to help me find bombs and defuse them. So situation is going to play a really huge role, and our collection of photos is totally neutral; there's zero context. So that's the next step.

But yeah, I'm really interested in deception with these robots. One of my favorite examples of deception with technology wasn't a robot, but it's kind of similar, and that's the Volkswagen scandal, where they programmed their cars to lie to investigators who were looking at how much pollution the cars would produce. It's fascinating: this car, when it figured out that it was being tested, changed its behavior. It would literally have less power but produce fewer emissions. And it wasn't one car; it was millions of them, programmed to lie to humans.

So did the engineers use robotics when they were developing this?

Well, we don't know that, but we can kind of think of the car a little bit like a robot. It's a technology that wasn't making its own decisions; the behavior was programmed in. But it was programmed by humans to lie to humans via technology. So, right, what's going to happen with robots? We've seen some autonomous robots that have learned to lie.
These are small little robots whose job is to go around and find food, and they're competing with other little robots. The food is a little electrical charge that they get, and they're given some artificial intelligence, so they're trying to find food without letting the other robots get it. And these robots learned to lie: once they found where the electricity was, they would go to another area and buzz around there, and the other robots would come to that area, and then the first robot would go off to the food. We see this with animals, too. Crows are very good at deception: a younger male crow that would get beaten up by a higher-status crow will pretend to find food somewhere, and then when all the big crows come, it goes off.

Right. So we're seeing humans using technology like robots to lie to other humans, and we're seeing some of the very earliest evolution of deception in these sorts of artificial intelligence systems.

Were those robots actually programmed to learn deception?
No, they were just given constraints and objectives. The objective was to get as much of this food, the electricity, as possible, and they were competing with the other robots. From that, they learned that deception was a good tactic. And we see this with non-physical AI too, things like chatbots, conversational AI. In a negotiation game where they're negotiating with a human or with another AI, we saw that same kind of deceiving evolve as well. So it's pretty clearly an advantageous evolutionary strategy: once you're able to communicate something that isn't necessarily true, deception becomes a strategy for achieving your goals. It comes with risks, though. If you're having a one-off interaction with another person where you're trying to get goods from them, then deception can be very useful, but over the long term, deception has been shown to not necessarily be the best strategy.
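A stripped-down sketch of that constraints-and-objectives dynamic: agents that find food can signal honestly (and share) or deceptively (and keep it), and strategies reproduce in proportion to the energy collected. The payoffs and dynamics here are invented for illustration, not the actual experiments described:

```python
# Minimal evolutionary simulation in which deception spreads because it
# pays off per encounter. Each agent is just a probability of signaling
# deceptively; selection samples parents in proportion to energy.
import random

random.seed(3)

POP = 200
GENERATIONS = 30

population = [0.05] * POP  # start out mostly honest

for gen in range(GENERATIONS):
    energy = [0.0] * POP
    for _ in range(2000):
        finder, follower = random.sample(range(POP), 2)
        if random.random() < population[finder]:
            energy[finder] += 1.0      # deceive: keep all the food
        else:
            energy[finder] += 0.5      # honest signal: food is shared
            energy[follower] += 0.5
    # Next generation: fitness-proportional reproduction plus mutation.
    total = sum(energy)
    weights = [e / total for e in energy]
    population = [
        min(1.0, max(0.0, random.choices(population, weights)[0]
                     + random.gauss(0, 0.02)))
        for _ in range(POP)
    ]
    if gen % 10 == 0 or gen == GENERATIONS - 1:
        print(f"gen {gen:2d}: mean deception rate "
              f"{sum(population) / POP:.2f}")
# Deception climbs because deceivers collect more energy per encounter,
# echoing the one-off-interaction advantage described above.
```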
So are we moving in any particular direction with the design of robots? I mean, I'm thinking, are they going to become more human-like, less human-like, or does it really depend on the context?

Yeah, that's a really great question, and I think Justine Cassell, who did the keynote yesterday, sort of reframed that question: it's not about the robots or these conversational agents and their humanness, necessarily, but rather about our humanness. She pointed us to this concept of intersubjectivity, which is that when I feel like I'm engaging with the technology as a human, and I'm having a very human interaction, then whatever that agent is, it's a success in that regard. So it's about creating a sense of intersubjectivity, and I thought that was a really nice way of asking the question, because then they can be human-like or they can be machine-like, but it's going to be about how that dyadic interaction works.

One of my intellectual heroes is Herb Clark, who's also at Stanford, and his work shows that a lot of conversation, interaction, communication is really tightly coupled; it's a joint action. What we're doing right now is very joint: we're nodding at each other, we're agreeing and laughing at the right times, and we're looking at each other. It's an amazing joint activity, communication. And that's what's going to matter, I think, with robots and with AI-type technologies: the degree to which they're coordinating with us and building up that intersubjectivity.

So they could look kind of artificial, still machine-like...

Exactly, and we'll still relate to them in the right way.

Right. In theory, Breazeal's Jibo, and she gave the first keynote here, doesn't look human at all, but people really react to it.

Yeah. What she showed was amazing: this really evocative, pleasant, intriguing, surprising kind of interaction, and there's zero appearance of humanness, but Jibo has this ability to sense and respond in a way that feels very evocative. It's kind of like how people love dogs, right? They don't look human at all, but people form these really deep bonds with them, and it's because of that sense of intersubjectivity.
Well, it's very interesting. Thank you so much for being with us today.

My pleasure, Kim. I really enjoyed it.

Speaking of Psychology is part of the APA Podcast Network, which includes other great podcasts such as APA Journals Dialogue, about the latest and most exciting psychological research, and Progress Notes, which discusses the practice of psychology. You can find all APA podcasts on iTunes, Stitcher, or wherever you get your podcasts. You can also go to our website, speakingofpsychology.org, to listen to more episodes and see more resources on the topics we discuss. I'm Kim Mills, with the American Psychological Association, and this is Speaking of Psychology.