So, I'll thank Jaka for that lovely introduction, and thank you for sharing your lunch hour with me. I'm going to jump right into the talk, and if you have procedural questions or questions about methods, I'm happy to take those as we go; otherwise we'll just wait till the end. So lots of
tech companies are trying to figure out
how to detect emotion by reading facial
expressions. It's a really exciting time because the technology is advancing really fast; in fact, the pace seems, to me anyway, to be speeding up, and there's a growing economy of emotion-reading gadgets and apps and algorithms.
But the question I want to start today
with is: can we really expect a machine
to read emotion in a face? There are
plenty of companies who are claiming to
have already done it, and their claims
are based on some fundamental
assumptions that we're going to
systematically examine today. And I'll just warn you: I may suggest some things that you find a little provocative, things that challenge your deeply-held beliefs. Because the message today is not that machines can't perceive emotion, but that companies currently seem to be going about the question in the wrong way, because they fundamentally misunderstand the nature of emotion. As a consequence, they're missing what I think of as a game-changing opportunity to really transform the science and its application to everyday problems. So
emotion-reading technology usually
starts with the assumption that people
are supposed to smile when they're happy,
frown when they're sad, scowl when
they're angry, and so on, and that
everyone around the world should be able
to recognize smiles and frowns and
scowls as expressions of emotion. And
it's this assumption that leads
companies to claim that detecting a
smile with computer vision algorithms is
equivalent to detecting an emotion like
joy. But I want you to consider this
evidence. Here on the x-axis are the
presumed expressions for the various
emotions for anger, disgust, fear,
happiness, sadness, and surprise. And now
we're going to look at some evidence
from meta-analyses, statistical summaries
of experiments, to answer the question of
how often people actually make these
faces during emotion. And the answer is:
not so much. The y-axis represents the
proportion of times that people actually
make these facial expressions during
actual emotional events. So in real life,
for example, people only make a wide-eyed
gasping face during an episode of fear nine
percent of the time
across 16 different studies. And in fact,
that face, if you were in Papua New
Guinea in the Trobriand Islands, would be
considered an anger face. It's a threat
face. It's a face that you make to
threaten someone. So in real life, people
are moving their faces in a variety of
ways to express a given emotion. They
might scowl in anger about 30 percent of
the time, but they might cry in anger.
They might have a stone-faced stare in
anger. They might even smile in anger. And
conversely, people often make these faces
when they're not emotional at all. For
example, people often make a full-on facial scowl when they're just concentrating really hard.
Nevertheless, there are hundreds of
studies where
subjects are shown posed expressions
like these supposed faces and then
they're asked to identify the emotion
being portrayed. And again, the
proportions are on the y-axis and so you
can see there's quite a difference, right,
even though people only make a wide-eyed
gasping face about 9% of the time,
68% of the time, test subjects identify
that as a fear expression, and so on, and
so forth.
So, which data are the companies using as the basis of their development? They're using the blue bars. So when software detects someone is scowling, they infer that the person is angry, and in fact, you'll hear companies refer to a scowl as an "anger expression," as if there's a one-to-one correspondence, and to frowning as a "sadness expression," and so on. So the question is: if people only sometimes make these faces to express the presumed emotion, and often don't, why are test subjects, as perceivers, identifying emotions in these faces so frequently? Why are the blue bars so much higher than the white bars? And now,
I'm going to show you the answer. Here's
the kind of experiment that is almost
always used in the sorts of studies that
generate the data for those blue bars.
Test subjects are shown a posed face
like this, and then they're shown a
small set of words, and then they're
asked to pick the word that matches the
face. So, which word matches this face?
Good job. When test subjects choose the
expected word from the list, it's called
"accuracy" even though this person is not
angry.
In fact, she's just posing a face. And in most of these experiments, the people in the photographs are just posing faces. So it's not really accuracy; it's more like how much you agreed with the experimenter's expectations. But it's called "accuracy," so that's what we're going to call it today too.
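To put these "accuracy" numbers in perspective, here is a toy simulation (an invented guessing model, not data from any study): with six offered words, a test subject who guesses completely at random still matches the expected word about 1/6 of the time, which is why roughly 17% counts as chance in these experiments.

```python
import random

def forced_choice_trial(labels, rng):
    """One trial: an uninformed subject picks uniformly among the offered words."""
    expected = rng.choice(labels)   # the word the experimenter expects
    response = rng.choice(labels)   # a pure guess, no real signal from the face
    return response == expected

def chance_accuracy(n_trials=100_000, n_labels=6, seed=0):
    """Estimate 'accuracy' for a guesser given a fixed list of n_labels words."""
    rng = random.Random(seed)
    labels = list(range(n_labels))
    hits = sum(forced_choice_trial(labels, rng) for _ in range(n_trials))
    return hits / n_trials

print(chance_accuracy())  # close to 1/6, i.e. about 0.17
```

The point of the sketch: any "accuracy" figure from a forced-choice design has to be read against this baseline, not against zero.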
So, hundreds of studies show
pretty high accuracy using this method.
This is on average, so this is a
meta-analytic average across hundreds of
studies. And emotion perception, you know,
feels as easy as reading words on a page,
because in fact, that's actually what's
happening in these experiments. And when
you remove the words, and you merely ask
test subjects to freely label the faces,
accuracy drops precipitously. And for
some emotions like contempt and shame
and embarrassment, the rates actually
drop to chance levels, which is about
17% in most of these studies. And here's
what happens when we add a little diversity into the picture; things get a little more interesting. We tested a
group of hunter-gatherers in Tanzania
called the Hadza. The Hadza have been
hunting and gathering continuously as a
culture since the Pleistocene. They don't live in exactly the same circumstances as ancient humans, but they are hunting and gathering on the African savannah, so they are living a lifestyle similar to the conditions that some evolutionary psychologists believe gave rise to these "universal" expressions. So they're a great population to test. And there actually aren't that many of them left; it's really hard to get access to this group. You have to have special research permits, and so on. With the help of an anthropologist we collaborated with, we got access to the Hadza, who were very generous with their time and labeled some faces for us. And we showed them a set of
faces, and we asked them to do exactly
what we asked other test subjects to do,
and accuracy dropped even further. And this number is actually a little high, because what the Hadza were
very good at doing was distinguishing a
smile from
all the other faces which were depicting
negative emotions. So when you just look at the accuracy for labeling scowls and pouts and the like, just the depictions of negative emotions, the rate dropped even further, pretty much to chance levels. And so, this is
what happens when you remove the secret
ingredient from these experiments: the
evidence for universal emotional
expressions vanishes. Now, I'm not saying that this means faces carry no information, or that we can't look at a face and make a reasonable guess
about how someone feels. But what I am
telling you is that human brains are
doing more than just looking at a face
when they make such judgments. That is,
right now, when you're looking at me, or
when I'm looking at you, some of you are
smiling and nodding - thank you very much.
Others are, you know, maybe looking a
little more skeptical, or at least that's
the guess that my brain is making, and my
brain isn't just using your face. There's
a whole context around us. But in these experiments, the experimenters were looking only for the signal value in the face alone, stripped of all context; except the context that, unbeknownst to them, they had actually provided to the subjects: the words. So, to confirm that the experimental context was generating the evidence, that it could make ANY emotion look universal, we decided to
test this by going back to the original
experimental method. And we identified
six emotions from different cultures
that have never been identified as
universal, that can't be translated into
English with a single word, which is
important because all of the presumed
universal emotions happen to be English
categories, and they also don't exist in
the language that is spoken by the Hadza,
which is Hadzane. And then, what
we did is we invented expressions
for these emotions - we just made them up
- and in this case, we were using
vocalizations, although we have a version
with faces. But we used vocalizations because (it's a complicated story) we were basically replicating another experiment and kind of criticizing it. So we used
vocalizations. So for example, the
category Gigil is the overwhelming urge
to squeeze or pinch something that's
very cute; when you see something cute and you just want to squeeze the cheeks of a baby, right. That's the emotion. And
so, we made up a vocalization to go with
that, which sounds something like this: eee!
OK, so, we made that sound, and then we asked our Hadza test subjects to match each sound with a little story that we told about the emotion, because in small-scale societies that are very remote from Western cultures, typically the way these
experiments are done is you don't give
them a list of words. You tell them a
little story about the emotion that
contains the emotion word, and then you
give them two faces or two vocalizations
and you ask them to basically pick the
expression that matches. So, that's what
we did. And then the average accuracy
actually was pretty high. And if you look
at the individual emotions, five of the
six of them look universal. And in fact,
these accuracy rates are pretty similar
to what you see in many studies of anger, sadness, fear, and so on.
So this is where the blue bars come from.
Scientists have been using, really since
the 1960s, an experimental method that
doesn't discover evidence for universal
expressions of emotion, but it
manufactures that evidence. This method
of providing test subjects with
linguistic cues is responsible for the
scientific belief that a scowl
expresses anger and only anger, that a
smile expresses happiness and only
happiness, and so on. And so, if you're a company that wants to build AI to
perceive emotions in humans by measuring
their facial movements, then it's
probably important to realize that these
famous configurations don't actually
consistently display disgust, anger, and
fear, and so on, and that it's a mistake
to infer that someone who is scowling is
angry. In fact, it's a mistake to call a scowl an "anger expression," because a scowl is only sometimes indicative of anger. Instead, what
we see when we look at the data is that
variation is the norm. I'll just show you what I mean. If you were looking at this person's face, how does she look to you? What emotion does she seem to be expressing? Sadness. Sneezing (not even an emotion at all). Smelling something good. Yep, usually people see her as tired, or as grieving, or as about to cry. Sad.
Actually, this is my daughter Sophia
experiencing what I can only
describe to you as a profound and deep
sense of pleasure, at the Chocolate Museum in Cologne, Germany. And this
little sweetheart is also experiencing a
profound sense of pleasure. And the
lesson here is that people move their
faces in many different ways
during the same emotion. Now, if we were
to only look at this little guy's
eyebrows up to his, you know, eyes and
nose, this, these facial actions actually
are very reminiscent of
the presumed expression for anger. So for
example, this face is often seen as angry.
Does anybody actually know who this is?
Jim Webb. This is actually Jim Webb when he won the senatorial race in Virginia, a victory that returned the Senate to Democratic control. Sorry, I was just having a moment there. And so, without
context, we see his face as communicating
anger because actually this face
symbolizes anger in our culture. So
people don't just move their faces in
different ways during the same emotion,
they also move their faces in the same
way during different emotions. So in
real life, a face doesn't speak for
itself, when it comes to emotion, right.
People usually see this face as smug, or as pride or confidence. Actually, it's the supposed universal expression for disgust. And
what's really interesting is that when you stick the presumed expression for disgust on a body, or in any kind of context that suggests a different emotion, perceivers actually track the face differently. Their scanning of the face has a completely different pattern, suggesting they're making different meaning of that face by virtue of the context. And I'll just tell you as an aside: in most studies (maybe "every" would be an exaggeration) where you pit a face against the context, the face loses. Faces are inherently ambiguous without context to make them meaningful. So, what's up with
these expressions? Where did they come
from? Well, it turns out that they were
not discovered by actually observing
people as they moved their faces
expressing emotions in real life. In fact,
these are stipulated expressions. So, a
handful of scientists just anointed
these as
the expressions of emotion, as universal
truths, and then people built a whole
science around it. So basically, they're
stereotypes. And what we have is a
science of stereotypes, or, you know,
emojis,
which by themselves, I should tell you,
also are highly ambiguous, it turns out,
without context. So obviously, we don't
want to build a science of artificial
intelligence on stereotypes. We want to
build them on emotional episodes as they
occur in real life. And in real life,
an emotion is not an entity, right. It's a
category that's filled with variety. When
you're angry, your face does many things,
and your body does many things, and it
turns out your brain also does different
things depending on the context that
you're in. Now, for those of you who build
classification systems, you know about
categories, right? So for example, if you were building a cat recognition system, you would develop
a classifier that could learn the
features that cats have in common that
distinguish them from other animals, like
dogs and birds and fish and so on. And
this CAT-egory... get it? My daughter made
me say that, OK? This category (Thank you
for laughing - now I can tell her that
you thought it was funny.) is a
collection of instances that have similar
features. But, you know, there's actually
plenty of variation in the instances
of this category, too, right? Some cats are big, some cats are small, cats have different eye colors,
some cats have long fur, some have short
fur, some have no fur. But the human brain
tends to ignore this variation in favor
of what cats have in common. And the
interesting thing is that humans also
have the capacity to make other kinds of categories: categories that are not based on physical similarities among the instances. And this
is something we do all the time. For
example, here's a category, one I'm sure everyone in this room knows. Want to take a guess at what it is? Human-made objects? I suppose if you treat the elephant as a picture of an elephant, then that would be true, yeah. OK, well, these are
all objects that you can't bring through
airport security. Actually, the last time
I did this, one clever person actually
said they're all instances of things
that you can squirt water out of. And I
thought, well, actually, yeah, if you think of the gun as a water pistol, then that could work, right? This is a category
that's not made of instances that share
physical features. Instead, they share a
common function, in this case, squirting
water through them, or not being able to
take them through airport security. This
category, though, exists inside our heads
and in the head of every adult who has
ever flown on an airplane. It's a
category of social reality. Objects belong to this category not because they all share
the same physical features, but because
we impose a similar function on them by
collective agreement. We've all agreed that it is not OK to take a water bottle through security, or a gun, or an elephant. And in fact, it turns out that
most of the categories that we deal with
in civilization are categories of social
reality, whose instances don't
necessarily share physical features, but
we've imposed the same function on those
features by collective agreement. Can you think of any that come to mind? Things that we treat as similar but whose physical features actually vary quite a bit? Money. Exactly. Money is a
great example. So, throughout the course
of human history, and actually even right
now, there's nothing about what humans
have used as currency that defines those
instances as currency. It's just that a
group of people decide that something
can be traded for material goods, and so
they can. And, you know, little pieces of
paper, pieces of plastic, shells, salt, big
rocks in the ocean which are immovable,
mortgages. And when we remove our
collective agreement, those things lose
their value, right? So one way of thinking about the mortgage bubble is that the value of mortgages is based on collective agreement, and some people removed their agreement. Anything
else? Yeah, that's true, you have to work
really hard to accept the collective
agreement of driving on the wrong side
of the road. Oh, come on. Beauty.
How about citizenship of a country? How
about a country itself, right? If you look at a map of the world from the 1940s or earlier, it looks very different. The physical features of the earth are more or less the same, but the countries that are drawn are different. So we
could go on and on like this. We could
talk about social rules, like being
married. Marriage, it actually turns out, is
also a category of social reality. The
presidency of any country, too. People don't have power because they're
endowed by nature with power. They have
power because we all agree that certain
positions give you power. And if we
revoked our agreement, then they wouldn't
have power anymore.
That's called a revolution. So,
emotion categories are categories like
this. Anger and sadness and fear and so
on are categories that exist because
of collective agreement, just in the same
way that we had to impose a function on
the elephant that, that wasn't there
before, in order for it to belong to
this category. We also impose meaning on
a downturned mouth, a scowl. We impose
meaning on that scowl as anger, right. So
a scowl isn't inherently meaningful as
anger. In this culture, we've learned to
impose that meaning based on our shared
knowledge of anger. And in the Trobriand Islands, they impose the meaning of anger on a different face; their stereotype of anger is a wide-eyed gasping face. And this
is also what allows us to see other
expressive movements as anger, right? So
what we're doing is imposing meaning on
a smile or on a stone-faced stare, or on a
cry as anger in a particular situation.
It transforms mere physical movements
into something much more meaningful,
which allows us to predict what's going
to happen next. So, if you want a machine
to perceive emotions in a human, then it
has to learn to construct categories on
the fly. Perceiving emotions is not a
clustering problem - it's a category
construction problem. And it's a category
construction problem whether you're
measuring facial movements, or bodily
movements, or the acoustics of someone's
voice, or whether you're measuring the
changes in their autonomic nervous
system, or even in
the neural activity of the brain, or even
all of those, right? All of these things
are physical changes that aren't
inherently meaningful as emotions.
Someone or something has to impose
meaning on them to make them meaningful,
right? So an increase in heart rate is
not inherently fear, but it can become fear when it's pressed into service to serve a particular function in a particular situation. So, emotions
are not built into your brain from birth.
They are just built as you need them.
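The claim that the same physical change takes on different meanings in different situations can be sketched as a context-conditioned lookup. This is a deliberately crude toy (the signals, contexts, and mappings are all invented for illustration): because one signal maps to different categories depending on context, no classifier over the signal alone can recover the emotion.

```python
# Toy sketch: meaning is imposed on a bodily signal by the situation,
# so categorization must take (signal, context) pairs, not signals alone.

def categorize(signal: str, context: str) -> str:
    """Impose meaning on a physical change using the surrounding context."""
    meanings = {
        ("racing heart", "dark alley"): "fear",
        ("racing heart", "finish line"): "triumph",
        ("racing heart", "first date"): "excitement",
        ("ache in gut", "dinner table"): "hunger",
        ("ache in gut", "doctor's office"): "anxiety",
    }
    return meanings.get((signal, context), "uncategorized")

print(categorize("racing heart", "dark alley"))   # fear
print(categorize("racing heart", "finish line"))  # triumph
```

A static table like this is of course the opposite of constructing categories on the fly; it only illustrates why the signal by itself underdetermines the category.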
And this is really hard to understand intuitively, since your brain categorizes automatically and effortlessly, without your awareness. So we need special examples to reveal what our brains are doing: categorizing, continuously and effortlessly. And so, what I'd
like you to do right now is, we're going
to go through one of these examples, so I
can, I can explain it to you. So, here's a
bunch of black and white blobs. Tell me
what you see. Sorry, a person? A person
kicking a soccer ball. Mm-hmm. An octopus.
One-eyed octopus. So, right now, what's
happening in each of your brains is that
billions of your neurons are working
together to try to make sense of this, so
that you see something other than black
and white blobs. And what your brain is
actually doing is it's searching through
a lifetime of past experience, issuing
thousands of guesses at the same time,
weighing the probabilities, trying to
answer the question, "what is this like?"
Not "what is this?" but "what is this like?"
"How similar is this to past experiences?"
And this is all happening
in the blink of an eye. Now, if you are
seeing merely black and white blobs, then
your brain hasn't found a good match, and
you're in a state that scientists call
experiential blindness. So now I'm gonna
cure you of your experiential
blindness. This is always my favorite
part of any talk. Should I do that again?
Now many of you see a bee. And the reason
why is that now, as your brain is
searching through past experiences,
there's new knowledge there from the
color photograph that you just saw. And
the really cool thing is that what you
just saw a moment or two ago, that
knowledge is actually changing how you
experience these blobs right now. So your
brain is now categorizing this visual
input as a member of the category "Bee." And
as a result, your brain is filling in
lines where there are no lines. It's
actually changing the firing of its own
neurons so that you see a bee where there
is actually no bee present. This kind of
category-induced hallucination is pretty
much business as usual for your brain.
This is just how your brain works. And
your brain also constructs emotions
in exactly this way. And here's why it happens: your brain
is actually entombed in a dark silent
box, called your skull, and it has to
learn what is going on around it in the
world via scraps of information that it
gets through the sensory channels of the
body. Now, the brain is trying to figure
out the causes of these sensations, so
that it understands what they mean and
it knows what to do about them to keep
you alive and well. And the problem is
that the sensory information from the
world is noisy. Ambiguous. It's often
incomplete, like we saw with the blobby bee
example, and any given sensory input like
a flash of light can have
many different causes. So your brain
has this dilemma. And it doesn't just
have this dilemma based on sensory
inputs from the world. It also has this
dilemma to solve regarding the sensory
inputs from your body. So, there are
sensations that come from your body, like
your lungs expanding and contracting and
your heart beating, and there are
sensations from moving your muscles and
from metabolizing glucose, and so on and
so forth. And the same kind of problem that we face in making sense of information from the world, we also face in making sense of our own bodies, which are largely a mystery to the brain. So, an
ache in your gut, for example, could be
experienced as hunger if you were
sitting at a dinner table. But if you
were in a doctor's office waiting for
test results, that ache in
your gut would be experienced as anxiety.
And if you were a judge in a courtroom,
that ache would be experienced as a gut
feeling that the defendant can't be
trusted. So, your brain is basically
constantly trying to solve a reverse
inference problem, because it has to
determine the causes of sensations when
all it actually has access to are the
effects. And so, how does it do this? How
does the brain resolve this reverse
inference problem? And the answer is by
remembering past experiences that are
similar in some way. So, it's remembering
past experiences where physical changes
in the world and in the body are
functionally similar to the present conditions. It's creating, basically, categories. So,
your brain is using past experience to
create ad hoc categories to make sense
of sensory inputs, so that it knows what
they are and what to do about them. And
these categories represent the causal
relationships between the events in the
world and in the body, and the
consequences, which is what the brain
actually detects. And this is actually how your brain is wired to work; it's metabolically efficient to work this way.
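One common way to formalize this kind of reverse inference is Bayes' rule, sketched below with invented numbers (an illustration of the idea, not a model any brain actually runs): the brain observes only an effect, an ache, and weighs candidate causes by how probable each one is in the current context.

```python
def posterior(likelihood, prior):
    """P(cause | sensation) is proportional to P(sensation | cause) * P(cause | context)."""
    unnorm = {cause: likelihood[cause] * prior[cause] for cause in likelihood}
    total = sum(unnorm.values())
    return {cause: p / total for cause, p in unnorm.items()}

def most_likely_cause(context_prior):
    post = posterior(likelihood, context_prior)
    return max(post, key=post.get)

# How well an "ache in the gut" fits each candidate cause (invented numbers):
likelihood = {"hunger": 0.7, "anxiety": 0.6, "illness": 0.2}

# Context shifts the prior over causes:
at_dinner_table = {"hunger": 0.8, "anxiety": 0.1, "illness": 0.1}
at_doctors_office = {"hunger": 0.1, "anxiety": 0.7, "illness": 0.2}

print(most_likely_cause(at_dinner_table))    # hunger
print(most_likely_cause(at_doctors_office))  # anxiety
```

Note that the sensation and its fit to each cause are identical in both cases; only the context prior changes the inferred cause, which is the point of the dinner-table vs. doctor's-office example.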
And this is how your brain constructs
all of your experiences and guides all
of your actions. Your brain begins with
the initial conditions in the body and
in the world, and then it predicts
forward in time, predicting what's about
to happen next,
by creating categories that are
candidates to make sense of incoming
sensory inputs. To make them meaningful,
so that your brain knows what to do next.
And the information from the world and
from the body either confirms those
categories, or it prompts the brain to learn something and try again: it updates, and then makes another attempt at categorization. So, emotions
are not, you know, reactions to the world.
They are actually your constructions of
the world. It's not like something
happens in the world and then you react
to it with an emotion. In fact, what's
happening is that your brain is
constructing an experience, an episode, or
an event, where what it's trying to
do is make sense of or categorize what
is going on inside your own body, like an
ache, in relation to what's happening in
the world, like being in a doctor's
office. So, emotions are basically brain
guesses that are forged by billions of
neurons working together. And so, the
emotions that seem to happen to you are
actually made by you.
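The predict-confirm-update cycle just described can be caricatured in a few lines of code. Everything here is invented for illustration, and grossly simplified: a real brain never receives a pre-labeled answer, only prediction error against raw sensory input.

```python
def predict(past_experience, context):
    """Propose a candidate category from remembered context -> outcome pairs."""
    return past_experience.get(context, "unknown")

def perceive(past_experience, context, incoming):
    """Predict forward; if the prediction is confirmed, keep it.
    Otherwise treat the mismatch as prediction error, update, and try again."""
    guess = predict(past_experience, context)
    if guess == incoming:
        return guess                          # prediction confirmed
    past_experience[context] = incoming       # prediction error: learn
    return predict(past_experience, context)  # second attempt at categorization

memory = {"doctor's office": "anxiety"}
print(perceive(memory, "doctor's office", "anxiety"))  # anxiety (confirmed)
print(perceive(memory, "dinner table", "hunger"))      # hunger (learned, then recalled)
```

The toy captures one structural feature of the account: perception starts from a prediction conditioned on context, and disagreement drives learning rather than simply being recorded.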
And categorization is also how your
brain allows you to see emotions in
other people. So, your brain remembers
past experiences from similar situations
to make meaning of the present, you know,
to make meaning of the raise of an
eyebrow, or the movement of the mouth, and
so on. So, to perceive emotion in somebody
else, what your brain is actually doing
is it's categorizing that person's
facial movements, and their body
movements, and the acoustics of their
voice, and the surrounding context, and
actually stuff that's happening inside
their own bodies, all conditioned on past
experience. So, even though when we're
talking to each other,
we're mainly looking at each other's
faces, and we're aware of the
movements of each other's faces, and we
might be aware of the tone of voice, but our attention is not given to
the rest of the sensory array that the
brain has available, including what's
going on inside your own body. The inside of your own body is a context that you carry around with you everywhere you go, one that is involved in every single action and experience that your brain creates. And you are largely unaware of it. And this is how a scowl can
become anger or confusion or
indigestion or even amusement;
so that the emotions you seem to detect in other people are partly made inside your own head. So, when one human
perceives emotion in another person, she
is not "detecting" emotion. Her brain is
guessing by creating categories for
emotion in the moment. And this is how a
single physical feature can take on
different emotional meanings in
different contexts. So, for a machine to
perceive emotion, it has to be trained on
more than stereotypes. It actually has to capture the full high-dimensional detail of the context, not just measure a face, or a face and a body, which are inherently ambiguous without context. So, perceiving
emotion means learning to construct
categories using the features from
biology, like faces and bodies and brains,
but in a particular context. And the thing I want to point out here is that I'm using the word "context" pretty liberally, because context often includes the actions of other people, right? We are social animals, and other humans make up important parts of our context. Which suggests that if you want to build AI to detect emotion in a person, you might consider also measuring the physical changes in the people who are around that person, because those can give you a clue about what the physical changes in the target person really mean. So, measuring the features of other people, that is, their physical changes and actions that are contingent on the biological changes in the person of interest, is an extension of the idea of context which is really important. And in the
last few minutes what I'm going to do is
switch gears from perceiving emotion to ask whether it's possible to build machines that can actually experience emotion the way that humans do. This is a question that often interests people who work in AI because they're interested in questions about empathy. And so, if
emotions are made by categorizing
sensations from the body and from the
surrounding context using past
experience, then machines would need all
three of these ingredients, or something
like them. And so, we're going to just
take this really quickly one at a time.
So, the first is past experience. Can
machines actually recall past experience?
Well, machines are really great at
storage and retrieval.
Unfortunately, brains don't work like a file system. Memories aren't retrieved like files; they are dynamically constructed in the moment. And brains have this amazing capacity to combine bits and pieces of the past in novel ways. Brains are generative. They are information-gaining
structures. They can create new content,
not just merely reinstate old content,
which is necessary for constructing
categories on the fly. To my knowledge (which might be out of date), there are no computing systems powered by dynamic categorization, systems that can create abstract categories by grouping together things that are physically dissimilar because, in a particular situation, they all serve a similar function. So, an important challenge for computers to experience emotion is developing computing systems with that capability. The second
ingredient is context. So, computers are
getting better and better at sensing the
world. So, there are advances in computer
vision and speech recognition and so on.
But a system doesn't just have to detect
information in the world. It also has to
decide which information is relevant, and
which information is not, right? This is
the "signal vs. noise" problem. And this
is what scientists call "value." So, value is not something that's detectable in the world. Value is not a property of sights and sounds, or of the information from the world that creates them. Value is a function of that information in relation to the state of the organism, or the system, that's doing the sensing. So,
if there's a blurry shape in the
distance, does it have value for you as
food, or can you ignore it? Well, partly
that depends on what the shape is, but it
also depends on when you last ate, and
even more importantly, the value also
depends on whether or not that shape
wants to eat you. And so, to solve this problem, it turns out that the brain didn't start off, in terms of brain evolution, with systems that allow creatures to compute value. Those evolved in concert with sensory systems, with the ability to see and hear and so on, for exactly this reason. And so evolution basically gave us brain circuitry that allows us to compute value, which also gives us our mood, or what scientists call "affect": simple feelings of pleasantness and unpleasantness, of feeling worked up or feeling calm. Affect or
mood is not emotion. It's just a quick
summary of the state of what's going on
inside your own body, like a barometer.
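The barometer idea can be put in a toy sketch (every channel name and weight below is invented for illustration, not a real model): many bodily signals get collapsed into a two-number affect summary, here parameterized as valence (pleasant/unpleasant) and arousal (worked up/calm).

```python
# Toy sketch (all channel names and weights invented): collapse many
# bodily signals into a two-number affect summary, like a barometer.
# Valence ~ pleasant/unpleasant, arousal ~ worked up/calm.

body_state = {
    "heart_rate": 0.8,        # elevated
    "respiration": 0.6,       # fast
    "skin_conductance": 0.7,  # sweaty
    "gut_comfort": -0.4,      # mildly unpleasant
    "muscle_tension": 0.5,
}

# Hypothetical weights mapping each channel onto the two dimensions.
valence_w = {"heart_rate": -0.1, "respiration": -0.1,
             "skin_conductance": -0.2, "gut_comfort": 0.8,
             "muscle_tension": -0.3}
arousal_w = {"heart_rate": 0.5, "respiration": 0.4,
             "skin_conductance": 0.4, "gut_comfort": 0.0,
             "muscle_tension": 0.3}

def affect(state):
    """Summarize a high-dimensional body state as (valence, arousal)."""
    valence = sum(valence_w[k] * x for k, x in state.items())
    arousal = sum(arousal_w[k] * x for k, x in state.items())
    return valence, arousal

v, a = affect(body_state)
print(f"valence={v:+.2f}, arousal={a:+.2f}")
# Negative valence plus high arousal: feeling unpleasant and worked up.
# That is affect, not yet emotion; categorization would come later.
```

The point of the sketch is only that affect is a lossy summary: five channels in, two numbers out.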
And affect is a signal that something is
relevant to your body or not - whether
that thing has value to you or not. And
so, for a machine to experience emotion,
it also needs something that allows it
to estimate the value of things in the
world in relation to a body. Which brings
us to the third ingredient: brains
evolved for the purposes of controlling
and balancing the systems of a body.
Brains didn't evolve so that we could
see really well, or hear really well, or
feel anything. They evolved to control
the body, to keep the systems of the body
in balance. And the bigger the body gets, with more systems, the bigger the brain gets. So, a disembodied brain has no
bodily systems to balance. It has no
bodily sensations to make sense of. It
has no affect to signal value. So a
disembodied brain would not experience
emotion. Which means that for a machine
to experience emotion like a human does,
it needs a body, or something LIKE a body:
a collection of systems that it has to
keep in balance, with sensations that it
has to explain. And to me, I think this is
the most surprising insight about AI and
emotion. I'm not saying that a machine
has to have an actual flesh-and-blood
body to
experience emotions. But I am suggesting
that it needs something like a body, and
I have a deep belief that there are
clever engineers who can come up with
something that is enough like a body to
provide this necessary ingredient for
emotion. Now, these ideas and the science behind them can be found in my book, "How Emotions are
Made: The Secret Life of the Brain," and
there's also additional information on
my website. And even though, strictly speaking, I'm not throwing tons of data at you, at the end of talks I do always like to thank my lab. They're the ones who actually do all the really hard work; scientists like me just get to stand up here and talk about it. So I just want to thank them as well, and thank you for your attention, and I'll take questions.

I am
wondering how someone who is, say, blind from birth will perceive emotion, because they cannot depend on visual cues, whether it's facial expression or body language. So I'm guessing they usually go off of vocal tones, or lack thereof. Have you looked into their accuracy at predicting emotions, and is that better or worse than people who rely on visual cues?

So people who are
born congenitally blind have no difficulty experiencing emotion, and they have no difficulty perceiving emotion through the sensory channels that they have access to, because their brains work largely in the same way that a sighted person's brain works. At birth, the brain is collecting patterns, statistical patterns; it's just that vision isn't part of that pattern. And what's really interesting, actually, is that someone who is congenitally blind is learning patterns that include changes in sound pressure that become hearing, and changes in the pressure on the skin that become touch; they have taste; they have sensations from the body which become affect. So they can do multimodal learning just like the rest of us, and they can learn to experience and express emotion, and perceive it, through the channels they have access to. What's
really interesting is that when adults who are congenitally blind, let's say because they have cataracts, have those cataracts removed, for the first time they can see, or they should be able to see. But actually it takes them a while to learn to see, and if you talk to these people about their experience once they finally learn to see, what they say is that they feel like they're always guessing what faces mean and what body postures mean. They find faces in particular hard. For example, there's one person, Michael May, who's been studied really extensively over a number of years. He had corneal abrasions, so his corneas were replaced, and even a couple of years afterward he was still consciously guessing at whether a face was male or female before someone spoke; it was really hard for him to do. And he experienced his vision as separate from everything else, like a second language that he was learning to speak, which had no affect to it. So, to get at your question,
we could ask a bunch of questions, like: do people who are congenitally blind actually make facial expressions the way that a sighted person does? And the answer is, they don't make the stereotypic expressions when they're angry or sad or whatever, but they do learn to make deliberate movements in a particular way. For example, there are studies showing that when congenitally blind athletes win an award and they know they're being filmed, they will make body movements that indicate being really thrilled, but they're doing it because they've learned it. In the same way, if you test a congenitally blind person on the meaning of color words, their mapping of color words is largely the same as a sighted person's, because they've learned from the statistical regularities in language which words are more similar to each other and which ones aren't. So their abilities at emotion perception and emotion expression largely look the same as a sighted person's, without the visual component. What's really interesting is that people who are congenitally deaf, who tend to learn mental state language later, who develop concepts for mental states later, are also delayed in their ability to perceive emotion in other people. So that literature suggests a coupling between emotion words and the ability to learn to form emotion categories in childhood.
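The point about learning word meaning from statistical regularities in language can be sketched with a toy distributional model (the corpus and all numbers are invented for illustration): words that appear in similar contexts end up with similar co-occurrence vectors, even if the learner never had the corresponding sensory experience.

```python
from collections import Counter
import math

# Toy corpus: a learner who never sees color still hears color words
# used in similar linguistic contexts.
corpus = [
    "the ripe tomato is red",
    "the ripe pumpkin is orange",
    "the red sunset glows",
    "the orange sunset glows",
    "the loud siren wails",
    "the loud drum booms",
]

# Build co-occurrence vectors: count which context words appear in the
# same sentence as each target word.
targets = ["red", "orange", "loud"]
vectors = {t: Counter() for t in targets}
for sentence in corpus:
    words = sentence.split()
    for t in targets:
        if t in words:
            for w in words:
                if w != t:
                    vectors[t][w] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) | set(b))
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb)

# Color words share contexts, so they end up more similar to each
# other than to an unrelated word.
print(cosine(vectors["red"], vectors["orange"]))  # high
print(cosine(vectors["red"], vectors["loud"]))    # low
```

The same distributional mechanism, at scale, is one way a congenitally blind learner's color-word similarity structure could come to match a sighted person's.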
So you said an essential component in recognizing an emotion is the context.

(I would never say "recognizing," but yes.)

If we didn't have the context, but we could monitor really well whatever is happening inside a person's body and brain, would we be able to recognize emotions? And what specifically would it take; what would we want to monitor?

Yeah, so it's interesting. When I was originally thinking about giving this talk, I thought I might start with machine learning attempts to identify emotion with patterns of neural activity. And it turns out that,
in a given study, if you show people films, say, and you try to evoke emotions by showing them films, you can actually build a pattern classifier that can distinguish anger from sadness from fear, meaning you can distinguish when people are watching films that presumably evoke anger versus sadness versus fear. The problem with those studies is that the classifier can't be used in another study; it doesn't generalize. What you're building is a set of classifiers that work in a specific context. Let me say it this way: if you have the same subjects in the same study watch movies, so you evoke anger by watching a movie, and you also evoke anger by having them, let's say, remember a prior anger episode, you can classify the emotions and distinguish them from one another, and you get a little bit of carryover from one modality of evoking to another. But if you go to a completely separate study, the patterns look completely different,
and this is true across hundreds of studies. So, for example, I published a pattern classification paper where we used 400 studies, and the classifiers we developed based on this meta-analytic database are not successful at classifying any new set of instances. They show really good performance within the database; we used a leave-one-out method, a multivariate Bayesian approach; there are no problems with the statistics. The issue is that when scientists do this, they believe that what they're discovering in these patterns is actually a literal brain state for the emotion, the literal brain state for anger, and then they think it should generalize to every brain, to every instance of anger. And they don't generalize, usually, outside of their own studies. This is also true for physiology. We just published a meta-analysis where we examined the physiological changes in people's bodies, like their heart rate changes, their breathing, their skin conductance, and so on, and you see that these physical measures can sometimes distinguish one emotion category from another in a study, but they don't generalize across studies. In fact, the patterns themselves really change from study to study, and when you look at it in a meta-analytic sense, it looks like for all emotions, heart rate could go up or go down or stay the same, depending on what the situation is. So far no one has made a really high-dimensional attempt at this, meaning they haven't tried to measure the brain and measure the body and measure the face and measure aspects of the context. That's actually what I think needs to be done. So I think this is a solvable problem; I just think we have not been going about it in the right way, and I think that this is a real opportunity for any company that is serious about doing this.
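The generalization failure described here can be illustrated with a toy simulation (all data below is synthetic and invented): a nearest-centroid classifier that separates two emotion categories perfectly within one "study" fails completely when the same categories show different physiological patterns in another context.

```python
import random

random.seed(0)

# Synthetic illustration (all numbers invented): in each "study,"
# emotion categories have distinct physiological patterns, but the
# patterns themselves differ from study to study.
def sample(mean_hr, mean_sc, n=100):
    """Draw (heart-rate change, skin-conductance change) with noise."""
    return [(random.gauss(mean_hr, 1.0), random.gauss(mean_sc, 1.0))
            for _ in range(n)]

# Study A: anger -> heart rate up; sadness -> heart rate down.
study_a = {"anger": sample(+5, +2), "sadness": sample(-5, 0)}
# Study B: different context, so anger -> heart rate DOWN here.
study_b = {"anger": sample(-5, +2), "sadness": sample(+5, 0)}

def centroids(study):
    """Mean feature vector per emotion label."""
    return {label: tuple(sum(x) / len(x) for x in zip(*pts))
            for label, pts in study.items()}

def accuracy(model, study):
    """Nearest-centroid classification accuracy on a study."""
    correct = total = 0
    for label, pts in study.items():
        for p in pts:
            pred = min(model, key=lambda c: sum((u - w) ** 2
                                                for u, w in zip(p, model[c])))
            correct += (pred == label)
            total += 1
    return correct / total

model_a = centroids(study_a)
print("within study A:", accuracy(model_a, study_a))  # near 1.0
print("transfer to B:", accuracy(model_a, study_b))   # near 0.0
```

Nothing is wrong with the statistics inside study A; the classifier simply learned a context-specific pattern, which is the situation the meta-analytic results describe.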
I love the way you mentioned, in the book and in the talk as well, how we perceive emotions based on context: we look at the context and then we infer emotion. And one of the examples that you have in the book, and here as well, was Serena Williams winning a Grand Slam.

And you have Jim Webb; I switched it out, because people were starting to say, "Oh, that's Serena Williams."

Okay, right. But there's something that is troubling to me, at least, in that example.

Well, I think that's certainly possible, and what I would say to that, though, is that
there are studies, particularly by Hillel Aviezer, who's actually done work on this. I published the picture of Serena Williams in 2007, I published an example, and Hillel came out with a great set of experiments in 2008 and then again in 2012, and he has continued since. He has people's reports of their subjective experience and he has their facial movements. And in fact there are meta-analyses which have the subjective reports of people and their facial movements, and sometimes also the reports of people interacting with the people whose faces have been measured, and there's no evidence that the variability is due to a series of quick emotions being evoked over time. But I want to back up one step and say this: when you ask, "Well, maybe Serena Williams is really experiencing, maybe she really is in a state of anger in that moment" - or in that case it actually looks more like fear, or terror - when you say "really," that implies that there's some objective criterion you could use to measure the state that Serena Williams is really in. And there is no objective criterion for any emotion that's ever been studied, ever. So
what scientists use is agreement, collective agreement, essentially. So you can ask: does the face match her report? Does the face match somebody else's report? Do two people agree on what they see? You're using all kinds of perceiver-based agreement, which is basically consensus, because there is no ground truth when it comes to emotion that anyone has ever found that replicates from study to study. And so there's a part of me that wants to say I can't even answer your question, because I think it's not even a scientific question that's answerable. But we can answer it in other ways, by looking at various forms of consensus, and while I can't say anything about Serena Williams and what she experienced, I can say that in other studies it's very clear that people absolutely are scowling when they are not angry. My husband - this is my husband Dan Barrett, who works for Google; sorry, honey, I'm gonna out you -
he gives a full-on facial scowl when he is concentrating really hard, and it was only after I learned that that I was telling my students, like, "Can you believe it?" And they're like, "Can we believe it? We experience it every time we give a presentation in front of you." Right? So I'm sitting there paying a lot of attention to every single thing they say, and they think, "Oh my god, she hates it." And the whole emotional climate in my lab changed the moment I realized that. So that's an anecdote, but it's an anecdote that reflects what is in the literature, which is that people are making a variety of facial movements. I'm not saying it's random; I'm saying there are patterns there that we haven't really yet detected. And I think it's in part because we are measuring individual signals, or we think we're doing really well if we measure the face and the body, or the face and acoustics, or the face and maybe heart rate; we pick up two channels instead of doing something really high-dimensional. I'm not saying there's no meaning there; if that were true, you and I couldn't have a conversation right now. I'm saying that it's probably something high-dimensional, and it might be quite idiographic, meaning different brains maybe have the capacity to make a different number of categories. And that's also something I discuss in my book, actually.

So when you listed all
the sort of prequalifications for, maybe, emotion forming, I was thinking: a lot of vegetarians say all animals have feelings, have this ability to emote and to feel emotion, and a lot of meat eaters are like, "No, no, that's impossible, they don't." Do you have any opinions?

Oh yes, here's my opinion. I think everybody has to stop calling affect "emotion." Many, many problems disappear, they just completely dissolve, when we understand that every waking moment of our lives there's stuff going on inside our bodies, and we don't have access to every little small change in our bodies that sends sensory information to the brain. If we did, we would never pay attention to anything outside our own skins ever again. So instead,
evolution has given us affect, so we sense what's going on inside our own bodies by feeling pleasant or unpleasant, feeling worked up or kind of calm, feeling comfortable or uncomfortable. That's not emotion; that's affect, or mood, and it's with us always. Every waking moment of your life you have some affect; there are affective features to your experience. And it's very likely also true of non-human animals. I can say the circuitry is very similar, similar enough that I think you could go all the way down to certainly all vertebrates, and I would even guess that there are some invertebrates, actually, maybe all invertebrates, I don't know. Even insects potentially could have affect, although I might draw the line at, like, flies or something; but recently there was a study that came out suggesting maybe they do have affect. So my feeling about this is, I guess, twofold. One is, I think we have to stop conflating affect and emotion. Affect is just with you, always, even when you experience yourself as being rational, even when you experience yourself as just thinking or just remembering. It's just that when affect is super strong, our brains explain it as emotion. Once we make that distinction, and we understand that affect - maybe you could think of it as a feature of emotion, but it's actually a feature of consciousness - then I think we can say without hesitation: we don't know for sure whether non-human animals feel affect, but they probably do, and we should probably treat them as if they do. That solves a lot of problems. It actually doesn't matter, really, from a moral standpoint, whether an animal feels emotion; it matters whether they can feel pleasure and pain. That's enough, actually. It's an interesting scientific question whether or not their brains can create emotion, but that's a whole different conversation. I think the answer to your question isn't really about emotion; it's about affect. And there, I think it's really obvious: the smart thing is to do things where you do the least amount of damage if you're wrong. And so that means including animals in our moral circle. If you just assume they can feel pleasure and pain, that solves a lot of problems.

Yep. Thank you so much for
your time. So, you mentioned at the end, I guess to answer the question of whether machines can experience emotion, three things, and the body was one of them. And earlier on, you also mentioned that, I guess, one purpose of creating emotions is to know what to do next. So my question is: does a being without a body, like a machine, really need that body element if the purpose of that being is different than just knowing what to do next? Can we take the body out of the three requirements, based on a different purpose of that being?

That's a great question. That is a great question.
So, can you give me an example, to help me?

I don't have an example, by the way; I'm thinking as I hear it. With machines, you're modeling everything based on humans: we have our purpose, our emotions, which we create maybe at the beginning for survival, maybe for different social elements. But if you take away the body, we can still have a brain, artificial intelligence without a body, which is a different kind of being. Therefore I'm questioning that model of three things needed to create emotion.

Here's something you would need to have: something that could tell the machine what it needs to pay attention to in the world and what it can ignore.
So, value, right? Let me back up and say it this way. I don't know how else to think about it except in organic terms, but for example, if you look at brain evolution, if I were to say it really simply, super simplistically, so I'm glossing over a ton of detail: organisms first developed a rudimentary motor system with just a tube for a gut, and that's it. They used to just float around in the sea and filter food. And it wasn't until the Cambrian explosion, when there was a lot of oxygen and other things, and so an explosion of diversity of life, that predation developed. And predation was a selection pressure for the development of two things. First, sensory systems: these little floating tubes had no visual system, not even a rudimentary visual system or auditory system; they had no sensory systems. They didn't really need them. And they also had no internal nervous system to control any systems inside, because they didn't have any systems inside, really, except a motor system and a gut. So they had to develop sensory systems, whether they were predator or prey, and most predators are also prey, right? They had to develop distance senses so they could detect what was going to happen, what was out there. But they also had to figure out what was meaningful and what wasn't, what was valuable, because it's expensive to run a system, and the two most expensive things that any human, or any organic system, can do are move and learn. And so the development of the systems of the body sort of served the purpose of helping to determine the value to the organism. Now, it turned out along the way that those systems also developed the capacity to send sensations to the brain, which had to be made sense of. If you completely demolish that and say, okay, well, you have a machine whose purpose isn't to sense things in the world and make sense of them so that it can predict what to do next, then maybe you don't need a body. But then you're not even talking about something that... I don't even know; you'd have to give me an example for me to reason through it in terms of the energetics.

I wonder
if maybe "body" is throwing me off, because an AI's purpose can also be to survive, to exist, and, to say it very simplistically, it needs electricity, or its connection to the cloud or something, but that can be its body.

What's its function? What does it do?

What do you mean?

It doesn't just sit there when you plug it in; it does something. What's its function? What does it do?

Do you mean, what does artificial intelligence do?

You're saying, okay, it gets its energy from a plug, I get that, but what is it actually attempting to recognize or do? What's its function?

We can use it for, I don't know, some industrial application, or maybe a self-driving-car AI, for example.

Okay, so it's driving a car, it's driving a car for you. It has to sense things in the world, right? And then the question is, can it experience emotion?

And in your three-part model, I agree with two of the three; I was questioning the body. And maybe the body is reflected in its survival, to create that value.

Here's what I would say. Okay, let's take this as an example - I'm just doing this off the top of my head, but let's take it as an example. Sure, you
can just plug it in and it can get its energy from an electrical outlet, but still, you want an efficient machine that uses electricity efficiently; otherwise it would be more expensive than it needs to be. And so that means you'd want it to do things predictively, because that's actually more energy-efficient. All brains are structured to work efficiently; the energy source there is glucose and other organic sources, but it's the same principle basically. In fact, the whole idea of predictive coding, which is what I'm talking about, comes from cybernetics, and then researchers who study humans were like, oh, wait a minute, that actually could be really useful for explaining things here. So you'd still want it to be super efficient, presumably. If it's driving a car, it has to determine what it has to pay attention to and what it doesn't; it can't be frivolous in its energy use. So it's got to be predictive, it has to basically not pay attention to some things, and it probably has a bunch of systems that it has to keep in balance so that it's working efficiently. So far, there's nothing in there that actually violates anything that I've said. I think I was trying to
be really careful to say: when I say a body, I don't literally mean a flesh-and-blood body. I mean that one of your brain's basic jobs is to keep the systems of your body in balance, and that requirement, which is called allostasis, forces a lot of other things to be true about how the system works. So if you want AI to do anything like a human, it has to be put under the same selection pressures as a human, not literally with flesh and blood. If, however, you're talking about a function that a human can't do, or that isn't relevant to humans, then probably nothing I've said is relevant to you at all, because we're only talking about humans. But could a computer that drives a car feel emotion? Maybe, if it had sensory inputs that it had to make sense of. But the problem is, I don't know that I would call that emotion, because for humans, the brain makes a distinction between the sensations from the body and the sensations from the world. If you didn't have sensations from your body, you wouldn't have affect, and so it just wouldn't be the same. But I don't know; maybe. I can't really answer.

Maybe, actually, can I try to offer one idea that might combine you
both?

Yeah.

So, what if emotion is just a kind of heuristic for how your body feels? You don't have enough computing power to process everything, so you summarize it, and a machine in that regard would need the same heuristics. A heuristic sort of like "something has gone wrong and my views are off" can in a way be seen as emotion, and for us it would be like, "something is off for me, I feel pain." You don't really know exactly where the pain is, but that's a signal for deeper investigation, right? And it might be one of the causes. I mean, I don't believe our brain is a pinnacle of engineering. Correct me if I'm wrong, but let's say the frequency of our neurons is like a hundred hertz, so the bandwidth is really limited, and the only thing that gives us an edge is that you have like a hundred billion of them. A machine might not need that, because its frequency is higher, I guess.

Right, but I think - and maybe I'm wrong - it comes down to a philosophical question. Okay, so a machine driving a car would have sensory
would have sensory
inputs that it has to make sense of and
it would have to do it predictively and
all of that but so it would have to have
category would have to do ad hoc
categorization and it would have to
maybe not but I think that would
probably be efficient way to do it so
it's making categories and it's
perceiving things but so when does that
become an emotion and when doesn't it I
mean you could also ask that of humans
right I mean we I mean I you know nobody
asked me this question but you know like
what is the difference between an
emotion category and any other kind of
category that a human can cat can develop
you know any kind of any other kind of
category that is of this sort which is
ad hoc and of social reality and the
answer is
nothing nothing is different so you know
in some ways it's a not a question I
think that science can answer because in
this culture we've drawn a boundary and
we've said well these things are
emotions and these things aren't there's
something rat...they're thoughts and in
half the world people don't make that
distinction so is it possible to to
develop categories you know to do ad hoc
categorization to do a predictively to
make sense of the world or sensations
sensory input from the world without a
body sure sure you could do it without a
body but then it probably wouldn't be
what we would call human emotion or what
feels to us like human emotion but but
of course you know it would be it could
be similar I guess to the experiences
that our brains can make but I I don't
know I I have to think about it more
actually my iPad is speaking to me I
Thanks, Lisa, for a great presentation. I had a follow-up question. Suppose the human brain and consciousness could process all of the interoceptive signals, everything from the world, all the percepts, in real time, so there's no bandwidth issue; suppose the human brain could just process everything. The first question is: do you believe we would still have affect, that sort of simple summary state of where we are, if we could represent every piece of information coming in? And the follow-up question, depending on the answer, is how that relates to our notion of the emotional experience.

Let me think about this for a second. So, if we could sense everything,
our wiring would have changed, right? Because the reason we can't is that we don't have the wiring to do it. But would we have affect? I think yes, I think we still would, and I'll tell you why: because of the way the brain is structured. It's structured to do dimension reduction and compression of information. If you were to take the cortex off the subcortical parts of the brain, just lift it off and stretch it out like a napkin, and look at it in cross-section, what you would see is that in primary sensory regions, like primary visual cortex or primary interoceptive cortex, which is where the information from the body goes, there are a lot of little pyramidal cells with few connections, and the information cascades to the front of the brain, where there are fewer cells which are much bigger, with many, many connections. What the brain is doing with all sensory inputs is compression; it's doing dimension reduction. That's how multimodal learning happens; that's how really all learning happens, essentially. It happens in vision, it happens with audition. And so even if we had higher-dimensional access to the sensory changes in the body, I still think, given the way that the cortex is structured, we would still have affect, which is basically just a low-dimensional representation of the stuff going on inside your body.
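The compression idea can be sketched numerically (a toy model with invented numbers, not neural data): many correlated "interoceptive channels" driven by one underlying state can be summarized by a single dimension that captures most of the variance, a crude analogue of affect as a low-dimensional summary.

```python
import random

random.seed(1)

# Toy data (all numbers invented): 20 correlated "interoceptive
# channels" driven by one underlying bodily state, plus noise.
n_channels, n_samples = 20, 200
states = [random.gauss(0, 1) for _ in range(n_samples)]
data = [[s * (1 + 0.1 * c) + random.gauss(0, 0.3) for c in range(n_channels)]
        for s in states]

def first_pc(rows, iters=100):
    """Leading principal component via power iteration (pure stdlib)."""
    n = len(rows[0])
    means = [sum(r[j] for r in rows) / len(rows) for j in range(n)]
    centered = [[r[j] - means[j] for j in range(n)] for r in rows]
    v = [1.0] * n
    for _ in range(iters):
        # Multiply by the (unnormalized) covariance: w = X^T (X v)
        proj = [sum(r[j] * v[j] for j in range(n)) for r in centered]
        w = [sum(proj[i] * centered[i][j] for i in range(len(rows)))
             for j in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, centered

v, centered = first_pc(data)
scores = [sum(r[j] * v[j] for j in range(len(v))) for r in centered]
total_var = sum(sum(x * x for x in r) for r in centered)
# Fraction of total variance captured by the single summary dimension:
print(sum(s * s for s in scores) / total_var)  # close to 1
```

One number per moment summarizes twenty channels almost losslessly here because the channels are redundant, which is the structural situation the cortex's cascade of converging connections exploits.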
[Applause]
you
For more infomation >> Dr. Lisa Feldman Barrett: "Can Machines Perceive Emotion?" | Talks at Google - Duration: 1:17:17.-------------------------------------------
How to Use Google Home Continued Conversations - Duration: 4:58.
Hello everyone. Thanks for tuning in again. If you have a Google Home, then
I'll almost guarantee that you've been looking forward to Continued
Conversations by Google. Now this feature has been rolled out and I'm going to show
you today how to use Continuous Conversations on your Google Home, how to get Continued Conversations enabled, and of course I'm going to
show you a few of the quirks about it that you might not know right out of the
box. So let's get going. Google's continued conversations feature is
relatively easy to get set up. I'm going to show you on an iPhone here. In the Google
Home application, you just need to head to the menu, and then more settings. Once
inside there, you're going to see your Preferences menu option, and then you're
going to head into Continued Conversation. In this section of the Google Home app you can
see which devices you own where this is available, or going to be available.
Continued Conversations only works on English US devices, so you have to set
your language to that, and then they will show up here. They also need the right firmware on board; I had to have at least version 1.32 to get this working. So you just need to take the slider at the top to the right, and then you've enabled Continued Conversation on your device. That's it. Once I head out of my preferences, I can slide down
to the devices, here's where your assistant language is. I went into my
basement home device, and you can see I have English United States selected. Even
though I'm in Canada, this is still working for me. I just have to have that
language selected. So let me show you a demo, and then I'll show you some of the
nuances about this with the Google Home. "What's the weather like?" "Currently in New
York City it's 81 and partly cloudy. Today it will be partly cloudy with a
forecasted high of 82 and a low of 62." "And what about tomorrow?" "In New York City
tomorrow it'll be mostly sunny with a high
of 77 and a low of 64." "Is there rain in the forecast?" "No it won't rain tomorrow in
New York City." "Thanks Google." "No problem." Now here just on the Google Home support
page, again Continued Conversation is available only when your Google Assistant
language is set to English United States. You're going to wake your device with the normal wake
words that you use. The only difference is once Google Home has completed
answering your question or your query, it's going to listen for about eight
seconds, or up to eight seconds for any follow-up questions. What I've noticed
using the device is it's much less than eight seconds unless your room is quite
noisy. How do you end this? Otherwise it will keep listening for eight seconds and continue the conversation. You can end it by saying "thank you", "thanks Google", or "I'm done". Any of those will stop the device, as you saw me do in the demo. There are a few things you can't use continued conversation with. Basically, it won't work during what they call an active session: during a phone call, while alarms or timers are actually going off or ringing, or when you're listening to music or video on your Google Home. One thing to note is
the only visual notification you get is that the lights on the top of the Google
Home will stay on when the device is listening. This doesn't change anything
from before, it's just that you will notice that those lights remain on
for much longer now. One last thing to tell you about, this is pretty
interesting, one person can ask a question, and then a second person can
ask the follow-up question. So you can basically have a multi-person conversation with Google Home. The only thing you'll need to be wary of is asking for personal information. Once you have your voice assistant trained to your voice, if person one asks about, say, a calendar event, and then person two asks about their calendar within that same continued-conversation session, that's not going to work. You're going to need to basically
stop the session, and then let that second person go ahead and ask their
question.
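The follow-up behavior described above (listen for up to roughly eight seconds after answering, stop early on an end phrase) can be sketched as a toy loop. This is only an illustration of the flow as described in the video, not Google's actual implementation; the function, constants, and phrase list are hypothetical:

```python
FOLLOW_UP_WINDOW_S = 8.0                      # max wait after an answer
END_PHRASES = {"thank you", "thanks google", "i'm done"}

def continued_conversation(utterances):
    """Process (seconds_since_answer, text) pairs; return handled queries.

    Listening stops when the window expires or an end phrase is heard.
    """
    handled = []
    for delay, text in utterances:
        if delay > FOLLOW_UP_WINDOW_S:        # window expired: back to wake word
            break
        if text.lower() in END_PHRASES:       # explicit end of session
            break
        handled.append(text)                  # treated as a follow-up query
    return handled

# The demo from the transcript: weather, tomorrow, rain, then "Thanks Google".
queries = [(2.0, "What's the weather like?"),
           (3.5, "And what about tomorrow?"),
           (4.0, "Is there rain in the forecast?"),
           (1.0, "Thanks Google")]
print(continued_conversation(queries))
# -> the three weather questions; the end phrase stops the session
```

The same loop also captures the quirk noted in the video: a follow-up that arrives after the window simply isn't heard, and you're back to using the wake word.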
-------------------------------------------
Melania Trump Accidentally Reveals Why She Won't Stop Her Husband's Bullying - Duration: 3:51.
On Sunday evening, Melania Trump attended the annual gathering of the Students Against
Destructive Decisions, where she gave a speech about the importance of kindness, compassion
and all of the things that we would like our children to have as they grow into adults.
Here's a brief snip of Melania Trump's speech to that group:
Kindness, compassion and positivity.
These are very important traits in life.
It is far easier to say something that is too ... It is far easier to say nothing than
it is to speak words of kindness.
See, what's really interesting about what she just was finally able to say there is
the last part: "It's easier to say nothing than it is to actually do something and stand
up to someone, perhaps a bully."
So she accidentally just admitted why she's not standing up to her husband over his bullying behavior towards damn-near everyone in this country that doesn't blindly follow him.
Because it's easy.
Because it's easier for her to sit in that White House and not say a word about him or
to him than it is to actually do something.
So instead, she chooses to go out there, be this massive hypocrite about, "I want to stop
bullying.
I'm creating this Be Best campaign.
I don't really care, do you?"
Melania is just like the rest of the group, folks.
She is as heartless and as hypocritical as her husband.
That's why she's not standing up to him.
It's not because she's afraid of him, it's because she doesn't care that he does this.
She supports it.
She was a birther as well, questioning the validity of Barack Obama's presidency and
his birth certificate.
She is a horrible human being.
And we need to stop pretending that any one of the Trump family is anything but that.
But Melania, to go out there and act like she cares, or in some instances, like she
really doesn't care, is absolutely disgusting.
She does not care if these children are nice, she doesn't care what kind of human beings
they grow up to be, because she got what she needed in life.
She's got money, she's got power, she's got fame and she's got looks.
So nothing else matters.
It's an absolutely disgusting and despicable way to go through life, but that is exactly
what she has become.
There was a point when a lot of people, myself included, had a little bit of sympathy for
her.
You know, it felt like she was almost trapped in this marriage that she didn't wanna be
in, but now it has become increasingly clear that this is who she wants to be and it's
where she wants to be and if this is the kind of behavior you're going to have, go out there,
say one thing but do another behind closed doors, then you're absolutely open for criticism
and we are not going to hold back.
Your husband, at the same time you're out there telling these kids to be kind and compassionate
towards one another, is on Twitter assaulting and attacking people like Maxine Waters and
Mark Warner, Democrats because they dare stand up to him.
He's calling them names.
Chuck Schumer, Chuck Todd, anybody he can get his tweets out to, he's attacking.
And you're saying nothing because you are complicit and complacent and most likely,
you probably support everything your disgusting husband is doing.
-------------------------------------------
A Creepy Scary Doll Haunted Our Cabin - Duration: 3:27.
We've been friends for about three years and we decided to go to her cabin last summer.
So it was a day when it was kind of groggy outside; it was rainy.
My grandma decided to come in and she said, "Hey guys look what I got."
She had this really old doll, which was Native, and it just gave off a creepy feeling. We asked her where she got it, and she had gone to this weird
kind of garage sale type thing and she bought it off of someone for 60 cents
and the person said that they didn't want it anymore.
She put the doll in bed and tucked it in and gave it a kiss and walked away.
We then said that it looked really creepy, and I guess that might have pissed the doll off, because when her parents and her grandparents went out for bingo, the Wi-Fi went out instantly. We decided to talk about funny things. Then the weirdest thing happened: we heard a little voice behind us. No, no!
We heard a baby crying first, yeah, a baby behind the doors, and I was like, "Are there any babies, like, living around here?" She said, "No!" Then later on we heard scratching from inside the room, like tiny hands, as if it was maybe a cat scratching at a door. A little freaked out, we then decided to leave the house, and so we went on our bikes, and it got to the point where it was really, really cold, and we couldn't handle the cold, so we had to go back.
We came back, opened the door, and went into the bedroom. She had put, um, a new bra that she got, in a plastic bag, in the middle of the bed. When we got back it wasn't there. We looked around and couldn't find it. I looked under the bed: the bra was out of the bag and under the middle of the bed. And then from behind us we heard this low, demonic, "Yeah." We both looked back and we're like, "Did you hear that?" We opened the door and we went into the bedroom. The blanket was pulled up underneath the doll's eyes, and we had put it completely over it. So it got to the point where we were, like, really freaked out. Then she went up to this tree, and it was a fake tree,
and she said, "I hope nothing happens to this tree. I really love this tree."
So my parents ended up coming back and we couldn't sleep that night.
We were so scared. And so the next day we woke up, and the leaves were all over the floor, scattered, which kind of freaked us out. That was when we realized that we had pissed that doll off. My grandma loves that doll: she talks to the doll, to the point where she brushes its hair. And when we went in there, my grandma came in and she got really mad at us, and she closed the door on us and told us to never go in there again.
-------------------------------------------
Supreme Court says California abortion notice law is likely unconstitutional - Duration: 6:09.
Supreme Court Just Made Monumental Ruling That Has Now Crushed California.
In a monumental decision for religious freedom and free speech, the Supreme Court ruled in favor of pro-life groups that counsel pregnant women to make choices other than abortion.
This ruling invalidated a California law that required pro-life centers, such as pregnancy
medical clinics, to prominently post information on how to obtain a state-funded abortion.
In a 5-4 ruling, the court ruled the state law was a violation of the Constitution.
The decision will have far-reaching effects casting doubt on the validity of similar laws
in place in Hawaii and Illinois.
The decision marks the first time the country's highest court chose to hear an abortion-related
case during the Trump administration.
Pro-life advocates, including The National Institute of Family and Life Advocates (NIFLA),
praised the Supreme Court for their decision in NIFLA vs. Becerra in what it considers
a "critical free speech case."
NIFLA founder and president Thomas Glessner, J.D., said in a statement: "California's
threat to pro-life pregnancy care centers and medical clinics counts among the most
flagrant violations of constitutional religious and free speech rights in the nation.
The implications of the Supreme Court's decision, in this case, will reverberate nationwide,
to similar unconstitutional laws in Illinois and Hawaii."
The California law from 2015 dubbed the "Reproductive FACT Act" (AB 775) previously required all
pro-life pregnancy centers to post signage notifying their patients where and how they
can receive state taxpayer-funded abortions.
The law applied to hundreds of privately funded pregnancy centers.
California has a significant bias in maximizing abortions in their state and it shows.
Many California lawmakers receive campaign donations directly from Planned Parenthood.
Lawmakers such as Attorney General Xavier Becerra and Senator Kamala Harris are among
the recipients.
Abortion is not the only option and should never be promoted as such.
Adoption is also an option, as is education and job training options for young mothers.
Many consider this a major blow to the eugenics agenda against the evil of Planned Parenthood
and their life stealing agenda.
Pro-life groups across the country praised the Supreme Court's decision to affirm life,
free speech, and religious freedom today.
Pro-life centers petitioned the Supreme Court to hear their case after the San Francisco-based
9th U.S. Circuit Court of Appeals ruled against them last year.
The court sided with the state in a 3-0 ruling, saying that the state was acting within its
power of regulating medical providers.
The appeals court also ruled similarly stating that abortion advertisements in pro-life centers
did not violate free speech rights because such signage stated facts without encouraging
women to actually seek an abortion.
However, after hearing the case the Supreme Court did not agree.
The Pacific Justice Institute filed a request for the Supreme Court to review the law.
They argued that the state had effectively stripped pro-life centers and the people who
run them of their right to free speech, much like Masterpiece Cakeshop v. Colorado Civil
Rights Commission which was also recently ruled on by the Supreme Court.
Alliance Defending Freedom petitioned the Supreme Court to halt the law, arguing that
it forced the pro-life centers to act contrary to their core mission and violated their constitutionally
protected freedoms.
Alliance Defending Freedom's Senior Counsel Kevin Theriot welcomed the Supreme Court's
decision.
Theriot said in a statement –
"Forcing anyone to provide free advertising for the abortion industry is unthinkable—especially
when it's the government doing the forcing.
This is even more true when it comes to pregnancy care centers, which exist specifically to
care for women who want to have their babies.
The state should protect freedom of speech and freedom from coerced speech.
Information about abortion is just about everywhere, so the government doesn't need to punish
pro-life centers for declining to advertise for the very act they can't promote."
Ashley McGuire, Senior Fellow with The Catholic Association, said that she hopes this Supreme
Court decision will "put an end to these unwarranted free speech assaults so that the
centers and their staff can go on helping women without harassment from the abortion
industry.
Recent efforts to force America's pregnancy centers to advertise for abortion isn't
just an attack on free speech, it's an attack on the vulnerable women who find help and
healing in them.
These centers offer pregnant women in crisis a true choice in addition to dignified care
and do so with no profit motive and no political agenda, unlike their abortion clinic alternatives."
Jeanne Mancini, President of March for Life, also praised the decision.
"These benevolent centers, which exist solely to provide love and support for women facing
unexpected pregnancies and have no financial interest at stake, should not be forced to
violate their first amendment right to freedom of speech and conscience.
March for Life will showcase the heroic work of the pregnancy care movement at the 2018
March for Life with the theme 'Love Saves Lives,'" she said.
Catherine Glenn Foster of Americans United for Life said she was "pleased" to hear
the decision, stating – "Pregnancy Care Centers provide holistic care, resources,
and hope for vulnerable women who are facing unplanned pregnancies, and they should not
be compelled to promote the abortion industry's agenda by posting signs that violate their
mission and core values."
What do you think about this? Please share this news, scroll down to comment below, and don't forget to subscribe to Top Stories Today.
-------------------------------------------
Supreme Court Hands Trump An Earth Shaking Decision On His Travel Ban - Duration: 3:40.
-------------------------------------------
Beyond Music: Video Ideas for Artists - Duration: 2:59.
Beyond Music: video ideas for artists
You probably don't have time to record a new track
or film a new music video every week.
We understand—you're a busy person!
But posting new videos can help your audience stay engaged with your channel.
And there are loads of other easy, fun and creative video ideas to explore.
Not only do your fans love to hear from you
but consistent updates are a great way to build your library
show off your creative side and reach new audiences.
Not sure which ones to try?
No problem!
We've got some suggestions.
Some video formats can be filmed with little to no preparation and still give your fans a chance to enjoy your music.
One example is a lyric video.
Want to build excitement for a music video on the way?
Posting a shareable teaser or quick preview can help accomplish that.
It doesn't matter if it's 10 seconds or 60.
What matters is letting your subscribers know something great is on the way.
Your videos also don't necessarily have to involve music.
Want your fans to know you better?
It doesn't take much prep to post a vlog.
Or do a Q&A video from questions pulled from the Comments section of a popular video.
You can even switch it up by having a friend or fellow band member conduct an interview.
You can let your fans go behind the scenes
by posting a candid video from the soundcheck or tour bus.
Try making a fan's day with a surprise visit and catching it on camera.
We call them "Feel-Good videos" for a reason.
That said, there are lots of other ideas that do involve music.
Everyone loves watching artists collaborate with each other.
Or hearing an acoustic version or remix of a previous track.
And don't forget dancing. There's always time for dancing!
You could film an instructional dance video
where you teach fans the moves from your latest video.
Or try kicking off an interactive music challenge
where fans post their own videos dancing to your music.
Who knows? Your wacky dance challenge might be the next sensation.
Don't forget, all of these videos can also be done live.
You can go Live as you surprise fans waiting in line,
do a live acoustic session or just chat with fans.
Any video you make with YouTube Live is saved and added to your library right away.
Whether you rehearse or just hit Record, it's up to you.
If you're interested, you can click here
to learn more tips for setting up and knowing what to check for.
If you do choose to do a Live Stream, remember to promote it beforehand.
By letting everyone know where and when you'll be streaming
you can maximize fan engagement.
To make sure things go off without a hitch
here's a handy checklist of things to check before you go Live.
Battery life
Data signal
Audio quality
Camera angle
Finally, placing a sign or title card can help fans know they're in the right place.
And don't forget to check the comments!
Fans are usually very excited to interact with you.
So there you have it.
Several ways to fill out your channel with fun and exciting content.
Thanks for watching.
We'd love to hear about what new video types you'll be experimenting with.
So let us know in the Comments.
And don't forget to subscribe.
-------------------------------------------
Granny Spots CNN's Jim Acosta At Trump Rally, Gives Him 'Nasty Surprise' On Live TV - Duration: 4:59.
-------------------------------------------
Nancy Pelosi Likely Gags On Her Lobster Tail After How Trump Just Tied Her To Maxine - Duration: 3:35.
-------------------------------------------
Asian Very Funny Fails 2018 🔔 Asian Best Fails 2018 - Duration: 10:42.
please subscribe
-------------------------------------------
What is the PCdoB's project? - Duration: 1:26.
-------------------------------------------
Official Latest AOSP Extended 5.6 - Review || The New AEX 5.6 - It's AWESOME ❤️ - Duration: 4:56.
Hey there, this is Kali. How are you doing?
Good?
In this video we will be taking an in-depth look at the latest official AEX 5.6, an Oreo 8.1-based, fully customizable Android custom operating system.
And thanks again to the developer Sushant Kumar for making this one for this device.
So, without any further ado, let's take an in-depth look.
//intro// Well, this is AEX 5.6, which is "Android Open Source Project Extended."
We already made a video on official AEX 5.0 in its alpha stage.
At that time it had tons of bugs and problems.
But now, it's awesome.
Everything is working fine.
Thanks for giving us this.
In some of the upcoming videos, I am going to do some cool stuff with this ROM, so stick with this ROM for a few days.
First, we start with the boot animation.
It has a black background, and it's like an AutoCAD design, drawing-like stuff.
Really wonderful.
On the home screen we have Pixel Launcher as the default launcher.
But it's not the normal Pixel Launcher: it's a little bit tweaked and modded.
In the normal Pixel Launcher you can't get the double-tap-to-sleep feature, but you can do that on this launcher.
You also get customization of the app drawer columns and rows, which you can't get on the normal Pixel Launcher.
This is not the default wallpaper of AEX; I changed it.
Also, I have been working with this ROM for the last couple of days, so I have made lots of changes in the settings and AEX features.
And the status bar is like in all the Oreo ROMs, but here we have keys to increase and decrease the brightness step by step.
Also, at the bottom of the status bar you can toggle directly to the memory settings; there is a key to do that.
As we saw in the DOT OS in-depth look video, this ROM has its own unique wallpaper app, called "AEXPaper."
The app consists of AEX-branded wallpapers.
This ROM also ships the Via browser app. Unlike the normal default browser, Via has some useful features and customization for appearance and user experience, ad blocking, and so on.
These things come in handy.
So now we look at the features in the settings.
The name "Extended" suits the option called "Extensions."
You have tons of features, like every other custom ROM, such as Resurrection Remix and DOT OS.
But the cool feature I found is that you can change the Recents layout to Stock, Grid, or Android Go.
On this device I use the Android Go Recents; it gives cool performance.
The main concept of this ROM is to give the user an extreme level of UI/UX customization.
It's also AOSP-based, so it gives lag-free performance.
I already knew about this ROM's customization level, so to check the performance I installed Free Fire on it.
Honestly, it gives cool gameplay; the flow of the game on this ROM is not bad.
As I said earlier, I have been using this ROM for the last couple of days, and battery-wise I am not seeing any changes: the battery drainage level is the same as Resurrection Remix, DOT OS, and Pixel.
So, no issue.
Well, here is the conclusion: you can use this ROM as a daily driver.
It has all the cool stuff, and I am personally using it as a daily driver on my Asus Zenfone with no issues.
In the last two episodes of the in-depth look, I suggested using Pixel Experience as well as DOT OS.
That doesn't mean I am telling you to use every ROM as your primary.
The builds I covered in the last two in-depth looks, and also this one, are really wonderful updates for this device, and for most devices.
Each ROM has its own uniqueness.
So, tell me your taste: which one do you use?
Answer in the poll as well as in the comment section: Pixel Experience, DOT OS, or AEX.
So, thank you for watching this video.
With your support I am doing this.
Each and every view, like, and comment gives me lots of hope and encourages this channel.
Thanks a lot.
I will see you in my next one, the good one.
KOTMTO
-------------------------------------------
This Closed Door Meeting Landed Robert Mueller In Deep Trouble - Duration: 3:19.
Robert Mueller is in deep trouble.
The special counsel's investigation has been plagued by accusations of bias and corruption.
And now one closed door meeting landed Mueller and his team in big trouble.
Inspector General Michael Horowitz's report exposed the Mueller investigation for the
fraud that it is.
Horowitz discovered a disturbing amount of anti-Trump bias from the FBI agents and lawyers
who handled both the Clinton email probe and the Russia investigation.
The Inspector General's findings led millions of Americans to question the legitimacy of
the Russia investigation.
Horowitz's findings provided evidence to back up the claims that the Russia investigation
was nothing more than a political hit job launched by Trump's enemies in order to
frame him and his campaign.
One example of the anti-Trump bias and how it tied into the Russia investigation was
a series of text messages sent from a source the report identified as "FBI attorney number
two."
This FBI attorney describes their disgust with Trump's victory, as well as pledges
their loyalty to the "resistance".
The messages read:
"I AM NUMB."
"I AM SO STRESSED ABOUT WHAT I COULD HAVE DONE DIFFERENTLY."
"HELL NO.
VIVA LE RESISTANCE."
The FBI attorney is alleged to be Kevin Clinesmith.
Clinesmith's identity took on added significance when it was revealed he was one of the FBI
agents who interviewed Trump campaign aide George Papadopoulos.
Papadopoulos pleaded guilty to making false statements about his meeting with Maltese
Professor Joseph Mifsud.
Mifsud allegedly offered Papadopoulos "dirt" on Hillary, but it's in dispute whether
Mifsud ever offered Papadopoulos Hillary Clinton's emails.
The professor – who is alleged to have ties to western intelligence services – claimed
to be connected to Moscow.
Papadopoulos pleaded guilty to making false statements about when he was in contact with
Mifsud – but not about the contents of their conversation.
The former Trump campaign aide was also not charged with conspiring with Russia, nor did
he plead guilty to colluding with Russian intelligence during the campaign.
Clinesmith's presence during the interview has Americans wondering if the deck was already
stacked against the Trump campaign.
Was it just a coincidence that so many anti-Trump FBI personnel worked on the Clinton email
and Russia investigations?
Or did James Comey hand-pick agents he knew would push both investigations towards a predetermined outcome?
If that was the case, Comey wouldn't have needed to order anyone to go easy on Clinton
or start a politically motivated witch hunt into Donald Trump and his campaign.
The agents assigned to both cases would have instinctively known to treat Clinton with kid gloves while using the thinnest of evidence to launch an investigation into Trump.
Horowitz's report exposed the truth about the motivations behind the FBI agents involved
in the Russia investigation.
That's why a recent Morning Consult poll found a 26-point spike in disapproval of the
Mueller investigation.
The American people found out the truth about Mueller's investigation thanks to the involvement
of Never Trump diehards like Clinesmith and decided the investigation is not on the level.
We will keep you up to date on any new developments in this story.
-------------------------------------------
What Is Inside Bang Snaps Pop-Its (Party Snap-Its, Pops Crackers) & How Do They Work #tech #Science - Duration: 3:02.
They are known by many names: Bang snaps, snappers, party snaps, crackers, pop pops,
fun snaps, "Lil' Splodeys", Throwdowns, T N T Pop Its, snap-its, poppers, poppies, pop-its,
whip'n pops, Pop Pop Snappers, whipper snappers, whiz-bangers, snap'n pops, bangers, devil
bangers
But what is really inside those pop-its? And how do they work?
Nothing much really, just a bit of coarse sand or gravel twisted in a cigarette paper.
But the gravel has something extra.
It has been impregnated with silver fulminate, a ridiculously sensitive explosive that will detonate when subjected to impact, friction, pressure, heat, and even electricity.
The quantities of silver fulminate used are so low that it makes the product quite safe
to use even by children, or your mother.
-------------------------------------------
Supreme Court Hands Trump An Earth-Shaking Decision On His Travel Ban - Duration: 3:41.
-------------------------------------------
WP33002924 - Replacing Your Maytag Dryer's LP to Natural Gas AP6008009 PS11741137 - Duration: 7:31.
Hi, my name is Bill, and today I'm going to be showing you how to convert your gas dryer from liquid propane to natural gas. For this repair we'll be using a small Phillips head screwdriver, a 3/8 inch wrench, a 5/16 inch nut driver, a pair of channel locks, and a flat head screwdriver.
WARNING: before doing any repairs, please disconnect your power source.
So this is the dryer we're going to be using for this demonstration; it's a Maytag. Keep in mind your dryer might be a little bit different than what we have here, but the same technique should still apply. The first thing you want to do is make sure you turn your gas off. Now we need to disconnect the gas line, so we have our channel locks here, and we're just going to twist that until it's loose.
Now that it's loose, we can unscrew it the rest of the way by hand.
So I'll be using a Phillips head screwdriver, and it's our little short stubby guy, because we're dealing with an awkward angle and not a lot of space.
Now that we have those screws off, we can tilt the front panel forward and those clips will come right out, and now we're going to carefully set this down. We still have wires connected to the front bulkhead, so all we're going to do is just unplug these two wires here, and we can set this bulkhead off to the side. So now we're going to unplug these wires here, and then we need to remove these two screws holding this entire assembly down.
and once you remove those screws should be able to pick up the burner assembly and carefully
pull it out now I want to loosen this screw here and as I loosen this screw I'm going
to hold on to the burner assembly because it has the igniter on it and we want to be
very careful that we don't drop that or damage it in anyway and once we have that off just
set that back inside the burner tube now I'll be using a 3/8 inch wrench to loosen the orifice
here, and once I do a couple of turns, you should be able to loosen it the rest of the way by hand. Now let's get to the top part of the gasket here and remove this cap as well.
Now you can grab your new OEM replacement gas conversion kit. If you don't have one already, you can find it on our online store. For this particular model we'll be using these two fittings, and we're going to replace the top part here with the piece that corresponds, using that flat head screwdriver. You're just going to screw that down all the way, and once you have it as tight as you can by hand, use your screwdriver and tighten it down. Okay, that's tight. Now we'll put the orifice into place, again screwing that down by hand and tightening it up.
Now you can put the burner assembly back into the dryer. You'll want to make sure that the long gas tube slides back through the hole in the back of the dryer, and line up the tab with the slot right there. Once those are lined up, just move the entire piece so it goes into that slot. Once you have that tab in the slot, your screws should line up nicely, and we can screw the assembly back down into place.
Now we can plug these cables back in and continue putting the rest of the dryer back together. Now we can hook the wires back up; we're just going to make sure that we have the wires in the same arrangement they were in before: yellow goes on this side, and the blue one goes on this side.
Now we're going to want to put the front panel back into place. To do that, we're going to do the opposite of what we did to take it off: we're going to pick it up, angle it a bit, and just tilt it back until you hear both of those clips snap into place. Then you should be able to close it up.
Now we can screw the bottom back in. You're just going to make sure your panel is pushed in all the way, and once you do that, you should be able to get your screw started in the hole and screw it back in.
Now we can reconnect the gas line. We're just going to line that up and screw it on by hand as much as we can, and once you have it as tight as you can, grab your channel locks and finish tightening the rest of the way.
Now we can turn our gas back on. Finally, don't forget to plug in your appliance.
If you need to replace any parts for your appliances, you can find an OEM replacement part on our website, pcappliancerepair.com.
Thanks for watching, and please don't forget to like, comment, and share our video. Also, don't forget to subscribe to our channel; your support helps us make more videos just like these for you to watch for free.
-------------------------------------------
Al Bano-Power wedding, Lecciso's reply: 'I rule it out, but I won't go into the details' - Duration: 3:42.
-------------------------------------------
WPY312959 - Replacing Your Maytag Dryer's Drum Belt - Duration: 8:57.
Hi, my name is Bill, and today I'm going to be showing you how to replace the drum belt in your dryer. The reason why you would have to do this is because the drum is no longer spinning or because the belt is worn out.
For this repair we will be using a short phillips head screwdriver, a regular sized phillips head screwdriver, and a 5/16th inch nut driver.
WARNING before doing any repairs please disconnect your power source.
So this is our dryer that we're going to be using for this demonstration. It's a Maytag; keep in mind your dryer might be a little bit different than what we've got here, but the same technique should still apply.
First thing you want to do is make sure you turn your gas off.
So I'll be using a phillips head screwdriver, and it's our little short stubby guy, because we're dealing with an awkward angle and not a lot of space.
Now that we've got those screws off, we can tilt the front panel forward and those clips will come right out. Now we're going to carefully set this down; we've still got wires connecting the front bulkhead. So now all we're going to do is just unplug these two wires here, and we can set this bulkhead off to the side.
Now we're just going to pull out the lint filter and set that off to the side for now. Next we've got to get to a couple more screws and unscrew them; they're going to be on the inside of the dryer, though, just on this side of the lint filter.
Now we've just got one more screw in the middle, which is a phillips head screw, and one more screw that's holding this on. So we're just going to remove these two screws here, and the whole entire piece should just come right off. Now we're going to remove the screws off of this side, and we're just holding this in place so it doesn't fall down. You'll also want to remove this blue wire here, and with all that off, you should be able to remove the front bulkhead, and we can just set that down off to the side.
Now, the belt is still on here, so what I'm going to do is pull this drum out slightly, like this, and then go back into the back here. In order to get the belt off, we're going to have to push up on the lever to release the tension, reach in with our other hand, and take it out. Now we can pull the drum off, and as you pull it out, just be careful that you don't get the belt caught on anything else; just reach in there and get it out. Now we can take the old belt off the drum.
Now you can grab your new OEM replacement drum belt; if you don't have one already, you can find one at our online store. Now we'll put the new belt on the drum. As you can see here, there's a line where the old one was, so we're just going to set our drum belt right on top of that line there.
Now we can put the drum back into the machine, and as we put the drum into the machine, you're just going to want to make sure the belt goes along with it and doesn't get caught on anything, once again, because it's pretty loose on here. Now, to put the drum back on, we're going to take the belt, go on the other side of the wheel here, like this, and lift it up, keeping this all together, and then loop the belt around the shaft here. Make sure everything's on straight. There we go, and then everything should be able to spin freely now.
So now that the belt is on the wheel and the drive shaft, we're going to put the drum onto the roller wheels. You can see here, we'll spin the drum around a couple of times, and you'll see that our blower wheel is also spinning as we spin our drum. Once you've got that, you know that you've got it in the right way.
So now we can put the rest of the dryer back together. You can line up your duct assembly with the blower here, and that plastic will actually go on the inside; that'll help you with lining everything up. Once you do that, lift up on the tub a little bit, and everything else should line up nicely for you. We've got these little tabs here, and they'll help you with lining everything up; you're just going to want to slide the tabs into the bigger holes here. Once we do that, we'll screw it back on, and now we'll plug the blue wire back in here and screw these screws back in on the inside of the duct assembly.
Now I can put the filter back in, and now we can hook the wires back up. We're just going to make sure that we've got the wires in the same arrangement they were in before: yellow goes on this side, and the blue one goes on this side.
Now go back to the front; we're going to want to put the front panel back in place. To do that, we're going to do the opposite of what we did to take it off: we're going to pick it up, angle it a bit, and just tilt it back until you hear both of those clips snap into place. Then you should be able to close it up.
Now we can screw the bottom back in. You're just going to want to make sure your panel is pushed in all the way, and once you do that, you should be able to get your screw started in the hole, and now we can just screw it back in. Once that's screwed in, you can plug everything else in, and your repair is complete.
Now we can turn our gas back on. Finally, don't forget to plug in your appliance.
If you need to replace any parts for your appliances, you can find an OEM replacement part on our website, pcappliancerepair.com.
Thanks for watching, and please don't forget to like, comment, and share our video. Also, don't forget to subscribe to our channel; your support helps us make more videos just like these for you to watch for free.
-------------------------------------------
Get Paid To Play Games As Video Game Tester - Duration: 1:31.
Ever dreamed of getting paid to play games? Now it's possible. Hi, we're the Internet's number one source for video game tester jobs since 2008.
The gaming industry is now bigger than ever. From mobile phones to online games and home consoles, video games are everywhere, and it's now the most profitable industry, even overtaking the billion dollar movie business. These game companies spend millions to compete and make the best games possible. That's where you come in.
You see, before any game is released to the public, they look for people just like you to test games and give their honest and unbiased review of the game. This helps game developers polish and fix errors in the game, which results in a better and more enjoyable gaming experience, and a better game leads to more sales for the game company.
With our help, we can connect you to all of the major gaming companies, and you'll instantly have access to thousands of work-at-home video game tester jobs, location-based testing jobs and schedules, online surveys and paid reviews, and various gaming jobs, plus video tutorials and guides to help you get started with your video game tester career. Best of all, you can join today risk free.
Sign up for our 7-day trial today and start your very first video game tester job minutes from now. So what are you waiting for? Join now and get paid to play games today.