Thursday, August 30, 2018

Youtube daily Aug 30 2018

There's some good news this week for anyone who happens to be a federal contractor with

the US government.

The Department of Labor has now changed the rules, so that you federal contractors can

now discriminate against your own employees.

Here's what's happening at The Department of Labor, this is their new announcement,

they're actually directing federal contractors, any business that's doing business with the

government, and they're informing them that hey, don't forget about this Masterpiece Cake

bakery court decision, so you're protected if your religion doesn't mesh with the viewpoints

or lifestyles of your employees.

To put it in a simpler term that's a little less complicated for people, The Department

of Labor is sending out notices to people doing business with the government to let

them know that if they have any gay employees feel free to go ahead and fire them because

you're protected under these recent court orders.

Now, first and foremost that's not how that court decision worked.

It was not a broad, expansive decision.

It literally only applied to that one specific case, and the way that the prosecutors brought

the case against the bakery.

That is what was at issue, and yet The Department of Labor, right now, is trying to use that

as an excuse to tell their contractors that if you don't like the fact that you're hiring

somebody, or somebody works for you and they're a member of the LGBTQ community, or if they're

a member of a faith that you don't happen to agree with feel free to go ahead, and just

fire them, and don't fear any repercussions whatsoever because under The Department of

Labor's new rule, under the new rule of the United States discrimination is not only legal,

but it is actually embraced, and encouraged because that's what The Department of Labor

is doing here.

They're encouraging employers, federal contractors, people that we pay with our tax dollars, to

fire employees for being gay, or for being a different religion, or as time goes on maybe

for being the wrong color.

That's what happens in situations like these.

Every time we see something like this it is a slippery slope, and it is a slippery slope

that leads to broad government supported oppression and discrimination, and that is what we are

living under now with this Trump administration.

The Obama administration, their Department of Labor, to their credit, was actually encouraging

federal contractors to diversify, to hire people of different backgrounds, or people

with different lifestyles, and more importantly hire people with disabilities.

That's no longer the case.

The Department of Labor today is telling their contractors, hey, make sure you keep all your

employees, keep them white, keep them straight, keep them able-bodied, because we don't want

any other people like that walking around the capital trying to do business.

That is the America that we live in today, it's the America that we lived in in the 1940s

and '50s, and pretty much all the time before that, but that is exactly where the Republicans

in charge want to take us back to today.

For more information >> Labor Department Encourages Federal Contractors To Discriminate Against Employees - Duration: 3:32.

-------------------------------------------

🤔 ESTA BOTELLA DE LOS '70 OCULTA ALGO INTERESANTE... | MochiLeandro 43 🌎 - Duration: 14:31.

For more information >> 🤔 ESTA BOTELLA DE LOS '70 OCULTA ALGO INTERESANTE... | MochiLeandro 43 🌎 - Duration: 14:31.

-------------------------------------------

Episode 3 - Andrew Ng: Influential leader in artificial intelligence - Duration: 49:06.

>> With the rise of technology often

comes greater concentration of

power in smaller numbers of people's hands,

and I think that this creates

greater risk of ever-growing wealth

inequality as well.

To be really candid, I think that with

the rise of the last few waves of technology,

we actually did a great job

creating wealth in the East and the West Coast,

but we actually did leave large parts

of the country behind,

and I would love for this next one

to bring everyone along with us.

>> Hi everyone. Welcome to Behind the Tech.

I'm your host, Kevin Scott,

Chief Technology Officer for Microsoft.

In this podcast, we're going to get behind the tech.

We'll talk with some of

the people who made our modern tech world

possible and understand what

motivated them to create what they did.

So, join me to

maybe learn a little bit about the history of

computing and get a few behind the scenes insights

into what's happening today.

Stick around.

Today I'm joined by my colleague Christina Warren.

Christina is a Senior Cloud Developer Advocate

at Microsoft. Welcome back Christina.

>> Happy to be here Kevin,

and super excited about

who you're going to be talking to today.

>> Yeah. Today's guest is Andrew Ng.

>> Andrew is, I don't think this is too much to say,

he's one of the preeminent minds

in artificial intelligence and machine learning.

I've been following his work since

the Google Brain Project,

and he co-founded Coursera,

and he's done so many important things and

so much important research on AI and that's

a topic that I'm really obsessed with right now.

So, I can't wait to hear what you guys talk about.

>> Yeah. In addition to his track record as

an entrepreneur, so Landing.AI, Coursera,

being one of the co-leads of

the Google Brain Project in its very earliest days,

he also has this incredible track record

as an academic researcher.

He has a hundred plus really fantastically good papers

on a whole variety of topics in artificial intelligence,

which I'm guessing are on

many a PhD student's reading list

for the folks who are trying to get

degrees in this area now.

>> I can't wait. I'm really

looking forward to the conversation.

>> Great. Christina, we'll

check back with you after the interview.

Coming up next, Andrew Ng.

Andrew is founder and CEO of Landing.AI.

Founding lead of the Google Brain Project

and co-founder of Coursera.

Andrew is one of the most influential leaders

in AI and deep learning.

He's also a Stanford University

Computer Science adjunct professor.

Andrew, thanks for being here.

>> Thanks a lot for having me Kevin.

>> So, let's go all the way back to the beginning.

So, you grew up in Asia?

And I'm just sort of curious when was it that you

realized you were really

interested in math and computer science?

>> I was born in London,

but grew up mostly in Hong Kong and Singapore.

I think I started coding when I was six-years-old.

And my father had a few very old computers.

The one I used the most was some old Atari,

where I remember there were these books

where you would read the code in a book and

just type in a computer and then you had

these computer games you could play

that you just implemented yourself.

So, I thought that was wonderful.

>> Yeah, and so that was probably the Atari 400 or 800?

>> Yeah. Atari 800 sounds right.

It was definitely some Atari.

>> That's awesome. And what sorts of

games were you most interested in?

>> You know, the one that fascinated me

the most was a number guessing game.

Where you, the human, would think

of a number from 1 to 100,

then the computer would basically do

binary search by asking: Is it higher or lower than 50?

Is it higher or lower than 75 and so on,

until it guesses the right number.
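
[A minimal Python sketch of the guessing game Andrew describes, reconstructed only for illustration; it is not the original Atari-era listing, and the prompt wording is an assumption.]

```python
# Binary-search number guesser: the human thinks of a number from 1 to 100
# and the computer halves the remaining range with each question.
def guess_number(low: int = 1, high: int = 100) -> int:
    while low < high:
        mid = (low + high) // 2
        # The exact question wording is hypothetical; any yes/no split of the range works.
        answer = input(f"Is your number greater than {mid}? (y/n) ").strip().lower()
        if answer == "y":
            low = mid + 1  # the number lies in the upper half
        else:
            high = mid     # the number lies in the lower half, or equals mid
    return low

if __name__ == "__main__":
    print(f"Your number is {guess_number()}!")
```

At most seven questions are needed for 1 to 100, since each answer halves the remaining range.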

>> Well, in a weird way,

that's like early statistical Machine Learning, right?

>> Yeah, and then, so at six-years-old

it was just fascinating that the computer could guess.

>> Yeah. So, from

six years- did you go to

a science and technology high school?

Did you take computer science classes

when you were a kid or...?

>> I went to good schools: St. Paul's in

Hong Kong and then ACPS and Raffles in Singapore.

I was lucky to go to good schools.

I was fortunate to have grown up in

countries with great educational systems.

Great teachers, they made us work really hard but also

gave us lots of opportunities to explore.

And I feel like, computer science is not magic.

You and I do this, we know this.

While I'm very excited about

the work I get to do in computer science and AI,

I actually feel like anyone could do what I do if they

put in a bit of time to learn to do these things as well.

Having good teachers helps a lot.

>> We chatted in our last episode with Alice Steinglass,

who's the president of Code.org,

and they are spending

the sum total of their energy trying to

get K-12 students interested in

computer science and pursuing careers in STEM.

You're also an educator.

You are a tenured professor at Stanford and

spent a good chunk of your life in academia.

What things would you encourage students to think

about if they are considering a career in computing?

>> I'm a huge admirer of Code.org.

I think what they're doing is great.

Once upon a time, society used to

wonder if everyone needed to be literate.

Maybe all we needed was for

a few monks to read the Bible to us and we didn't

need to learn to read and write ourselves because

we'd just go and listen to the priest or the monks.

But we found that when a lot of us learned to read and

write that really improved human-to-human communication.

I think that in the future,

every person needs to be computer

literate at the level of being able to

write these simple programs.

Because computers are becoming so

important in our world and coding

is the deepest way for

people and machines to communicate.

There's such a scarcity of

computer programmers today that

most computer programmers end up writing

software for thousands or millions of people.

But in the future if everyone knows how to code,

I would love for the proprietors of

a small mom and pop store at a corner to

go program an LCD display

to better advertise their weekly sales.

So, I think just as with literacy,

we found that having everyone being able to

read and write improved human-to-human communication.

I actually think everyone in the future

should learn to code because that's

how we get people and

the computers to communicate at the deepest levels.

>> I think that's a really great segue

into the main topic

that I wanted to chat about today, AI,

because I think even you have used

this anecdote that AI is going to be like electricity.

>> I think I came up with that.

>> Yeah. I know this is your brilliant quote

and it's spot on.

The push to literacy in many ways is

a byproduct of

the second and third industrial revolution.

We had this transformed society

where you actually had to be literate in

order to function in this quickly industrializing world.

So, I wonder how many analogues you

see between the last industrial revolution

and what's happening with AI right now.

>> Yeah.

The last industrial revolution

changed so much human labor.

I think one of the biggest differences

between the last one and this one

is that this one will happen faster,

because the world is so much more connected today.

So, wherever you are in the world, listening to this,

there's a good chance that there's an AI algorithm

that's not yet even been invented as of today,

but that will probably

affect your life five years from now.

A research university in

Singapore could come up with something next week,

and then it will make its way to

the United States in a month.

And another year after that,

it'll be in products that affect our lives.

So, the world is connected in a way that

just wasn't true at the last industrial revolution.

And I think the pace and speed will bring challenges

to individuals and companies and corporations.

But our ability to drive

tremendous value from AI, from the new ideas,

as a tremendous driver of global GDP growth,

I think is also maybe

even faster and greater than before.

>> Yeah. So, let's dig in to that a little bit more.

So, you've been doing

AI Machine Learning for a really long time now.

When did you decide that that's

the thing you were going to specialize

on as a computer scientist?

>> So, when I was in high school in Singapore,

my father who is

a doctor was trying to implement AI systems.

Back then, he was actually using expert systems,

which turned out not to be that good a technology.

He was implementing AI systems of

his day to try to diagnose, I think lymphoma.

>> This is in the late 80's.

>> I think I was 15 years old at that time.

So, yeah, late 80's.

So, I was very fortunate to learn from

my father about expert systems

and also about neural networks,

because they had their day in the sun back then.

That later became an internship at

the National University of Singapore

where I wrote my first research paper actually,

and I found a copy of it recently.

When I read it back now,

I think it was a very embarrassing research paper.

But we didn't know any better back then.

And I've actually been doing AI,

computer science and AI pretty much since then.

>> Well, I look at your CV and

the papers that you've

written over the course of your career.

It's like you really had your hands

in a little bit of everything.

There was this inverse

reinforcement learning work that you

did and published the first paper in 2000.

Then, you were doing some work

on what looks like information retrieval,

document representations, and what not.

By 2007, you were doing

this interesting stuff on self-taught learning.

So, transfer learning from unlabeled data.

Then, you wrote the paper in 2009 on

this large scale unsupervised learning

using graphics processors.

So, just in this 10-year period in your own research,

you covered so many things.

In 2009, we hadn't even really

hit the curve yet on deep learning,

the ImageNet result from Hinton hadn't happened yet.

How do you, as one of the principals

who helped create the field,

what does the rate of progress feel like to you?

Because I think this is one of the things that people

get perhaps a little bit over excited about sometimes.

>> One of the things I've learned in my career

is that you

have to do things before they're obvious to everyone,

if you want to make a difference

and get the best results.

So, I think I was fortunate back in maybe 2007 or so,

to see the early signs

that deep learning was going to take off.

So, with that conviction,

decided to go on and do it,

and that turned out to work well.

Even when I went to Google to

start the Google Brain project, at that time,

neural networks was a bad word to

many people and there was a lot of initial skepticism.

But, fortunately,

Larry Page was supportive and then started Google Brain.

And I think when we started Coursera,

online education was not an obvious thing to do.

There were other previous efforts,

massive efforts that failed.

But we saw signs that we could make it

work, so we had the conviction to go in.

When I took on the role at Baidu at that time,

a lot of people in the US were asking me, "Hey,

Andrew, why on earth would you want to do AI in China?

What AI is there in China?"

I think, again, I was fortunate

that I was part of something big.

Even today, I think landing.ai

where I'm spending a lot of my time,

people initially ask me, "AI for

manufacturing? Or AI for

agriculture? Or try to transfer calls using AI?

that's a weird thing to do."

I do find people actually catch on faster.

So, I find that as I get older,

the speed at which people go from being really

skeptical about what I do

versus to saying, "Oh, maybe that's a good idea."

That window is becoming much shorter.

>> Is that because the community is maturing or

because you've got such an incredible track record that...

>> I don't know. I think everyone's getting

smarter all around the world. So, yeah.

>> As you look at how machine learning has

changed over the past just 20 years,

what's the most remarkable thing from your perspective?

>> I think a lot of recent progress

was driven by computational scale,

scale of data, and then also by algorithmic innovation.

But, I think it's really interesting when something

grows exponentially, people, the insiders,

every year you say, "Oh yeah,

it works 50 percent better

than the year before." And every year it's like,

"Hey, another 50 percent year-on-year progress."

So, to a lot of machine learning insiders,

it doesn't feel that magical.

It's, "Yeah, you just get up and

you work on it, and it works better."

To people that didn't grow up in machine learning,

exponential growth often feels

like it came out of nowhere.

So, I've seen this in

multiple industries with the rise of the movement,

with the rise of machine learning and deep learning.

I feel like a lot of the insiders feel like, "Yeah,

we're at 50 percent or some percent better than last

year," but it's really

the people that weren't insiders that feel like,

"Wow, this came out of nowhere.

Where did this come from?"

So, that's been interesting to observe.

But one thing you and I have chatted about before,

there's a lot of hype about AI.

And I think that what happened with the earlier AI winters is

that there was a lot of hype about AI that

turned out not to be that useful or valuable.

But one thing that's really different today is

that large companies like Microsoft,

Baidu, Google, Facebook, and so on,

are driving tremendous amounts of revenue as well as

user value through modern machine learning tools.

And there's that very strong economic support.

I think machine learning is making a difference to GDP.

That strong economic support

means we're not in for another AI winter.

Having said that, there is a lot of hype about

AGI, Artificial General Intelligence.

This really over-hyped fear of evil killer robots,

of AI that can do everything a human can do.

I would actually welcome a reset

of expectations around that.

Hopefully we can reset

expectations around AGI to be more realistic,

without throwing out the baby with the bathwater.

If you look at today's world,

there are a lot more people working on

valuable deep learning projects

today than six months ago,

and six months ago, there were a lot more people

doing this than six months before that.

So, if you look at it in terms of the number

of people, number of projects,

amount of value being created,

it's all going up.

It's just that some of the hype and

unrealistic expectations about, "Hey,

maybe we'll have evil killer robots

in two years or 10 years,

and we should defend against it."

I think that expectation should be reset.

>> Yeah. I think you're spot on

about the inside versus outside perspective.

The first machine learning stuff that I did was

15 years-ish ago when

I was building classifiers for

content for Google's Ad systems.

Eventually, my teams worked on some of

the CTR predictions stuff for the ads auction.

It was always amazing to me how simple an algorithm you

could get by with if you had

lots of compute and lots of data.

You had these trends that were driving things.

So, Moore's Law and things that we were

doing in cloud computing was making

exponentially more compute available

for solving machine learning problems

like the stuff that you did,

leveraging the embarrassingly parallel nature

of some of these problems and solving them on GPUs,

which are really great at

doing the idiosyncratic type of compute.

So, that compute is one exponential trend,

and then the amount of available data for

training is this other thing,

where it's just coming in at this crushing rate.

You were at the Microsoft

CEO Summit this year and you gave

this beautiful explanation where you said,

"Supervised Machine Learning is

basically learning from data,

a black box that takes one set

of inputs and produces another set of outputs.

And the inputs might be an image and the outputs

might be text labels for the objects in the image.

It might be a waveform coming in that has

human speech in it and the output might be a transcript of the speech."

But really, that's sort of at the core of

this gigantic explosion of

work and energy that we've got right now,

and AGI is a little bit different from that.
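
[A minimal sketch of the supervised-learning "black box" Kevin describes, learning an input-to-output mapping purely from example pairs; the tiny toy dataset and the choice of scikit-learn's LogisticRegression are assumptions made for illustration.]

```python
# Supervised learning in miniature: fit a model on (input, output) pairs,
# then apply the learned mapping to a new input.
from sklearn.linear_model import LogisticRegression

# Toy example pairs: each row of X is an input, each entry of y is the desired output label.
X = [[0.1, 1.2], [0.8, 0.3], [0.9, 0.1], [0.2, 1.0]]
y = [0, 1, 1, 0]

model = LogisticRegression()
model.fit(X, y)                     # learn the input -> output mapping from the examples
print(model.predict([[0.7, 0.2]]))  # query the learned "black box" on a new input
```

The same pattern scales up to the examples in the quote: images in, object labels out, or speech audio in, text out; only the model and the data change.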

>> Yes, in fact, to give credit where it's due.

You know actually many years ago,

I did an internship at

Microsoft Research back when I was still in school.

Even back then, I think it was

Eric Brill and Michele Banko

at Microsoft way back who had already published a paper

using simple algorithms, showing that basically

it wasn't who had the best algorithm that won,

it was who had the most data for

the NLP application they were looking at.

And so I think that the continuation of that trend,

that people like Eric and Michele had

spotted a long time ago,

that's driving a lot of the progress

in modern machine learning still.

>> Yeah. Sometimes, with AI Research

you get these really unexpected results.

One of those I remember it was

the famous Google CAT result from the Google Brain Team.

>> Yes, actually, those are interesting projects,

while I was still full time at Stanford,

my students at the time Adam Coates and others,

started to spot trends that,

basically the bigger you build

your neural networks, the better they work.

So that was a rough conclusion.

So I started to look around Silicon Valley to see

where can I get a lot of

computers to train really really big neural networks.

And I think in hindsight,

back then a lot of us leaders of

deep learning had

a much stronger emphasis on unsupervised learning,

so learning without labeled data, such

as getting computers to look at a lot of pictures,

or watch a lot of YouTube videos without telling

them what every frame or what every object is.

So I had friends at Google so I wound up pitching to Google

to start a project which

we later called the Google Brain Project,

to really scale up neural networks.

We started off using Google's Cloud,

the CPUs, and in hindsight,

I wish we had tried to build up

GPU capabilities at Google sooner,

but for complicated reasons,

that took a long time to do which is why I wound

up doing that at Stanford rather than at Google first.

And I was really fortunate to have

recruited a great team to work

with me on the Google Brain Project.

I think one of the best things I did was

convince Jeff Dean to come and work.

And in fact, I remember the early days,

we were actually nervous about whether

Jeff Dean would remain interested in the project.

So a bunch of us actually

had conversations to strategize,

"Boy, can we make sure to keep Jeff Dean engaged

so that he doesn't lose interest and go do something else?"

So thankfully he stayed.

The Google cat thing was led by my

PhD student at the time, Quoc Le,

together with Jiquan Ngiam;

they were the first two sort of

machine learning interns that

I brought into the Google Brain Team.

And I still remember when

Quoc had trained this unsupervised learning algorithm,

it was almost a joke, you know, I was like, "Hey!

There are a lot of cats on YouTube,

let's see if it learns a cat detector."

And I still remember when Quoc

walked over to me and said,

"Hey Andrew, look at this." And I said, "Oh wow!

You had an unsupervised learning algorithm

watch YouTube videos and learn

the concept of 'cat.' That's amazing."

So that winds up being an influential piece of work,

because it was unsupervised learning,

learning from tons of data for

an algorithm to discover concepts by itself.

I think a lot of us actually

overestimated the early impact of unsupervised learning.

But again, when I was leading Google Brain Team,

one of our first partners was

the speech team working

with Vincent Vanhoucke, a great guy,

and it was really working with Vincent and his team,

and seeing some of the other things

happening at Google and outside that caused a lot

of us to realize that there was

much greater short term impact to

be had with supervised learning.

And then for better or worse,

when a lot of the deep learning community saw this,

so many of us shifted so much

of our efforts to supervised learning,

that maybe we're under-resourcing

the basic research we still

need on unsupervised learning these days,

which maybe, you know...

I think unsupervised learning is

super important, but there's

so much value to be made with supervised learning.

So much of the attention is there right now. And I think,

really what happened with

the Google Brain Project

was the first couple of successes,

one being the Speech Project

that we worked with the speech team on.

What happened was other teams saw

the great results that

the speech team was getting with deep learning with our help.

And so, more and more

of the speech team's peers ranging from

Google Maps to other teams

started to become friends and

allies of the Google Brain Team.

We started doing more and more projects.

And then the other story is after,

you know, the team had tons of momentum,

thank god, we managed to

convince Jeff Dean to stick with the project,

because one of the things that gave

me a lot of comfort when I wanted

to step away from a day-to-day

role to spend more time in Coursera was,

I was able to hand over

leadership of the team to Jeff Dean.

And that gave me a lot of comfort that I

was leaving the team in great hands.

>> I sort of wonder, if there's

a sort of a message or a takeaway

for AI researchers in

both academia and industry about the Jeff Dean example.

So for those who don't know,

Jeff Dean might be the best engineer in the world.

>> It might be true. Yes.

>> But I've certainly never worked

with anyone quite as good as him.

I mean, I remember there was this-

>> He's in a league of his own. Jeff Dean is definitely-

>> I remember back in long,

long ago at Google.

This must have been 2004 or 2005,

right after we'd gone public,

Alan Eustace who was running all of

the engineering team at the time would,

once a year, send a note out to everyone in engineering at

performance review time to get your Google resume

polished up so that you

could nominate yourself for a promotion.

The first thing that you were supposed to do

was get your Google resume,

which is sort of this internal version of

a resume that showed all of your Google specific work.

And the example resume that he would send out was Jeff's,

and even in 2004,

like he'd been there long enough

where he'd just done everything.

And, you know I was an engineer at the time.

I would look at this and I'm like,

"Oh my god, my resume looks nothing like this."

And so I remember sending a note to Alan Eustace saying,

"You have got to find someone else's resume.

You're depressing a thousand engineers

every time you send this out."

Because Jeff is so great.

>> We're just huge fans really of Jeff.

So count me, you know, among the fans of Jeff; he's

not just a great scientist but

also just an incredibly nice guy.

>> Yeah. But this whole notion of coupling

world-class engineering and

world class-systems engineering with AI problem solving,

I think that is something that we don't

really fully understand enough.

You can be the smartest AI guy

in the world and you know just have this sort of

incredible theoretical breakthrough, but

if you can't get that idea implemented,

it's not that it has no impact, it just sort of

diminishes the potential impact that the idea can have.

That partnership I think you have with

Jeff is something really special.

>> I think I was really fortunate that

even when I started the Google Brain Team

I feel I brought a lot of

machine learning expertise, and Jeff

and other Google engineers,

early team members like Rajat Monga

and Greg Corrado (for Jeff it started as just a 20 percent project),

brought a lot of

systems abilities to the team.

And the other convenient thing was that,

we were able to get a thousand computers to run this.

And having Larry Page's backing and Jeff's ability to

marshal those types of computational

resources turns out to be really helpful.

>> Well, let's switch gears just a little bit.

I think it was really apt that you

pointed out that AI and

machine learning in particular are starting to

have GDP scale impact on the world.

Certainly, if you look at the products

that we're all using everyday,

there's many levels of machine learning involved

in everything from search to social networks to- I mean,

basically everything you use has got

just a little kiss of machine learning in it.

So, with that impact and

given how pervasive these technologies are,

there's a huge amount of

responsibility that comes along with it.

I know that you've been thinking a lot

about ethical development of AI

and what our responsibilities are

as scientists and engineers

as we build these technologies.

I'd love to chat about that for a few minutes.

>> Yeah. There's the potential to promulgate

things like discrimination and bias.

I think that with the rise of technology often

comes greater concentration of

power in smaller numbers of people's hands.

And I think that this creates greater risk

of ever-growing wealth inequality as well.

So, we're recording this here in California,

and to be really candid,

I think that with the rise

of the last few waves of technology,

we actually did a great job

creating wealth in the East and the West Coast,

but we actually did leave large parts

of the country behind,

and I would love for this next one

to bring everyone along with us.

>> Yeah. One of the things that I've spent a bunch

of time thinking about

is, from Microsoft's perspective,

when we think about how we build our AI technology,

we're thinking about platforms that we

can put in the hands of developers.

It's just sort of our wiring as a company.

So, the example you gave

earlier in the talk, where you want someone in a mom

and pop shop to be able to program

their own LCD sign

to do whatever and everybody becomes a programmer,

we actually think that AI can play a big role in

delivering this future. And we want

everybody to be an AI developer.

I've been spending much of my time lately talking with

folks in agriculture and in healthcare,

which again you're thinking about

the problems that society has

to solve. In the United States,

the cost of healthcare is growing

faster than GDP which is

not sustainable over long periods of time.

Basically, the only way that I see

that you break that curve is with technology.

Now, it might not be AI. I think it is.

But something is going to have to sort of

intercede that pulls cost out

of the system while still giving

people very high quality healthcare outcomes.

And I just see a lot of companies almost every week,

there's some new result where AI can read an

EKG chart with a cardiologist's level of accuracy,

which isn't about taking all of the cardiology jobs away.

It's about making this diagnostic capability

available to everyone because the cost is free

and then letting the cardiologist do

what's difficult and unique that humans should be doing.

I don't know if you see that pattern

in other domains as well.

>> I think there'll be a lot of

partnerships with the AI teams and

doctors that will be very valuable.

You know, one thing that excites me these days with

the theme of things like healthcare, agriculture,

and manufacturing is helping

great companies become great AI companies.

I was fortunate really, to have led the Google Brain team

which became I would say probably the leading force

in turning Google from

what was already a great company

into a great AI company today.

Then, at Baidu, I was responsible

for the company's AI technology and strategy and team,

and I think that helped transform Baidu from

what was already a great company into a great AI company.

I think really Satya

did a great job also transforming

Microsoft from a great company to a great AI company.

But for AI to reach its full potential,

we can't just transform tech companies,

we need to pull other industries

along for it to create this GDP growth,

for it to help people in healthcare and to deliver

safer and more accessible food to people.

So, one thing I'm excited about,

building on my experience, helping with

really Google and Baidu's transformation

is to look at other industries as well to see

if either by providing AI solutions or

by engaging deeply in AI transformation programs,

whether my team at Landing.AI,

whether Landing.AI can help

other industries also become great at AI.

>> Well, talk a little bit more about

what Landing.AI's mission is.

>> We want to empower businesses with AI.

There is so much need for

AI to enter other industries than technology,

everything ranging from manufacturing to

agriculture to healthcare, and so many more.

For example, in manufacturing,

there are today in factories

sometimes hundreds of thousands of people using

their eyes to inspect parts as they come off as

the assembly line to check for

scratches and things and so on.

We find that we can, for the most part,

automate that with deep learning

and often do it at a level

of reliability and consistency

that's greater than the people are.

People squinting at something

20 centimeters away all day,

that's actually not great for your eyesight, it turns out,

and I would love for computers

rather than often these young employees to do it.

So, Landing.AI is working with

a few different industries to

provide solutions like that.

We also engage companies

with broader transformation programs.

So, for both Google and Baidu,

it was not one thing,

it's not that you implement

neural networks for ads and so it's a great AI company.

For a company to become

a great AI company is much more than that.

And then having sort of helped two great companies do that,

we are trying to help other companies as well,

especially ones outside tech become

leading AI entities in their industry vertical.

So, I find that work very meaningful

and very exciting.

Several days ago, I tweeted out that on Monday,

I literally woke up at 5:00 AM

so excited about one of

the Landing.AI projects, I couldn't go back to sleep.

I started getting up and scribbling in my notebook.

So, I find these are really, really meaningful.

>> That's awesome. One thing I want

to sort of press on a little bit

is this manufacturing quality

control example that you just gave.

I think the thing that a lot of folks

don't understand is it's

not necessarily about the jobs going away,

it's about these companies being able to do more.

So, I worked in a small manufacturing company while

I was in college and we had exactly the same thing.

So, we operated an infrared reflow soldering machine

there, which sort of melts

surface-mount components onto circuit boards.

So, you have to visually inspect

the board before it goes on to make sure

the components are seated and the solder

has been screened and all the right parts are there.

When it comes out,

you have to visually inspect it to make sure

that none of the parts have tombstoned.

There are a variety of like little things

that can happen in the process.

So, we have people doing that.

If there was some way for them not to do it,

they would go do something else

that was more valuable or we

could run more boards so actually, in a way,

you could create more jobs because

the more work that this company could do economically,

the more jobs in general that it can create.

And I'm sort of seeing AI in

several different places like

in manufacturing automation as helping to bring

back jobs from overseas

that were lost because it was just sort of

cheaper to do them with

low cost labor in some other part of the world.

They're coming back now because like

automation has gotten so good that you

can start doing them with

fewer, more expert people, but here,

in the United States,

locally in these communities where

whatever it is that they're manufacturing is needed.

It's like these really interesting phenomena.

>> There was one part of your career

I did not know about.

I followed a lot of your work at

Google and Microsoft, and even today,

people still speak glowingly of the privacy practices

you put in place when you were at Google.

I did not know you were into

this soldering business way back.

>> Yeah, I had put myself through college

some way or another. It was interesting though.

I remember one of my first jobs,

I had to put brass rivets into 5,000 circuit boards.

Circuit boards were controllers

for commercial washing machines and there were

six little brass tabs that you would put

electrical connectors onto and

each one of them had to be riveted.

So, it was 30,000 rivets that had to be done

and we had a manual rivet press and

my job at this company in

its first three months of existence right

after I graduated high school was to press

that rivet press 30,000 times, and that's awful.

Automation is not a bad thing.

>> In a lot of countries we

work with, we're seeing,

for example Japan, the country is

actually very different than the United States,

because it has an aging population.

>> Yeah.

>>And there just aren't enough people to do the work.

>> Correct.

>> So, they welcome automation

because the options are either automate or well,

just shut down the whole plant because it is impossible to

hire with the aging population.

>> Yeah. In Japan, it actually is going to become

a crucial social issue

sometime in the next 100 years or so

because their fertility rates are such

that they're in major population decline.

So, you should hope for really good AI there,

because we're going to need

incredibly sophisticated things to take

care of the aging population there,

especially in healthcare and elder care and whatnot.

You know, I think about when we automated elevators.

Right? Once, elevators had

to have a person operating them,

a lot of elevator operators did lose

their jobs because we switched to automatic elevators.

I think one challenge that AI poses is

that, with the world as connected as it is today,

this change will happen very quickly,

and the potential for jobs to

disappear is faster this time around.

So, I think when we work with customers,

we actually have a stance

on wanting to make sure that everyone is treated well,

and to the extent we're able to step in and try

to encourage or even assist

directly with retraining to help them find

better options, we're truly going to do that.

That actually hasn't been needed so far for

us because we're actually not displacing any jobs.

But if it ever happens, that is our stance.

But I think this actually speaks to

the important role of government with the rise of AI.

So, I think the world is not

about to run out of jobs anytime soon,

but as LinkedIn has seen through

the LinkedIn data, and many organizations,

and as Coursera has seen in Coursera's data as well,

our population in the United States and globally

is not well-matched to the jobs that are being created.

And we can't find enough people for-

we can't find enough nurses,

we can't find enough wind turbine technicians,

in a lot of cities,

the highest paid person might be

the auto mechanic and we can't find enough of those.

So, I think a lot of the challenge and

also the responsibility for nations or

for governments of a society is

to provide a safety net so that everyone has

a shot at learning new skills they need in order to

enter these other trades

that we just can't find enough

people to work in right now.

>> I could not agree more.

I think this is one of

the most important balances that

we're going to have to strike as a society,

and it's not just the United States,

it's a worldwide thing.

We don't want to under invest

in AI in this technology because we're

frightened about the negative consequences

it's going to have on jobs that might be disrupted.

On the other hand, we don't want

to be inhumane, uncompassionate,

unethical about how we provide

support for folks who are going

to be disrupted potentially.

>> Yeah.

>> I think Coursera plays

an incredibly important role in

managing this sea change in that we have

to make reskilling and

education much cheaper and much more accessible to folks.

Because one of the things that we're doing is,

we're entering this new world

where the work of the mind is going to be far,

far, far more valuable even than it

already is than the work of the body.

So, that's the muscle that has

to get worked out and we've just got

to get people into

that habit and make it cheap and accessible.

>> Yeah. It is actually really interesting.

When you look at the careers of athletes,

you can't just get them into

great shape at age 21 and then stop working out.

The human body doesn't work like

that. The human mind is the same.

You can't just train, work on your brain until you're

21 and then stop working out your brain.

Your brain will go flabby if you do that.

>> Yes.

>> So, I think one of the ways I want the world to be

different is I want us to

build a lifelong learning society.

We need this because the pace of change is faster.

There's going to be technology invented next year and

that will affect your job five years after that.

So, all of us had better keep on learning new things.

I think this is a cultural sea change

that needs to happen across society,

because for us to all contribute

meaningfully to the world

and make other people's lives better,

the skills you need five years from now may

be very different than the skills you have today.

If you are no longer in college, well,

we still need you to go and acquire those skills.

So, I think we just need to acknowledge

also that learning and studying is hard work.

I want people to do it, if they have the capacity.

Sometimes your life circumstances prevent you from

working in certain ways, and everyone deserves

a lot of support throughout all phases of life.

But if someone has the capacity to spend

more time studying rather than

spend that equal amount of time watching TV,

I would rather they spend

that time studying so that they can

better contribute to their own lives

and to the broader society.

>> Yeah, and speaking again about the role of government,

one of the things that I think the government

could do to help with this transition

is AI has this enormous potential

to lower the costs of subsistence.

So, through precision agriculture

and artificial intelligence and healthcare,

there are probably things that we can do to affect

housing costs with AI and automation.

So, looking at Maslow's Hierarchy of Needs,

the bottom two levels

where you've got food, clothing, shelter,

and your personal safety and security,

I think the more that we can be

investing in those sorts of things,

like technologies that address

those needs and address

them across the board for everyone,

it does nothing but lift all boats basically.

I wish I had a magic wand that I could

wave over more young entrepreneurs and

encourage them to create startups that are

taking this really interesting,

increasingly valuable AI toolbox

that they have and apply it to these problems.

They really could change

the world in this incredible way.

>> You make such a good point.

>> So, the last tech thing that I wanted to ask you is,

there is sort of just an incredible rate of innovation

right now on AI in general,

and some of the stuff is what I call "stunt AI"

not in the sense that it's not valuable but it's-

>> No, go ahead. Name names. I want to hear.

>> No, so I'll name our own name.

So, we, at Microsoft did

this really interesting AI stunt where

we had this hierarchical reinforcement learning system

that beat Ms. Pac-Man.

So, that's the flavor of what I would call "stunt AI."

I think they're useful

in a way because a lot of what we do is

very difficult for layfolks to understand.

So, the value of the stunt is holy crap,

you can actually have a piece of AI do this?

I'm a big classical piano fan and one of

the things I've always lamented about

being a computer scientist is,

there's no performance of computer science in general,

where a normal person can listen to

it or if you're talking about

an athlete like Steph Curry,

who has done an incredible amount of

technical preparation and becoming as

good as he is at basketball,

there's a performance at the end where you can

appreciate his skill and ability.

And these "stunt AI" things in a way are

a way for folks to appreciate what's happening.

Those are the exciting AI things for the layfolks.

What are the exciting things as

a specialist that you see on the horizon?

Like new things in reinforcement learning coming,

people are doing some interesting stuff with transfer

learning now where I'm starting to

see some promise that

not every machine learning problem is

something where you're solving it in isolation.

What's interesting to you?

>> So, in the short term,

one thing I'm excited about is turning machine learning from

a bit of a black art into more of

a systematic engineering discipline.

I think, today, too much of machine learning

happens among a few wise people who happen to say,

"Oh, change the activation function in layer five."

And for some reason it works.

If that can turn into a systematic

engineering process, that would

demystify a lot of it and help

a lot more people access these tools.

>> Do you think that that's going to

come from there becoming

a real engineering practice

of deep neural network architecture

or is that going to get solved with

this learning to learn stuff or

auto ML stuff that folks are working on, or maybe both?

>> I think auto ML is a very nice piece of work,

and is a small piece of the puzzle,

maybe around optimizing

[inaudible] preferences, things like that.

But I think there are even bigger questions like,

when should you collect more data,

or is this data set good enough,

or should you synthesize more data,

or should you switch

algorithms from this type of algorithm to that type of algorithm,

and do you have two neural networks

or one neural network, or a pipeline?

I think those bigger architectural questions go

beyond what the current auto ML algorithms are able to do.

I've been working on this book,

"Machine Learning Yearning"

mlyearning.org, that I've been

emailing out to people on the mailing list for free

that's trying to conceptualize my own ideas, I guess,

to turn machine learning into

more of an engineering discipline

to make it more systematic.

But I think there's a lot more that

the community needs to do beyond what I,

as one individual, could do as well.

But that will be really exciting when we can

take the powerful tools of

supervised learning and help a lot more people be

able to use them systematically.

With the rise of software engineering

came the rise of ideas like,

"Oh, maybe we should have a PM."

I think those are a Microsoft invention, right?

The PM, product manager, and then program manager,

project manager types of roles way back.

Then eventually came ideas like

the waterfall planning models or the scrum agile models.

I think we need new software engineering practices.

How do you get people to work

together in a machine learning world?

So we're all sorting it out at Landing.AI. We ask

our product managers to do things differently

than I think I see

any other company tell their product managers to do.

So we're still figuring out these workflows and practices.

Beyond that, I think on a more pure technology side

[inaudible] again as I do

transform entertainment and art.

It'll be interesting to see how it goes beyond that.

I think the value of reinforcement

learning in games is very overhyped,

but I'm seeing some real traction in

using reinforcement learning to control robots.

So early signs from my friends

working on projects that are not

yet public for the most part,

but there are signs of meaningful progress

in reinforcement learning applied to robotics.

Then, I think transfer learning is vastly underrated.

The ability to learn from-

so there was a paper out of Facebook where

they trained on an unprecedented

3.5 billion images, which is very, very big;

3.5 billion images is very large,

even by today's standards,

and found that it turns out

training from 3.5 billion, in their case,

Instagram images, is actually better than

training on only one billion images.

So this is a good sign for

the microprocessor companies, I think,

because it means that, "Hey,

keep building these faster processors.

We'll find a way to suck up their processing power."

But with the ability to train on really,

really massive data sets to do

transfer learning or pre-training

or some set of ideas around there,

I think that is very

underrated today still. And then super long term-

We used the term unsupervised learning to describe a really,

really complicated set of

ideas that we don't even fully understand.

But I think that also will be

very important in the longer term.
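
[A minimal PyTorch sketch of the pre-train-then-fine-tune pattern behind the transfer learning Andrew describes; the ResNet-50 backbone, the 10-class target task, and the random stand-in batch are assumptions made for illustration, not details from the Facebook work he mentions.]

```python
# Transfer learning in miniature: start from an ImageNet-pre-trained backbone,
# freeze its features, and train only a new classification head for a new task.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # pre-trained features

for param in model.parameters():
    param.requires_grad = False                  # keep the pre-trained weights fixed

model.fc = nn.Linear(model.fc.in_features, 10)   # new head for a hypothetical 10-class task

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step; a random batch stands in for the real dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

In practice, the bigger and broader the pre-training corpus, the better the starting point, which is the point of the 3.5-billion-image result cited above.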

>> So tell us something that people wouldn't know about you.

Sometimes, I just go into a bookstore

and deliberately buy a magazine

in some totally strange area that I

would otherwise never have bought a magazine in.

So whatever, five dollars,

you end up with a magazine in some area that you

just previously knew absolutely nothing about.

>> I think that's awesome.

>> One thing that not many people know about me,

is I actually really love stationery.

So my wife knows, when we travel to foreign countries,

sometimes I'll spend way too

long looking at pens and pencils and paper.

I think part of me feels like, "Boy,

if only I had the perfect pen and the perfect paper,

I could come up with better ideas."

It has not worked out so far,

but that dream lives on and on.

>> That's awesome. All right.

Well, thank you so much,

Andrew, for coming in today.

>> Thanks a lot for having me here, Kevin.

>> That was a really terrific conversation.

>> Yes, it was a ton of fun.

It was like all of my best conversations,

I felt like it wasn't

long at all and was glancing now at my phone and

I'm like, "Oh, my god. We've just spent 48 minutes."

>> One of the questions that you asked Andrew was,

what technology is he

most impressed by and excited by

that's coming down the pike with AI?

I wanted to turn that back on you

because you've been working with

AI for a really long time at Google,

and at LinkedIn, and now at Microsoft.

So what have you seen that really excites you?

>> Several things. I'm excited that

this trend that started a whole bunch of years ago,

more data plus more compute equals

more practical AI and machine learning solutions.

It's been surprising to me that

that trend continues to have legs.

So, when I look forward into

the future and I see more data coming online,

particularly with IoT and the intelligent edge as

we get more things connected to the Cloud that

are sensing either through cameras or

far field microphone arrays or

temperature sensors or whatever it is that they are,

we will increasingly be digitizing the world.

Honestly, my prediction is that

the volumes of data that we're gathering now will

seem trivial by comparison to the volumes that

will be produced sometime in the next 5-10 years.

I think you take that with all of

the super exciting stuff that's happening with AI silicon

right now and just the

number of startups that are working

on brand new architectures

for training machine learning models,

it really is an exciting time,

and I think that combo of more compute,

more data is going to continue

to surprise and delight us with

interesting new results and also deliver

this real world GDP

impacting value that folks are seeing.

So that's super cool.

But I tell you, the things that really move me,

that I have been seeing lately are the applications

into which people are putting this technology in

precision agriculture and healthcare.

Just recently, we went out to one of our farm partners.

that Microsoft Research has been working

with, and the things that they're doing with

AI, machine learning, and edge computing on

this small organic farm in

rural Washington state are absolutely incredible.

They're doing all of this stuff with a mind towards

"How do you take a small independent farmer

and help them optimize yields, reduce the amount of

chemicals that they have to use on their crop,

how much water they have to use so you're minimizing

environmental impacts and raising

more food and doing it in this local way?"

In the developing world,

that means that more people are going to get fed.

In the developed world,

it means that we all get to be a little more healthy

because the quality of

the food that we're eating is going to increase.

There's just this trend, I think,

right now where people are just

starting to apply this technology to

these things that are parts of human subsistence.

Here's the food, clothing, shelter,

the things that all of us need in order to

live a good quality life.

I think the more I see these things and

see the potential that AI has

to help everyone have access to a high quality of life,

the more excited I get.

I think in some cases, it may be

the only way that you're able to deliver these things at

scale to all of society

because some of them are just really expensive right now.

No matter how you redistribute the world's wealth,

you're not going to be able to tend to the needs of

a growing population without

some sort of technological intervention.

>> See, I thought you were

going to say something like, "Oh,

we're going to be able to live in the world of

Tron Legacy or the Matrix or whatever."

Instead, you get all serious on me and

talk about all the great,

world-changing, awesome things

that are going to happen.

I'm going to live in my fantasy but I

like that there are very cool things happening.

>> I did,

over my vacation, read "Ready Player One" and,

despite its mild dystopian overtones...

>> It's a great book. I like the book.

>> That's a damn good book.

I was like, "I want some of this."

>> I'm with you. I'm with you.

I was a little disappointed in

the movie but I loved the book.

Yeah. We can talk about this offline but

we'll end this now.

>> Yeah.

>> Well, awesome Christina.

I look forward to chatting with

you again on the next episode.

>> Me too. I can't wait.

>> Next time on Behind the Tech,

we're going to talk with Judy Estrin

who is a former CTO of Cisco,

serial entrepreneur, and as a Ph.D. student,

a member of the lab that created the Internet protocols.

Hope you will join us. Be sure to

tell your friends about our new podcast,

Behind the Tech, and to subscribe. See you next time.

For more information >> Episode 3 - Andrew Ng: Influential leader in artificial intelligence - Duration: 49:06.

-------------------------------------------

Jupiter Meme animation [Backstory] - Duration: 0:41.

For more information >> Jupiter Meme animation [Backstory] - Duration: 0:41.

-------------------------------------------

Trump Scores Monumental Victory — American Workers To Get Back Billions - Duration: 5:41.

For more information >> Trump Scores Monumental Victory — American Workers To Get Back Billions - Duration: 5:41.

-------------------------------------------

500K Pre-Party | Inside KlientBoost Episode 006 - Duration: 7:51.

- Friday catered lunches, they start today.

And we should also think...

What the...

- This is my philosophy on celebrations.

I think is it needs to happen more often.

It doesn't happen enough and I don't think that team members and employees know enough

about how the company is doing and they're not seeing that transparency and so, you know,

a lot of people...

I try to make it unfair.

That's basically my philosophy on celebrating and high-fiving accomplishments, because I

wanna make this a place that is unlike any other place.

And, so I think more of these accomplishments and good times will happen more often.

You know, the more that we actually do 'em and the faster we grow.

So, I mean that's my side of it and I think that if I were on the other side of the fence,

I would enjoy my boss to do that, as well.

And, so that's a big reason why I wanna keep doing it.

- I'm Jenn.

I am the office manager.

Right now, we're setting up for a 500K party, which is tomorrow.

We also got some KB swag.

We got Nike pants that I just went and picked up.

We got Converse high-tops that everyone's gonna be wearing.

And, we got our new 500K shirts.

So, I'm going to bag those all up so I can decorate the office and set it all up on people's

desks.

- So, where's Arik?

Arik, I promised you something.

Do you remember what I promised you?

Oh yeah

Something is being delivered today.

So, I promised Arik that if we hit...

When we hit 500K, he was gonna get Gucci sandals and, so, I want you to be wearing them for

the rest of the day.

Is that okay?

Okay, cool.

So, as you guys already know, we actually...

What am I doing?

Sorry.

We hit 500K.

You guys can start passing these around.

As just a little thank you.

There you go, Graham, that's for you.

Start passing them around.

So, now it's kind of custom that every time we hit a milestone, we all glam up together,

we look like a squad.

So, you're more than welcome to open them.

And some of you

you'll notice there's a difference between the t-shirts.

The people who are going on the trip have a palm tree on the sleeve.

The people who are not, have an oversized dollar bill because it was way too big to

actually iron on your sleeve.

So, you get it on the side.

Alright, you guys ready to move on?

Friday catered lunches, they start today.

And we should also think...

What the...

- Confucius say...respect your parents.

- Oh, I have no idea where we're going.

JD posted on our Slack channel, make sure you dress warm but, how warm?

Do my warm clothes need a passport holder?

Do my warm clothes need to be fully insulated like we're going to Antarctica?

Like, I think those are valid questions, alright, let's go ask him.

Do we need a jacket?

- No, no, no, the cold part was that.

- [JD] We've gotta go catch a bus, actually.

Which way is it?

This way?

- [JD] Everybody start stretching...we going bowling.

- By the way, Jen is the one who basically was the architect for everything that you

see tonight.

So, massive props to her.

- [Jenn] Thank you.

- It's a lot of stress and there's a lot of changes and things like that, too.

She's planning the vacation, as well.

So, she's doing amazing.

- Thank you.

- We're about to go bowling and then we're about to have some food and maybe go to Dave

and Buster's and challenge each other a little bit.

-[Stone] Are you excited?

I'm really excited.

This is my first event with KlientBoost.

First milestone.

Go on in.

- This is for you Stone.
