Friday, September 28, 2018

Youtube daily Sep 28 2018

(energetic music)

- Today is the very best day there has ever been

to be in the floor covering business.

On the Floor with John Weller

How you doing, Todd? Welcome to Sarasota.

I see that you've settled in very nicely

with your short pants.

Todd is the CEO of AdHawk in New York City,

he's down here on vacation and he was gracious enough

to come spend some time during his vacation

and hang out with us at FloorForce.

- Yeah, I got to get out of the snow,

we had, I think, eight inches of snow in New York.

So coming down to Florida, had to rock the shorts.

- Awesome man, so let's get into it.

So, for people who don't know Todd's story,

Todd was a Googler, worked at Google

and worked on the AdWords team.

And I gotta ask the question, what prompted you,

what inspired you to start AdHawk and actually jump

out of that work-heaven environment?

- Yeah, so a little on my background,

and then I'll get into that.

So my co-founder and I both came from

the AdWords team at Google, where we worked

with, you know, thousands of small businesses,

helping them grow their AdWords accounts

from a couple hundred dollars a day to a couple thousand

to tens of thousands of dollars a day.

And our job was to help them do that profitably.

Google's a great place to work, right?

You can sit on beanbags, you get a free lunch.

There's really nothing not to like.

The real honest truth is, if you wanna have a bigger impact,

if you wanna do more than just your day to day job,

you wanna kind of get into the grind a little bit,

have an impact with individual businesses,

dive a little deeper, really help them explore

the opportunities outside of just Google,

you obviously can't do that within Google.

So, for us, we knew that Facebook was important.

We know that AdWords is important.

We know that websites are important for customers.

And it's really, you're kind of pigeonholed

a little bit while working at Google.

- What would you say to a flooring retailer who says,

hey man, I've tried AdWords, I've actually done it twice.

It doesn't work in my market,

and it just doesn't work for us.

- Yeah, I would say a couple of things, right?

When people come to us and say that,

we're able to look under the hood

and within a couple of minutes, identify why it didn't work.

So, the first thing I would look at,

if you think it doesn't work in your market,

I would really look at yourself in the mirror and say,

did I give this a professional and expert level setup?

Was I optimizing this every single day or every single hour?

Or did I just watch a YouTube video online

and try to get this set up?

So, at its core, was it set up correctly?

I think something we've done really well for customers

that say that is do an audit, right?

Data speaks for itself, so we can look in the account,

and say, hey, here are the five reasons why it didn't work

and actually give you an honest answer

if it's gonna work for you in the future.

But I've seen AdWords, I've seen AdWords specifically

in this industry work for over 700 retailers,

so I'd be really hard pressed to say it doesn't work.

Now, you might get a higher CPA based on your location

just because it's more competitive

or you might get a lower CPA in your location.

But to say it doesn't work and you can't do it profitably,

I mean look at Google, right?

They're a massive business, we see hundreds

of flooring retailers doing it well.

You just might have to take a different strategy

than what you're typically used to.

We have some retailers where search campaigns

work really well.

We have some retailers where remarketing works well,

call only ads, Gmail ads, and that's also part

of why you need to work with, you know, an expert.

Because there are so many options out there.

Almost too many options that, yes,

search might not work for you or remarketing

might not work for you, but you need to know

all of the options that are available

before you just say, this isn't going to work.

Because the truth is, if this

isn't gonna work, then what's gonna work?

You're not gonna advertise in the Yellow Pages anymore.

You know, Google really is where people do their searching,

they type in, flooring store near me.

Or they type in, you know, buy hardwood flooring,

you need to be there.

- So Todd, I think we have about 700 flooring retailers

that we now work on together with AdWords.

You know, and I think about this all the time

and we do a lot of testimonial videos,

but in your mind, does any story stick out

as one where we really did have an impact on one

of these flooring retailers that you wanna talk about?

- Yeah we actually had a flooring retailer come to us,

that you brought to us, I think about two years ago.

They were managing themselves,

they were kind of hesitant to work with us, if you remember.

But they came to us, I think their CPA

was about 60 dollars per lead.

Came to us and the account was done pretty well, right?

It was optimized well but we knew for sure

that if we put technology and algorithms to the account,

we could see better performance.

And I remember, after about two months,

after, you know, we were working together

on the account, we saw the CPA drop

from about 60 dollars to 35 dollars.

And when I looked at how that was done with our algorithm,

I mean we really leveraged the data

that you guys provided us on the location level.

So what happened was, they were targeting this one city

but what they weren't doing is using

what's called location bid adjustments

across all the small neighborhoods.

So instead of targeting one city with one bid,

what we were doing was targeting over 150 neighborhoods

with individual bid adjustments

that were changing on an hourly or daily basis.

And then, on top of that, with some of the information

you guys provided us, we were actually

layering income levels on top of that.

So we kind of leverage the power of demographics

of customers that we have at scale that you provided us

on top of the bid adjustments we're able to do in AdWords

and we saw the CPA decrease by about 50 percent.
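The layered bid adjustments described here can be sketched as a toy calculation (the base bid, neighborhood names, and multipliers below are illustrative assumptions, not AdHawk's actual algorithm):

```python
# Illustrative sketch of layered location bid adjustments (hypothetical data).
# Effective CPC bid = base bid * neighborhood adjustment * income-tier adjustment.

base_bid = 2.00  # dollars, assumed campaign-level max CPC

# Hypothetical per-neighborhood multipliers, e.g. derived from historical CPA.
neighborhood_adj = {
    "downtown": 1.30,   # converts well -> bid up
    "suburb_a": 0.85,   # converts poorly -> bid down
}

# Hypothetical income-tier multipliers layered on top of location.
income_adj = {
    "high": 1.20,
    "mid": 1.00,
    "low": 0.80,
}

def effective_bid(neighborhood: str, income_tier: str) -> float:
    """Combine the two adjustment layers multiplicatively."""
    return round(base_bid * neighborhood_adj[neighborhood] * income_adj[income_tier], 2)

print(effective_bid("downtown", "high"))  # 2.00 * 1.3 * 1.2 = 3.12
print(effective_bid("suburb_a", "low"))   # 2.00 * 0.85 * 0.8 = 1.36
```

In a real account these multipliers would be recomputed on the hourly or daily cadence mentioned above, across all 150-plus neighborhoods.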

- We do AdWords right now

with about 50 percent of our clients.

So there's 50 percent of our clients,

over 500 of them have never done AdWords,

have never done digital marketing.

So, being that you're an ex-Googler and you're here,

I'd love to hear just the basics

from a very high level view, how does AdWords work?

And if you've never done it before, what should you expect

in your first endeavor of advertising on Google's platform?

- Yeah, so AdWords works, when someone types something

into Google, there's text ads on top

of the Google search results.

Now the really awesome thing about AdWords is

it's pay per click, sometimes in the industry known as PPC.

And what that means is you only pay when a customer

actually clicks on your ad and gets to your website.

So you don't pay for customers just to see it,

you know, you think about billboards

and other things like that, you're paying for impressions

or people to see it, they're not actually physically

visiting your store or visiting

your online store, your website.

Whereas with Google, you're actually paying

for someone to come to your website.

So it's a much more qualified visit

than your typical billboard or

Yellow Pages ad or something like that.

Now once you launch your first AdWords campaign,

I gotta be honest, you shouldn't

expect results three days in.

Anyone that promises you that is selling you snake oil.

You know, once you launch your AdWords campaign,

it takes time to optimize it, find the right audience

because we don't wanna just get you leads.

It's important to get quality leads,

if they're not quality leads, your phone will ring

with people asking you nonsense questions

and kind of wasting your time.

So, I tell all retailers and businesses

that are just starting up, the first 30 days

is all about data gathering.

Making sure we have the right data on your customers,

on the demographics, on what a qualified lead means to you.

You know, 30 to 60 days is really optimizing

and trimming the fat, trimming all the wasted spend,

really narrowing focus in

on the most qualified customer for you.

And then from then on out, I would say about 70 days

or so in, it's kind of a race of, how can we push the needle?

How can we move the budget?

How can we get more leads into the funnel?

And that's kind of how we look at it,

but again, the really great thing about AdWords is,

at bare minimum, these people are clicking on your ad,

going to your website, and learning more about you.

Sometimes we can track these leads, right?

If they call you, we can track that lead.

If they fill out a form, we can track that lead.

If they don't do either of those, we can run retargeting

which is when that ad follows them around the internet

and says, hey, don't forget we offer

this really awesome flooring, come back to our website.

But I think the one thing that I've heard most

from retailers is they've seen their in-store traffic go up.

So a customer can go to your website, click on your ad

and decide, alright, this is really awesome,

I'm not gonna call them right now but I'm going that way,

I'm gonna pass their office when I go to work tomorrow,

maybe I'll stop in their store.

So there's this brand initiative with Google,

with AdWords as well, this brand awareness

that isn't very trackable but from what we've seen

over the last, you know, couple of years,

it's pretty impactful.

- The number one question I get from retailers

when we're having a conversation about AdWords is,

I'm a retailer, I've never done AdWords.

It scares me to death, what should

my budget be to start doing AdWords?

- Yeah, it's definitely a tough question

and we definitely get asked this a lot.

I think, in order to come up with your budget,

you should first look at what is your goal, right?

Like what are you trying to accomplish?

Is it five qualified leads every month?

Is it 20 qualified leads every month?

And then based on that, we have a ton of historical data

from the 700 or so retailers that we're working with,

you know the conversion rates that,

you know, we see on the website.

We can analyze the website traffic,

the cost per clicks in the location

and then we can actually back out an answer

to what the budget should be based on all of that.

Now, you know, if I was hard pressed to give an answer,

I would probably say a thousand dollars a month

is a good target to start.

But that said though, Google takes time to optimize, right?

We need statistically significant data

in order to make good decisions and in order to optimize

to decrease the cost per lead over time.

Now if you're spending 500 dollars a month,

we might not have statistically significant data

for six, eight, or even ten months.

However, if you're spending a thousand

or two thousand dollars, we may be able to get to that

maximum level of optimization in just three or four months.
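The budget-versus-time tradeoff being described can be sketched with some back-of-the-envelope arithmetic (the average CPC, conversion rate, and the 100-conversion threshold are all illustrative assumptions, not quoted figures):

```python
import math

def months_to_target(monthly_budget: float, avg_cpc: float,
                     conversion_rate: float, target_conversions: int) -> int:
    """Months needed to accumulate a target number of conversions.

    A crude proxy for 'enough data to optimize': clicks per month is
    budget / CPC, and conversions per month is clicks * conversion rate.
    """
    clicks_per_month = monthly_budget / avg_cpc
    conversions_per_month = clicks_per_month * conversion_rate
    return math.ceil(target_conversions / conversions_per_month)

# Hypothetical inputs: $2.50 average CPC, 5% site conversion rate,
# and (arbitrarily) 100 conversions as the "enough data" threshold.
print(months_to_target(500, 2.50, 0.05, 100))    # -> 10 months at $500/mo
print(months_to_target(2000, 2.50, 0.05, 100))   # -> 3 months at $2,000/mo
```

Under these assumed numbers, a $500/month budget takes roughly 10 months to reach the data threshold, while $2,000/month gets there in about 3, which mirrors the ranges quoted in the conversation.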

So, really what I would tell anyone who's thinking about

starting on AdWords and, you know, trying to figure out

what their budget should be, I would focus

on your goals rather than the budget.

And then from the goals, the FloorForce team

can kind of do a deep dive in your website traffic,

the demographics of your customer,

the cost per clicks in your city.

And they can actually back out a number

of what your budget should be.

And we've done that for a lot of customers

and we've seen, you know,

pretty good success doing it that way.

- When Todd came for the interview,

he said to me, can I get to do a fun fact?

(both laugh)

So what is the fun fact that you would like to share

with our audience here at FloorForce?

Yeah, I've had to practice fun facts for a while,

you know, working at Google, you do a lot of ice breakers,

you do a lot of these games to get to know people around you

and the fun fact I always go with is,

I have over 140 brothers and sisters.

- Wow, what is that?

- So when I was growing up, my parents did foster care.

So, from about the age of 12 to 18,

we had different people living in our house,

and that was either one day, sometimes it was a week,

sometimes two people actually stayed with us

for over a year.

I still keep in touch with as many of them as I can.

But when I tell that fact, I get the same type of look

from you which is like, how in the world do you have

that many brothers and sisters?

But yeah, it was definitely impactful for me growing up

and it's something I hope to continue later on in my life.

- That's awesome, that's really cool.

Thanks for coming while you're on vacation,

hanging out here at FloorForce.

Tell your girlfriend, thank you for letting you come

at seven o'clock in the morning and hang out with us.

I think it was insightful, I hope everybody

got value out of this conversation

and we hope to have you back.

- Yeah, thanks so much for having me

and I know, a couple months ago,

we brought a couple of FloorForce retailers

out to our office in New York.

We actually gave them a tour around Google,

did a little digital marketing summit.

I'd love to make an offer to anyone watching this video,

if you sign up with FloorForce and end up,

you know, getting online with digital advertising,

we'd love to take you on a tour of Google in New York.

Our doors are always wide open, so please feel free

to come by, reach out to John,

I'm sure he could help get that set up.

And for anyone that's still considering getting online,

I'm willing to offer any FloorForce retailer

365 dollars in free AdWords spending.

- Who's paying for that?

- We're gonna split it.

- Deal.

- So, any customer that's thinking

about spending money online and is hesitant about it,

again, we are going to comp your first 365 dollars

in ad spend and on top of that, please come to our office

in New York, we'd love to tour you around Google.

And in the next few months, we're gonna put together

a FloorForce AdHawk summit at Google

for FloorForce customers only.

- Thanks, Ben.

- Yeah, take it easy.

- Good seeing you.

For more information >> Ex Googler Explains Google Ads | Smart Partnerships - Duration: 13:59.

-------------------------------------------

Fortnite Announces A Limited Time Skin - Duration: 0:32.

This or Kanye Roblox as a Halloween costume?


-------------------------------------------

MS Dhoni gets disappointed as Chahal drops Liton Das catch on Jadeja Ball Ind Vs BAN Final Asia Cup - Duration: 2:22.


-------------------------------------------

Guess that Marvel Character with Tom Green! - Duration: 1:08.

Everyone, I'm Tom Green and we're playing Guess That Marvel Character.

That is Throg. Is that guy covered in bees? Captain Beekeeper Mishap?

Swarm would be a good one right. Is that the name? Oh, Swarm. Duh!!

That's Stilt Man right there. Remember that guy. Yeah I love that

Captain Eyeball Head. That's my favorite one. I used to read that a lot when I was

a kid. I collected all that Captain Eyeball Head comics. The Orb?! No that's

Captain Eyeball Head! Is it Exoskeleton? Marrow? ewwh... That's, that's Howard The Duck

right there. That's The Slime Ball or something. Doop, okay, Doop, I love that

one first of all. The initial D, that's...that's, of course that's D-Man. I've had a

great time playing today. Follow me on Facebook or on Instagram @tomgreen or

Twitter @tomgreenlive

Thanks

[beep]

Wanna hear how loud I can clap? Want to hear how loud?

Check it out.

[clapping]

[loud clapping]

[very loud clapping]

I've got a loud clap. Got a loud clap.


-------------------------------------------

Citing Due Process, Senator Jeff Flake Announces He Will Vote For Brett Kavanaugh | NBC News - Duration: 3:56.


-------------------------------------------

Bill Cosby sued for failing to pay 280 thousand dollars! | Un Nuevo Día | Telemundo - Duration: 3:47.


-------------------------------------------

Find out what to do if your child doesn't want to study | Un Nuevo Día | Telemundo - Duration: 7:21.


-------------------------------------------

Meizu Pro 7 Plus - There are no former flagships - Duration: 7:29.


-------------------------------------------

15 years after her death, Celia Cruz lives on! | Un Nuevo Día | Telemundo - Duration: 2:58.


-------------------------------------------

Analyzing AI Actors - Duration: 39:43.

Here's a question for you: Imagine we've built AGI.

So you wake up one morning, hop on Twitter, word is floating around that X has built AGI.

Who would you want X to be?

You've got four options.

A: Alphabet.

E.g.

Google.

Two, you've got the US government.

Three you've got Baidu, which is one of the leading AI companies in China.

And four you've got the Chinese government.

Now bear in mind this is a question about who you would want to be in control of the

technology, not who you think is most likely to get there.

And no, you are not allowed to say, "I don't want any of these actors to develop AGI."

Alright, who wants A: Alphabet?

B, the US government?

C, Baidu?

And D, the Chinese government?

Alright who is over 50% confident of the answer that you just gave me?

Nice.

Good Bayesians.

So the point here isn't that there is a correct answer and I'm not going to tell you what

the answer should be.

The point here being that I think this is one of the most important questions that we

need to be able to answer.

The question here being who do we want to be in control of powerful technology like

advanced AI?

But also the question of who is likely to be in control of that?

And these kinds of questions are critical and really, really difficult to answer.

So what I'm going to do for you today is not to answer the question.

What I'm going to try to do is to equip you with a framework or a methodology for thinking

about how you can go about answering these questions sensibly, or at least generating

hypotheses that kind of make some sense.

So the proposition is this: That you can frame AI governance as a set of strategic interactions

between a set of actors that each have a unique and really large stake in the development

and deployment of advanced AI.

The set of actors that I think are most important and I'll spend time talking about today are

large multi-national technology firms who are at the forefront of developing this technology

and states.

Specifically national security and defense components of the state apparatus.

As a meta-point, because we love meta-points, this is going to be a talk that demonstrates

how we can do tractable research in AI governance and AI strategy, given information that we

have today, to figure out what futures could look like, should look like, that are more

likely to be safe and beneficial than not.

So hopefully by the end of this you can feel like there are some things that we can figure

out in this large landscape of questions that all seem really large and uncertain.

So I'll take you through three things.

First I'm going to expand on this case for why looking at actors and strategic interactions

is one of the most fruitful ways of looking at this problem.

And then I'm going to take you through a toy model for how you can think about strategic

interactions between firms and governments in this space.

And then finally, I'm going to apply that to a case study which gives you some meat

to the bones of what I'm talking about.

And we'll end by a few thoughts on how you can take this forward if you're interested

in using this.

So in terms of the propositions, I think there are three key reasons, and quite obvious ones,

for why focusing on actors is a good idea.

Number one, actors are part of the problem, and a big part of it at that.

Specifically misaligned actors who have different goals that can somewhat lead you to a suboptimal

outcome.

The second is that actors are very much exactly the people who are shaping the solutions that

we talk about.

So at any point at which we talk about what solutions to AI governance look like those

are products of actor decisions that are being made.

Number three, I think we are less uncertain about the nature of actors in this space than

we are about a bunch of other things.

And so gravitating towards the things that we are more certain about makes a bunch of

sense.

So I'm going to run through these in turn.

Number one, ask yourself this question: Why do we not assume that the deployment and development

of transformative AI is a given?

You would tend to come across two types of answer to this question.

The first bucket of answers tends to be that it's just a really, really hard technical

problem.

It's not easy to guarantee safety in the design and deployment of your system.

Putting that bucket aside, the second bucket tends to rely on you believing these three

statements.

Number one: That there are a number of actors who are out there who prioritize capabilities

above safety.

Number two: You also have to believe that these actors aren't incompetent.

If they were incompetent, we wouldn't have to worry about them, but you have to be convinced

that there's at least a subset of them that have the ability to pursue capabilities above

safety.

And that leads you to number three: Which is that plausibly they could get there first.

So if you believe these three things, then you believe that misaligned actors are going

to be at least part of this safe development and deployment problem that we need to solve.

Reason number two why focusing on actors makes sense.

We often talk about solutions, and if you read a bunch of the research in this space

you'll have propositions floating around of things like multilateral agreements, joint

projects, coordination things, etc.

The quite obvious thing to state here is that all of these are products of actor choices,

capabilities, incentives.

Upstream of these solutions are a set of actors that are haggling and tussling over what these

solutions should look like.

And so, analysis-wise, we should be focusing upstream to try to figure out what solutions

are likely versus unlikely, what solutions are desirable and undesirable.

And then critically, how do you make the thing that is likely the desirable thing that you

actually want?

Reason number three is because we are less uncertain about actors.

Here are a couple of photos of my colleagues who work in AI strategy.

There's a ton of uncertainty in this space.

And it's kind of a bit of an occupational hazard just to be comfortable with the fact

that you have to make some assumptions that you can't really validate at this point in

time, given the information that we have.

The point here isn't that uncertainty is a bad thing, it's just kind of a thing that

we have to deal with.

The point here though, is that I think among a number of things, we are less uncertain

about the nature of actors compared to a lot of our parameters that we care about.

The reasons being that A: You can observe their behavior today, more or less.

B: You can look at the way that these very same actors have behaved in the past in analogous

situations of governing emerging and dual-use technologies.

And three, we've spent a lot of time across a number of academic disciplines trying to

understand the environments that constrain these actors, whether that's in economics,

policy, politics, legal situations, etc.

And so we have a fair number of models that have been developed through other intellectual

domains that give us a good sense of what constrains these actors and what supports

these actors' behaviors in that sense.

So three reasons why actors are a good thing to focus on.

Number one: They're part of the problem.

Number two: They're part of the solution/they design the solutions.

And number three: We have less uncertainty, although still a fair amount of uncertainty,

about what these actors do, think, how they behave.

So gravitating towards those interactions between them makes a bunch of sense, as plausibly

an area that can tell you some stuff about AI strategy.

I'm going to assume that you buy that case for why focusing on actors is a good idea.

And we're gonna segue into actually talking about the actors that we care about.

So here are a subset of who I think the most important actors are that we need to think

about in the space of AI strategy.

Number one: You've got the US government.

Now the US government, in 2016, really first came out and said AI is a thing that we care

about.

It kicked off with the Obama administration establishing an NSTC subcommittee on ML and

AI.

And subsequently across the year of 2016 we hosted five public workshops, there were requests

for information on AI and that culminated a set of reports at the end of 2016 that made

the case collectively that AI is a thing that is a big deal for the US economically, politically,

socially.

Since the change in administration, there's been a bit of other stuff going on that's

distracted the U.S. government.

But what's not to forget is that the DOD sit alongside/within the U.S. government and they

haven't lost focus at all.

So turning a little bit of a focus to the DOD specifically, in 2016 as well, they commissioned

a bunch of reports that explored the applications of AI to DOD's missions and capabilities.

And that set of reports made a case for why DOD, specifically, should be focusing on AI

to pursue military strategic advantage.

AI was also placed at the center of the Third Offset Strategy, which was the latest piece

of military doctrine that the U.S. put forward.

The last little data point is that in 2017 Robert Work established a thing called the

Algorithmic Warfare Cross-Functional Team.

And what the remit of that team is explicitly is, to quote, "accelerate the integration

of big data and machine learning into DOD's missions."

And that's a subset of the data points that we have about how much DOD care about this.

So that's US.

Now we're gonna turn to the Chinese government, who, in quite a different fashion, but in

similar priority, has placed AI at the center of their national strategy.

Among many data points that we have, I'll point out a couple.

We had the State Council's New Generation AI Development Plan published in 2017.

And in that there was a very explicit statement that China wanted to be the world's leading

AI innovation center by 2030.

At the report to the 19th Party Congress, President Xi Jinping also reiterated the goal

for China to become a science and tech superpower, and AI was dead center of that speech.

Turning again to the military side of China, the People's Liberation Army have also not

been shy about saying that AI is a thing that they really care about and really want to

pursue.

There's a number of surveys from the Center for a New American Security that does a good

job of summarizing a lot of what PLA is pursuing.

And as of Feb 2017, there were a number of data points that told us that the Chinese

government were pursuing what they call an "intelligentization strategy."

Which basically looks like unmanned, automated warfare.

And as you can imagine, AI plays a very central technical role in helping them achieve that.

Last but not least, there was the establishment of the Civil Military Integration Development

Commission.

And that's headed up by President Xi Jinping, which signals how important it is to China.

And what that does, among a number of other things, is it makes it incredibly seamless

to have civil AI technologies translated through to military applications as a state mandate.

So that's China.

And the last subset of actors I'll point to are multinational technology firms.

And these are the folks who are conclusively leading the way in terms of developing the

actual technology.

I'll point out a couple of the leading ones in the US and China, and the reason there

being that A: They are the leading ones worldwide.

But B: Also there is something there about them being US v. Chinese companies, and I'll

say a little bit more about that in a sec.

But you've got the likes of Alphabet, DeepMind specifically, Microsoft, etc.

In China, you've got Baidu, Alibaba, and Tencent.

And these guys are all competing internationally to be leading the way.

And they also have some interesting relationships with their governments and their defense components

as well.

So these are the actors that we're talking about.

What do we do with information about them?

How do we look at what they do, what they think, how they act?

And how do we interpret that in a way that's useful for us to understand the space of AI

strategy?

What I'm gonna do is give you a toy model for how you can think about doing that, and

this is one of many ways you can be considering how to model this space.

First you can break down each of the actors into three things: their incentives, their

resources, their constraints.

Their incentives are the things that they're rewarded for.

What behaviors are they naturally, structurally incentivized to pursue?

And what behaviors consistently are rewarded such that they keep pursuing them?

Resources.

What does this particular actor have access to that other actors don't?

Whether that is money, whether that's talent, whether that's hardware.

And constraints, finally, are the things that constrain the behavior of these actors.

What do they care about that stops them from doing the thing that's optimal for their goal?

That can be a lack of resource, that can also be things like public image, and a number

of other things that any given actor could care about.

So each individual actor can be analyzed as such, and then you can start looking at how

they interact with each other in bilateral relationships.

And the caricature, simplified dichotomy here is you can get two types of relationships.

You can get synergistic ones or conflictual ones.

Synergistic ones are the ones where you have people pursuing similar goals, or at least

not mutually-exclusive goals, and there are complementary resources at play, and/or the

other actor has the ability to ease a constraint for the other actor.

And so naturally you fall into this synergy of wanting to support each other and cooperate

on various things.

On the other hand you can have conflicts.

So conflicts are areas where you've got different goals, or at least divergent goals, but that's

not sufficient.

You also have to have interdependency between these actors.

You need to have one depend on the other for resources or one to be able to exercise constraints

on the other, such that you can't ignore the fact that the other actor is trying to pursue

something that's different to what you want.

It's key to flag that synergy sounds nice and conflict sounds bad, but you can get good

synergies and bad synergies, good conflicts and bad conflicts.

An example of a good synergy is one where you incentivize cooperation pretty naturally

between two actors that you want to cooperate on something like safety.

An example of a bad synergy, which we'll talk about in a second, is one where you incentivize the pursuit of, say, a somewhat unsafe technology, and the pursuit of that technology is rewarded by the other actor.

An example of a good conflict could be one where you introduce friction, such that you

slow down the pace of development or incentivize safety or standardization because of that

friction.

An example of a bad conflict is one where you can get race dynamics emerging between

two, for example, adversarial military forces.

So the point is: don't fall into the trap of thinking that synergies are always good and conflicts are always bad.

And last but not least, if you really want to go wild, you can look at a set of bilateral

relationships in a given context.

That's what I do.

I look at a set of bilateral relationships in the US and a set of bilateral relationships

in China, and try to figure out how this mess can be structured and make sense and tell

you something about what's likely to occur in that given, say, domestic political context

that I care about.

This is kind of all a little bit abstract, so we're going to take a sec to concretize

this by looking at a recent case study, which is the Google Project Maven case study.

For those who aren't familiar with what happened here, the long and short of it is that in March 2018 it was revealed, against Google's wishes, that Google had become a commercial partner in the DOD's Project Maven program.

Project Maven is a DOD program explicitly aimed at accelerating the integration of AI technologies, specifically deep learning and neural networks, to bring them into action in active combat theaters.

Now, when we look at this case study, we can try to put it into this framework and understand

a) which actors matter, b) what matters to these actors, and c) how that's likely to

pan out, and then we can compare and contrast to what actually panned out, and that can

tell you something about how these strategic interactions end up mattering.

I'll also take a bit of a step back and say this is an interesting case study for a number

of reasons, not least because it's a microcosm case study of this bigger question of what

happens when a government defense force wants to access leading AI technology from a firm.

That, in general, is a question that we actually care a lot about, and we specifically care

about how it lands and what happens and who ends up getting control of that tech.

So, when we're walking through this case study, think about it as an example of this larger

question that is generally very decision relevant for the work that we do.

The first actor we can think about is DOD.

Their incentives are quite clear.

They want to have military strategic advantage in this particular case by pursuing advanced

AI tech.

The resource that they have is a lot of money.

The constraint that they have is that they typically lack in-house R&D capabilities, so they don't develop leading AI tech within DOD.

That means that they have to go to a third party.

Enter Google management, who make decisions on behalf of Alphabet.

The incentive that they have, again caricatured but plausibly somewhat accurate, is that they are pursuing profit, or at least a competitive advantage that will secure them profit in the long run.

The resource that they mostly have is this technology that they're developing in-house.

They have many constraints, but the one that ended up mattering here, surprisingly, was this public image constraint.

Google has a thing about doing no evil, or at least not doing enough evil to get attention,

and that ended up being the thing that mattered a lot in this case.

Last but not least, you've got Google engineers.

These are the employees of the company.

For these guys, again a simplified caricature, the incentive is that they want rewarding employment.

They want employment that is not just financially rewarding, but that somewhat aligns with their values as individuals and with the reputation that they want to have as a person.

Their resource is themselves.

AI talent is one of the hottest commodities around and people will pay a stupid amount

of money to get a good AI engineer these days, and so by being an engineer you are that really

good resource.

A constraint that they face is that they don't have access to decision-making tables.

As an employee, you are fundamentally, structurally limited in what you can do or say to affect what this company does or doesn't do.

Take a second to think about these actors and how they're likely to interact with each other. This is an exercise in trying to figure out: if you can observe this behavior from key strategic actors in the AI space, what should we assume is going to happen? Is that a good or bad thing, and what's the end outcome in terms of things that we care about, like control of this technology?

Because I'm running out of time, I'm going to give you a spoiler alert, and these are

the two main bilateral relationships that ended up really mattering in this case.

You had a synergistic one between Google management and DOD.

This is quite an obvious one where DOD had a bunch of money to give, and they wanted

to get tech, Google had the tech, they wanted the money.

So, that's a kind of contractual relationship that fell out pretty naturally.

What was particularly interesting, kind of House of Cards-y, about this one is that the contract itself wasn't that large.

It was $15 million, which is pretty dang large for most people, but for Google that's not

much.

What was key, though, is that being part of Project Maven helped Google accelerate the authorization they needed to access larger federal contracts.

Specifically, there's one on the horizon called the JEDI program. JEDI actually stands for something quite sensible (Joint Enterprise Defense Infrastructure); it just happens that the acronym worked out for them. What that contract is about is providing cloud services to the Pentagon.

That contract is worth $10 billion, which even an actor like Google doesn't turn their nose up at.

So engaging in Project Maven, by all accounts, helped accelerate the authorization for them to be an eligible candidate to vie for that particular contract.

We'll revisit that in a second, but that one is still live and that's a space to watch

if you're interested in this set of relationships that we're talking about.

In any case, that's a synergistic one.

Then you've got the conflictual one, and this emerged between Google management and Google

engineers.

Basically, Google engineers kicked up a fuss and they were really upset when they found

out that Google had engaged in Project Maven.

Among the things that they did was to start an employee letter that was signed by thousands of employees.

Notably by Jeff Dean, who was head of Google AI research, as well as a number of other

senior researchers that really matter.

The letter basically asked Google to stop engaging in Project Maven.

Reportedly, dozens of employees also resigned as a result of Project Maven, particularly when Google Cloud wasn't budging and was still engaging in it.

So Google management actually knew this was going to be a problem.

There were a number of leaked emails, and in those emails the head of Google Cloud was very explicitly concerned about the public backlash that would occur. Throughout the whole thing there were a number of attempts by Google management to host town halls and meetings to assuage the engineers, and that didn't do much.

These two actors are in conflict: one wants Google to pursue the contract, one doesn't. The finale of this whole case study is that in June they announced that they would not renew their Project Maven contract. So Google was going to continue until 2019, and then they weren't going to take up the next round that they were originally slated to take up.

In some ways this was surprising for a number of people, and you can get all psychoanalytical about this; there are a number of things you can take from it that tell you something about where power sits within a company like Google.

The cliffhanger though, which is a space that we need to continue to watch, is that as a result of this whole shenanigan, Google recently announced their AI principles. In them they made statements like, "We're not going to engage in warfare technologies or whatnot," but there was also an out that basically allows Google to continue to engage in Pentagon contracts, e.g., this JEDI program that they really want.

The cynic in you can think about this as a case where Google basically just won because

Google assuaged the concerns of their employees.

They looked like they responded to it, but in practice they can still pursue a number

of military... or at least government contracts that they originally wanted to pursue.

In any case, the meta point here is that you can look at a case study like this, think

about the actors, think about what strategic interactions they have with each other, and

it can plausibly tell you something about how things are likely to pan out in terms

of control and influence of a technology in this case.
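The analysis the talk just walked through (actors characterized by incentives, resources, and constraints, and bilateral relationships classified as synergistic or conflictual) can be condensed into a toy model. The sketch below is purely illustrative: the goal labels, the mutual-exclusivity rule, and the actors' attribute sets are assumptions layered onto the talk's caricature, not anything the speaker specifies.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Actor:
    name: str
    goals: frozenset        # incentives: what the actor is rewarded for pursuing
    resources: frozenset    # what it has access to that other actors don't
    needs: frozenset        # resources it depends on other actors for

# Goal pairs treated as mutually exclusive -- an illustrative stand-in
# for the talk's notion of "divergent goals".
EXCLUSIVE = {frozenset({"values-aligned work", "military contracts"})}

def interdependent(a: Actor, b: Actor) -> bool:
    """True if either actor depends on a resource the other holds."""
    return bool(a.needs & b.resources) or bool(b.needs & a.resources)

def classify(a: Actor, b: Actor) -> str:
    """Caricature of the talk's dichotomy: with no interdependency the pair
    can ignore each other; with interdependency, divergent goals mean
    conflict and compatible goals mean synergy."""
    if not interdependent(a, b):
        return "independent"
    divergent = any(frozenset({ga, gb}) in EXCLUSIVE
                    for ga in a.goals for gb in b.goals)
    return "conflictual" if divergent else "synergistic"

# The three Project Maven actors, attributes caricatured as in the talk.
dod = Actor("DOD", goals=frozenset({"military contracts"}),
            resources=frozenset({"money"}), needs=frozenset({"AI tech"}))
mgmt = Actor("Google management",
             goals=frozenset({"profit", "military contracts"}),
             resources=frozenset({"AI tech"}),
             needs=frozenset({"money", "AI talent"}))
eng = Actor("Google engineers", goals=frozenset({"values-aligned work"}),
            resources=frozenset({"AI talent"}),
            needs=frozenset({"say in decisions"}))

print(classify(dod, mgmt))  # DOD's money meets Google's tech
print(classify(mgmt, eng))  # management needs talent; goals clash over Maven
```

Even this crude rule set reproduces the case study's two key relationships: the money-for-tech exchange makes the DOD and Google management pair synergistic, while management's dependence on engineering talent combined with the clash over military work makes the management and engineers pair conflictual.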

Key takeaways to leave you with: a) there's a case for looking at strategic interactions as a domain from which you can get a lot of information about AI strategy.

Particularly you can look at what's likely to occur in terms of synergies and conflicts,

and what bottlenecks are likely to kick in when you think about cooperation as a mechanism

you want to move forward with.

Not just descriptively, either: you can also think about strategic interactions as a way of telling you what you should be doing now to avoid outcomes that you don't want.

If you can see that there's a conflict coming up that you want to avoid, or a synergy coming

up that is a bad synergy, or will translate into unsafe technologies being developed,

then you can look upstream and say, "What can we tweak about these interactions, or

what can we tweak about the incentive structures of these actors to avoid those outcomes?"

Finally, the meta point is that I've got a ton more questions and hypotheses than I have answers, and that's the case for every researcher in this space.

And so, as Carrick mentioned, there's a bunch of reasons to think this is a really good

area to dive into and if you have any interest in doing analysis like what I described, or

to address any of the questions that were on Carrick's presentation, please come talk

to us.

We'd love to hear about your ideas, and we'd love to hear about ways of getting you involved

and getting you guys tackling some of these questions as well. Cool.

Thanks very much.

All right.

Have a seat guys.

We've got some time for a little Q&A, and again, through the Bizzabo app or on the website

at sf.eaglobal.org/polls would be the place to submit your questions.

A number have already come in.

One question just for starters, a lot of the AI talk at an event like this tends to focus

on AGI, that is general intelligence, but I wonder if you think that this kind of governance,

and the dynamics that you're talking about, becomes important only as we approach general

intelligence or if it becomes important much sooner than that potentially?

Do you want to take this?

I think there's a set of things which I hope are robust things to look at regardless of

what capabilities of AI we're talking about, and I think the mindset that at least I approach

it with, and I'd say this is pretty general across the Governance of AI Program that we

both work at, is that it's important to focus on the high stakes scenarios, but there is

at least a subset of the same questions that translates into actions that are relevant

for nearer-term applications of AI.

I do think though, that there are some strategic parameters that significantly change if you

assume scenarios of AGI, and those are absolutely worth looking at and will to some extent change

the way that you analyze some of those questions.

I would also like to add that it depends a little bit on what part of the question you're

looking at.

I think when we think in terms of geopolitical stability, balance of power, and offense-defense dynamics, near-term applications matter a lot. Trying to keep that stable and tranquil as you potentially move from there up to something like AGI, so that you're not already adversarial or locked into these dynamics, is quite important.

A question about kind of the, just the two nations that you spoke about, the United States,

China.

It seems that the cooperation between enterprises and the government is much tighter and more

collaborative in China.

How do you think that... first of all, is that a fair assumption?

And how do you think that affects where this is likely to go?

Yeah, I think that's absolutely a fair assumption.

I think one of the key differences, among many differences, but one of the most notable

ones in China is that the relationship between their firms and their government and their

military is, I don't want to say monolithic necessarily, but at least there's a lot more coherence in the way that those actors interact, and the alignment of their goals is a lot closer than what you get in the US.

I think in the US it's pretty fair to assume that those are three pretty independent actors,

whereas in China that assumption is closer to not being true, I think.

In terms of the implications of that, there are a number of them.

The most robust implications are that the pace at which China can move with respect to pursuing certain elements of its AI strategy is a lot quicker, and a lot more coherent.

I would also plausibly say that the Chinese government have more capacity and more tools

available to them to exercise control and influence over their firms than the US government

has over US firms.

That has a number of implications, which I don't want to go on record to put on paper.

But you can use your imagination and figure out what that will tell you about certain

scenarios of AI.

Is there any sort of... we clearly see some power exerted by Google engineers in this

case.

It maybe is unclear exactly where things shake out, but it's a force.

Right?

I mean people can leave Google.

They're eminently employable in lots of other places.

Yeah.

Are there any examples or signs of that same consciousness among Chinese engineers, who

might say, "Hey, I'm just not gonna do this."

That's an excellent question.

I don't have a very clear answer to that.

There are a number of researchers, who are working on getting answers to that, and will

have better answers than I do.

I'll particularly flag Jeff Ding, who's a researcher at the Governance of AI Program,

who does excellent work on trying to understand what the analogous situation looks like in

China.

There's Brian Z, who is potentially in the audience.

Yo.

Brian!

Hey.

Brian is also working on this, and trying to understand it better, as well.

So, yeah.

I can't comment on that necessarily, but that there have been fewer data points is the one thing that I can conclusively say.

What I will say is, there might be something like a third option, where Chinese AI researchers,

again, who are quite employable, could go to DeepMind or something that maybe seems

a little more neutral, if this is something where they don't like the dynamic.

But I'm not sure this is something that has actually taken off.

Or again, this might be something where having something like an intergovernmental research

body, that's pursuing science in a pure sense and has this international credibility, might

be quite useful.

It can provide an exit for people who are not quite sure, if there is a race dynamic, that they want to be engaged in it.

Lot of questions coming in through the app.

We'll try to do as many as we can.

It's just about time for our lunch, but we can stay for a couple extra minutes.

One question on the possible role of patents and intellectual property in this.

Do those rules have force, or not really?

In the United States, the US Department of Defense has the right to use any patent and

pay just compensation, so you can't actually use a patent to block DOD.

There's a special exemption for it.

As for other IP between firms, it's not uncommon for Chinese firms to steal a lot of American intellectual property.

I don't know how this would work, for example, with the Department of Defense interacting

with a Chinese patent.

I haven't looked into that.

A number of people complimented you on the model of focusing on these actors and then

the general paradigm.

But one somewhat challenging question is, how much do you think individuals matter to

this analysis?

For example, in this Google case, at least one questioner says, "Eric Schmidt, personally,

is a big part of this story".

So, if you zero in on that one person, you get these idiosyncratic possibilities, where

maybe if it was just one individual swapped out, things could be quite a bit different.

What do you think about that?

Yeah.

That's an excellent question.

Schmidt definitely is a particular individual, who had a lot of influence on this particular

case.

I suppose there's two ways to answer this question.

One is that, yes individuals matter, but I tend to lean towards assuming that people

assume that they matter more than they actually do.

I think fundamentally, a person like Eric Schmidt is still constrained by the structural

roles that he has been given, in relation to some of the institutions that matter.

So that's one answer.

I think two, even if individuals do have a fair amount of influence in this case, if

we're talking about trying to do robust analysis, it's a better analytical strategy to focus

on an aggregate set of preferences that's housed in an entity that's likely to exist for a while, or that you can reasonably assume will have a role to play in AI strategy and governance.

I think individuals tend to turn over a lot quicker than makes it reasonable to place a lot of analytical weight on them.

Schmidt again is a bit of an exception in this case, because he's been around and dabbling

in both of these scenes, in terms of defense and Google, for a fair amount of time.

But I think there are also fewer individuals you could point to, versus a larger number of actors whose behavior you can consistently observe historically, which is really where the power of the analysis comes from.

What I would also add is that, I think sometimes it matters a little bit who the individual

is and what they're motivated by.

I think for most people, maybe their values and what motivates them is a little underspecified.

As a result, they can be pushed around by the dynamics around them.

Whereas, I think that one of the reasons why it makes sense to have people who are motivated

by EA considerations and altruistic considerations, to get involved in government and to get involved

in these firms, is because they can potentially be steadier and less subject to the currents,

and keep their eye on what part of this is actually important to them.

Whereas I'm not sure that most people have that bedrock.

So that's a great segue into our second to last question, which is who has how much power

here, as you guys see it?

The questioner puts forward the hypothesis that there's not that much talent in this space. The best talent is so scarce that maybe the most power really is there, which would suggest

an evangelism opportunity, or a very specific target for who you'd want to reach out to

with a particular message.

Do you think that's right?

How do you see the balance of power, as it exists today?

I don't know the overall balance of power.

I do think it is the case, and it seems to be the case, that researchers do have a lot

more power than they would in normal industries.

Which is why I think the Department of Defense actually needs to cater to AI researchers,

in a way they haven't really ever catered to cryptographers or other bodies where they've

gone in with their money and papered over things.

With that being the case, I think if the AI research community allows itself to act as a political bloc with values it wants to advance, then it will have to be taken seriously.

The AI research community, generally, has very good cosmopolitan values, they do want

to benefit the world, they don't have very narrow, parochial interests.

I think having them treat themselves as a political bloc and maybe evangelizing them,

to treat themselves as a political bloc, could be a fantastic lever in this space.

One damper to put on that.

I promise I'm not a skeptic by nature, but historically, I think research communities

haven't mattered as much as one would hope.

That's also true.

Looking at cases like biotechnology and nanotechnology, where somewhat analogous concerns popped up, you also had this transnational research community vibe. Not even just a vibe, actually, but institutions and professional networks that constitute that epistemic community. That community has had limited influence on decisions made by key actors in this space.

It hasn't had no influence.

That's absolutely not true.

There are some really good examples of this transnational research community mattering

a lot, but I think that's been fewer and further between, than one would hope.

I'd like to say something on both sides of this, because I think you're right.

This is a difficult line.

There was an idea with the International Air Force.

When people were proposing this they were saying, "Aviators are the natural ambassadors.

They're in the sky.

They fly between countries.

They're so international, that of course they would never bomb one another.

This wouldn't make sense."

They were saying this immediately before WWI.

Then like that, it wasn't even a question.

So there's a thing where you can be captured, again like cryptography and other areas.

To some extent, physics, during The Manhattan Project.

The physicists were not American, most of them.

They came over and they still engaged in this interest.

But also with physicists, afterwards they were a big part of the push towards disarmament, towards safety protocols, towards taking this quite seriously.

They still are actually a really important part of that, so it's a little unclear.

I think with AI researchers, given that they do seem to have a somewhat coherent set of

values, and they are a small group, they might be more on one side of this than some of the

others.

But yeah, I agree.

It's not a guarantee and it's not easy.

One can hope.

So maybe the last question then would be, can you sketch somewhat of a vision, I'm sure

it's still a work in progress, for what rules or governance regimes we should be trying

to put in place?

We've got certain bodies of researchers putting forward statements.

We've got Google now putting forward, some pretty cosmopolitan values and principles.

But what is the framework that everybody might be able to sign onto?

Do we have a vision for that yet?

I think it probably doesn't make sense to try and have too substantive of a vision,

at this point.

I like the idea (I'm sorry, I'm being a lawyer here for a second) of a procedural vision. This is the idea where you say, "What we agree to is that everyone will have a say. We won't move until we've put this procedure in place, and until we've taken into consideration not just the actors who are relevant in the sense of having control of this, but the people who don't have much say in this, or who don't have access to the levers."

To some extent, other moral considerations like animal welfare, and the benefit of the

earth, and these things.

I also think that, mostly in terms of substantive research, we're trying to push towards something

like this procedure and coordination, with the hope that this naturally falls out.

More than putting forward too much of a substantive suggestion.

The exception to this being something like a commitment to a common good principle, which I think is almost the same thing as a procedural thing, because it's underspecified in some ways.

I agree entirely with that.

I would hesitate, for folks thinking about working in this space, to drive towards articulating specifically what the end goal is. As I alluded to, I think there's a lot of uncertainty, so those things are more likely than not to not be robust at this stage.

That being said, I think there are a number of robust things, like the common good principle

and the common commitment to that.

For as much as I sounded like a wet blanket on that, I am hopeful that research communities are really important. I think anything that can a) boost their power and b) make that community stronger and more coherent, in terms of encapsulating a set of values that we want, is good.

Then as well, I think the other robustly good thing is, to acknowledge that states aren't

the only ones that matter in this case, which is a pitfall that we tend to fall into when

we're talking about international governance things.

In this case, firms matter a lot, like a lot, a lot.

And a robustly good thing is to focus on them, and place them somewhat center stage, at least

alongside states.

And to understand how we can involve them, in whatever the solution looks like.

Awesome.

Well this has been an outstanding hour.

You guys are gonna be available for office hours after this?

Immediately after this, yes.

Alright, fantastic.

How about another round of applause for Jade Leung and Carrick Flynn.

For more infomation >> Analyzing AI Actors - Duration: 39:43.

-------------------------------------------

Orrin Hatch and Lindsey Graham Deliver Proof That Dems Orchestrated Kavanaugh Hit Job - Duration: 3:11.


-------------------------------------------

Finish More Music - Knowledge - Duration: 2:38.


-------------------------------------------

Graham's Words Push Sarah Sanders To Light Up Every Dem on Committee - Duration: 2:17.


-------------------------------------------

Marc Anthony, Will Smith y Bad Bunny juntos y revueltos | Un Nuevo Día | Telemundo - Duration: 6:04.


-------------------------------------------

Small Foot, la película animada que te llegará al corazón | Un Nuevo Día | Telemundo - Duration: 1:56.


-------------------------------------------

Which is the Country with Most Universities in the World - Duration: 1:46.

- Hello everybody.

Pavel from QS here.

We have another video from the trivia series

and today, we're gonna find the answer to the question.

Which country has the most universities in the world?

Let's go!

Which is the country with the most universities in the world?

- America.

- America.

- America.

- America.

- Which country has the most universities in the world?

- Whew!

United States.

- USA probably.

- USA?

- Okay, good guess.

Good guess, not bad.

Okay.

So which country has the most universities in the world?

Before I tell you the right answer,

let's talk about the USA, which is not the right answer.

The USA has a bit more than three thousand universities, 61 of which are in the top 100.

And let's say something about UK because you know,

we're in the UK.

The UK has a bit less than three hundred universities,

nine of which are in the top 100.

And the country with the most universities

in the world is

India.

There you have it.

If you have any questions, just leave them in the comments.

Thank you for watching and I will see you

in the next one.


-------------------------------------------

Horóscopo hoy, 28 de septiembre de 2018, por el astrólogo Mario Vannucci | Un Nuevo Día | Telemundo - Duration: 2:58.


-------------------------------------------

Los Recoditos se la juegan por su nuevo vocalista | Un Nuevo Día | Telemundo - Duration: 3:20.


-------------------------------------------

76 m2 Vacation Home Consistent Older Standards And Needs For Modernization - Duration: 2:40.

