From Tool to Sidekick - Human/Machine Teaming with Jamie Winterton

About this episode

We’ve conditioned ourselves to look at our technology much the way we look at a box of tools: as instruments that passively do what we make them do. When we think of the future of artificial intelligence, it’s tempting to leap to fully autonomous solutions: when exactly will that Tesla finally drive by itself? In our interview with Jamie Winterton, we explore a future where AI is neither a passive tool nor a self-contained machine, but rather an active partner.

Human/machine teaming, an approach where AI works alongside a person as an integrated pair, has been advocated by the U.S. Department of Defense for several years now and is the focus of Jamie’s recent work at Arizona State University, where she is Director of Strategy for ASU’s Global Security Initiative and chairs the DARPA Working Group. From testing A.I.-assisted search-and-rescue scenarios in Minecraft to real wartime settings, Jamie takes us through the opportunities and issues that arise when we make technology our sidekick instead of merely our instrument.

The central challenges of human/machine teaming? They’re awfully familiar. The same thorny matters of trust and communication that plague human interactions are still front and center. If we can’t understand how A.I. arrived at a recommendation, will we trust its advice? If it makes a mistake, are we willing to forgive it? And what about all those non-verbal cues that are so central to human communication and vary from person to person? Jamie recounts stories of sophisticated “nerd stuff” being disregarded by people in favor of simplistic solutions they could more easily understand (e.g., Google Earth).

The future of human/machine teaming may be less about us slowly learning to trust and giving over more control to our robot partners and more about A.I. learning the soft skills that so frequently make our other interpersonal relationships work harmoniously. But what if the bad guys send their fully autonomous weapons against us in the future? Will we be too slow to survive with an integrated approach? Jamie explains the prevailing thinking on the topic of speed and autonomy vs. an arguably slower but more optimal teaming approach and what it might mean for the battlefields of the future.

Note: Our conversation on human/machine teaming follows an introductory chat about data breaches, responsible disclosure, and how future breaches that involve biometric data theft may require surgeries as part of the remediation. If you want to jump straight to the human/machine teaming conversation, it picks up around the 18-minute mark.

Meet our guest

Jamie Winterton

Transcript

Jack: [00:00:00] Welcome to the second recording. Second Halloween, third Christmas, Jamie. Let's try this again. And Dave, hello there. How's... uh, I see you're calm and relaxed and no, no stress at all there. Mr. CEO, CMO, founder, human change. Uh,

Dave: [00:00:25] Yes. Oh, I foolishly spoke to the universe that things would calm down after RSA, and the universe has smote me. It has smote me.

Jamie: [00:00:39] Kind of a jerk.

Dave: [00:00:41] So yeah. Yeah. So here we are, first world problems, all of them, but you know, the funny thing about first world problems, since they

Jack: [00:00:49] still feel like problems.

So for those joining us, we had a fantastic interview with Jamie a few weeks ago. Sometimes our love of technology is unrequited, so this is our second try. It's only the second time we've had to do this. We experienced that with Chenxi early on, but technology... anyway, third Christmas, second Halloween, and whatever.

Yeah, I

Dave: [00:01:18] did get a picture of our moments of second Halloween. Now, I think it may, it may have to enter

Jamie: [00:01:23] there too. Okay, good. I was actually going to ask, like, hey, maybe could you put the thing back on and we can, you know, get a screenshot for promotional purposes, but if you did, then that's great. It also means I'm going to take off my devil horns because I can't look at them.

So

Dave: [00:01:37] Really? No, I was thinking that I was

Jamie: [00:01:39] feeling... I gave a whole presentation with them on at a conference once. I was talking about wicked problems, and I thought, well, why not look wicked then? So I wore these ones.

Dave: [00:01:50] Love it. Love it. Yeah. The picture was, was not bad. You guys look good. Jack looks like he's suffering with us, but he doesn't look too bad because he's got the blowfish there.

So

Jack: [00:02:02] Yeah. Cuddles the puffer

Dave: [00:02:03] fish. Yeah, you have that. I have Banana Joe. So my son gave me this, and this is what I use on days where I kind of feel like a jerk and I need to remember how to be happy. It's this guy. This is Banana Joe, and he is, he's awesome. And he looks at me, like, right across from me, and every now and then I look at him and it's like, okay, breathe, like, smile.

You're having fun. It's just a ride. Yeah. So I have to have Banana Joe. I know you have Suffering Bastard too, Jack, right? I think, um, they're necessary. I

Jack: [00:02:35] have Banana Joe socks that I often wear when I do stand-up. So yeah, I'm a fan.

Dave: [00:02:42] Let's jump right back into this. And the nice thing about recording again is I'm sure it'll be even more amazing this time. And while it's no longer a second Christmas, so we're not going to do that.

That was so five weeks ago. Yeah,

Jamie: [00:02:56] I couldn't put the tree... I mean, I could put the tree back up for you, but it just seemed like a lot of work. So I appreciate your suggestion of second Halloween. It's just a lot easier to manage. And it also means we get many peanut butter cups at the end of the episode.

So hell yeah, I'm in. I'm in. Nice.

Dave: [00:03:16] Yeah,

Jamie: [00:03:16] no, it's

Dave: [00:03:16] great. Let's start with your perspective. So you just reached a critical moment with your PhD and

Jamie: [00:03:24] talk us through it. Yeah, so I decided that being a single mom and working full time, I just needed more in my life, apparently. So I'm pursuing a PhD. I'm actually looking at incentives in cybersecurity.

So, it's not going to be a surprise to anyone on this podcast that there are problems in cybersecurity, and not just with the technology, but with the policy and with the way humans interact with technology. So what I want to do in this research is look at how there are different sorts of forces within the system that push and pull.

So my background is in physics. I was a scientist at Lockheed Martin for 11 years. So I want to make models of things and try to parameterize them and understand them in a certain way. And thinking about cybersecurity as a complex system: you have users of technology, you have companies, you have government agencies, you have hackers and security researchers, and the way that all these things are coming together, I think, actually is undermining our security.

It's not necessarily helping, if you think about a couple of examples. So back in 2017, I was really honored to be invited to the Senate to give expert testimony on data breaches. And this was right around when Equifax happened. So Equifax: 145 and a half million credit records siphoned off, probably by a foreign adversary.

And I sat on a panel behind Richard Smith, who was the CEO of Equifax at the time. He basically got yelled at for a couple hours by some senators and then retired with a golden parachute worth many millions of dollars. Maybe a year before that, though, you have somebody like Justin Shafer, who was an independent security researcher, finds a whole treasure trove of personal health documents.

Just open on the web from a dentist's office. He tries to report that to the dentist's office, and then the FBI hauls the guy out in his boxer shorts in front of his kids at 6:00 AM. There are imbalances in the system, where big companies are not taking the kinds of steps that they need to, to secure really important data, not just for individuals but for national security.

And then people who want to report these vulnerabilities are getting hammered by the Computer Fraud and Abuse Act. This is the motivation for me to go in and say, something's wrong in this system. We're spending a lot of money from both industry and government to try to fix it. It's not getting any better.

So we've got to dig a little deeper,

Dave: [00:06:04] The comical thing for me is, and this is going to be a little bit of an Open Raven commercial, so I will apologize in advance, but look: most organizations don't know where their data is, what data they have, or how it's being protected. We've assumed that we know where data was, which used to be a decent assumption.

In the OG days, when you had database administrators (remember those?), and you had really politely behaved data stores and databases with names like Oracle and the rest of its brethren... that isn't even close to reality anymore. They move into the cloud, and then all of a sudden it's like, oh, well, you know, the data goes up into S3, and they get things like, oh, we got an alert.

And it says that a bucket's open, and that's supposed to tell you something. It tells you squat. You can have a bucket, an S3 bucket, which can contain anything. They can be a petabyte, they can be a megabyte, and it can actually be closed off from the internet and be encrypted and have a file inside it that is unencrypted and exposed to the internet.

And all of a sudden, you're supposed to know. The old mode of, oh, well, we know what data is out there because this is where we put things? That ain't true, even in the least. And there was a great article by a guy named Joel de la Garza at Andreessen Horowitz, who at the end of last year called 2020 something like the year that was good for security, or something like that. Joel is way more eloquent than I am.

And he said: data is the new endpoint. And I think that's kind of the stark reality in the world of cloud, and where we're headed is, it's no longer okay just to infer data problems from where the infrastructure issues are.

And it's only going to continue to get worse. And the answer isn't go out and buy Open Raven. Yes, we think we help with that. The answer is you've got to get serious about your data. People will say, oh, data's the new oil. Well then why the hell is it okay to have continual Exxon Valdez spills? Sure. All over the place.

Jamie: [00:08:16] And not only that, to extend your metaphor: Exxon Valdez oil spills where foreign adversaries are running over with their buckets. Like, great, this is awesome. Thank you. Thank you for this data. You look at the different kinds of breaches that have happened at huge scale and layer them on top of one another.

And you get a very detailed perspective of the United States and how we behave as a nation, and our security apparatus. I mean, the OPM data breach included not just social security numbers and addresses and family information, but interviews on people's personal lives. These are all of the documents that were used to make decisions on people to get their security clearances.

That's very useful information for somebody

Jack: [00:09:06] That was also one where the information that wasn't in it was really deadly, because anybody that didn't have that information in the OPM breach but was stationed in a foreign embassy or other overseas post... there's a certain type of person who doesn't have any records in OPM who is overseas.

And most of their jobs are really boring, not like the movie stuff, but they're still really dangerous. And so, anecdotally, the rumor mill says a bunch of people got exported out of countries really fast when that breach happened, because if you haven't been interviewed for a clearance, and yet you have something that requires a clearance, and you're in, you know, Shandong province or something...

It's probably a good idea to put you in a diplomatic bag and ship you home fast. And, you know, that's just the really blatant, obvious instance stuff. The data aggregation from that: you take who's got a clearance of some level, and the obvious one is the financial data, the Equifax breach. You've got financial trouble and a clearance. If I were an adversary, what could I do with this information?

Right.

Dave: [00:10:20] So after that, they got them out of the country. They got them back to a safe place, fine. You know, they can change their identity and so forth. Yeah, hugely disruptive to both what they were doing on behalf of the country and to their life safety. But let's say that it included their biometric data, which increasingly things will. You can't change your biometric data.

I mean, yes, you can do surgery, but holy shit, if that's the answer to a data spill: hey, we need a whole bunch of surgery because we just had a whoopsie on your biometrics. Sure.

Jack: [00:10:56] I was annoyed changing my password because that stupid parking app had a breach. I don't know that I would want to go in for new retinas.

Right.

Jamie: [00:11:04] And what counts as biometric data? You know, there's things like the measurements between certain places on your face. And at what point is your biometric data really just the physical representation of you? And you've gotta be reconstructed? That's not reasonable.

Dave: [00:11:19] Well, it's very, very interesting.

So the talk that Neil and I did at RSA, which is now public: we did a whole bunch of looking into these issues with facial recognition as part of surveillance, right? It's a key part of surveillance, especially in China. And as it was happening, there was a law passed in Illinois, BIPA (I won't try and fumble through the acronym), but basically the premise is this: even if it's publicly available, people cannot use things like your face or facial geometry as part of their solution.

Without your permission. You have to explicitly give permission for someone to use your biometric data, even if they can derive it from public sources. It's super interesting. So these companies like Clearview AI, which basically scrape the web and social networks out there, gathering up all this information... all of a sudden, like,

It's a non-starter, and it should be a non-starter, honestly. Because as much as there's a benefit to the services (and they have solved horrifying child exploitation cases in the blink of an eye, which is awesome), by the same token, if anybody can just take your biometric data and use it however the hell they want,

That takes us to a very dark, dark place. I mean, at that point we've effectively, commercially, arrived at a moment of digital authoritarianism, which is not unlike the horrors that are happening in China right now with the Uyghurs.

Jamie: [00:12:49] Right. Yeah. And then if you think about all of these algorithms... you know, a lot of these, these data sets are so huge.

They're not meant for, like, a person or people to pick through on their own. So this is where machine learning comes in. These algorithms are created to look for commonalities and different features in these data sets. And so then what can you tell from them? You know, there can be insights that we're not even aware of, whether it's an individual's corpus of data online or, you know, again, going back to these big data breaches that have millions and millions of records.

That's really concerning when the adversary can move quickly and at scale like that. We dug ourselves into a hole here. I have a way of making everything sound terrible. Not everything is terrible though. I think

Dave: [00:13:41] it's not. So where does your prospectus take all this? How do you wrap that up into something that, at the end of the day, you'll defend as your thesis?

Jamie: [00:13:50] So the prospectus is basically a proposal that says, this is the research that I want to get into for my PhD. And now that I've been approved, I can go and do this research. I mean, I'm done with classes and field exams and all the other hurdles. I've just got the last stretch. Someone said, oh gosh, it's like, you're almost done.

I'm like, well, that's kind of like being almost done at mile 20 of the marathon. Yeah. They say the marathon breaks down into two equal parts: the first 20 miles and the last six. And this is true in my experience. But where do I go with it? So I'm actually really concerned about how we can update the Computer Fraud and Abuse Act to be more reflective of the world that we live in.

It was created in 1984. There have been very few meaningful updates to it since. We now live in a world where there are an average of 11 internet-connected devices in each home, as opposed to 10% of households having computers back in 1984. And we're online all the time in all these ways that we don't even realize.

If our laws are shutting people down from being able to report things that they see or that they find... I'm sure everyone on this call here, and many people listening to the podcast, have accidentally found things out in the world and have had to decide: what am I going to do with this information? Am I going to disclose it to the company or the organization where I found it?

The potential consequences are huge. So, there's a privacy researcher and journalist who goes by the name of Dissent Doe; she writes a blog called Pogo Was Right. And after the Justin Shafer incident, where he tried to disclose all of this open health information and got hauled off by the FBI, somebody dumped 400 vulnerabilities on her and said, you know what?

I was going to try to work through these; I'm out. It's too much. She wrote a piece where she takes one of these vulnerabilities and tries to get some assurances from the company that she'll be protected if she reports it. It takes 12 emails and two weeks for her to get the assurances. But once it's reported, it's fixed in three hours.

This is not sustainable to do for that number of vulnerabilities, for all of the things that we see out there every day. So that's my hope. What I really want to do is, yes, I want to get through and have some fancy letters after my name, but also to create something where policymakers can look at it and go, wow, here's some things that we can actionably change.

And this is how we

Dave: [00:16:29] do it. That's important work. On the face of it, it's laughable that that can be such a central law governing what we do as an industry. And it was written, like, 10 years before the web. Yeah,

Jamie: [00:16:42] That's a great point.

Jack: [00:16:45] The first person convicted under CFAA was Robert Tappan Morris, of Morris worm fame.

And sometimes they joke he was the last one properly convicted, which is, which is not true, but it hasn't evolved since then. And so, the Morris worm: some folks listening to this are going to have to Google what a Morris worm is. The Morris worm is why we have a CERT. There are a bunch of things that were foundational, but that was a really long time ago.

Disclosure: decades ago, Rain Forest Puppy did his disclosure thing, and we have not solved the disclosure debate. And you talked about the FBI; they have fits and starts. There was a great program at the bureau for several years that had the unfortunate name of cyber ninjas, which now brings up the election audit, or whatever word you want to use for that, in Arizona. But they tried to help.

And then they had a reorg and the program was killed. And that was a way to reach out to the bureau to try to get attention for reporting things. And right now, I think the, the only thing that I've seen in recent years that has given me any hope is the Bugcrowd thing that spun off, where Casey and crew spun up disclose.io.

That's it. Which helps, but then it's just one approach, and we're still arguing over the use of the term responsible disclosure, which offends a lot of us because "responsible" is usually a word used as a weapon against the researcher. So call it coordinated or whatever you want to, but we're still having this conversation decades after.

Jamie: [00:18:22] Right. Well, and I think, like, Katie was saying some really smart things about this stuff, about, you know, bug bounty programs and do they actually help and where do they help?

Jack: [00:18:32] Right. And she's somebody that, I mean, she was involved early on with HackerOne, but she also sees the shortcomings. She's been at this for a while.

It's not a panacea; there's a place for them. It's not a universal tool by any means. And if you're not ready for it... Dave, we talked with Melanie about the wonderful social media app that has a bug bounty program and nothing else. And it's not even a real program.

Jamie: [00:18:58] Yeah, a lot of them are so neutered, nothing happens.

And then if you just decide you're going to have a bug bounty program without actually planning it out, again, the wrong approaches to these things can actually be detrimental. So I think that's why we've got to scrape it down to: what are some of the fundamental pushes and pulls between technology and policy that are making things still bad?

We're all trying really hard. Things are still kind of bad.

Jack: [00:19:26] And then there's people just trying to use their computers, either professionally or personally. And we as a security industry (going back to the sixties, Willis Ware made this point), you know, if we don't factor in the fact that people are trying to use these things, they're just going to work around us.

Dave: [00:19:44] So interesting. I didn't know that responsible disclosure traced back to RFP, you know, Jeff Forristal. I can see why it would; he was really active and had some doozies in his time. But we've been at this for a long time. But I'll say this in fairness to the industry: you know, you brought up the example of Melanie; that was a privacy disclosure more than anything.

And it was, like we talked about, directly related to their business model, and the fact that we live in a world where technology implementation and the business practices that go with it are so profound, they occupy a space in our dialogue that is very different than what it was back then. But if we look at vulnerabilities and what's happened with them, there's an argument that they're munitions now.

And maybe not much of an argument in these instances, whether they're being used as weapons for, you know, nation-state adversaries and so forth. And I'll confess, this is not something I've put a lot of thought into, so I'm sure there's smart people who are going to throw things at me for this. But at the end of the day, when we started talking about disclosure, it felt like a big topic at the time, but that conversation just has so much more weight now.

We haven't made the progress we should have, but I think also the importance, the gravity, of that conversation is dramatically more than what it was when we started the conversation back in the heady days of the late nineties. Wow. I am just sucking the air out of the room today. I didn't want this to turn into chainsaw, ranty Dave.

Jamie: [00:21:16] It's all good.

Dave: [00:21:19] If you guys need me, I'll be over...

Jamie: [00:21:21] Dave, we should keep chainsaw Dave. I like the chainsaw Dave concept.

Dave: [00:21:27] So let's talk about what you do at ASU. You have a fresh and funky and unique role at ASU.

Jamie: [00:21:36] My title is Director of Strategy at the Global Security Initiative. And what the Global Security Initiative is:

We're not part of a particular department or a school, but we organize basically by mission area. We primarily work with Department of Defense and intelligence community sponsors. And we bring people from different academic units around problem spaces. So we have a Center for Cybersecurity and Digital Forensics, and it's not just computer science, but it's computer scientists and people from the law school and the business school and cognitive psychology and social psychology, and all of these pieces from the university.

They have something to do with cybersecurity research and education, because yes, it's a computer problem, but it's so much more than a computer problem, which, you know, we discussed a lot when we opened. So I, I get to think about, like, what are some of the big problems out there, and how do we organize ourselves in the university to be able to better address some of these problems?

So another center we have is the Center for Human, AI, and Robot Teaming. This one is super fun. Having worked in defense for close to 20 years at this point... that's a horrifying number to say out loud. But having worked in defense for as long as I have, you start to see trends and you start to see where things are going.

And a lot of work and money has been poured into advances in AI, in autonomy, in robotics. And there hasn't quite been the focus of who's going to use this, which people are actually going to be using this. It's to Jack's point earlier about how we build technology; we don't think about the actual users. So...

This center is led by Professor Nancy Cooke. She's a globally renowned expert in teaming and teaming research. What are the different roles on a team, whether it's different people, or people and AI, or people and AI and robots? How do we get these heterogeneous groups to work well and actually address mission needs?

Dave: [00:23:52] What's the reigning hypothesis? And I'll tell you, I'll start out with a quote from my initial research here, because I think it's a decent way of kicking this off. It's from a lieutenant general: AI's most valuable contributions will come from how we use AI to make better and faster decisions, including gaining a deeper understanding of how to optimize human-machine teaming.

Take it

Jamie: [00:24:15] from there. I really like the words that he uses there: to optimize human-machine teaming. So this is kind of a paradigm shift. Instead of just thinking about machines as tools, we need to start thinking about them as partners. But if they're going to be partners, we need to give them the roles where

they can succeed and we can succeed and the whole team can be a success. So, I don't know if any of you guys use Siri or Alexa? I don't, but I have spent time watching my dad try to use Siri, and it is a frustrating mess every time. Siri doesn't understand him, and he doesn't understand how to ask Siri the right kinds of questions, and he kind of just ends up yelling at it.

And it's not good for anybody. So this is a case in which that relationship between the human and the machine is not a positive one. Think about this, you know, beyond just my dad trying to get directions to a place that he wants to go: if we want to be able to use the positive benefits of AI, the very quick analysis over enormous datasets, things that humans can never comprehend

no matter how many of us there are, we need to figure out what those interfaces are. How do we have that conversation? Otherwise, the machines are annoying and we don't want to use them. And then it's just a lot of time and money and opportunity wasted.

Dave: [00:25:45] Well, I think Siri is equal parts frustrating and comedic splendor, all in one.

I mean,

Jamie: [00:25:54] Siri to a kid.

Dave: [00:25:55] It's awesome. I'll confess, like, oh, it's hysterical. I mean, it's, it's comedic gold, but there's times when actually it's really freaking useful and you want to think that it works and it just doesn't do it. And I'll confess, when it does that, I go over and I kick my Roomba. Dammit. I'm not going to throw my phone, but someone of some sort needs to pay for this travesty. Take it, Roomba.

Jamie: [00:26:22] The machines are going to come for you first, Dave.

Jack: [00:26:27] Dave, you're talking about a military take, and Jamie, I know you have thoughts on this. One of the challenges is who the operator, who the human operator, is, because there are some people in some roles where leaning much heavier on the machine for a big scope will be of value.

And there are other specialists who need some very specialized stuff to take them the last sixteenth of an inch. And there are other people that need to be taken the first couple of miles, and it may be the people, or it may be the roles. And that's one of the things that just frustrates the heck out of everybody: when you have that mismatch. Everybody's yelled at their navigation system, whether it's on your phone or it's a dedicated GPS or whatever. I have long conversations with mine; it's always wrong.

The one that drives me nuts continuously is: when I used to drive up and down the East Coast, it would tell me, leaving Cape Cod, that I should go down 95 through Connecticut, through New York City, even though it knew. Yeah, it should know; it has all the data. Google has all the data to know that that was going to put me trying to go over the George Washington at five o'clock in the afternoon on a weekday.

The GW is clear right now, you should go that way. I'm like, no, dummy. Yeah.

Jamie: [00:27:46] I also hate "proceed to the route." If I knew where the route was, I wouldn't need you. Don't tell me to proceed to the route. This is your purpose. So yeah,

Dave: [00:27:57] all these frustrations, they make a really, I think, cogent argument for why you need human-machine teaming.

And I think what we're saying with human-machine teaming, to a degree, is: the machines are simply not going to be capable of doing this autonomously in a way that creates the right level of trust and safety and so forth. Not that the human doesn't trust them, which I think is another issue, and one that people have gotten into, but simply the fact that it just isn't good enough.

Let's say it this way: it is not good enough. Anybody who thinks the machines are going to take over, go talk to Siri, or better yet, ask Jamie's dad or my son, anybody else, and you quickly realize the singularity problem is not upon us. But that isn't really at the heart of this, is it? I mean, we're not simply saying that it isn't enough.

We're saying that this might be a better outcome with human-machine teaming than autonomous AI. Isn't that where it's going?

Jamie: [00:28:52] Yeah, because are we asking it to do the wrong things? Are we making assumptions? Even humans specialize into roles. There are going to be things that in my role I'm exceptionally good at, and things that I'm not. That's true for any human.

And this is going to be true for AI as well. Understanding how do we take the good parts of it, the parts that are really valuable, and build that into a team in a way that humans can leverage, and then humans can do what they're really good at. So there's a lot of research into this, and part of the way we research this is through test beds.

We create scenarios. We have humans and their AI partners in these scenarios, and we throw some challenges at them and see: how do they communicate with each other and how do they behave? We have a big project right now with the Defense Advanced Research Projects Agency, or DARPA, about creating social intelligence for AI.

And what we've done at ASU is build a Minecraft test bed of search and rescue operations. So people and their AI helper go through this Minecraft test bed and look for people who were wounded, people who survived. And then we measure how they're talking with one another. We're measuring whether or not the AI is giving helpful feedback to them.

And then we can assess and rate which AIs are going to be good at these kinds of operations. We're working with other companies and universities who are making the AI agents. Then they bring them to our Minecraft test bed and we run through all of these different experiments to see what works.

Jack: [00:30:39] I think we ask computers to do things they can't do, or can't do yet.

It's like we do this at work with the people we work with: we assign tasks to the wrong people. We haven't solved that problem with technology. I think I told this on the recording that we lost, but I got frustrated because, not to get social and political, but people can probably figure out the way I view the world.

And so I also do some, a lot of traveling in RVs, and so I was kind of tired of the, the politics of most RVers, and asked Google to find me leftist campgrounds. And it suggested RV parks in Liberal, Kansas, which was on me for that failure. It really was. I should have known better.

You know, I'm going to go out on a limb and say that the campgrounds in Liberal, Kansas are full of really nice people whose politics are probably to the right of mine.

Jamie: [00:31:33] Well, anyone who's written code knows the computer will do what you tell it to, not necessarily what you want it to do. And it's that difference of figuring out... and then, how do we express that in a human conversation?

There are all kinds of cues that we're giving each other constantly. Even though we're not in the same room, I can see Jack's video and Dave's video, and they can see mine. When one of us nods, the others will nod. If you ask me a question I don't know the answer to, or want to think about, I can spin verbally for a little while and say, you know, wow, Dave, that's a really good question,

and I'm glad you asked that, while I think of what to actually say. AI isn't that smart yet. It doesn't know that those are things that we as humans expect. And when it diverges from those things, we get very uncomfortable. If you were to ask me a question and I just sat there to think about it, it would be really weird.

So these are some of the things that we need to think about. How do we make AI less uncomfortable for us to work with? How can it pick up on some of our cues instead of us just telling it what to do through code or through the words that we use? Can it tell things from our intonations, from our facial expressions and our body language and the pauses that we use?

That will make it a better collaborator.

Dave: [00:32:54] And just to make it a hundred percent clear, the types of human-machine teaming that the DOD is talking about now: I'll use an example here from the Army. It says they're interested in using autonomous vehicle technology to run resupply convoys in contested environments and conflict zones. While fully autonomous trucks don't exist yet, the promising concepts are a mix of manned and unmanned trucks that can put fewer soldiers at risk on such dangerous missions.

Perfect sense, right? Search and rescue instances, like with the much-maligned Boston Dynamics digital dog, you know, where you can send in that $80,000 digital dog to trot in and see if there's a human in there. Makes perfect sense. Drones over wildfires, near and dear to my heart here in Los Angeles.

Makes a ton of sense to send in drones; not autonomous drones, but drones that are manned by a person who is steering the controls and so forth. All of these seem like really obvious applications, but I love the question: at some point, do we end up... is the end state fully autonomous, or are we simply kind of slowly peeling our fingers off the technology as it grows up?

And the article I sent out to y'all this morning before the show, I think, would hint that other countries think that, you know what, we're going to go fully autonomous, damn the torpedoes. We're just going to do it. And as a result, could it force our hand to basically go fully autonomous as well, if they're simply less concerned about safety and collateral damage than we are?

Jamie: [00:34:33] Several years ago, IBM's Watson competed on Jeopardy.

Do you remember this? They had the top human contenders competing against Watson, and Watson ended up winning the tournament. The really interesting thing about it, I thought, was that when Watson was right, it was instantaneously right. But when Watson was wrong, it was instantaneously and often hilariously wrong, so wrong.

And this is the problem with using AI without any sort of humans in or on the loop in military or defense settings: these are areas where we really have to get it right. What's at stake isn't just a hilariously wrong answer because of a misinterpretation of a Jeopardy question, but giving recommendations for troops to go into a place that would be very dangerous.

A lot of these military decisions have to be made very quickly, but there's so much information out there. It's just so much data from all the different sources. Having AI try to help pull out some of the salient factors in these heaps and heaps of data helps. But I think not just for, you know, the safety and protection, but just for the accuracy of the mission, you want humans involved.

But then it means that you really do have to focus on the collaboration pieces of it as well.

Jack: [00:36:01] Dave, you talked about drones. I mean, whether they're the higher-end personal drones or military or government, whatever, there are things that the computers should be able to do for us, which is help us not crash the drone.

If you have friends who have played with drones, they've all crashed them. So we add some stuff, and if you're doing, like, firefighting stuff, add temperature sensors, and be like, you know what, it's getting really hot. We're getting in an area where there are going to be updrafts and I'm going to be upside down and corkscrew into a fire.

And then you're not going to get pictures. And then humans are going to be trying to fight this fire, and everyone is going to be unhappy. So there are things where you still want to steer it. It's like, we need to see what's over the ridge, but I'm going to help you not stuff me into the dirt. That's a step.

And then, you know, how far do you go to, to running a path? You know, the predictive nature of it and context matter, and with all the data that we have at our disposal and the power of the systems that we use, we still can't always grasp context, things that we haven't thought of. And again, like thermals, you know: oh, there's a hill, we're flying towards a ridge.

There's a fire on the ridge, or there's no fire on the ridge, but the sun's baking on it, and once it gets close, it's going to try to flip this drone. Little things that we know. And it's like, okay, look, the D or E or whatever rev might factor in those sorts of things. But those first few steps,

I think there's a lot that can be done to make things better for us. Yeah.

Dave: [00:37:30] It strikes me that so much of this is going to come down to cloud computing, and taking and storing a whole bunch of data, like you said, from a bunch of different sources, from a whole bunch of sensors, and then being able to do processing at the edge in many instances. And it's actually similar to some of the surveillance problems that Neil and I were digging into, and some of the computer issues there, actually; it's a lot of modern computing issues, now that I come to think of it.

But having said that, I wonder if we end up in a situation, I wonder if both approaches are right, but in different scenarios, as is often the way in this messy world. For example, like you said, you know, Jamie, there are situations when the decisions are so weighty and multivariate and qualitative that in those situations, there is no way in hell we ever hand that off to AI.

We'd be happy for AI to guide and recommend, probably predominantly with data, as even opposed to, like, optionality on paths. We can do that work afterwards through our own synthesis. I also wonder if there's situations where it's like, nah, let it rip, send in the robots, we really don't care that much.

There's nothing really that could possibly go wrong here. Just let out the digital dog, send in 50, and see what comes back.

Jamie: [00:38:51] Right. Yeah. And then context is huge. What are the risks in the scenario? What happens if you get one of these weird AI answers? You know, AI gets trained on all of these examples, you know, thousands or millions of examples of things, but we don't

always know how it's learning. There's a big push for explainable AI. We don't really understand how AI is coming to the decisions that it makes, and not being able to reverse engineer it can be very frustrating. So I worked on a project back when I was leading a group at Lockheed Martin, about where helicopter pilots could land in dangerous areas, and providing some recommendations for, you know, this looks good, this doesn't. There were a lot of different features,

and one of them was a pretty straightforward Bayesian neural network. But when the intel folks would say, why did it pick this area and not that area, we really couldn't tell them in most cases, because the learning algorithms are kind of a black box. That, I think, is part of the context we need to keep in mind: when we need the answers for why, it's going to be hard for machines, right now at least, to tell us.

Dave: [00:40:09] There's an argument that says you don't necessarily need to understand how it works

if it's delivering the result you want. But at the end of the day, if we're talking about trust and our willingness to extend it further, and to take it into scenarios, or to, you know, let go of it and let the machine do it, explainability and understanding of what's happening is fundamental to trust. I think at some point we just end up in a position where we're simply not comfortable enough to use it in a way that we could, because we can't understand it well enough to extend that trust to it.

I imagine that's one of the more significant issues here, isn't it?

Jamie: [00:40:48] Yeah. And I actually saw that on this project at Lockheed as well, talking to these intel analysts and saying, great, you know, we have all of these different ways to collect data, different modes of imaging. What do you guys use when you're making your decisions and putting together your landing zone plans?

And they basically said, yeah, we don't understand or trust any of your nerd stuff, so we use Google Earth. And that was a real eye-opener for me. We had done so much research (not just we, like my team, but overall, the government had paid for so much research and so many different ways to collect imagery), but the trust wasn't there. We hadn't made it into something that they felt comfortable using.

And so they wouldn't; they were comfortable with Google Earth. So this really shaped my thinking about how to pursue these projects. Who were we developing these projects for? What's their context? And we've got these kids out in deployed locations, who are working at strange hours of the day or night. I had one of them call me one time.

I picked up my phone at my desk at Lockheed, and this very far away person is saying, ma'am, this is Airman Smith, ma'am, and I just have some questions for you. I said, wow, this is great, but what time is it there? He said, oh, I don't know, ma'am. It's, it's too goddamn hot. I'm so sorry for cursing, ma'am. And I said, you're, you're at war.

You can say whatever you want. It's fine. But those relationships were built on explaining how it's working. Yeah, if it's giving you the answer you want, then great. But once it doesn't, humans lose faith in technology actually much faster than they lose faith in fellow humans. And we also know how to rebuild faith between humans.

We don't know how to do that with technology nearly as well.

Jack: [00:42:45] I think, particularly in those military applications, again, context matters. If your GPS app sends you on a stupid route, you'll mutter and maybe be a little late if you're on a road trip. If you are in an unnamed unit that responds to commands from JSOC, and you're in an MH-6 or a team of MH-6s going somewhere, the consequences are way higher.

And you know, to build that trust, it's like, all right, you guys know what you're doing, but here's what the computer says: you're going to land too close to a wall and you're not going to get the lift you need. This is your call. This is your call, colonel, but the computer is throwing this warning.

It's really easy to pick the super extreme conditions, but that's happened in Afghanistan: a successful raid until it was time to take off, and it's like, uh oh, we're too close to a wall. But to your point, you can't take those kinds of people and just say, no, don't do that. Right.

Jamie: [00:43:46] And there are so many other factors they're trying to assess at the time.

What is the mission for? What are they trying to accomplish? If you need to get into a space to get somebody out of there, you may be less concerned about how you then leave. You've got to get a person out of a building, if that's your primary focus. And it's hard to have a machine understand all of that.

So this is why we talk a lot about, like, I don't know, we used to call it tipping and cueing. So basically making recommendations: have a machine make smart recommendations, so then a human expert can know what to do with them.

Jack: [00:44:19] Sometimes you go in looking for somebody and you bring them back with you.

And sometimes you go in looking for somebody and you do not bring them back. And that's a big difference, which is maybe not that easy to explain to the computer unless you've been there. And I don't think a lot of people are retiring out of those sorts of roles into computer science. That's true.

Dave: [00:44:39] Yeah. Where does human-machine teaming go from here? What feels like the path forward for you? If you were to lay out the next five, 10, 20 years: gaze into your crystal ball, project out from Minecraft studies and everything today, and give us a feeling of where you think we end up over the next five, 10, 20.

Jamie: [00:45:02] A lot of the work going on right now, I'm thrilled to see.

There's human-centric work going on. So not just, can we build different kinds of algorithms from a computer science perspective, but how are people actually going to integrate with them? I think based on some of this work, we'll get AI that understands human patterns of communication. We've had how many thousands of years to come up with our communication patterns, not just verbal but nonverbal as well.

And we shouldn't change what we're doing just for the computers. It might be better to bend the computers to the humans, given how long we've been being humans. You know, we've been humans for a long time. Where do I see it going? I think we'll start to get AI that can interact with humans more easily, in a more native way, to understand our intent, to understand when humans are

confused or frustrated. I think we'll start to see AI that's not just artificial intelligence as a generalized thing applied to a lot of different areas; we'll start to specialize it to things that humans need, much like you would call a plumber or an electrician for a particular job. You wouldn't hire a plumber to do your electrical work.

We'll start to get artificial intelligence agents that can fit into our lives in these specialized ways. One of the big questions about AI is always in the training. And if we're going to train AI to work with people, not all people are the same. So I think at some point in the future, and this is probably a bit further out,

being able to train an AI to the way that you talk and the way that you communicate as an individual could be a possibility as well. If you imagine this in a military setting: that's my job and the things that I do, even though there are a lot of different people that might be putting together helicopter landing zone reports.

Their processes may be slightly different, and the way that they go about it may be slightly different. And to have an AI partner that understands how you do that... So, like, if it says, hey, you always start by looking at optical imagery and then you get to synthetic aperture radar; since I'm your AI helper, I'm going to work on the synthetic aperture radar part for you,

and then when you're done with that, I'll have some things ready for you to assess.

Dave: [00:47:30] That's really cool. So it basically augments the human habits that it learns, and does it in a natural way. That would be amazing. I mean, with what it seems like Jack was saying before about the basic logic that's missing from AI today, that just seems, it seems really far out.

But by the same token, that would be insanely useful, and you can see how it would work with you in that fashion. And you wouldn't have to trust it with everything. It's compelling.

Jamie: [00:48:02] Right. And it could learn the kinds of things that you are willing to trust it with, the things that you'd rather do on your own, and then even some things that are hard to

put into words. So humans aren't always good at explaining why they do the things they do, particularly when they're experts. We have a really interesting DARPA project right now, looking at the pairing of automated cyber reasoning systems and expert human hackers. We found through some qualitative assessment that asking human hackers, really good ones, why they do what they do relies on a lot of this tacit knowledge.

It's just the experience, you know: why did you think to look over there and then pivot over there? Well, I had seen it before, or it seemed right, or something about the way this was set up led me to that conclusion.

Dave: [00:48:53] Oh, thank God. I thought we weren't going to be able to get in our Daniel Kahneman reference in this one.

I mean, he talks about this same thing when he's describing System 1 and kind of gut instinct, and, you know, how it gets built up in firefighters and so forth. And I think, I think that's fundamentally, that's fundamentally true no matter what the profession is: you accumulate all of this instinctual knowledge based upon the deep level of experience that is never processed by your neocortex.

It really isn't, you know. You, you know it deeply.

Jamie: [00:49:30] And there aren't, there aren't that many expert human hackers. There aren't enough to have them working in every government agency and in every company doing your pen testing, et cetera. So if we can develop automated cyber reasoning systems that can learn from expert human hackers and be able to provide some of those insights, then you start to get some interesting heterogeneous models of people and AI working together to secure systems.

Jack: [00:50:00] There's an interesting thought, talking about that System 1 stuff in, in human brains. It comes back to: why did the AI say I should do this? Or how did the AI come up with that answer? How did RSnake think to poke at that in Firefox? Right. You know, it's like, I don't know, I just, I figured that was the way they'd go.

It's like, well, that's not a good answer. But I should have worn my new t-shirt; my lady friend got me a t-shirt after we watched some of, uh, Robert's talks, which just says on the front, my amygdala could beat up your prefrontal cortex.

brain nerd humor, but yeah,

Dave: [00:50:38] Reptile brain for the win, baby. Yeah.

Jack: [00:50:41] Military stuff. And because I'm, I live near FLETC and have friends who came out of that military world and also are training federal law enforcement, I get into these conversations at coffee shops that you don't expect an old hippie and a clearly retired

special ops person to get into at the coffee shop. It's like, so the brain does this, you know, what we call muscle memory, but you have to know what you know. And that's one of the things with the military; you're back to the, where could we help them? It's like, you know, no matter what situation you're in, if you, uh, get pushed down into fight-flight-freeze mode, you can think about one or two things at a time.

And you'd better not have to think about... that's why the, you know, the weapons training is what it is: you'd better not have to think about how an AR fires, or whatever weapon you've been issued. But if you can make it so I don't have to think friend or foe... I mean, we go back to World War II. That was a huge project, identification friend or foe for radar, IFF, that particularly the U.S. worked on.

Let's not make the person that's getting bombed decide whether or not to shoot at that airplane. Right. Yeah, because this is not a new problem, even in the military space.

Jamie: [00:51:55] Yeah. And this is why people have to rehearse things over and over and over and over and over again. One of my center directors, who directs our center for narrative and disinformation, is also a captain in the Naval Reserve.

He used to fly helicopters for the Navy, and they would do training where, you know, you're in a helicopter seat and they'd dump you into the pool, and they do it over and over and over again, so that if you go down in the ocean, it's the muscle memory that you can use to get out. Yeah. But, you know, it's, it's hard to explain how it is

my brain learned that way. And it's more acceptable, you know, for a human expert to say, yeah, I don't know, I just felt like it was over there; but if a machine tells you that, it's really irritating. So we don't treat the machines like our human partners. And I think that's fine. I think that's totally okay.

We should adapt the machines, in their context, to be in roles that are going to be helpful and not annoying.

Dave: [00:52:48] That feels like a decent place to wrap up that part of the conversation and slide into the speed round. So let's slide into the speed round here. What is the last media (book, poem, podcast, movie) that you digested that you found particularly tasty and impacted you and changed your life in some way?

Jamie: [00:53:11] Oh, that's a great question. So right now I've found this rapper named Dessa. She lives in Minnesota. She's part of a group called Doomtree, and she is not only a phenomenal rapper and musician, but she has this podcast called Deeply Human, and she dives in and interviews all of these experts on these different facets of what it's like to be human.

There's one episode on standing in line: why are humans good at standing in line? It seems really weird, but I love everything she does. Deeply Human is great. I think I listened to, like, four hours of those podcasts at one point, just all in a row, because it was so fascinating. There is another book that I read recently.

That was just a very strange read. It's nonfiction. It's called A Kim Jong-Il Production. It's not a new book, but it's about when the North Korean regime kidnapped a South Korean movie star and producer and forced them to make movies in North Korea for several years, actually, until they were able to escape.

It's one of those things where you think truth truly is stranger than fiction.

Dave: [00:54:22] I've heard great things about that. I haven't checked it out yet because generally I'm watching the things that a nine-year-old would watch, but yeah, well, so

Jamie: [00:54:31] then I hope you've seen, I hope you've seen, um, The Mitchells vs. the Machines then.

Dave: [00:54:37] Yeah, I got to say, I almost made a reference to it earlier. And I, after watching that, I kicked the Roomba a little less hard than what I did before. I'm not going to lie. Yeah, cause

Jamie: [00:54:48] you see what it can do. Yep. Yes, no. I love that movie. It was so clever.

Dave: [00:54:52] So cool. Really well done. Really well done. And the way it handles LGBTQ issues is pretty awesome.

So yeah. There's so many things to like about that movie. So many.

Jamie: [00:55:05] Yeah. It's not just a kid's movie. It's hilarious.

Dave: [00:55:08] Now, who's on speed dial for you? When you dial someone for help and/or another opinion on something, who is it typically?

Jamie: [00:55:16] There are two people that come to mind. Um, one is my close friend Nadia, who is actually also my boss.

We built the Global Security Initiative together here at ASU. So we've taken it from a small three-person organization six years ago to, I think, 36 employees now. We've been through a lot of different things, and you know, she's great at helping me think through issues. You know, everybody gets stuck in their brain sometimes.

So she's been great. I also think of my cousin, the closest thing I have to a sister. My cousin is only, you know, maybe nine months younger than I am, and she has such a great perspective. She comes from a social science background. She's very empathetic and really helps me to give people the benefit of the doubt when I need to give the benefit of the doubt.

Dave: [00:56:08] Very cool. Very cool. That's important. For that, Jack has Suffering Bastard and his puffer fish, and I have Banana Joe. It's

Jamie: [00:56:17] good to have your people. Hey, Suffering Bastard, how you doing, buddy? He's suffering. I can tell by the look on his face.

Dave: [00:56:24] Poor guy. What makes you hopeful? What's gotten better in the past 12 months, as strange as that seems, as that feels to say, right after coming out of 2020?

And if we constrain it, I'd love you to keep it to cybersecurity and things related: what gives you hope?

Jamie: [00:56:44] I think all of the craziness of 2020 has given us the opportunity to think about what we want next. Some people will talk about when we go back to normal; we don't have to go back to the way that we were doing everything.

We have this opportunity to think of what differences we want in the way that we organize our workplaces and the way that we organize our teams, and even our lives. It's been tough, especially for the parents. You know, my son is 11; it has been a lot harder for people with much smaller kids. But I think as hard as it's been, it's given us, um, silver linings. I've been able to spend some time with my kid that I probably wouldn't have beforehand, running back and forth and doing the normal work kind of stuff.

But you asked specifically about the world of cybersecurity and security generally. I am seeing more, more of an emphasis on human connections and interactions. And while the technology is absolutely important, the people that use the technology are important as well. And there's more of a research focus there; there's more attention being paid, um, across the board.

And people are just more cognizant of some of the issues and the things that they can do to help. So those are the things that make me hopeful.

Dave: [00:58:02] Awesome. Great answer. All

Jack: [00:58:05] right. Thank you, Jamie. Thanks for putting up with us twice.

Jamie: [00:58:09] I enjoyed it. Thank you so much.