Fixing Security's Human Problem: Behavioral Engineering at Robinhood

About this episode

In cybersecurity, we have teams focused on managing vulnerabilities. We have SOC analysts who spend their days obsessing over threats. AppSec teams. Data privacy teams. In the typical, modern cybersecurity team, we have exactly zero people focused on helping humans defend themselves and the organization, despite a massive increase in scams and fraud squarely aimed at tricking people into making bad decisions. Are we really more at risk from a new foreign adversary or a CVSS 9 vulnerability than from an executive or someone in Finance being deceived by a scammer?

Enter Behavioral Engineering: a new-ish discipline introduced by forward-leaning cybersecurity teams that recognizes the pivotal role that humans and key behaviors play in our overall security posture. What do we mean by key behaviors? How we share sensitive information. What we do when we authenticate. How we react when we see something suspicious. And so on.

In this episode of Security Voices, Jack and Dave interview the Behavioral Engineering (BE) team at Robinhood, Masha Arbisman and Margaret Cunningham, as well as the CISO, industry veteran Caleb Sima. In this roughly 60-minute session, we establish a clear definition for BE, explain how it works in the real world, and contrast it with commonplace practices such as “name and shame” benchmarking of vulnerability remediation progress. We also clarify why security awareness training often sucks and how BE addresses historical security program deficiencies.

Before wrapping up with practical advice on how and why to get started with your own BE program, we learn why you should never say that humans are the weakest link. And why you probably should actually click on things. Lots of things. And just tell someone afterwards if it went funky.

Meet our guests

Robinhood

Caleb Sima: Chief Security Officer at Robinhood Markets
Margaret Cunningham: Senior Staff Behavioral Engineer at Robinhood
Masha Arbisman: Information Security Manager at Robinhood

Caleb Sima is the Chief Security Officer at Robinhood Markets. Prior to joining Robinhood, Caleb served as VP of Information Security at Databricks, a leading data analytics and machine learning company, where he built the security team from the ground up. Before Databricks, he was a Managing VP at Capital One, where he spearheaded many of their security initiatives. Caleb also founded SPI Dynamics and Bluebox Security, which were acquired by HP and Lookout, respectively. He is regarded as one of the pioneers of application security, holds multiple patents in the space, and is a co-author of Hacking Exposed: Web Applications. He serves as an advisor, investor, and board member for security companies.

Margaret Cunningham is a Senior Staff Behavioral Engineer at Robinhood. She specializes in integrating principles of cognitive science and human performance with technology R&D, risk detection, threat modeling, and analytics to design proactive interventions that positively influence security, trust, safety, and privacy outcomes. 

Masha Arbisman is an Information Security manager who prioritizes leading with empathy and compassion. She is thrilled to pave the way for Behavioral Engineering as a function and hopes to make it a staple in every security organization worldwide.

Transcript

Unknown Speaker  0:00  

Welcome back to Security Voices. And it indeed has been a while; Jack's been gallivanting across the US doing all sorts of interesting retirement things. What's the quick list, Jack?

Unknown Speaker  0:13  

The high points involve a 5,600-mile road trip in 13 days, which involved eating and drinking a lot in amazing places like Chicago and Santa Fe and Taos and Vegas. And before that, building an acoustic guitar in five days at the Campbell Folk School and signing up for woodcarving classes. And then, I don't know what else, I just can't keep up. The forge is almost ready to go; I set up some equipment today, so the shop is pretty much ready to start heating and beating. And those are just the high points.

Unknown Speaker  0:46  

Cool. And you'll notice in there, there was no Security Voices. We had an episode. Oh God, we thought we had an episode; we recorded a brilliant conversation that ended up being complete silence between Galena and Tova of Clarity and myself. She's promised us a redux on that one. But that is why Security Voices had a weird summer break. So welcome back. And we're delighted to have the Robinhood crew with us today to talk about behavioral engineering. So let's do a quick round of intros here; we're going to jump right in, and this is going to be a shorter episode than usual. In the normal kind of Hollywood Squares style that Zoom has got us accustomed to, I'm going to start with the squares on my side. Let's start with Masha. If you can give us your location today, where you're at currently on the planet, along with a little bit of what you do over at Robinhood, that would be great, as well as the last book you read or the one you're currently reading.

Unknown Speaker  1:44  

All hard questions. Funnily enough, I find myself in my childhood bedroom in the Bay Area on a trip visiting my parents. So, Bay Area, California. I am currently the behavioral engineering manager at Robinhood, just a fantastic group. And current books: I actually just finished a completely non-work book called Lightlark; if anybody is into weird young adult fantasy, it was fantastic. But my next upcoming book is actually all about Montessori parenting, which is a lot of where I get my connection and direction for things we do in behavioral engineering.

Unknown Speaker  2:20  

Cool. All right, we'll have to compare notes on fantasy and sci-fi for sure. I just started this whole foot.

Unknown Speaker  2:30  

So I have not heard of that one. Yeah, yep, yep.

Unknown Speaker  2:33  

Okay, over to Margaret.

Unknown Speaker  2:36  

Hey, Margaret Cunningham here, sitting in Austin, Texas, where it finally has cooled off a smidge. I work with Masha on the behavioral engineering team. My background is psychology; I'm an applied experimental psychologist. And other than that, I just finished The First Fifteen Lives of Harry August, which is, like, a time-travel sci-fi situation. And I'm also reading, maybe rereading, I should say, a human factors engineering book by Wickens, which is very nerdy. But whenever I want a refresher on core things that I find interesting, I go back to that one.

Unknown Speaker  3:14  

Outstanding. Caleb, the one and only Caleb Sima, over to you.

Unknown Speaker  3:18  

The last book I read was Ray Dalio's The Changing World Order, which is a very interesting book. And the current one I'm reading is very, very different from that. It's called Raising Lions, which is about raising really rebellious and tough kids, which, by the way, is extraordinarily fascinating. Really enjoying that book right now.

Unknown Speaker  3:37  

My parents wish that that had been out 60-something years ago.

Unknown Speaker  3:41  

Same here. Because what's happening is, like, the world's repeating, right? My kids are doing what I was doing to my parents.

Unknown Speaker  3:49  

Classic mother's curse, right? I hope you have kids just like you. I have to make a plug. I haven't finished it yet, but I'm reading Dennis Fisher's latest book, Be Gone. For those that know Dennis, he's our friend in the industry.

Unknown Speaker  4:02  

What's the subject matter, Jack?

Unknown Speaker  4:05  

It's a crime mystery. He's written one before that was, I thought, kind of dark, but he didn't think so. But that tells you what kind of stuff he must enjoy. It's about, you know, somebody that's trying to avoid being a cop and gets dragged back into ugly things. Because, I don't know if you guys have noticed this, but once you're in the mindset, it's really hard to turn it off. You see things, and whether or not you're supposed to, you just see them and dig in. So this is a very visceral take on that. And Dennis is just, you know, a delightful human being. So that's what I'm doing. Anyway, just a plug for Dennis there.

Unknown Speaker  4:41  

Awesome. All right. And Caleb didn't announce his title, but he's currently the CISO at Robinhood. And a one-time entrepreneur who created a famous product back in the day, called SPI Dynamics, and you called that app Spider. Was that it? Did I get it right?

Unknown Speaker  4:57  

I don't even remember.

Unknown Speaker  4:59  

That was a long time ago.

Unknown Speaker  5:02  

I would assume it was a different age, a different... I don't know, maybe it was like a different lifetime somewhere.

Unknown Speaker  5:09  

About four years ago, I couldn't fathom how you could forget your own company and your own products. But now, three and a half years into Open Raven, I get it completely. I totally understand those repressed memories. All right, let's start out with one of you who is brave enough to venture a definition of behavioral engineering. You can choose, but the Encyclopedia Britannica one is no fun. Let's start with maybe the definition you'd give your non-techie friend as you sit at a bar.

Unknown Speaker  5:40  

Yeah, I can give it a whirl. Behavioral engineering is applying behavioral science principles to the security space so that we can measurably improve human performance and reliability across, you know, security, privacy, and trust initiatives. And you might be like, okay, cool, I still don't know what that is. And I'd say focus on the word "science," and then focus on "improve human performance and reliability"; I don't think that you can do one without the other. So you kind of have to do both, where you are understanding what's going on, deciding what the best intervention might be to improve performance and reliability, and then taking a beat afterwards to measure what happened. This has really helped us promote a culture of ownership across, you know, all of our interdisciplinary teams. So it's pretty cool.

Unknown Speaker  6:31  

All right. Let's unpack that a little bit here. What's the why behind this? I'll play devil's advocate here for a moment. Let's say that I kind of get it. But why does it matter? What type of threats does it stop? What types of things does it do? What illness does it stand to cure inside a conventional security organization?

Unknown Speaker  6:51  

So I'm going to ask you a question, which is maybe not the way this usually goes. But tell me a place where humans aren't.

Unknown Speaker  7:02  

Fort Myers Beach

Unknown Speaker  7:05  

Temporarily, and soon. Some people were like, no, I'm staying, right? So that actually tells you a lot about the situation.

Unknown Speaker  7:15  

Even they are gone at this point. Yeah. No, honestly, it makes sense. But let's tackle it this way. And matter of fact, let's pick on Caleb here for a minute. So Caleb, you've been a CISO before; I think you were at Databricks before Robinhood. And presumably, I'm going to go out on a limb and say you didn't have a behavioral engineering team there. What sparked your interest in this? And did you start the behavioral engineering discipline here, or was it here before you came in? What differences do you notice?

Unknown Speaker  7:46  

It was here before I came. But here's where I think a lot of us as security practitioners miss the boat on things. I think we focus a lot on the technical; we focus a lot on what's going on in the machines and the processes and the policies and the compliance. But at the end of the day, I think we all know one thing, which is we're an influencing group. Our job is to try to influence the rest of the organization to act safely or to care. And at the end of the day, all we're trying to do is influence and change the behavior and culture of the organization to be safer. And so when we think about that, we spend so much time and so much money on technologies and the new gadgets and the new products. But have we really looked at investing in how we change behavior, change people, and understanding how to do that? You know, when you look at every other organization that spends money in marketing and advertising and go-to-market and sales, what are you trying to do? You're trying to influence behavior; you're trying to convince people to think the way that you want them to think. But when you look at security teams' operations, you don't see investments in that, except maybe when it comes to education, right, where your security team says, go through this PowerPoint, go through these tests and quizzes. But we're not really investing in: how do I really influence the culture and the behavior of the organization? And it's pretty amazing, when you actually invest in that, the differences and changes that you might see. So I think that's really where behavioral engineering becomes a practical reality.

Unknown Speaker  9:29  

All right, I think we may have the instigator of the whole thing at Robinhood in front of us here in Masha. Masha, were you the one who catalyzed this into motion?

Unknown Speaker  9:39  

I'm actually happy to say that I'm not; it happened before my time. I did, however, help instigate it as a professional at Yahoo, at my prior job. So it was really wonderful to see that the group had made an impact elsewhere and was being taken on in the industry, and there was somewhere else to go. Before this, I thought I was, you know, one of very few in a very small flock.

Unknown Speaker  10:01  

Let's pick a win from Yahoo, since it's safely in the rearview mirror here. Not Yahoo, well, maybe, but at least the job. Deep apologies, sorry, not sorry. Give us a success story from Yahoo, or like a key lesson from one of your projects, so we can kind of wrap our heads around what it is.

Unknown Speaker  10:19  

Yeah, yeah. And that kind of answers the why. The why, for me, is all around prioritizing people; it's finally treating people as the answer instead of just the problem. You know, I know I'm touching on one of Margaret's pet peeves, but you'll hear a lot that people are the weakest link in security, and the why for me is to flip that narrative. A win from Yahoo, and this is more of a win because it was a loss that turned into a win, it was a big learning moment: we had a really large population of reporters, and we thought that it would be really wonderful to create, basically, an email banner on everybody's email flagging external emails coming in. We wanted to kind of prove to people that they should pay more attention to external emails, which is fantastic. So, you know, at 5 p.m. on a Thursday, maybe not a Friday, we decided to flip a switch and add an external banner to absolutely everybody's incoming emails from external senders. What we didn't think about was that our population of reporters happens to get hundreds of thousands of emails a day, and 90% of them are external to the company. So they woke up Friday morning, and all hell broke loose. The kind of messages of pure hate that I got in my inbox, I had never received anywhere else in the world. And I felt so awful. And what I realized is that we put in a security control that we thought was going to help minimize risk for the company without actually getting to know the audiences that we were implementing it for. And that's what behavioral engineering solves. You know, we're actually thinking about the people; we're taking time to research our population or our customer base, and we're creating solutions that help them in their everyday lives and help us keep the community safe.

Unknown Speaker  12:05  

When I first came here, I didn't have a good understanding of what behavioral engineering was, and Masha and Margaret really helped teach me on this. One way that I also kind of think about it, and by the way, you two can jump in on this, is that it's like user experience, right? When you build a product, there's no way you'll ever go without having user experience: how does the user actually interact with the product? How do they use it? What are the most effective ways of delivering the functionality that you want? And similarly here, when we think about a security team, and the services and products we offer, and how we influence, you have to have a user experience. You have to know what your customer does, how they think, what's the best way to approach them. That, to me, is the equivalent when I think about it.

Unknown Speaker  12:55  

And that works for me; as a product person, that makes a lot of sense. One of the things that comes to mind is just how much of a reaction this is to things moving past remotely exploitable vulnerabilities and targeted attacks and so forth, and to seeing so many scams and so many phishing attacks. How much of this is just a maturity step of security in that direction, recognizing that an RCE might be really expensive against the right target, that it may not be possible to do other things to get into an environment, or that, in all likelihood, it's probably just a hell of a lot cheaper and easier to hack the human? How much of this is a reaction to just knowing that there's so much people-hacking going on now?

Unknown Speaker  13:42  

I think things have changed a lot since, like, the beginning of security and tech. What's happened in the past 15 years, and what's happened in the past five years, is that so many more people are connected, and so many organizations are dependent on technology. Yes, there's always going to be a focus on traditional security threats and vulnerabilities. But the landscape of people has expanded, like, enormously. People are the most attacked because, as you said, it's easy to attack them. And what's curious to me is that we focus so much on those little moments of human failure, and we never take into account the millions of instances of human resiliency that we don't keep track of. So I think it's a fascinating space. The more connected we are, and the more we break down the barriers between people and devices and people and technology, and I'd argue there's very little barrier left, the more important it is to understand human strengths as well as the human weaknesses that can be exploited. So I think the focus is shifting due to the nature of the progress, we'll call it. I'm not sure if I agree with the word progress, but that's what we'll call it.

Unknown Speaker  14:58  

It feels like you made an important point in there that I want to talk out a little bit: that we have so many devices, and they're with us so often now. And, you know, seemingly we can make decisions at any point in time; I can use an app on my phone to approve bills that maybe I shouldn't, at any point in time. So the number of moments we have where we can be exploited and make those decisions, and how close those devices are to our lives, seems like it's part of the driver here as well, independent of anything else. So much more of our day is exposed through our work and our life, particularly post-pandemic, too, when everything's kind of globbed together. It feels like a recognition that we are so much a part of the attack surface now, in a way that we weren't when we could kind of hide behind our firewalls, and we didn't have devices that were with us all the time, and so much information wasn't out there on us.

Unknown Speaker  15:53  

Maybe making some comments on your point about attackers just recently: is this a response to attackers using and hacking people? I actually don't think this is new at all, right? I mean, we've all known about the Kevin Mitnick books focusing on social engineering since, you know, way back in the day, and social engineering is the fastest way, and probably the best way, to get to what you really want. I actually think that the reason why we're seeing the rash of attacks on the social side now is maybe a combination of a couple of things. One, I actually do think security in general is starting to become better, right? I think protections are becoming better in enterprises. And that's raised the attackers' focus: it's not just about running a bunch of scripts; actually, we might have to get off our butts and do some work. And so they are growing guts, and they are just being brazen, just mass-calling people, mass-texting people, and using that social aspect of things. And I think that's maybe the new part: attackers are getting a lot more gutsy in their use of physical and social means to get what they want. And you're starting to find out it's breaking down everywhere, right? All of these companies have invested in all of this technology, and it's breaking down because all I have to do is send SMS phishing to, like, 500 people, and I'm guaranteed to get three. And all I need is three. So is behavioral engineering, or is that focus, really a response? I don't think so. I actually think what we're seeing is a lack of behavioral engineering being invested in today.

Unknown Speaker  17:38  

It's an interesting point. I found myself, it must have been a couple months ago, talking to someone who wasn't in the security field. It was at a company that wasn't necessarily a young company; I'd say it was a little further along than you'd consider a young company. But having said that, the guy asked me, what should I really be doing? Give me, like, the basics. And I found myself saying to him: first and foremost, have a secondary approver on any wire transfer. Because that's the moment where you're probably going to get really screwed; if you wire money out, you just can't get it back. Because to your point, Caleb, there are so many defenses built into things like Gmail. Not perfect, but having said that, there's security there we never had before. We were dealing with crazy spam and a lot of the worms and adware and spyware, and a lot of these things have kind of quietly gone away or become lesser issues. You know, we tend to forget about them because they're out of sight, out of mind. But we are very fortunate. There's a bunch of big problems that haven't gone away, but they kind of exist below where we spend most of our headspace.

Unknown Speaker  18:45  

I had a recent real estate transaction, which was funny because, after all these years of EFT everything, the lawyers were adamant that there were going to be checks in the mail. Because real estate EFT fraud is so rampant right now that they didn't want the information; they didn't want anybody to have that information. They said, we're going to mail checks; that's the way this works. It's just like, wow, we've come a long way, haven't we?

Unknown Speaker  19:13  

It strikes me that there are going to be some people that hear this, particularly given that Kevin Mitnick reference, and say, well, geez, what's the connection here to security training? Right? Like, Kevin Mitnick went on to start a company called KnowBe4 to train people, you know, so that people wouldn't get duped. That's sort of the obvious reductionist response to this. How is this not solved by security training? Give us the rebuttal on that one, and what you would say in response.

Unknown Speaker  19:42  

There are some really great things about security training. Training itself is a pretty complicated field: you have high-fidelity training, low-fidelity training, active training, passive training, all of these different components that can make training better or worse. Security training is very challenging because one of the things that most gets people to make mistakes is that you get them in this, like, hyped-up emotional state. Sometimes we call it a hot state versus a cold state. So last week, when I was doing my training, I totally understood the whole situation; I knew what the risk was. But when somebody calls me with sirens in the background, or crying, asking me for access to something, I'm freaking out, I'm super hyped up, I'm anxious, I'm worried; I might do whatever it takes to solve that situation, even if, you know, in the back of my little lizard brain, I know it's not right. So there are some ways that training can be used to help people practice in those hot states, giving them the environment to actually play, make the mistake, get their emotions tied up in it, because you can practice making better decisions when you're not in the best emotional state. I know, Masha, you've had some experience on that, if you want to dive in.

Unknown Speaker  21:03  

Yeah, absolutely. It's just like you said. Have you guys heard the term "practice makes perfect"? I can't stand it, because the truth of the matter is that perfect practice makes perfect. So training's really great in that it allows you to practice things. But if you're practicing the wrong behavior over and over again, you're just ingraining that wrong behavior. Most industry-standard training is very modular: read this, take this little quiz, say that you read it, you have tested, great, you're trained, you're ready to go. Our training takes kind of a different approach: we're actually trying to create a safe space, specifically safe, where people can practice the right behavior over and over again. I also have a gripe with, sorry, Kevin, but KnowBe4's training has a lot to do with not clicking on phishy links. But, like, what the hell does that mean? You're leaving such a subjective interpretation to the end user of what suspicious means or what phishy means. Whereas the behavior in phishing that you should be training is: report absolutely everything that gives you the wrong feeling. In general, the more that you report, the easier it is for us to answer.

Unknown Speaker  22:11  

Masha, I think about clicking all the time.

Unknown Speaker  22:15  

It's 80% of what we do on the internet. How are you going to stop people from doing the thing that they do for fun? It is fun.

Unknown Speaker  22:21  

It's really fun to click stuff. And, like, I'm sorry, I love to have fun. I'm also curious. So if it looks weird, I'm going to click it. I'm probably, like, offender number one sometimes. Sorry, Caleb. But I've actually done this, like, at a conference: hey, group of hundreds of security practitioners who obsess over security all day, can you promise me that you've never clicked the wrong thing? No one will raise their hands, because they're hopefully not that dumb. Telling people to stop clicking things is wild.

Unknown Speaker  22:54  

We don't want to teach them to be afraid of their email, or afraid of researching, or afraid of connecting with people.

Unknown Speaker  22:59  

That's the point of the web, right? It's the point of HTML, right? It's clicky things; it's what you're supposed to do. What I learned long ago, when I had a real job where I actually had people that I was responsible for, is that training was successful when, every now and then, somebody would say, "I'm not sure if this was okay." And that was a win. The idea that training is going to solve your problems is pie in the sky; it's nonsense. But the idea that every now and then somebody says, "I'm not sure this is right," it's like, okay, that may have put us hours, days, months ahead of the curve on tracking something down. And that's great. And also, the correct answer to "I think I clicked on something I shouldn't have" is not to yell at them. It's to say, hey, let me take a look at that, and really, thank you so much for telling us; let's dig into it. And by the way, if anything else weird happens because of this, anything at all, just let me know, and we will dig into it. But I really appreciate you letting me know, or letting us know. Not yelling at people, and not doing those horrid phishing exercises.

Unknown Speaker  24:15  

One of the things I'd like to point out is, I think behavioral engineering gets very, very quickly cordoned into this training box, when in reality there's a lot more to it than that. Let me give you an example. Take a standard thing we always talk about in security: how do you make developers write secure code, right? So you can educate, you can train. Or, if you're in a security team, how do you provide tools, utilities, or processes to enable developers to write safe code by default? And what does that look like? Do you put it in their IDE, so that as they're coding, it automatically fixes or suggests things? Do you have gates in your SDLC pipeline process that flag errors? Is it compile-time errors that flag these issues? And I think this is where the behavioral engineering part comes in, which is: what is the best way that we can behaviorally train the engineers to do it, through our process and our product? So, for example, maybe the right way in our culture is to change the pipeline so that as developers check in code, the errors come back. Or maybe it's better that I do it at compile time. Or maybe it's better that I point out positive things about what they do versus negative things about what they do. There are a lot of decisions and designs that you have to think about when you say the best way to get an effective outcome of making sure developers write safe code is by changing the way I do my pipeline, or compile time versus IDE, versus education, versus what's built into the platform and what is not. Those are the features and flows and functions that create that user experience for the developer, and you have to think about that; you've got to think, what's the right way? Let me give you another example. I did this two jobs ago: in our SDLC pipeline, we put in a static code analyzer, which basically just outputs, like, 50 billion false positives and 50 billion real issues. And so you tell engineers, hey, you have this dashboard, and it's red every time you have a vulnerability, right? Well, then what happens is you have the car alarm problem: okay, I'm never going to pay any attention to this. So then what we decided is, hey, we're only going to solve classes of issues; we'll just pick one class of issue. And if you don't have that class of issue, it's green. Even though you may have 50,000 other problems, you're going to see green on your dashboard. And what happens is people get used to seeing green, and then when they see red, they're like, oh, that's a problem, and they pay attention to it. That's like gamification, behavioral engineering, designing around the problem. And again, that's not education or training, but it's about that flow.
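
To make the pipeline-gating idea concrete, here is a minimal sketch of the kind of CI gate Caleb describes: it blocks the build on a single chosen class of finding and treats everything else as non-blocking. The findings-file format, field names, and the "sql-injection" focus class are hypothetical illustrations, not Robinhood's tooling or any particular scanner's actual output.

```python
# Hypothetical CI gate: fail the build only on one "focus class" of findings,
# so red stays meaningful instead of becoming a car alarm everyone ignores.
import json
import sys

FOCUS_CLASS = "sql-injection"  # the single class of issue we choose to gate on

def gate(findings_path: str) -> int:
    # Expected (illustrative) format: a JSON list of
    # {"class": "...", "file": "...", "line": 123} objects.
    with open(findings_path) as f:
        findings = json.load(f)

    blocking = [x for x in findings if x.get("class") == FOCUS_CLASS]
    if blocking:
        print(f"RED: {len(blocking)} {FOCUS_CLASS} finding(s) must be fixed:")
        for x in blocking:
            print(f"  {x.get('file')}:{x.get('line')}")
        return 1  # non-zero exit turns the pipeline red

    # Everything else is counted but does not block, so green is the norm.
    print(f"GREEN: no {FOCUS_CLASS} findings "
          f"({len(findings)} non-blocking findings not gated)")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "findings.json"))
```

Run as a step in the check-in pipeline (for example, `python gate.py findings.json`); because only the focus class can turn the dashboard red, developers learn that red always means something worth paying attention to.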

Unknown Speaker  26:58  

Yeah, and all of those are different types of environmental factors that you can control, for lack of better words. I mean, treat that like a variable you control: when we make those changes, we spend the time to identify the best types of outcome metrics that would show the value in these things. Like, if I go to Caleb and say, it seems better, he's going to say, that's not good enough, Margaret, for me to invest in this new process. He's going to say, how much better? And how are you measuring better? And I'll say, well, people are patching things 20% more quickly; people are responding to this type of suggestion within five minutes. And those are the types of things that are a second thought or third thought for, potentially, an engineering team. But they're first of mind for us, because of the way we think of human behavior and performance.
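
For flavor, here is one simple way the "how much better" question could be answered with a before/after comparison of an outcome metric like time-to-patch. The numbers are invented stand-ins to illustrate the calculation, not measurements from the episode.

```python
# Hypothetical before/after measurement of an intervention, in the spirit of
# "people are patching things 20% more quickly." All values are made up.
from statistics import median

# Days from vulnerability ticket opened to patched, sampled before and after
# a pipeline change (illustrative numbers only).
before_days = [14, 21, 9, 30, 18, 12, 25]
after_days = [10, 15, 8, 22, 14, 9, 19]

m_before, m_after = median(before_days), median(after_days)
improvement = (m_before - m_after) / m_before * 100  # percent faster

print(f"median time-to-patch: {m_before:.0f}d -> {m_after:.0f}d "
      f"({improvement:.0f}% faster)")
```

With these sample values the median drops from 18 days to 14, roughly a 22% improvement; the point is that the metric is defined before the change, so the team can report a number rather than "it seems better."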

Unknown Speaker  27:51  

Another thing that's first of mind, that I think a lot of people forget about: our first question is not how are they not doing it, but why. What's standing in their way of getting things done? A lot of the time, they might have the resourcing technically available, but we're not incentivizing the right thing. No engineer is incentivized on patching or on going back and fixing vulnerabilities; they're incentivized on creating the best new shiny thing, and that's how they get promoted. So sometimes it's all about cultural shifts and taking our company in a direction that actually incentivizes the right behaviors.

Unknown Speaker  28:23  

So you've used vulnerabilities as an example. Having had my grubby paws on that space quite a bit, one of the things I've seen pretty frequently, and used effectively, that may be awful, and I want you guys to tear it apart here for a minute, I'd like you to chew on it, is the name and shame. Here's our wall: here's one group and its progress, here's another group and its progress, and here's another. Margaret, is that a blasphemous sin against behavioral engineering? Is it recognizable? Is it advisable or not?

Unknown Speaker  28:59  

I mean, it's not great. I'd say that it's pretty old school. A lot of industries that are high risk, that also need to be high reliability, focus on creating something called a just culture, where most of the blame is shifted away from individuals and onto systemic factors. You may know about the Swiss cheese model; you may know about sociotechnical systems. It's all the nerdy way of saying it's not always the fault of the individual where you were able to observe the problem; that is one very small piece of the puzzle. So when you say, oh, well, there's Margaret having a blast clicking things, put her at the top of the board where we shame people, it's really not very effective. And then again, you know, you're not going to get those people coming forward saying, hey, last night at midnight, I just clicked this thing, sorry, my computer's being weird. The ability to get that type of warning, or get those types of questions, just immediately dissipates; people don't want to do it.

Transcribed by https://otter.ai