
Threat modeling life: Prepping for the rest of us with Michal Zalewski (lcamtuf)


About this episode

Hidden bunkers, stacks of canned food and piles of artillery. Disaster preparedness has become an Internet meme, and these are some of the “prepper” community’s showcase images. But most of us who have lived through the recent pandemic, the Capitol insurrection on January 6th and more no longer take the threat of a major disaster lightly. For those of us not willing or able to dig out a backyard bunker, is there a rational middle ground where we can feel well-prepared for whatever comes next?

Software security legend Michal Zalewski (lcamtuf) answers this question and many others in his third book, Practical Doomsday: A User's Guide to the End of the World. Using familiar threat modeling principles, Michal explores everything from evacuation gear and bulletproof vests to the genuine probabilities of civil war and a zombie apocalypse. In what can only be described as an unbelievable coincidence, Jack and Dave’s hour-long interview with Michal was recorded the same day Silicon Valley Bank collapsed and was taken into government receivership.

In spite of the understandably dire subject matter, Michal’s equal sense of optimism and pragmatism steers us towards the middle path of rational risks and what a “normal” person should consider doing to be ready. It’s not nearly as hard as you might think, and the peace of mind gained is well worth taking a hard look at the worst case scenario.

This interview separates nearly cleanly into two parts: around the 32-minute mark we turn to the opportunity and threat of artificial intelligence, starting with Michal’s approach to writing. The real threat of generative AI driving truly deceptive attacks takes center stage as we explore how the ability to easily generate compelling documents, images, video, and more may make it nearly impossible to distinguish between reality and a scam.

No conversation on AI and threats seems to be able to avoid mention of the singularity threat; however, Michal keeps true to form and zeroes in on the much more likely “paperclip problem” of mundane AI optimizing humans out of existence. This was one of our favorite episodes in ages; we hope you enjoy it and learn as much from it as we did. We also hope you got your money out of SVB, just like Dave did the week after this was recorded. Stay safe.

Meet our guest

Michal Zalewski

VP of Security & Privacy Engineering at Snap Inc.

Michal Zalewski has been actively involved in disaster preparedness for more than a decade, including the publication of a popular 2015 guide titled Disaster Planning for Regular Folks. By day, he is an accomplished security researcher who has been working in information security since the late 1990s, helping companies map out and manage risk in the digital domain. He is the author of two classic security books, The Tangled Web and Silence on the Wire (both No Starch Press). He spent 11 years at Google building its product security program before joining Snap Inc. as a VP of Security & Privacy Engineering. Zalewski grew up in Poland under communist rule, lived through the collapse of the Soviet Bloc, and moved to the US in 2001.

Transcript

[00:00:00] Michal 'lcamtuf' Zalewski: The universe is a harsh mistress.

I grew up in Poland in the final days of the sort of, you know, Soviet and post communist rule.

We are now living in interesting times again. I'm still having fun. Just remember that the voice inside your head is a very unreliable narrator,

but it's equally possible that we are just sort of, you know, mesmerized by a really cool parlor trick.

As much as I despise Lenin, I think he had a good quote here. There are decades when nothing happens, and there are weeks when decades happen.

My childhood experiences told me to never say never.

It was not a reputable profession to go into. It was, you know, not something that your parents would approve of.

I think a lot of people confuse wishful thinking with principled risk management,

[00:01:21] Jack: Hidden bunkers, stacks of canned food, and piles of ammunition. Disaster preparedness has become an internet meme, and these are some of the prepper community's showcase images. But most of us who have lived through the recent pandemic, the Capitol insurrection on January 6th, and more, no longer take the threat of a major disaster lightly.

Software security legend Michal Zalewski, who goes by the handle lcamtuf, answers this question and many others in his third book, Practical Doomsday: A User's Guide to the End of the World. Using familiar threat modeling principles, Michal explores everything from evacuation gear and bulletproof vests to the actual probabilities of things like civil war and a zombie apocalypse. In what can only be described as an unbelievable coincidence, Dave's and my hour-long interview with Michal was recorded on the same day that Silicon Valley Bank collapsed and was taken into government receivership.

In spite of the understandably dire subject matter, Michal's equal sense of optimism and pragmatism steers us towards the middle path of rational risks and what a normal person should consider doing to be ready. It's not nearly as hard as you might think, and the peace of mind gained is well worth taking a hard look at the worst case scenarios.

This interview is cleanly separated into two parts. We focus on the opportunity and threat of artificial intelligence in the second half, but we start that section with a conversation about Michal's approach to writing. The real threat of generative AI driving truly deceptive attacks takes center stage as we explore how the ability to easily generate compelling documents, images, video, and more may make it nearly impossible to distinguish between reality and a scam.

No conversation on AI and threats seems to be able to avoid mention of the singularity threat. However, Michal keeps true to form and zeroes in on the much more likely paperclip problem of mundane AI optimizing humans right out of existence. This was one of our favorite episodes in ages.

We hope you enjoy it and learn as much from it as we did. We also hope you got your money out of Silicon Valley Bank, just like Dave did the week after this was recorded. Stay safe, y'all. Welcome back to Security Voices. We have a great episode, uh, really looking forward to this conversation. But first we have to check in with Dave and see: how's startup land going, buddy?

What's, how's things? Anything rain on your parade lately, Mr. Startup CEO?

[00:04:03] Dave: Yeah, you know, things always rain on our parade. This is yet another kind of unhappy surprise, but, uh, we're momentarily, or at this moment, rather, trying to get our money out of Silicon Valley Bank. And I was musing this morning as I got up and I read through our show notes, and I only put together the, the show notes for this, like, literally like a couple weeks ago.

And you're looking at it and you're thinking, wow, this is, this episode is well timed. We're gonna talk about Michal's book on Practical Doomsday, and I will paraphrase it as, like, a sane guide to prepping and survival. And you know, you're thinking, like, this is pretty well timed and we just came through the pandemic.

There's a war going on like this seems pretty rational. Little did I know that, you know, we'd be served up a fresh reason as to why we should prepare for disasters. I didn't think I'd become the case study of it, but here we are. Unbelievable timing. 

[00:05:00] Jack: Our guest is, uh, Michal Zalewski, who has written a bunch of books, who has been a foundational figure in a variety of security areas, web security.

In the early days, you were a pioneer, and I believe you remain so. But, uh, it's a little detour: when I was adding application security and web security people to the, uh, history project that I do, the Shoulders of InfoSec, I spent a lot of time chatting with RSnake about it, and we were talking about web stuff and he said, well, we start here, and pointed to lcamtuf, uh, Michal Zalewski.

So there's foundational work in security that you've, you've done, Michal. But, uh, you've done a whole bunch of other stuff and I'm really glad we've got you on. So welcome to the show. Thank you for the

[00:05:45] Michal 'lcamtuf' Zalewski: introduction, and thank you for having me. You know, I, I've been in the industry pretty much ever since I was getting started in the late nineties.

And, and back then we didn't really have a security industry to speak of, right. It was not a reputable profession to go into. It was, you know, not something that your parents would approve of. You know, it worked out for me. I worked for a number of cool companies, did a bunch of interesting research along the way, published a couple of books.

I'm still having fun. 

[00:06:12] Dave: We'll get to the security stuff, but let's start with the book. Like you said, you've written multiple books in the past, but your most recent book really, I mean, while it has, it has some overtones and it covers a little bit of cybersecurity, it's really about applying, uh, you know, I'll try and explain this in my own words, but as I was reading it, it's really kind of applying threat modeling, it's applying a security mentality somewhat to the world around us and preparing for disasters, a whole variety of disasters that are out there.

How would you describe the book and what drove you to write it? 

[00:06:47] Michal 'lcamtuf' Zalewski: I think you're hitting the nail on the, on the head. I think, you know, on one hand it's a weird subject for a computer guy to write about. You know, my previous books were about InfoSec, but at the same time it always felt to me like a very natural extension of what we do for a living.

We are basically paid to manage risk in the digital realm and, and in that space, it's normal to ask if you have backups, if you have an IR process in case of a breach. But if you start talking about the importance of, you know, having a fire extinguisher in your home, you're suddenly one of the weirdos, right?

Like, people give you funny looks, and that always struck me as odd. But I, I guess I'm probably in the minority here. But yeah, there was sort of, you know, no natural sort of, you know, tipping point, inflection point, nothing that really pushed me to write it. It was just, it always felt to me like a very natural thing to do.

I suspect part of it, maybe, also my personal experiences. I grew up in Poland in the final days of the sort of, you know, Soviet and post-communist rule, and I watched this system crumble, and you had this stable, industrialized country, not prosperous, but boring, suddenly on the brink of war. And then, you know, later on in tech, I was getting started right during the dot-com crash.

Then a decade later, you know, we all lived through the housing crisis, and I witnessed the impact on my own industry. And now, you know, we may be on the brink of another slump. And in the intervening years I watched a generation or two generations of techies enter the workforce absolutely convinced that they are invincible, right?

That they can always get ten other offers from other companies. That there's no point in having a rainy day fund, that, you know, this is a perfectly safe career choice. So I think the, the book is essentially about that, right? It's about thinking through real world contingencies. Yeah, it proved to be very timely with Covid, with, you know, what's, what happened in DC, with inflation, with bank collapses, with war in Europe.

We are now living in interesting times again, I think, after a decade of really nothing interesting happening in the news. And I just wanted to help other folks build a mental framework, not for preparing for the apocalypse, but for sleeping well at night.

[00:09:03] Dave: I'll say that I read a lot of it, not all of it.

It's dense, it's thick, and I'll say that it's extraordinarily well written. Like, you are a gifted writer. I read a lot, and I've read a number of books written by people who aren't professional authors, and there's a few phrases I really, really liked. I'll, I'll read some excerpts from it here. But stepping back.

When I was reading it, I was kinda like, oh God, where's all the crap that I need to do in here? Like, there's gonna be a bunch of stuff that I feel compelled to do as a result of this. And I got through it and I'm like, oh, I'm actually, I feel much better after having read this in a number of areas. And I think part of that's because we camp and we do like cold weather, camping and so on.

And it was like, oh, half of my camping gear applies here. Like, this is good. And the fire extinguisher thing, my buddy who runs a fire extinguisher checking service, like, comes in handy. I actually found myself feeling much better afterwards. And you know, I didn't feel like I had to go carve out a bunker somewhere or get a whole bunch of firearms.

So it was a really enjoyable, surprisingly really well written book for someone who isn't, like, a professional all-the-time author. So tip of the hat; like, it made me feel better. And it was actually, it was very enjoyable to read and even pretty funny in a few places.

[00:10:17] Jack: And Dave, as you said, it's not, it's rational preparedness.

I mean, prepping has gotten a connotation, but it's preparedness, and a lot of it is thinking about the mundane things that can, uh, happen where suddenly the outcome isn't mundane: losing your job, or having some sort of minor but still catastrophic thing that impacts your life, and how many of those there are.

And you know, it's probably not a mountain lion attack that's gonna ruin your day.

[00:10:49] Dave: Yeah. What maybe we can start out with, well, actually, a question for you, Michal. When did you start writing the book, given that so many notable things have happened recently? It doesn't sound like your catalyst was any one of these events.

It sounds like you began writing it before even Covid hit.

[00:11:03] Michal 'lcamtuf' Zalewski: Yes. Yes. So the history of the book is, is actually kind of interesting. I think it started with an online guide I put together probably in 2015 or '16, so, you know, well before Covid. And it was actually, you know, talking about the prospect of pandemics, basically, you know, drawing from historical record, right?

Like, we had a number of them in the 20th century. And on one hand I think, you know, we had some advances in, in medical sciences, but at the same time, you know, our population density and the frequency of travel has increased dramatically since then. It seemed like an interesting question to pose, and one that is sort of, you know, very rooted in, again, in precedent, in historical data, in reality.

And so I published that guide, and then, you know, years later I was talking to my publisher, No Starch, and they expressed interest in, in publishing a book about prepping from, specifically from that angle. And yeah, so I think, you know, we kicked it off right around the beginning of Covid, and it was wrapped up fairly quickly because I had a lot of sort of, you know, material to work off of.

So, yeah, I think the book came out, finally came out at the beginning of last year, and part of the delay was, you know, the supply chain disruption caused by Covid. It affected the printing industry as much as anybody else. It's been in the making for a while. 

[00:12:31] Dave: So let me read one of the things. Actually, it was on civil unrest, and it was one of the things that I was a bit more concerned about before I read this.

And I figured we'll double-click on this. So I'll, I'm gonna read out an excerpt here, and I quote: I believe that these maxims paint a uniquely dim view of our species, rooted in personal anxieties more than in solid historical precedent. From the Irish potato famine to the tales of survival in war-torn Europe,

we have ample evidence that even in the darkest of times, the majority of people would rather suffer and quietly starve than harm a random stranger who did them no wrong. And this feels really important, especially, you know, we saw how dark it can get with the January 6th Capitol insurrection in the US, and, you know, certainly even, I guess even more so, we see this over in, in Ukraine now with the war that's happening there.

We see this over in, in the Ukraine now with the war that's happening there. But having said that, how does the societal risk of upheaval, how do you think about this? How does it play into your personal kind of risk model? 

[00:13:32] Michal 'lcamtuf' Zalewski: You know, my childhood experiences tell me to never say never, right? And as much as I despise Lenin, I think he had a good quote here.

You know, there are decades when nothing happens and there are weeks when decades happen. That quote you mentioned, I think, you know, I very much reject the notion of the sort of, you know, Mad Max world where, where sort of, you know, neighbors hunt neighbors for sport. I think, you know, that prepper fantasy doesn't ring true for me at all.

I think what is a lot more likely is that, you know, when people are angry, when they are upset, when they are frustrated and they see no path forward for themselves, they lash out against other groups, right? And that may be their political opponents, that may be, you know, minorities, the rich, the clergy, whatever.

Right? Like, I, I think we have plenty of examples throughout the ages, you know, from antiquity all the way to modernity, of this happening on a fairly regular cadence of a couple of decades, right? It happened in the US a number of times, maybe not nearly as brutal and deadly as some of the other revolutions that happened in other places, but it, it certainly happened, you know, not long ago.

You know, I don't think it's useful to try to look for signs and try to convince yourself that something terrible is about to happen. I think it's unlikely, but I think it's also unknowable, and you're gonna be miserable if you try to play that game. On the time scale of a lifetime, I think there is a chance that something profound and violent could happen in our own backyard.

And it's not that you have to gear up for a fight; it's more that you probably should have a common-sense contingency plan, you know, cash, a place to go if you need to leave. And I think, you know, the way I approach prepping throughout the book is basically as an insurance policy. It's not that you need to be convinced that something is gonna happen.

It's not that you have to look for signs. It's not that you are, you need to have a proof. It's just that, you know, there are things you can't predict, whether that's a recession, whether that's civil unrest, whether that's your bank going under, right? You should probably think through your options and choices and alternatives before it happens, because that's when it's easy to think clearly and develop a plan.

It's the same approach we use in, in computer security for incident response. You want to have a playbook that you write when you don't have an incident going on, right? Because it's gonna be very difficult to make tough choices, to think clearly in the middle of a disaster.

[00:16:02] Dave: And I'd say the kind of tone through the book that I pulled out of it was not one of optimism or pessimism, but one of just kind of a fierce pragmatism.

And you point out just how defeatist it is to have that spirit of pessimism too. I think I'm gonna butcher this, but you had a fantastic quote here at one point. I think you said: in doing so, they've predicted 20 of the last two recessions. It was a really clever kind of turn of phrase where it's like, look, you can live like that, but that will suck.

And it was, I've been reading, I've been reading Malcolm Gladwell's Talking to Strangers, where he goes through the guy who basically outed Bernie Madoff, and, you know, you think he's gonna lionize this guy because he was the one who wasn't bought in. He was the one who, who called him out. But at the end of it, he was a miserable human being.

And Malcolm Gladwell comes out and says like, look, he doesn't default to truth. He's what might be referred to as a holy fool. That's really no way to live because it makes you miserable as a human being, even though there's a very important societal role for those people to play. And at times they can expose things like this.

That mindset is not the one that we wanna adopt. It's not fruitful. It's not even helpful for society if everyone behaved in that way. So it was an interesting kind of tone you struck that personally I connected with throughout the book, actually.

[00:17:25] Michal 'lcamtuf' Zalewski: I tried to, you know, strike a reasonably optimistic and joyful tone throughout the book.

I think I'm, I'm actually optimistic about, you know, about humanity, about the US, and I think that our kids are very likely to inherit a better world than, you know, we got to experience in our childhoods. But yeah, the, the sort of, you know, one of the closing comments I, I make is that the universe is a harsh mistress, right?

So optimism may be warranted, but sort of, you know, foolishness or recklessness is not, right? Like, you should be prepared for things to happen down the line, right? You know, it's your job to think ahead and sort of, you know, take care of your family if, if anything like that comes to pass.

[00:18:07] Dave: So, let's talk a little bit about, given particularly what's happening today with Silicon Valley Bank and what's happened with Sam Bankman-Fried and so forth and, and everything there. There's kind of this freedom from the banking system, I think, is one of, like, the mantras behind the crypto movement: you know, get the banks, get the, you know, the middlemen with sticky fingers out of the way and make yourself independent from it in case bad things happen.

I'll read another excerpt here on your statements on cryptocurrency and you say, this emergent ecosystem is operated by a handful of large corporations, some with thousands of employees, and valuations measured in the billions of dollars. In that regard, it's not particularly distinguishable from traditional banking, perhaps except for the lower degree of regulatory oversight and limited recourse for customers when things go awry.

And that feels very prescient given what we saw happen, you know, the collapse of FTX and everything there. But talk to me a little bit more about that. I mean, you wrote this a while ago. Do you still feel the same way, and do you still see the same thing?

[00:19:11] Michal 'lcamtuf' Zalewski: You know, it's easy to say that right now after FTX and a couple of other things.

You know, my starting point was never that this system is, is bound to collapse, right? I'm not able to make predictions like that, and I don't think you should be listening to any people who think they do. I generally don't have a rigid prescription for how people should manage their finances. I do spend a lot of time in the book, you know, this is probably the single largest chapter in the entire book, on modeling financial risk.

And again, less in the context of, you know, what if the stormtroopers come, what if the government is after you, and more in the sense of: what's your plan if the power is out and you can't pay with a card? Or what's your plan if there's another episode of high inflation, or, you know, if your bank collapses or whatever?

And I actually cite very specific, actionable data: I think about between two and eight bank collapses in the US in any given year, and then some, you know, interesting years, like during the housing crisis, the numbers sort of skyrocketed to several hundred. So it's not like a, you know, a fantastically unlikely scenario to consider.

It's just, it, it's something that happens pretty regularly. You just don't hear about it very much. And so I sort of go through some of the alternatives you have for when you have that safety net and you, you want to safeguard it against inflation. Other incidents that may happen down the line, but it's, it's mostly by staying within the financial system, not ditching it in favor of, you know, gold or crypto keys buried in, in your backyard.

And from that perspective, I think one of the fundamental problems that I see with cryptocurrency is just that reasoning about their risk profile is really hard. You have a lot of people with very strong convictions, but it's ultimately a very recent and unproven technology, and there's no solid framework for understanding how much, you know, one Bitcoin is actually supposed to be worth, or whether it's even still gonna be fashionable a decade from now.

And so, you know, from that perspective, I think there are better tools, again, including traditional financial products, that you can use to shield yourself from inflation without accepting that much unpredictable and poorly characterized risk. Even investing in the stock market, there's a very clear, sort of, you know, the nature of the trade you're making is very easy to understand.

The valuations of companies have their basis in reality and in their tangible assets that are, you know, audited by, by independent companies and, and so on. And so I think, you know, you can operate within that system without necessarily going for the fringes or for experimental technologies. I think a lot of people confuse wishful thinking with principled risk management, right?

I, I really want my investments to quadruple in value before the end of the year, but, you know, I'm not fooling myself that it's wise to take wild bets. So, you know, with crypto, my approach is that, you know, if it's a small portion of your portfolio and you want to have some fun, go ahead. But if you're putting all your eggs in the same basket, even if you win, you're gonna win for the wrong reasons.

Right? And the next investment decision you're gonna make based on the same principles is probably gonna be disastrous. So yeah, I think, you know, ultimately risk management is about giving up some of the upside in order to shield yourself from the potential downside. And I think with cryptocurrencies, you just don't get that kind of assurance right now. So I would advise caution, and I don't think my outlook really changed after FTX or, or, you know, the recent drop in Bitcoin. There's a lot of people who are sort of, you know, ready to dance on the grave of, of cryptocurrencies. And my assessment fundamentally doesn't change, right?

Like, yeah, it is poorly regulated. It's very recent technology with, you know, unproven assurances, unknowable risk profile. What you're seeing is a consequence of that, but it's not a fundamentally new data point. It's just a fairly natural consequence of that. And that's not to say that the, you know, technology is bound to fail.

I can imagine a world where in 10 years we're all paying each other with, you know, Dogecoin or whatever. I'm just not ready to proclaim that with certainty, right? Just as I'm not ready to proclaim the demise of cryptocurrencies.

[00:23:40] Dave: Yeah. And I mean, we see FTX, and at the same time we see stablecoins becoming the national currency in a couple countries.

I mean, it's a complex, it's a confusing space. To write it off is, um, is to dismiss a whole lot of things all in one category that's incredibly diverse, and also to dismiss the intellect and the judgment of a lot of very, very smart people. So a, a couple final questions on the book. One is, clearly you're not of the prepper community, which I think is the appeal of the book.

But having said that, have there been any reactions from the prepper community at all? Do you have any ties to that group and that

[00:24:18] Michal 'lcamtuf' Zalewski: community? I hate to stereotype, but I think people who build their identity primarily around being a prepper are not always fun to be around. And it's not about politics or anything like that, it's just that prepping is kind of like buying insurance.

Right. Or at least that's how I think about it. If you have any friends who are really passionate about car insurance and don't wanna talk about anything else, and it really defines their identity as a human being, you know, they are probably a bit weird. Right. I try not to spend too much time in circles like that.

I know some people who are, you know, very reasonable that I'm actually good friends with, who are in that space and would describe themselves as, as preppers, you know, first and foremost. But most of the people who are preppers in my social circle are, are not the ones who are gonna introduce themselves as such.

Right. They are software engineers who just have some supplies and plans just in case. And, you know, maybe eventually they are gonna confess that they, they are sort of, you know, thinking about this problem space the same way, or maybe they read the book. But it's not sort of, you know, it's not a natural topic of conversation for them.

So, yeah, I'm, I'm, I'm trying to maintain a sort of, you know, healthy distance from the hardcore prepper community. I've not heard much feedback one way or the other, but then, you know, I'm sort of not seeking approval from that crowd.

[00:25:39] Dave: Alright. So let's say someone's not gonna buy the book, and they're only, they're not super interested in prepping or so forth. Is there, like, a top five things that you think would deliver a bunch of value to the average person?

Maybe some top tips from you, just to wrap up on the book, of things someone should consider?

[00:26:00] Michal 'lcamtuf' Zalewski: I think there's actually just one really important takeaway for all people working in tech. I try to bring it up when people ask me for career advice, and I usually get an eye roll from them, but it's this: remember that it's a volatile industry, and don't live paycheck to paycheck.

You know, it's very easy to do in tech, because there's immense peer pressure to drive a Tesla, have a nice apartment, have the latest hardware. But the industry goes through pretty violent cycles of growth and contraction at a fairly regular cadence. That goes for many other industries as well.

But, you know, tech is, I think, particularly interesting from that perspective. And so, yeah, if there's just one thing you're gonna do to prepare yourself for contingencies, it's to have some modest financial safety net, so that you'll never have to discover how low the unemployment benefits in your state may be.

[00:27:02] Dave: So savings, and maybe, just a hunch, maybe put 'em in more than one bank?

[00:27:06] Michal 'lcamtuf' Zalewski: I think that's a good strategy. It increases the overall likelihood that some of your funds are gonna be affected, because when you have funds in two or five banks, there's a higher probability that one of them is gonna go under. But it greatly reduces the likelihood that all of them are gonna suffer problems at the same time.

And that you're gonna be cut off from all of your funds when you need them the most. And again, bank collapses in the US right now are fairly uneventful, because the government has a flexible money supply and can just conjure money to meet the obligations, with some potential consequences down the line.

But there's no immediate impact on your savings. It's just that bank collapses are disruptive, right? They mean potentially losing access to your funds for a good while, maybe at a time when something else is happening across the economy, maybe just as you lose your job or whatever.

So it's useful to plan ahead. But again, it's less about the prospect that you're gonna die or suffer some other terrible fate if you don't prepare. It's just that there are simple tricks that really help you sleep well at night and that make it so much easier to go through hardships like that.
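The diversification tradeoff Michal describes is easy to sanity-check with a few lines of probability arithmetic. A quick sketch, assuming the banks fail independently and using a purely illustrative 1% per-bank failure probability:

```python
# Back-of-the-envelope sketch of spreading savings across n banks.
# Assumption: each bank fails independently with probability p over
# some period; p = 0.01 below is illustrative, not a real estimate.

def p_any_fails(p: float, n: int) -> float:
    """Probability that at least one of n banks fails."""
    return 1 - (1 - p) ** n

def p_all_fail(p: float, n: int) -> float:
    """Probability that all n banks fail at the same time."""
    return p ** n

p = 0.01
for n in (1, 3, 5):
    print(n, round(p_any_fails(p, n), 6), p_all_fail(p, n))
```

More banks make it somewhat more likely that some account is disrupted, but far less likely that every account is frozen at once, which is exactly the trade being endorsed here.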

[00:28:21] Dave: So this is, I think, the third book that you've written, and you've got a big job and a family, and you blog pretty regularly on a number of topics. Your blog is super interesting: everything from woodworking to very technical hardware issues to perspectives on software security and so on.

It's fantastic. What does your writing process look like, I guess to start, for a book? Have you refined it over time, or do you have largely the same approach? And how do you make time for it in your schedule?

[00:28:53] Michal 'lcamtuf' Zalewski: So, time-wise, I think I have an understanding family, but I also have a harsh truth, you know, for myself and for other folks in the industry.

I think that for most of us, especially in white-collar professions, we have plenty of time on our hands. It's not like we're working double shifts or racking up overtime. Sometimes we have incidents or crunches, but most of the time we just procrastinate a lot, and I'm guilty of that.

I spend time on Reddit, I play video games, I argue with people on the internet. But I kind of force myself to make up for that by spending another 50% of my free time on the more gainful hobbies. And when I say gainful, I don't mean very serious endeavors, just things with a tangible outcome, whether that's long-form writing or woodworking or whatever.

I wanna be able to look back at my work and say, hey, that was actually neat, right? That was useful; there's some lasting impact that it has. And I think that approach has served me well. It's not about being a workaholic as much as, if you rack up a debt of procrastination, forcing yourself to pay it back down the line.

As for the writing process, I think it's a very personal thing. I found out early on that one of the most difficult things to do in writing or editing is deleting what you wrote. We tend to write way too much. There are tangents, flowery language, and irrelevant details that we include because they are interesting to us, but they are not interesting to the reader.

And you have this personal attachment to what you write, so it takes a lot of mental effort to just select all and delete. The way I try to overcome that is that my initial draft is intentionally sloppy, right? I use sentence fragments or just keywords to structure the document, figure out the flow, organize it all on paper.

And then I do an angry rewrite or two down the line before I get to the state where I can actually say, okay, this is close to final. That's when I'm actually invested in the document and would rather not delete a large chunk of text. I think that really helps.

Writing for your reader is important as well; try to put yourself in their shoes. A lot of us write for ourselves, right? We have a particular set of life circumstances and personal interests and a particular understanding of the topics we're writing about, and we really tailor the writing to that.

And that's a mistake, because your average reader probably has different interests, different priorities, and knows different things. So you really need to put yourself in their shoes; that realization really helped me. Probably the final piece of advice I have for writers is just to remember that the voice inside your head is a very unreliable narrator.

You're gonna write an email or a document or a book, and when you read it, what you're gonna hear in your head is what you wanted to write, not what's actually on the screen or on the paper. Even just taking a break, going for a walk for half an hour, and then rereading the text will make you realize how clumsy some of the phrasing is, how many typos you accidentally introduced, or whatever.

So I try to do that: take a break before hitting send. Almost nothing is that urgent; you don't have to press send within 30 seconds of writing that email. And I think that saves the day more often than you may suspect.

Otherwise it's just grinding, right? Especially when you're writing a book or posting on a blog, you really need to get into a predictable daily routine, same as with dieting or any other thing of that sort. You're gonna have endless temptations to cheat, to skip a day or two or a week or a month, and then it's gonna be very difficult to get back into that cadence of doing actual work.

[00:33:27] Dave: Do you have a set time and place where you do your writing? 

[00:33:32] Michal 'lcamtuf' Zalewski: No, I wish I had that luxury. Again, when I'm writing a book, I try to do it daily. There are some situations where you just can't, when something urgent is going on or you're traveling or whatnot. But barring exceptional circumstances outside of my control, I just try to force myself to do it every day.

Right? And that may be an hour or two in the evening. I have young kids; I don't have a mansion with a separate office in another wing of the house. So it is challenging, and the environment is definitely not conducive to that kind of work. But I try to make do.

[00:34:11] Dave: Let's segue from here into generative AI.

And I'm curious: if you were to write the book today, would you incorporate ChatGPT? Would you incorporate AI-based writing at all?

[00:34:24] Michal 'lcamtuf' Zalewski: I wouldn't. For me, the purpose of writing is really sharing my knowledge with other people, and I don't have a financial incentive.

If you're writing technical books or trade books, you're gonna be making what, hundreds, maybe low thousands of dollars. You'd make more working any kind of menial job. So the idea is that you have something interesting to say, and you find ways to convey that, to really condense that information, to say something new and novel that your readers are gonna benefit from.

And if you're abdicating that job to AI, that's sort of like cheating in a video game, to me, to some extent. And also, if AI can do a perfectly good job writing the book you wanted to write, then maybe you didn't actually have anything interesting and novel to say. So no, I don't think I would be using tools like that.

I have some hopes for image-generation AI. I think that's an interesting technology that may enable people who are not necessarily skilled, or who can't afford to pay an artist, to produce certain types of illustrations for their work, especially the very utilitarian kind: complex diagrams or whatever.

I know it's disruptive to artists, but selfishly, as a person who has no talent for painting or drawing, I'm looking forward to a world where you can describe a technical illustration to a computer and get a competent result. But writing, I think, is a bit more dodgy to me, for a couple of reasons.

It's not just my perspective as an author; I think it's generally a lot more difficult to get AI to the point where it can really create remarkable works. But we are already past the point where you can use it to infinitely scale certain types of online engagement, and I think that is likely to destroy the internet, right?

So I have some interesting fears about ChatGPT and LLMs in that space. But overall, this stuff is exciting, right? It's absolutely a breakthrough. We all knew that it was coming, but we expected it to happen a lot more gradually, and the past two years were a quantum leap. It builds on top of a lot of interesting research that happened internally in places like Google.

But I think even Google was caught off guard by ChatGPT and is now scrambling to respond. We now have chatbots that, in casual conversations, are essentially indistinguishable from real people. For a long time, that was the ultimate benchmark for intelligence and cognition, and we just sort of solved it like it was nothing.

So I think it actually kind of ties back to that doomsday theme, right? A lot of people are jumping to conclusions about the AI singularity and what it means for humanity: have we created this kind of artificial general intelligence that is gonna surpass human beings? When you look at it closely, the way large language models work is actually kind of simple.

Almost disappointingly simple. They are statistical text predictors, and they operate iteratively, kind of one word at a time. The results are nothing short of magical given the simplicity of the technique, but intuitively it also feels a bit too simple to be a recipe for general artificial intelligence.
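The "one word at a time" loop Michal describes can be sketched with a toy bigram model. Real LLMs condition on far longer contexts with neural networks, but the generation loop has the same shape; the corpus below is obviously a made-up example:

```python
import random
from collections import defaultdict

# Toy "statistical text predictor": count word-to-word transitions,
# then repeatedly sample a likely next word, one word at a time.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str, rng: random.Random) -> str:
    """Sample the next word in proportion to observed transition counts."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights)[0]

def generate(start: str, length: int, seed: int = 0) -> list:
    """Autoregressive generation: each word depends only on the last one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        out.append(next_word(out[-1], rng))
    return out

print(" ".join(generate("the", 6)))
```

The point of the sketch is the loop structure: the model never plans a whole sentence, it only ever predicts the next token, which is the "disappointingly simple" mechanism being described.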

We don't quite understand why LLMs are good at what they do, so it's possible that there's some emergent behavior behind the scenes that accidentally makes them kind of sentient, and we're gonna be able to isolate that and tap into it. But it's equally possible that we are just mesmerized by a really cool parlor trick, the way we were mesmerized by the first computer chatbots, like ELIZA in the sixties.

So I honestly don't know. Again, most of my worries about ChatGPT are a lot more tactical. We have a technology that you can use to absolutely drown out authentic content on the internet with influence campaigns, commercial or political, that are completely indistinguishable from organic activity.

That stuff has been happening for a long time, but there used to be constraints. You could crank out a lot of low-quality content cheaply, but on closer inspection it just wouldn't hold up; you can tell spam from normal email, right? Or you could crank out really good fakes, but then you needed to hire experts and pay a lot of money for that.

So there was a delicate balance in the past. But now you have a technology that makes it practically free to create millions of online personalities with seemingly real online lives, engaging in conversations, giving credible responses to inquiries.

They can be conjured with the sole purpose of ultimately convincing you to buy a different toothbrush or to vote for a different politician. And we also have this fantastic, infinitely scalable tool for spearphishing, right? It can be tailored to use the correct industry jargon, to produce content that's relevant to your life situation.

And it can do that at an unprecedented scale; you can target millions of people at the same time. So I'm really worried about that, and I don't even wanna think about the AI singularity angle of this. I think fighting that trend is gonna be a major focus for the InfoSec industry in the coming years, especially for social media companies, which are gonna face an existential threat because of that new breed of inauthentic content on their platforms.

[00:40:28] Dave: Actually, that's where I wanted to go next. You said you had some specific concerns around generative AI, the current generation of AI that you're seeing, and how it impacts security. And it sounds like a lot of it comes down to its propensity to be used in deception, and you explained how, yes, we've had this before, but it was prohibitively expensive to do something very convincing.

Now it's cheap, and that economic difference, the difference in the math of it, is substantive. It changes the risk profile dramatically. Is there anything else that you would point out as a risk you're considering?

[00:41:05] Michal 'lcamtuf' Zalewski: Well, I think there is that AI singularity angle, which again, I'm not prepared to make any pronouncements about.

But obviously, if we reach that tipping point where we are able to create machines that are as intelligent as or more intelligent than people, that is a very, very interesting tipping point in the history of humanity that may very well spell our doom. There are a number of other scenarios that you can imagine and really no way to quantify the risk, but it sure is cool to idly speculate about how that could unfold.

And I think there are a lot of very interesting thought experiments you can do in that space. The classic approach, and I give that example in the book, is the Terminator franchise, where the computers rebel against us, but I actually find that a bit of a tired trope in fiction.

I think the more provocative approach is the parable of the paperclip maximizer, where we create an AI with the sole purpose of improving the efficiency of a paperclip production line, and we just give it the liberty to iteratively improve the processes surrounding the manufacturing pipeline, whether that's improving resource extraction or assembly techniques or whatever.

And eventually the AI keeps improving the processes and finding new sources of raw materials until it's done converting the entire planet, and all of its inhabitants, into paperclips. The idea there is that the AI doesn't have to have any specific feelings towards you.

It doesn't have to love you. It doesn't have to hate you. It is sufficient that you're made out of atoms that it has a different use for, if we mess it up. So I think that is probably something that should keep us up at night. But at the same time, there's not much we can do about this particular threat.

I don't think there's a way to contain it once we get to that point, and the best we can do is hope for better outcomes. But a lot of the more tactical concerns around AI are probably a bit overblown. I think there was a significant moral panic about deepfakes a while back, with the image synthesis models.

And I keep going back to the time when computer-generated special effects became ubiquitous in media. We pretty quickly realized that when we're seeing the Capitol or the White House getting blown up on the big screen, it's probably not happening in real life.

It's something you can do with a computer, and I think we're gonna adjust the same way to computer-generated deepfakes: we're just gonna alter our expectations around what's real and what isn't, and how to respond to that content. So I don't worry about that.

I do worry about phishing, and I do worry about authentic content on the internet, just because the internet is in a fragile state even today, thanks to the commercial incentives to just spam the heck out of it. For many Google searches, you just get pages of blog spam or review spam, and I can't see how this is gonna improve things.

There is another possible reality where search engines become a thing of the past, and you'll just ask your AI assistant for information. But I think there are two problems with that. First of all, the assistants would be bootstrapped with a copy of the internet as it exists today.

But if you remove the incentive for people to publish new content and for the open internet to continue growing, and a lot of incentives are gonna go away if you're no longer getting traffic because the assistant has all the answers, then I think over time we're actually gonna get dumber.

Right? There are many ways to solve that problem. I think the most depressing possibility is that the internet is just gonna splinter into a lot of walled gardens, which is already happening to some extent, and there's just gonna be a "no bots allowed" sign posted on the landing page.

I'm cautiously optimistic about the future. I just honestly don't know how close we are to artificial general intelligence, and I'm equally worried about the people who are telling us that it's not happening and the ones who are saying that it's a done deal.

[00:45:47] Dave: I mean, for me personally, the most compelling near-term concern that I've heard was stated in Homo Deus, the book by Yuval Noah Harari, the follow-on to Sapiens. And I read this in another place recently too, and every time I look at it I say, yeah, I'm not sure we're ready for that. It's the elimination of a bunch of white-collar jobs. I mean, the blue-collar jobs that require interaction with physical machinery and physical space introduce a level of complexity that's difficult for AI and so forth to deal with.

I mean, if you look at the Boston Dynamics robots and so forth, they're not compelling yet. They're intriguing, but you don't look at them and say, wow, that's gonna go out there and replace a bajillion jobs, especially when you couple it with the software side. That's hard, and it will happen, but it's going to take time.

But you look at the white-collar jobs that are out there and how you could apply a specialized large language model to eliminate a number of jobs for attorneys, for doctors, at least with the analysis. And you kind of go down the list of these things where you're requiring people to process a large amount of information and report back on the important pieces of it.

That could be a huge disruption to the economy if all of a sudden there's an entire class of workforce we just don't need anymore. What are the societal implications of that and the disruption it could cause? That seems to be more of a near-term concern.

[00:47:18] Michal 'lcamtuf' Zalewski: I think that's right. And on some level, you know, I am concerned about the outcomes.

I think, when you look at the history of the Industrial Revolution, the consequences, the fallout of that, wasn't all that pretty. On some level it obviously helped humanity in the long haul, but in the short term it led to a number of bloody insurrections and revolutions, some of them condemning people to living in misery for decades, like the Bolshevik Revolution.

And so I think I am concerned about that; I think that's a good point. But then part of me also feels that we let this happen over the past three decades to a lot of blue-collar jobs, where we were just cheerfully saying, well, you know, you can just learn coding, right?

We basically destroyed manufacturing in the US, right? You have a lot of ghost towns that used to be booming industrial towns, until the plywood factory or the aluminum smelter or whatever moved out, because of NAFTA, because of globalization, because of China.

And that was perfectly okay to us, right? No one was really all that concerned about the mining jobs or the smelting jobs or the auto industry. But now that we have reasons to worry about our own prospects and our own jobs, all of a sudden we are a lot more concerned.

So I think there is a bit of hypocrisy there. But obviously, selfishly: InfoSec, maybe less so, although I'll never say never. There's definitely a lot of administrative jobs, customer support, or very basic utilitarian writing, summarizing documents and so on, where it's not that those jobs are gonna disappear, but a single person multitasking is gonna be able to handle the workload of ten.

So yeah, interesting times. We are yet to find out what the actual limitations of the technology are. We pretty quickly went from "oh wow, this is amazing and perfectly human-like" to realizing that it actually makes up a lot of stuff: it's designed to sound really authoritative and convincing, but you can't really rely on it for anything serious without some sort of human oversight.

And the big question is, can we overcome those limitations, or is this just how a statistical model built on a large dataset of internet content is gonna behave? So there are unknowns, and the actual impact of the technology is gonna depend on how GPT-4 and GPT-5 actually perform.

[00:50:16] Dave: Are you using it at all in your day job, or are your teams using it at all where you work currently?

[00:50:21] Michal 'lcamtuf' Zalewski: I suspect that people use it, because for simple tasks like mundane emails, why wouldn't you, if it helps you? There are plenty of people who struggle to communicate clearly, or don't have time to argue their case with HR or whatever, and they may fall back to that mechanism.

I'm not using it; I have no need. I played with it extensively because I'm curious, but I just have no compelling use case for the technology yet. I do quite a bit of coding, but all of it is part of my hobbies and I actually enjoy doing it; I have no special interest in a computer doing it for me.

And for the specific type of coding I'm doing, I don't think ChatGPT would be able to do a good job. I am comfortable with writing, so again, it's not something where I would really benefit from that kind of help. But I know that some publishers are excited about it, for example for editing.

Because while it may not be very good at coming up with brand-new content, something like "make that paragraph easier to understand" is a simple task that it can actually handle 90% of the time. So it's gonna eliminate some of the editing jobs.

[00:51:43] Dave: So, given your deep background in software security, have you thought about how generative AI, and what's happening with AI in general, impacts things? Not just generative AI, but the progression from kind of simplistic machine learning to full neural nets and so forth. How do you think that impacts software security, projecting out a bit?

[00:52:06] Michal 'lcamtuf' Zalewski: I honestly don't know. I think a significant limitation of the AI tools we have right now is that they generally require vast curated sets of data for training.

They are not at a point yet where, like with human learning, you can give them a couple of examples, iterate on that, and get your model to a point where it exhibits mastery of a new topic. In the security space, I don't think we have the necessary volumes of, again, annotated, curated information to build meaningful models, whether that's for attack or for defense.

There are a lot of people trying to sell AI technologies to companies or to customers in the security space, but most of it is very, very unconvincing. Again, it mostly boils down to the fact that you really don't have enough data to establish the baseline of legitimate traffic on your network or legitimate user activity.

And so it's difficult to build a model that's gonna perform better than a set of tailored rules that you come up with based on your understanding of how your company operates. I do expect that to improve with time, and at the point when you can actually train models based on a couple of examples, that's gonna change.

And I think that's gonna change things both on the offense, finding vulnerabilities in software (here's a software package, try to find a bug, write an exploit), and on the defense, where you're gonna be using AI-based algorithms to try to find anomalies on your networks or within your systems.

But again, right now I'm not seeing that revolution happening with what we have in the LLM space.

[00:53:52] Dave: It strikes me that, much like your comments on editing with writing, you could apply a lot of that to developing software, with an interactive editor that's doing much the same thing, but based upon software quality, vulnerability analysis, and so on.

I think it's a lot harder, to your point, using AI as a software vendor across a large number of environments at scale; every environment is so unique. Like what we're doing at Open Raven with data: there's very much a role for AI. We will use ML to define regex patterns at times, and appropriate keywords for them.

But if we tried to use it for full analysis on large amounts of unstructured data, it simply wouldn't work; it'd be too slow. The beauty of regex is its speed, and the fact that we can combine all patterns into a single super pattern, which means we can do a single pass on a file, and so on.

Or we can even do it on metadata, where we don't even have to open the file, and we're doing arbitrary file types. So you look at that and it's like: I can't train a single model that's gonna work on all these different companies' data, but I can use it to define really smart, well-formed patterns, and I can use it for post-processing.
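The "single super pattern" idea Dave describes can be sketched in a few lines: many detection patterns are combined into one alternation with named groups, so a file is scanned in one pass instead of once per pattern. The pattern names and regexes here are illustrative stand-ins, not Open Raven's actual rules:

```python
import re

# Illustrative detection patterns; real data classifiers would use
# many more, with validation beyond the regex match itself.
patterns = {
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
    "email": r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b",
}

# Combine all patterns into one alternation with named groups.
super_pattern = re.compile(
    "|".join(f"(?P<{name}>{rx})" for name, rx in patterns.items())
)

def scan(text: str) -> list:
    """Single pass over text; returns (pattern_name, matched_text) pairs."""
    return [(m.lastgroup, m.group()) for m in super_pattern.finditer(text)]

print(scan("reach me at jane@example.com, SSN 123-45-6789"))
```

The single pass is the speed win: the regex engine walks the input once, and `Match.lastgroup` tells you which of the combined patterns actually fired.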

We use things like Markov filters for first names and last names, and so on. So, to your point, there's a lot of hyperbole, especially in security; I'll say it's positioned like some form of black magic. It has a role, but that role, I think, is dramatically overstated, given the harsh reality that we operate in so many different environments, and security can't constrain the file types we find out there and the things we see in the wild.
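The "Markov filters for first names and last names" trick can be illustrated with a small character-level model: strings whose letter transitions resemble the training names score higher than random tokens, which makes a cheap post-processing filter. The training list and scoring details here are invented for the sketch:

```python
import math
from collections import defaultdict

# Tiny illustrative training set; a real filter would use a large
# name corpus and a more careful smoothing scheme.
names = ["anna", "maria", "john", "johanna", "mario", "juan"]

# Count character-to-character transitions, with start/end markers.
counts = defaultdict(lambda: defaultdict(int))
for name in names:
    padded = "^" + name + "$"
    for a, b in zip(padded, padded[1:]):
        counts[a][b] += 1

def log_likelihood(s: str) -> float:
    """Average per-transition log-probability under the Markov model,
    with add-one smoothing over a small alphabet."""
    alphabet = set("abcdefghijklmnopqrstuvwxyz$")
    padded = "^" + s.lower() + "$"
    total = 0.0
    for a, b in zip(padded, padded[1:]):
        row = counts[a]
        prob = (row[b] + 1) / (sum(row.values()) + len(alphabet))
        total += math.log(prob)
    return total / (len(padded) - 1)

# Name-like strings score higher than random character junk.
print(log_likelihood("marianna") > log_likelihood("xqzv"))
```

As a post-processing step, a threshold on this score helps separate regex hits that really look like names from accidental matches.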

We just simply can't. Alright, this feels like a good point to start wrapping up. We haven't done our speed round in a while, but I think it'll be especially interesting with you, Michal, given the breadth of your interests. What's the last thing you read or digested, whether it was a movie, a book, or a podcast, that had an impact on you, that changed a behavior you had or has otherwise stuck in your head?

[00:56:06] Michal 'lcamtuf' Zalewski: I think I've had a long string of really forgettable experiences recently, and it's been a long while since I was really inspired by a book or a movie. Sorry about that.

[00:56:20] Dave: Where do you go for information? Maybe, if we were to look at your information diet, where do you dine?

[00:56:27] Michal 'lcamtuf' Zalewski: That's an interesting question.

I think, for industry news, as much as I hate it, my most reliable source is Twitter. And I absolutely hate the fact that our entire industry migrated to the platform and that it replaced mailing lists and newsletters and other more structured sources of information. But in effect, if you wanna stay on top of incidents happening across the industry, zero-days, or other major developments, you kind of have to be on Twitter and you have to follow security folks.

For more personal interests, I actually try to consume less news. It's very easy to get stuck in this mode where you think you are well-read and well-informed and well-educated because you're on top of every single international headline, but it's just mental clutter at the end of the day, and a major time sink.

So I've actually been trying to scale back on 24-hour news and all that. I subscribe to a number of weekly or monthly newspapers, and I actually get them in paper form. And I try to aim for a spectrum of viewpoints: I subscribe to The Economist for a more neoliberal take and to The Atlantic for a more progressive take.

And I think I even have a subscription to the National Review, which is, like, the sort of, you know, traditional conservative take. I have Reason magazine, which is the sort of, you know, hardcore libertarian, legalize-all-drugs kind of approach. And I really like that diet, because I think, you know, it's very difficult to find a source of news that is really unbiased.

There are a lot of sources that are pretending to be unbiased, but really they have an agenda, they have a bias. You know, people write for a reason, right? You don't become a journalist for the money, right? You become a journalist because you think you have a story to tell, you want to educate people. And so I'm sort of, you know, explicitly opting in for sources that have known, well-established biases, and I can synthesize an understanding of the world based on that.

Again, I try to balance, sort of, you know, consumption of news with the creation of content, or learning about new technologies. And, you know, most of the headlines don't matter in the long haul. Sometimes there are really profound events that are gonna shape our lives for decades to come, but 99% of it is just filler or, you know, headlines designed to make us angry and, you know, make us click.

[00:59:10] Jack: So your comments on Twitter are interesting, cuz they do weave back into disaster planning. You know, Security BSides was built on Twitter and we can't leave, no matter what anyone thinks of it, cuz there's 15 years of history there. But what happens when it does implode? I mean, it seems to be, if you stop paying Amazon, they knock off your access to AWS, you know, you gotta pay some bills, and there's no simple migration.

It's like, ah, here we all are. But like you said, you have to be there. The fact that, uh, so many people in security were taught on Twitter is why the conversation that built BSides started. And where do we go from there? You know, Mastodon has got a privacy bent, which I appreciate.

It does mean that the searchability isn't there. You can't stumble across things. So it's interesting, but still, yeah, I mean, I've gotta be there if you do this, and even retired, it's like, ah, I know where all my friends are. They're on Twitter.

[01:00:11] Michal 'lcamtuf' Zalewski: I'm not even commenting on the sort of, you know, recent Musk acquisition and all that mess.

It's even before that, it was just a bit of a terrible place to go for security news, because, you know, you had to subject yourself to a lot of things that you actually had no interest in. Right? But, you know, it is what it is, and, uh, yeah, I think it continues to be an invaluable source of industry news.

[01:00:38] Dave: Let's wrap up with what specifically in the security industry makes you hopeful. What are the things that you see that have been kind of recent positive developments that make you more optimistic for the future?

[01:00:52] Michal 'lcamtuf' Zalewski: I'm actually not an infosec doomsayer. I think we've come a long way since the nineties, when everything was trivially vulnerable.

You know, you had remote root on anything you wanted, and nowadays it's actually pretty difficult to go after most targets on that sort of, you know, technology plane. You attack people instead, and, you know, that's not a software engineering problem, except to the extent that bad UX sometimes facilitates attacks.

But now, if you want a remote code execution bug, you know, you better have a really serious acquisition budget for that. Right? And we're talking governments, not cybercrime. On the flip side, I think, you know, the landscape is changing in interesting ways. Last year was absolutely remarkable. It was this, you know, perfect storm of geopolitics and really spectacular attacks happening across the industry.

And I think several prolific hacker groups really figured out, for the first time, how corporations really operate and where the weak spots are. You know, they realized that if you wanna go after Microsoft, Cloudflare, Okta, whatever, you know, you have to go after this outsourced customer support guy who's working on a personal machine in Pakistan or wherever.

You know, that was brilliant and really intense. That's the sort of, you know, knowledge that governments had for a long time, but not necessarily the sort of, you know, run-of-the-mill cybercrime. Yeah, last year was an absolutely just, you know, amazing, mind-blowing string of compromises.

I think it made a lot of people realize that you can't really depend on just being able to prevent security incidents at scale. You really depend on rapid detection and containment. And that's something that the industry wasn't really all that good at until very recently. It's also interesting to see that, you know, a lot of the basic types of 2FA did very little to stop the threat.

And, um, I think, again, there's gonna be a lot of positive change as a consequence of that. On the software security side, one very interesting trend that I'm seeing is that, I think, we are becoming a lot more like traditional software engineers.

We are no longer just critics paid to poke holes in other people's designs. We are getting more involved in fixing the underlying causes, fixing the processes that contribute to the introduction of security bugs. We have more and more security engineers hardening, you know, build pipelines, adding security automation to code repositories, thinking about dependency management, redesigning production access controls.

And I think that's really cool, and, you know, that is actually how you solve a lot of the problems, versus, you know, producing report after report pointing out that you still have access or you still have a buffer overflow or whatever. So yeah, I think, you know, there's definitely that sort of, you know, movement happening in one direction, with security engineers becoming software engineers.

There's also an interesting parallel trend in the opposite direction. I think there are a lot of software engineers who traditionally worked in security-adjacent fields, like managing authentication systems or, you know, spam and abuse or whatever, and they are more and more interested in securing that sort of, you know, security engineer job title, because it often commands higher pay or better career prospects.

You know, unless you're a software engineer working on machine learning, I think, you know, the specialization as such has lost some of its luster and is more commoditized now than it used to be. Security is still the sort of, you know, cool, hip thing that you get into. So yeah, it's like, you know, you have security engineers becoming software engineers, software engineers becoming security engineers, like this, you know, circle of life.

[01:04:55] Dave: Awesome. Well, this has been fantastic, Michal. Thank you. There's a million topics that we could have spoken to here. You have such a wide range of interests and expertise, so, you know, maybe at some point we'll have to do this again. But, uh, that was fantastic. Thanks for taking the time.

[01:05:10] Michal 'lcamtuf' Zalewski: Thank you again for the, you know, very thought-provoking questions.

We should do another one about woodworking, I think.

[01:05:17] Jack: Yeah, I was just gonna say, what are you doing to make, uh, wood chips and sawdust these days, to close out?

[01:05:22] Dave: All right. I tell you what, it's a deal. Let me get my money out of SVB and let me get Open Raven to a good place, and, uh, then we'll have to get back on, because we can't have this conversation now.

You guys both have amazing hobbies with blacksmithing and woodworking, and, uh, taking my kid to tennis lessons probably wouldn't stack up well.

[01:05:43] Jack: Thank you so much, Michal.

[01:05:43] Michal 'lcamtuf' Zalewski: All right. Thank you. Awesome.

[01:05:46] Dave: Thank you. 

[01:05:47] Jack: Thanks for joining us for this episode of Security Voices. If you have comments, questions, or feedback, please reach out to us at info@securityvoices.org, or reach out to Security Voices on Twitter, or you can always contact either Dave or me directly.

If you'd like to hear other episodes of Security Voices, see transcripts of the shows, or learn more about our guests, check out our website at securityvoices.org. We'll be back in a few weeks with another great conversation.

[01:06:23] Michal 'lcamtuf' Zalewski: I argue with people on the internet.