Evelyn and Kenny: Debiasing Your Software Design Decision-Making
Transcript
[00:00:01]
Or we'll call security, whatever you want. Either way.
[00:00:06]
Yes, already, right? I mean, we haven't even started yet.
[00:00:12]
Oh, we should have started already, sorry.
[00:00:16]
No, go away, you.
[00:00:22]
I see them everywhere.
[00:00:23]
A room full of bias... There's more. It should be B-A-A-S. That's my name.
[00:00:30]
Shall I just leave? Shall I just go?
[00:00:42]
Don't do that. Debiasing. Don't start.
[00:00:43]
Oh, what's happening?
[00:00:45]
Kenny.
[00:00:47]
Yeah, we're back.
[00:00:51]
Thanks.
[00:00:55]
He also has a Spice Girls t-shirt for another talk that we have. We had a talk that was about autonomy.
[00:01:00]
What's happening here?
[00:01:02]
Oh, sorry. Yes. He had a Spice Girls t-shirt. It was really cool.
[00:01:11]
Okay. We wanted to start, but now... Yeah, yeah, yeah. There we go. Now the screen was going on and off for some reason. All right.
[00:01:21]
Good morning everyone, bonjour. That's the only French you will hear from me probably.
[00:01:27]
My name is Kenny, this is Evelyn, we come from the Netherlands. Nobody speaks Dutch here, I heard French talk, so our talk will be...
[00:01:37]
Some people speak Dutch, I know two of them.
[00:01:40]
Are we audible in the back? Just checking. You can hear us correctly? Good.
[00:01:45]
We'll talk a little bit louder, or we'll try. So yeah, we're going to talk about de-biasing your software design decisions. And we travel a lot together. We are not together; we're friends.
[00:01:57]
For the record.
[00:01:58]
I'm already trying to de-bias you, because last time we were at a conference, I did all the bookings, and when we got to the hotel, we tried to check in. And they're like, oh yeah, I only need your passport. So, my passport. Okay, and I said, I booked two rooms. Oh, oh, oh, oh. But they're not on the same floor, is that a problem? And I was just processing it, like, oh no, he's a colleague. So that's one very quick story about a bias, which luckily got handled quickly.
[00:02:32]
No, of course not.
[00:02:34]
And another quick story is that I have a double name. My name is Baas-Schwegler, and Schwegler is my wife's name, my partner's name. And I went to the doctor, and the same thing happened. They called me in, saying: Mrs. Baas-Schwegler. And this was even more awkward because the guy had a photo of me. So I do know that I look like my mom.
[00:03:00]
Yeah, so these are the biases that we're going to talk about. And they are very innocent, in a way, right? But when we look at IT, we have a lot of these. Yeah.
[00:03:10]
So the example of Kenny is pretty harmless, right? There are no real long-term consequences or impact. His ego might have been a little bit crushed, but he can overcome that. But when we talk about software design decisions, the decisions that we make that are actually full of biases can have long-term impact. Decisions that we make now with a powerful gut feeling, which is probably a bias, can have long-term impact. And then we see things like the sunk cost fallacy: we've already invested so much in this decision, and now we're just sticking with it because we've already invested a lot.

There's also a lot of research done, and this research is something that we're going to cite a lot in this talk, because it's really in the context that we are in. It actually says that architectural decision-making is far from rational, and that architects tend to adopt a satisficing approach rather than looking for the optimal architecture. Which means we are tempted, or likely, to go with an approach or a decision that's good enough. And that good-enough decision is very often full of biases. The long-term effect, we are going to talk about that today.

Why is that important? Because we know from experience that a lot of the software design decisions we make, we make together. We use, for example, collaborative modeling exercises to make design decisions together. And everyone that comes to such a session brings their own biases. So whenever we are making design decisions together, we can multiply the effect of cognitive bias by the number of people involved in that decision or that session. That's a lot of bias. We're not saying that collaborative modeling sessions are not the right place to make these design decisions, because they are. We're just saying that these are also the places where these cognitive biases are very, very present, and that this is also the place where you should start considering de-biasing some of your software design decisions. And that's what we're going to talk about today. We are very passionate about that topic. We dedicated a whole chapter to it in the book that we wrote together with Gien, who's also here.
[00:05:08]
And there's a whole chapter on cognitive bias. And that says a lot, I think, about how passionate we are about it. So if you want to know more after this talk, this is the place we would refer you to, which is, of course, not biased at all.
[00:05:21]
But before we dive in, I want to make sure that we are on the same page, that we have the same framework, the same model, of what we mean by cognitive biases. So this is the definition that we are using, following Daniel Kahneman. We're also following his distinction between system one and system two. Anyone familiar with that? Okay. Very briefly: system one is more our automatic pilot system. We use that a lot; it's a very effective system. This is the reason that you don't have to think about every single decision that you make in a day. This is why, for example, when you are driving, you can all of a sudden think, oh, I'm already here, and you didn't think about all the decisions that you took while driving to get there. System one is full of biases. System one is trained; it's based on experience. We know what works in certain situations and what doesn't, so we know how to act and what's probably the right thing to do. This is where our biases live.
[00:06:14]
Yeah, whoever... got to the office and thought: did I lock the door?
[00:06:19]
Yeah. Well, I always check that 10 times, but that's me. That's for another talk.
[00:06:28]
And our system two is more our slow, deliberate, conscious system. It's not like a switch that you turn on and off, but your system two is what you use when you slow down, when you reflect a bit more. So when you want to de-bias your software design decisions, system two is a very useful system to have. But in most cases, you can fully trust your system one. It's full of biases, but you can fully trust it. That's important, because the biases that we have frame the way that we look at the world. So in the world that we live in, in this context, we have learned that what this image displays is called a pipe.
[00:07:01]
What this artwork does, and this is also why I really like this artwork, is challenge that. Because it says: this is not a pipe. So when you walk past this artwork and say, hey, that's a pipe... it's not a pipe. You're looking at an artwork of a pipe. It's a very subtle difference, but your system one goes: hey, it's a pipe. And the artwork challenges you to use your system two: to slow down, to reflect. Hey, what am I actually looking at, and how does this fit in my perspective? And that's what we also want to do today. We want to challenge the way that you look at some of the software design decisions that you're making, and how you can reflect, slow down, and start to de-bias them. So this is not going to be a very theoretical talk about how you should be aware of cognitive bias, because we hope you are aware that you should be aware of cognitive bias. We are going to dive into some examples that we see a lot when it comes to software design decisions, and we're going to share some of the habits that we taught ourselves to start de-biasing the decisions that we make.
[00:07:56]
So again, we... Hey, it's not working.
[00:07:58]
It will.
[00:07:59]
Okay. This is why he's wearing this shirt, by the way, for anyone who hasn't seen it. So again: we're not saying you cannot trust your gut, that you cannot trust your biases, because you can. In most cases, you can go with what you already know and what works in certain situations. We're just saying sometimes it's useful to think twice. When we were in Berlin a few weeks ago, we were having dinner with a couple of people, including Gregor Hohpe, and he was saying we should all have our own law. So we created our own laws, and I started to think: what would be my law? And I think, based on the years of experience that I have in companies, this definitely would be my law. The more harmless a bias feels, the more damage it has probably already done. So whenever you feel like, well, we are not very receptive to this sunk cost fallacy,
[00:08:50]
I would think again. So this is my law, and it will come back later. So that was the biases part; now the decisions part.
[00:08:58]
Yeah, so it's about de-biasing decisions. So yesterday, who was at Gien's talk, by any chance?
[00:09:04]
You've seen this then.
[00:09:05]
Yeah, so you've seen this. It was more elaborate, what she did. But anyway, what do we consider a decision? It comes from economic theory: we have alternatives, we have information, we have preferences, we apply some logic, and then we have a course of action. That's a decision. And what she says, and this is a very quick introduction, if you want to know more about it, rewatch Gien's talk later on, is that it's a choice between two or more alternatives that involves an irrevocable allocation of resources. So this is how we consider a decision, a model of how we consider decisions.

What I then always tie it to is decision making, and there I use Andrew Harmel-Law's model, which says: we need a decision. We do some option making, which is gathering all the elements you need for that decision. Then you do some decision taking. You might need to share that if not everyone was there. And then the decision is implemented. So these are the two concepts, bias and decision making, and these are the two models that we use for our talk today.

And one thing to know and to understand is that these biases are different from what they tell you in psychology. In psychology it's usually a one-time thing: I went to the doctor, it was quickly dealt with, and it was gone. The decision was made. In software design, however, we have a reinforcing feedback loop. And this is why it's so dangerous in software to not act on it. Because when we take a decision, it becomes an artifact. And if that decision is biased, it becomes an ADR or code or diagrams or whatever, and that feeds into our mental model. That feeds back into the next decision, the information we have, the preferences we have, et cetera. So it's a never-ending reinforcing loop if you don't strategically do something against it. It's not a one-time thing. It's not: hey, this decision is made, oh, we were biased,
[00:11:10]
let's fix it. No, it keeps compounding. So your architecture teaches the teams what good architecture looks like, whether it was right or not. And you see that especially with a lot of architects. There's not a lot of scientific basis to architecture, because most architects say: yeah, I did that on gut feeling. That's biased, yeah? So that's why we need to actively do something.

And as Evelyn already said, a lot of the industry already talks about these biases. The problem with biases and awareness is that awareness alone doesn't do a thing. Research looked at whether awareness alone is enough; it isn't. Kahneman already said this, reflecting on his own experience. Now, I'm very well aware of my biases. And that's also a bias, by the way; we'll talk about that later. But if I go on LinkedIn, I get hit by the bandwagon effect currently: AI. I know what's going on. I know there's no grounded theory yet on whether AI works or not. But still, when I scroll through LinkedIn, I get a little bit anxious: do I need to jump on doing more AI? Who has that feeling currently? That's the bandwagon effect. I still get hit by it. So awareness alone is not enough; I need some strategy to fight it. And luckily, there's research on that. I'll show that later, but we need to actively start nudging. And maybe you've seen it: who's seen this on the road?
[00:12:47]
Who actually starts driving slower? I do. Again, it's complex, so not everyone does, but the research says that if you show the one on the left, people will drive slower. So we need some strategy in our decision-making to make sure we are less biased. You cannot remove bias, unfortunately. So we need that. There's a great book about nudging, about choice architecture: how can we put nudges into our decision-making process, into our decision-making, to actually lower our biases? And that's why I'm very proud to introduce to you: our checklist. Right, Evelyn?
[00:13:27]
Yeah, well.
[00:13:29]
When I told her, we're going to do a checklist, she's like, no, don't clickbait me, please.
[00:13:34]
This was my reaction. It's also not working. Why aren't they playing?
[00:13:38]
I'm not sure.
[00:13:38]
This is a very, very nice GIF. It's the cat doing this.
[00:13:44]
So it's good that this is not recorded, otherwise this would be a GIF right now. So indeed, when we were talking about this, he said: we should do a checklist. And I'm like, I don't think we should do a checklist, because checklists trigger me. Usually it's just clickbait, and they don't really mean anything or add value. So we first just talked about this talk and the structure and what we want to tell people, and blah, blah, blah. In the end,
[00:14:08]
I got on board with the checklist because this checklist turns out to be different.
[00:14:14]
And yes, I know how that sounds. But trust me, in the end, I will get back to that. Trust me. Trust me.
[00:14:20]
We're going to talk about that as well. Yes, true.
[00:14:23]
But this is actually going to help you process all of the information that we're going to throw at you today, about the biases and the helpful habits we're going to suggest. So bear with us. The checklist actually does make sense. It's not just clickbait.
[00:14:38]
And it's used in some industries as well, right? When pilots are in a situation, they will grab a checklist, and they keep improving it. And there's research here that actually shows that structured intervention with training works: building it into a practical checklist works. In our field, recently, there was a pilot study that found that awareness alone had no effect, but structured intervention worked. So there is some grounded theory here, research, that checklists and training work. We're going to talk today mostly about awareness and the checklist, and we're going to go through all five of these today. We're going to show you which bias each one actually tackles, and at the end we're going to present you the checklist in full, by the way, so don't worry. We're going to go through them one by one, and for each we're going to give you one or two biases that are in there and how we're going to go against them.
[00:15:39]
So, Evelyn.
[00:15:40]
Yes, starting with the first one: be decision ready. And we are very aware that this is a very vague term and that it can mean a lot of different things to a lot of people, so we are going to specify it. Imagine that you have a very big design decision coming up in your working life. At the same time, things at home are a little bit messy: maybe you are in a huge fight with your partner or your best friend, your kid or your cat is sick, or you have a sick dog at home, or you have a roof that keeps leaking, or there is a... Yes, that's from personal experience.
[00:16:13]
So there are lots of reasons why you might be feeling sad. What we usually do, yours truly included, is think: well, yes, a design decision is coming up, but I am a rational human being. I can separate my personal life from my working life, so I am perfectly capable of making that decision in my working life.
[00:16:34]
And I won't be affected by the emotional state that follows from what's going on at home. The thing is, cognitive bias doesn't really make a distinction between your working life and your personal life. So it will definitely have an impact. And the specific example that we're going to talk about today is something called myopic misery. That has to do with a specific emotional state, namely sadness. So this is not about all, quote-unquote, negative emotions; this is really specifically about sadness.

What the research says is that whenever we feel sad, we get impatient as humans. And what we then want to do is get rid of that feeling; we want to resolve that impatience. Usually we do that by falling into action bias, the idea that taking action is always better than doing nothing. So to get rid of the impatience, I'm just going to make a decision. I'm just going to do something, so at least I can get rid of that impatience, and that will relieve the sadness a little bit. We become very focused on short-term relief: we do things because it will fix my emotional state right now. That doesn't mean I'm going to make the best decision for the long term. So in the example I gave: I have to make a design decision and I'm feeling very sad. The decision I make might be focused more on me and my emotional state right now, and not on what would be the best decision for the long term. This could also translate to, for example, a developer that goes with a hotfix that he or she can do right now, because that means I can close a ticket today (yay!) instead of going with a solution that might be better in the long term but might take a few sprints more. We are way more focused on short-term resolution. That's what we want.

There's a lot of research done on this, and it's all been done in a similar way. They had two groups: an experimental group and a control group. Both groups had to make the same decisions, but the experimental group first got a sad impulse, in these studies usually a sad video. After that, they got a choice: do you want a smaller amount of money right now, or a bigger amount of money that you have to wait for? The studies vary from hours to days to weeks. In almost all of these studies, the group that got the sad impulse was way more likely to go with the short resolution: give me the money now. Even if it's less money, I want it now. So the action bias was very present there.
[00:19:06]
And if you translate that to the design decisions that we make: just give it to me now, I want to make the decision now, get my emotional state fixed. That has an impact on how we make decisions and on what that means for the long term. So what this says is that we sacrifice future value for immediate resolution when we feel sad and thereby impatient, and then we want to fix that.

So what can you do if you feel this? We specifically chose this one because sadness, in general, is not an emotional state that gets discussed a lot in teams or organizations, in my experience. It would be very helpful to know: where are we on the level of sadness, in our group or in myself? Start with yourself. So if you have to make a decision, you might want to ask the question: are we in the right emotional state to make this decision today? This is a question that makes you slow down. You can measure or assess the level of sadness within yourself or your group very easily: a scale from 1 to 10, level of sadness, where are we? You don't need to give an explanation; you don't need to get too vulnerable. And then the follow-up question could be: given our emotional state, are we ready to make this decision today? You can still make the decision if you want to, but at least you slowed down, you challenged your biases, and you thought about the potential impact: are we going to take the risk, are we going to accept the trade-offs, yes or no? But at least you slowed down. So a check-in or a sense-making exercise can be a very useful tool here. I've experimented with this a few times. Some psychological safety is useful in this exercise, but you can also start with it by yourself and see where your level of sadness is, and how that is affecting your decision making.
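To make that concrete: a minimal sketch, not from the talk itself, of what scripting such a check-in could look like. The 1-to-10 scale is from the talk; the anonymous collection, the median aggregation, and the threshold of 6 are illustrative assumptions.

```python
from statistics import median

def decision_readiness_check(sadness_scores: list[int], threshold: int = 6) -> str:
    """Aggregate anonymous 1-10 sadness scores from a check-in.

    Scores are written down individually (sticky notes, a form), so
    nobody has to explain anything or get too vulnerable.
    """
    if not all(1 <= s <= 10 for s in sadness_scores):
        raise ValueError("scores must be on a 1-10 scale")
    level = median(sadness_scores)
    if level >= threshold:
        # High sadness correlates with impatience and action bias:
        # short-term relief gets favoured over long-term value.
        return (f"Median sadness is {level}. Ask the group: given our "
                "emotional state, are we ready to make this decision today?")
    return f"Median sadness is {level}. Proceed, but keep slowing down."

# Example: a five-person design session
print(decision_readiness_check([7, 8, 4, 6, 9]))
```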
[00:20:46]
Yeah, and to add: I know Rebecca Wirfs-Brock says you can maybe add it to your ADRs. If you do ADRs, architecture decision records, you can also record: this was our current state. So you can actually add the sense-making, and over time you can see how your decisions differ. If you have the psychological safety.
[00:21:03]
Yes, that's helpful.
[00:21:04]
So, check: be decision ready. That's the first one. Now I'm going to talk a little bit about broadening the frame. And you might think: oh, come on, Kenny, we're already doing this, right? We're doing workshops, we're doing the double diamond, we're already broadening the frame. However, there are two biases I want to discuss today that will keep you from broadening that frame. You might think you're broadening the frame, getting more information in, and as we learned yesterday, if the information is not relevant, maybe you should stop. But what if there's a whole blind spot that you're dealing with? For the first one, I'm going to try to trigger your system one now. I'm going to show you an image, and I want you to solve the problem, yeah? Go with the first thing that comes to mind; don't think about it too long. I've already biased you towards your system two. Here you see a Lego figure, and there's a brick placed on top with one pillar supporting it, and it's crooked. How would you solve it? Who wants to speak up?
[00:22:09]
Yeah?
[00:22:12]
Yeah? Was that your first thought? Yeah? Who thought, I'm going to add three blocks to this thing?
[00:22:25]
One block? One also counts.
[00:22:26]
One block also does it, yeah. So here you see, and I already triggered you in a way, and there are people who do it differently, but what we're dealing with here is something called the additive bias. Especially in software design, and especially when you design models, we would rather add things to our options than remove them. And that makes things complex. This is the paradox, and one of the enemies of good design: we keep trying to solve by adding. Look at all the UIs nowadays. Miro started very simple; now it has so many things, and I'm like, okay, why? So this is in us: the additive bias. In that research with the Lego bridge, the vast majority of participants added bricks to support the bridge; they just fixed it by putting Lego blocks around it, okay, let's add. Only when explicitly reminded that they could remove something, as soon as you say every Lego block you add costs money and everything you remove gives you money, only then did they start removing. Because the outcome, and that's what we're looking for, is the same, right? And this is a bias you might recognize. I always have the bias to add things as well. Especially slides in our presentation, right, Evelyn? No.
[00:23:54]
Never noticed.
[00:23:55]
So yeah, it's also a little bit of therapy today.
[00:23:59]
So what can we do against this? One thing I ask in groups: if we could not add anything, how would we solve this? Especially when you have options in your decisions, right, you want options, you can ask: okay, can we get an option that adds less? And as a domain-driven designer, I see that when we create models, we keep adding to the language. Yeah, but we need chairs. Why do we need chairs? Because there are chairs, right? Yeah, but what if we remove them? So I try to remove everything. The general statement we have: a model is finished when we cannot remove anything from it anymore. But that's so counterintuitive to what we actually do, adding things to a model. So that's the first one. Now the second one I'm going to talk about is this: the candle problem. Who knows this, the candle problem? Who has heard of it? A few. Okay. So the instruction: pin the candle to the wall in such a way that when it is lit, no wax drips on the floor. You have a book of matches, you have a box of tacks, and you have a candle.
[00:25:07]
How would you solve this?
[00:25:14]
Yes. Who thought: hey, I can use the box, without the tacks, put it on the wall and use it as a platform? Who thought of that? A few, yeah.
[00:25:27]
Yeah, and as you can see, these biases don't hit all of us in the same way. But many did add, and to be honest, I saw it in myself as well. What is happening here is something called functional fixedness. Especially when someone brings in a box of tacks like this, you only think of the box as the thing holding the tacks; you don't think of it as a platform that can hold the candle.
[00:25:50]
That is what we call functional fixedness: a cognitive block that limits a person to using an object only in the way it is traditionally used. And there's a lot more research about this. In one classic functional fixedness problem, control groups only noticed the key obscure features about 27 to 40% of the time. We saw it happening here, roughly; it's never everyone who's impacted, it's a few. But there's this thing called the generic parts technique, which I find very interesting: when they train people in it, that goes up to 37 to 70% of the time. What is the generic parts technique? It's that we don't name things
[00:26:36]
for what they can do, but name them more flatly. I give DDD training, and in the training we talk about buying tickets, so people create a ticket boundary. And I ask: what does a ticket actually do? Oh, it's a piece of paper that gives you access. Okay. So I'm trying to flatten the functionality of it, trying to remove it, so that people can unsee what you actually do with the ticket. Because when we see a ticket, it's like: oh yeah, it gives access. But you can relabel it to something very shallow. Only do that deliberately, because it's a bit counterintuitive to what DDD tries to do, which is create depth in the language. First we remove the depth; that's the generic parts technique. It's a very interesting technique. So that's one of the things you can do. And what I said about the ubiquitous language, which is what we call the language of the model in DDD: it's very counterintuitive, because the ubiquitous language creates depth, but then we can also no longer see outside that language. It creates functional fixedness. It has a good purpose, don't get me wrong, I use it a lot. But we need to do something to de-bias ourselves away from that language, away from what we can do here. So: modern artists protect their obscurity. And this is how I see design. And this is why Eric Evans, I think, I'm not sure, has this Kandinsky art, right? I like modern artists because they try not to name it. They leave it in obscurity. And that's why when you do event storming... and who's done event storming here?
[00:28:06]
I keep it very obscure. Because if we make it very exact, for instance, something like value stream mapping is very structured, very exact, there's no room for the obscurity, and there's no room for seeing it in a different way. This is a balance that you need to find, right? So at the start I try to remove the exactness, towards obscurity, and then we create new exactness. So the first thing we do is the generic parts technique: instead of 'what is this?', ask 'what is it made of, and what could these parts do?' And the second thing is from Rebecca Wirfs-Brock's responsibility-driven design: what does it actually know and do? So whenever I hear a service, a manager, or a handler, I try to rephrase the wording to what it does or what it knows. So these are two of the things you can do, two habits that you can ask about during that check-in.
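A small illustration of those two habits in code, not from the talk; the class and method names are hypothetical. The 'before' class is named for its mechanism; the 'after' one is renamed via the generic parts technique ('a piece of paper that gives you access') and Wirfs-Brock's question of what it knows and does.

```python
# Before: the 'manager' suffix fixes how we think about the object.
class TicketManager:
    def handle(self, ticket_id: str) -> bool: ...

# After applying the generic parts technique ("a piece of paper that
# gives you access") and asking what it knows and does:
class AccessPass:
    """Knows: which event and seat it is for. Does: grant entry once."""

    def __init__(self, event: str, seat: str) -> None:
        self.event = event
        self.seat = seat
        self.used = False

    def grant_entry(self) -> bool:
        if self.used:
            return False  # a pass only grants entry once
        self.used = True
        return True

day_pass = AccessPass("conference day 1", "B12")
print(day_pass.grant_entry())  # True
print(day_pass.grant_entry())  # False: already used
```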
[00:29:03]
All right, so now we are decision ready. We are broadening the frame.
[00:29:07]
And now we have to get independent advice. Yes. And this is something we all think, or very often feel, we are doing: seeking or getting independent advice. But very often, from what I see around me, this independent advice comes from people that we know, or that we work with a lot, or that are in our bubble, or that we trust, or that have some form of hierarchy in the organization that we're in, or from someone who just repeats their advice a lot. That doesn't necessarily mean that the advice we're getting is actually independent. And we have some things in place, right? For example, the architecture advice process. It's useful, because we get advice from people that are affected by our decision or that have a lot of expertise on the topic, but it doesn't mean that it's independent. A lot of people then tell me: yeah, but I got the same advice from a lot of different people. That can still mean that all of those people are basing their advice on the same sources, because you are all part of that same bubble. So independent doesn't equal a lot of people. That's not the same thing in this sense.
[00:30:19]
Oh, sorry, I need to go back. There are some biases that we need to look out for here, and one of them is overconfidence bias. Overconfidence bias means that we tend to overestimate our own skills, expertise, knowledge, and the control that we might have over future outcomes. It means that we think we are wiser, more capable, smarter than we actually are, and that might lead us to make
[00:30:45]
not that optimal decisions, let's call it that. I wanted to say bad decisions, but less optimal decisions. So when we were preparing this, we came across a post on overconfidence, and, well, I really liked it, so I'm just going to let you read it.
[00:31:07]
And there's a joke in this, of course, right? But 50% of men surveyed believed that they could land a commercial aircraft in case of emergency. And when you read this, you think, well, that's crazy because no one can land a commercial aircraft in case of emergency, especially not 50%.
[00:31:24]
Without the training.
[00:31:25]
Without the training, yes. I could do this. That will be fine.
[00:31:29]
So, again, this is a joke, right? Because this is not gender-specific: everyone suffers from overconfidence bias. I told this the last time we were together: if you would come up to me after this talk and ask me for advice on how to celebrate Carnival in the Netherlands, I would be very overconfident as well. I would be very right, but I would be very overconfident as well. So it's not just a gender thing; this can happen to anyone. And in software design decisions, in the context that we are in, what this could look like is: yes, of course we can do the migration of this entire monolith in three months.
[00:32:05]
Who's had that?
[00:32:08]
Estimations are very, very susceptible to overconfidence bias. Whenever someone asks, hey, how long would that take, overconfidence is always there. We always think we can do it in a shorter time than it actually takes. A lot of research again; maybe someone is familiar with the driving competence study. They asked US and Swedish students to rate their own driving skills compared to the others in their group. What they found was that 93% of the US students and 69% of the Swedish students believed they were better drivers than the median driver in their group. Statistically, there are some problems with these numbers, and the difference between the US and Sweden, the 93% versus the 69%, is also very interesting, I think, but we won't go into that research. This research really demonstrated the overconfidence bias: yes, I'm perfectly capable of driving, and I'm probably a better driver than the rest of this group. So we can very much suffer from this overconfidence bias, and as I said, that's especially the case with estimations. If we have to give an estimation, overconfidence bias is probably there. So what could you do? Well, one thing you could do is a premortem exercise, or premortem analysis. Has anyone heard of that? I know some of you have. And this is actually very fun to do, because it allows you to come up with everything that could go wrong.
[00:33:46]
Or that you need to do.
[00:33:47]
Yeah, okay, but I like coming up with all sorts of scenarios, horror scenarios: this could all go wrong. So what you do: if you say, okay, we can do this migration in three months, you get together before you actually start the migration and say: okay, it's three months from now and we miserably failed. What happened? What caused this failure? And then you start to list everything that can go wrong in that time. That means you become much more aware of the potential risks and challenges that you might come across. You can also already start to think: how can we mitigate some of these? It makes you way more aware of everything that could go wrong, and that will help you reflect on your initial estimation of three months. It might make you say: well, maybe that was a bit optimistic. So this is how you can challenge your overconfidence. And this doesn't have to be a long process. You preferably do this with a mix of people, because then you get more input and all of the wisdom of the group, so that's usually the better thing to do. But this is a very, very useful exercise.
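As an illustration of capturing a premortem's output, here's a sketch; the data structure and field names are assumptions for the example, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class PremortemRisk:
    scenario: str         # what failed, told as if it already happened
    cause: str            # what caused the failure
    mitigation: str = ""  # optional: how we could reduce the risk

@dataclass
class Premortem:
    decision: str
    horizon: str
    risks: list[PremortemRisk] = field(default_factory=list)

    def report(self) -> str:
        lines = [f"Premortem for '{self.decision}' ({self.horizon} from now, we failed):"]
        for r in self.risks:
            lines.append(f"- {r.scenario} | cause: {r.cause} | "
                         f"mitigation: {r.mitigation or 'none yet'}")
        return "\n".join(lines)

pm = Premortem("migrate the monolith", "three months")
pm.risks.append(PremortemRisk(
    scenario="migration stalled halfway",
    cause="hidden coupling in the billing module",
    mitigation="spike the billing boundary in week one",
))
print(pm.report())
```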
[00:34:44]
Good.
[00:34:45]
Yes, next one.
[00:34:46]
Next one. So we have: be decision ready, broaden the frame, seek independent advice. And the last one is: test your assumptions. A little story from my childhood. I had plus six on both eyes, and I went through a lot of tests. I lasered my eyes, by the way. And recently I thought, okay, let's go to the eye doctor again. And they told me I don't perceive depth that well. Never knew this. It was never a problem for me. But for some reason, I see depth differently than other people. Never knew it, right? And if you dig into what you see: you actually only see 10% or something, and we have blind spots. The rest, your brain fills in. Last time I made a joke about what reality is; I'm not going to go into that. But there are things that your brain just fills in there, right?

And Evelyn just said it: trust me, right? Well, there's a problem there as well, and the problem is authority bias. I trust my cats fully; that's why I put them up here. This is Lulu and Mr. Noodle's law: we have the tendency to attribute greater accuracy to the opinion of an authority figure, or a sophisticated AI (I'm going to jump into that), and to be more influenced by that opinion. So I liked 'kill the thought leaders' from yesterday; in a way, not kill them, but remove them. In my company I'm an architect, and I know that brings a sort of power dynamic. I was recently in a discussion. We as architects use the advice process; we transparently show our advice. And a colleague architect had a different advice than me, and we put both in the ADR, which is nice, right? We have two opinions, and the team can decide for themselves what they do, and we help them with it. And then another person came to me: yeah, but Kenny, architects should have one voice. I said: why? Yeah, because people come up to me and say, the architects said that. I said: whew, there's a big problem there. A, I don't know all the information, and B, it's the authority bias, right? If people say that, I'm going to watch out for it, because I don't want that, exactly because of this. I don't trust myself, because of my bias, so neither should other people trust me fully. We need to have that conversation then.

But the thing is, the second part here is that AI is now in the game. There was research in 2019, and I know AI wasn't that sophisticated back then, but they used a statistical model, and AI nowadays is mostly statistical models as well. Participants consistently gave more weight to equivalent advice when it was labeled as coming from an algorithm versus a human being. These were people without expertise in the field they were asking questions about. So that's a good thing to know: when people don't have expertise, they trust AI, or machines, more than people. I recently saw a post online: a Dutch person in Dubai, a rocket flying over her, and she went to ChatGPT: well, ChatGPT tells me the war hasn't started yet.
[00:37:59]
Okay, we have a problem. And that's what this illustrates. But interesting to know here: experts do the opposite, they dismiss it. And the sad thing is that non-experts using AI are more accurate than experts not using AI, especially for statistical models, right?
[00:38:21]
And that's authority bias for you. So neither blind trust nor blanket dismissal leads to good decisions. So we need to have a combination.
[00:38:32]
So one thing that you can do in option making is ask: if a junior team member had suggested this, what questions would we ask? I'm trying to de-bias it, right? 'Yeah, but the architect said this.' Yeah, but what if it came from a junior architect? You're trying to move people into their system two now. And you can ask: what problem was this designed to solve, and do we actually have that problem? That was also in Kim's talk yesterday: sometimes we just have solutions without knowing the problem yet. And there's also the bias on AI: everyone's using AI, so we need to use it as well.
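One way you could operationalize that habit, sketched below; the helper and the example advice are made up for illustration. The idea is to strip and shuffle the sources before the group reads the advice, so content gets weighed before authority.

```python
import random

def blind_review(advice: list[tuple[str, str]]) -> list[str]:
    """Return the advice texts with their sources stripped and shuffled.

    advice is a list of (source, text) pairs; hiding whether the source
    was 'architect', 'junior dev' or 'AI assistant' nudges the group to
    judge the content instead of the authority behind it.
    """
    texts = [text for _source, text in advice]
    random.shuffle(texts)
    return texts

options = [
    ("architect", "split the billing context out first"),
    ("AI assistant", "strangle the monolith route by route"),
    ("junior dev", "keep it and fix the deploy pipeline instead"),
]
for i, text in enumerate(blind_review(options), start=1):
    print(f"Option {i}: {text}")
```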
[00:39:11]
So that's test your assumptions. Now let's talk about establishing simple rules. One thing these biases do is make it easier for us to process all the things we deal with; that's what Evelyn already mentioned before, right? The thing is, when we're dealing with software, we're dealing mostly, hopefully, with complex problems. And complex problems have biases on speed dial.
[00:39:38]
We need to de-bias ourselves to move away from that, but there's so much to process. And that's why sometimes we just need to establish simple rules. Because there's this other bias called the law of triviality or bike shedding. It's also in our book. Who's heard of this?
[00:39:56]
Yeah. So: organizations give disproportionate weight to trivial issues, because those are easy to understand, while ignoring complex ones. It's called bike shedding. This was originally a management observation, by the way, but it has been shown since. It's called bike shedding because the management took five minutes to decide on a, I think, two-billion project for a nuclear power plant, and spent 45 minutes talking about the bike shed that's being built next week. And we laugh, but I've done the same thing when naming things in software design: I might spend 45 minutes on how do we name it.
[00:40:34]
Who's been in that situation? And we're completely ignoring the complexity that's underneath. That's normal: when things get complex, they call that edge behavior, we try to stay away from the uncertainty and the complexity. That's what we generally do. Also, when dealing with certain emotions, people might grab their phone, right, trying to move away from that complexity. That's normal human behavior; that's your system one kicking in. But we need to actively move away from that. And this was not just an observation; there are multiple experiments showing it. So we need some simple rules in place. One thing you can actually do is the two-way door, which Gien also talked about yesterday: if it's reversible, delegate it; no group consensus might be needed, because we can just go back. And the other thing: what would make this conversation a waste of time? You can start with that question. Today, what would make this conversation a waste of time? And then someone says: well, if we spend 40 minutes naming something. Okay, maybe we should not do that. The architecture advice process and design heuristics plus principles can really help you as tools here to fight this. So those are the five things on our checklist. We're not done yet, by the way. We're not done yet. And one thing to know: a checklist is never done. Even when they're very simple. I can have a checklist for the grocery shop, go into the shop, and: oh, I forgot something. So a checklist is never done, just so you know.
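As a sketch only, the two-way-door rule can even be written down as a tiny routing nudge; the wording and parameters below are illustrative assumptions, not a prescribed process.

```python
def route_decision(description: str, reversible: bool) -> str:
    """Route a decision by reversibility (the 'two-way door' rule).

    Reversible decisions can be delegated without group consensus,
    because we can just go back. Irreversible ones get the full
    checklist treatment before anyone commits.
    """
    if reversible:
        return (f"Two-way door: delegate '{description}' to the people "
                "closest to it and move on.")
    return (f"One-way door: slow down on '{description}' and run the "
            "checklist, seeking independent advice before committing.")

print(route_decision("rename the events in the billing context", reversible=True))
print(route_decision("pick the message broker", reversible=False))
```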
[00:42:01]
What?
[00:42:02]
Wow, cliffhanger. Cliffhanger. So you might have gone through all of these steps, but there's still one thing you need to do: you need to have a check as a collective, not on an individual level. Because on an individual level we all might feel, yeah, well, this probably feels right. We've been through this; we've discussed some of it. And then someone might say: so I guess we're all on the same page then, right? And that might be followed by something like: and otherwise people would have spoken up by now. And that last sentence is usually said by someone who's been at the company for over 10 years, with a lot of experience, a lot of authority, and a lot of opinions. Assuming that everyone is on the same page comes with the false consensus effect that you might have heard of. We have this tendency to overestimate how much our own beliefs, values, opinions, and behavior are shared by others. That's the false consensus effect. It means that whatever we decide, we feel or believe that at least the majority of the group is with us. And that's a very comfortable, comforting thing to think. We might laugh at it, but it helps us deal with all of the possible difficult thoughts in our own mind; it helps us deal with reality in that sense. So false consensus says: okay, well, at least the majority is probably with us. Again, we should check that, and we should do that check as a collective. Why? Well, it's also very sad that this GIF isn't working, but there's this research on Facebook interactions with conspiracy theories versus scientific news. They studied over a million interactions between Facebook users and either conspiracy theories or scientific news. What they found was that these groups very quickly segregated into homogeneous, like-minded groups in which the false consensus effect was nearly 100%. People in these separate groups believed that everyone agreed with them, because they never saw opposing views. And I think that's terrifying and fascinating at the same time.
[00:44:16]
Now, I don't want to drag conspiracy theories into software design decisions. But what we can learn from it, for example: if we work in teams that mainly work remotely, and we use Slack channels or Teams channels to get our information from and to base our opinions on, we only see the information in the channels that we are a part of. So by default we are missing out on information, on wisdom, on perspectives that challenge the perspective that we have. By design and by default, we are missing out on things that we need in order to form an opinion or gain a new perspective. And then you might say: yeah, but I am part of a lot of groups and a lot of teams and a lot of channels. Still, you are always missing out on information. So you have to make sure that you are not assuming a false consensus when it isn't there.
[00:45:09]
There are private select channels, right?
[00:45:11]
That too. There are usually a lot of back channels going on in an organization that you have no clue about. So what you could do: you should always check, hey, given all of the information we collected, are you on board with the decision we are about to make? This should be a question to the collective; the collective should have had this question at least once. And if someone is not on board, then the follow-up question, which comes from deep democracy, we use that a lot, is: what would you need to go along with the decision? But don't assume the false consensus effect. Don't assume that everyone is on board just because nobody reacts. 'They didn't speak up, so they're probably on board': that's an assumption you should avoid. This is why we always leave room for individual contributions before we have a group conversation. If you start in a group with 'everyone is on board, right?', it will be very hard to then speak up and say, 'well, not necessarily.' So always leave room for individual contributions. That can be through a sticky note, first writing down your own opinion or a yes or no, or a sense-making exercise, but leave room for individual contributions before you have a group conversation.
[00:46:20]
That's why psychological safety is so important.
[00:46:22]
Yeah, and this actually helps in increasing that psychological safety.
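Below is a sketch of that habit of individual contributions first; the deep-democracy follow-up question comes from the talk, while the data shape and names are made up for the example.

```python
def silent_round(positions: dict[str, tuple[bool, str]]) -> list[str]:
    """Collect individually written positions before the group talks.

    positions maps each name to (on_board, what_they_would_need), written
    down on sticky notes first, so the loudest voice and the false
    consensus effect don't set the outcome.
    """
    follow_ups = []
    for name, (on_board, need) in positions.items():
        if not on_board:
            # Deep-democracy follow-up: don't override, ask what they need.
            follow_ups.append(f"{name}, what would you need to go along? "
                              f"(they wrote: {need})")
    return follow_ups

votes = {
    "Ana": (True, ""),
    "Bo": (False, "a load test before we commit"),
    "Cem": (True, ""),
}
for question in silent_round(votes):
    print(question)
```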
[00:46:28]
Now we're almost done. So we are getting to why the checklist is a good idea. We had all of this information that we wanted to share, the biases, the research, the outcomes, and we said to each other: well, what would happen to the audience? Probably there would be some information overload. And then we asked AI, and AI said: well, your audience might leave the room feeling smart but remembering nothing actionable. And that's all you want to hear, right, when you create a talk. So we thought about that, and we figured, well, then we need...
[00:46:59]
You believe the AI. Of course. Straight away.
[00:47:01]
Because, of course, I believe that.
[00:47:04]
So we needed to do something with all of the information that we were going to throw at you, so we needed a structure, a structure that actually helps you in processing all of the information and prevents that information overload and choice paralysis.
[00:47:18]
There is this research: the jam study. Basically there were two groups. One group was exposed to 24 types of jam. I didn't even know there were 24 types of jam, but apparently there are. And one group was exposed to only six types of jam. In the end, in the group that was exposed to 24 types, only 3% bought one. In the other group, exposed to six types, 30% bought a jam. So this is why we concluded... sorry? Oh, okay. We concluded: instead of throwing the 24-types-of-jam equivalent of the biases in the research at you, we should have a six-types-of-jam equivalent of what we do. We narrowed it down to five, in our checklist. So this checklist, in the end, becomes a sort of cognitive scaffold. It doesn't require you to remember everything that we said. What we hope it does is: whenever you see 'broaden the frame', you think, oh, that has something to do with functional fixedness. And whenever you see 'seek independent advice', I hope you think of the men who think they can land a commercial aircraft, and then of overconfidence, and then of the premortem analysis that you need to do. In that way, the cognitive scaffold that we call our checklist helps you process the information without needing to remember everything we told you explicitly. And that's how it's different from just clickbait.
[00:48:53]
Yeah, or you can think of you, who was very overconfident that we'd make this on time. Five minutes! We will get it! So, there's this thing called the DDD Crew, and we published the first version of the checklist there. But again, checklists are never done; this one is ours. And for yourself, you don't want the checklist to be too long. What I actually did at DHL, where I work, is add this to our ADR template, in the decision part. And I already got some feedback from people: oh, that was interesting, good that you put it in. And as you can see, we added the biases, because those give a little bit of extra knowledge that people can dive into. It triggers the system two to look it up: what is it? Oh, did I actually fall into that? So the checklist is small, but it will lead to more engagement afterwards. And again, people can ignore it totally. It's a nudge, and people have the choice to ignore it. That's what nudges are, by the way: people always have the choice of doing it or not, right? But this already helps, and as the research shows, it will help.
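To give an idea, here's a hypothetical rendering of such an ADR template section; the exact wording is an assumption, not the actual template from DHL.

```python
# The five checklist items, each with the biases it guards against.
CHECKLIST = [
    ("Be decision ready", "myopic misery, action bias"),
    ("Broaden the frame", "additive bias, functional fixedness"),
    ("Seek independent advice", "overconfidence bias"),
    ("Test your assumptions", "authority bias"),
    ("Establish simple rules", "law of triviality / bike shedding"),
]

def adr_decision_section() -> str:
    """Render the checklist into the 'Decision' part of an ADR template.

    It's a nudge: people can ignore it, but naming the biases triggers
    system two to look them up ('did I actually fall into that?').
    """
    lines = ["## Decision", "", "Before recording the decision, we checked:"]
    for item, biases in CHECKLIST:
        lines.append(f"- [ ] {item} (guards against: {biases})")
    return "\n".join(lines)

print(adr_decision_section())
```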
[00:49:59]
And we just started this, so if you start to use it and have feedback, please let us know, because we are still updating.
[00:50:05]
If you have ideas yourself, we would really love to hear them.
[00:50:07]
Yeah, just click it.
[00:50:08]
Yeah.
[00:50:09]
Yeah, so to close off, almost: our cats are an inspiration for us, because they remind us that we sometimes need to use our system two. This is my cat on his birthday, getting a very special treat. So basically, you should learn to question your first instincts and turn on your system two every once in a while, especially when you're making software design decisions that you want to de-bias. My cat is not very good at that. So this is always the reminder for myself that I need to get good at that, or stay good at that. Your first instinct: learn how to question it, and use that to start de-biasing some of the design decisions that you make.
[00:50:48]
Yeah, and just some final thoughts on AI. We showed the loop of decisions feeding into your mental models and into your next decision. AI is trained on our artifacts, which are biased by default. So it's a bias echo, right? Our artifacts become training data, which is biased. That creates a model in which the bias looks like the norm. And that produces an authoritative suggestion, which you put into the context, or even straight into the decision; I place it into the context. So there are two reinforcement loops happening here, especially if you feed that output directly into the next decision. In the end, everything turns into an echo chamber. So I'm already experimenting with: can I use the checklist in the AI, so that it can try to help de-bias itself? By default, it's already biased. Just remember that.
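A sketch of that experiment, under stated assumptions: the preamble text is one illustration of the checklist-as-prompt idea, and `ask_model` is a deliberate placeholder rather than a real API.

```python
DEBIAS_PREAMBLE = """Before answering, challenge your own suggestion:
1. Is this short-term relief or long-term value? (action bias)
2. What could we REMOVE instead of add? (additive bias)
3. Which independent sources would disagree? (overconfidence)
4. If a junior engineer had suggested this, what would we ask? (authority bias)
5. Is a trivial detail hiding the complex question? (bike shedding)
"""

def debiased_prompt(design_question: str) -> str:
    """Prepend the checklist so the model challenges its own output."""
    return f"{DEBIAS_PREAMBLE}\nDesign question: {design_question}"

def ask_model(prompt: str) -> str:
    # Placeholder: wire this to whichever LLM client you actually use.
    raise NotImplementedError("plug in your own model client here")

print(debiased_prompt("Should we split billing out of the monolith?"))
```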
[00:51:46]
Time's up. And I think that's the time.
[00:51:49]
One minute on my clock. And thank you.
[00:51:52]
Yes, we're done. We're done.
[00:51:58]
39.
[00:52:02]
You are not overconfident at all.
[00:52:03]
No, it's 39.