MagnoliaTree: Inspiring Brave Leaders
We are Sabine Gromer and Christina Huber and you're listening to Magnolia Tree's podcast "Inspiring Brave Leaders". We leverage our network of inspiring individuals from all walks of life to learn from their experiences with leadership. We aim to spark thinking on ethics and leadership one podcast episode at a time.
Beyond Forecasts – Gregor Sieber on AI and Unpredictability
In this episode of the Inspiring Brave Leaders Podcast, Sabine Gromer speaks with Gregor Sieber – software industry veteran with over 20 years of experience and most recently Managing Director at CloudFlight Austria. Gregor brings a rare combination of deep technical expertise and sharp strategic thinking to one of the most pressing questions facing leaders today: Are you truly ready for what AI is about to demand of you?
Together, Sabine and Gregor challenge some of the most dangerous myths and biases that are keeping executives passive in the face of exponential change. From the status quo bias and optimism bias to confirmation bias – they name what's getting in the way, and make a case for why traditional three-to-five-year strategies are no longer fit for purpose.
The conversation takes a turn toward scenario thinking, anti-fragility, and what it means to lead organizations that don't just survive uncertainty, but are built to benefit from it.
Whether you're just starting your AI journey or rethinking your entire organizational model, this episode's for you.
Our Guest
https://www.linkedin.com/in/gsieber/
https://www.postdigitalleader.blog/
Shownotes
AI Sources – Recommendations by Gregor
Gregor Sieber suggests following these blogs and people for AI insights. He notes that he doesn't have time for every Lex Fridman or Dwarkesh podcast, so he uses them to spot trending topics and guests, then researches them traditionally. Gregor also recommends using AI tools to discover more good AI sources.
https://www.deeplearning.ai/the-batch/
https://www.deeplearning.ai/the-batch/tag/data-points/
https://huggingface.co/blog
https://developer.nvidia.com/blog
https://techcrunch.com/category/artificial-intelligence/
https://hai.stanford.edu
https://bair.berkeley.edu/blog/
https://machinelearning.apple.com
https://openai.com/news/
https://openai.com/research/index/
https://research.google/blog/
https://ai.google/research/
https://deepmind.google/blog/
https://blog.google/innovation-and-ai/models-and-research/google-deepmind/
https://www.distillabs.ai/blog
https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026
http://yann.lecun.com
https://www.linkedin.com/in/yann-lecun/
https://www.linkedin.com/in/andrewyng/
https://www.linkedin.com/in/demishassabis/
https://www.darioamodei.com
https://www.linkedin.com/in/fei-fei-li-4541247/
https://lexfridman.com/podcast/ – if not for listening, then for identifying topics and people
https://karpathy.ai
https://simonwillison.net
https://www.dwarkesh.com Dwarkesh Patel podcast – if not for listening, then for identifying topics and people
https://twimlai.com/podcast/twimlai
https://neurips.cc – one of the most important conferences in the field
https://arxiv.org – find papers, e.g.: https://arxiv.org/list/cs.AI/recent
https://www.platformer.news/
https://garymarcus.substack.com/
https://datasociety.net/
https://jack-clark.net/
https://thegradient.pub
https://www.alignmentforum.org/
https://www.lesswrong.com/
https://www.latent.space
https://www.oneusefulthing.org
https://www.ben-evans.com/essays
And it's really hard for us to grasp what's going to happen with it and the implications. Not only in business life, but also in our daily lives, in our political lives. There is a huge potential in AI, for example, for cyber attacks, for spreading disinformation, for accelerating a lot of the trends that we see in society, in politics right now. That's what's unsettling me on the big picture.
Yes, prepare and forecast, but do it in the way the fire department does, because they also don't know where the fire starts, but they make a plan so that wherever the fire starts, they can be there quickly and they know what to do.
You need to think of an AI agent just like an employee. There's this quote from Jensen Huang of Nvidia that IT will be the HR of AI agents, and you can think of it in the same way. You will need an onboarding process for your agents, and if they have a good onboarding, they will be really good specialists for these tasks.
If you first try to handle all of the compliance and the questions about how we would finally adopt this across the organization, you will be stopping your innovation.
At some point, we had a person with the right test data who was then incredibly fast at evaluating new products even as they came out. And this is the kind of situation that you want to be in as an individual, but also as an organization. I think more and more these test cases, this test data and the scenarios that you're running, will help you adopt new technology quicker than your competitors. Because as we just said, the models are pretty much the same. Everyone has access to the same power of a model. So it's two questions: do you use it smarter than the other organization, or are you able to access the new model faster than them?
This is really one of the things that leaders need to have on top of their list to ask themselves: what do I need to do so that each and every function in my organization has the ability to safely try out this new technology? Where do I even get my information from? And if the only source I have is my LinkedIn feed, it's probably not a good one.
SPEAKER_02: My name is Sabine Gromer. And my name is Christina Huber. And you're listening to Magnolia Tree's podcast, Inspiring Brave Leaders.
SPEAKER_04: We leverage our network of inspiring individuals from all walks of life to learn from their experiences with leadership.
SPEAKER_02: We aim to spark thinking on ethics and leadership. One podcast episode at a time.
SPEAKER_05: We continue with the AI theme, and I'm very honored that today I'm joined by Gregor Sieber. Gregor has more than 20 years of experience in the software industry and in software development. Most recently he was the managing director of CloudFlight Austria. And I'm really excited to talk to Gregor because, Gregor, I find you have very stimulating thoughts around a number of topics when it comes to AI. Knowing full well that when we talk about our first topic, which will be technological developments, this is probably not going to age well given the speed of development, we're still gonna talk about it. For those of you listening in a couple of months: we're recording this at the end of January 2026. Just feel free to fast forward to the sections where we talk about change leadership, mindset, myths around AI and what we believe leaders need to do to prepare themselves for what's to come. I think it's fair to say that I am incredibly alarmed, not necessarily just because of the pace of AI development, but more because I perceive a really high level of passivity, maybe naivety, among senior executives in many companies when it comes to the topic of AI, and certainly a huge time lag in adapting to this new environment and to the new technology that is available now. So I would love to start our conversation, Gregor, by first of all welcoming you to this podcast.
SPEAKER_00: Thank you very much, Sabine. It's a pleasure to be here.
SPEAKER_05: And I'm gonna start with my first question, which is: before we talk about tools and trends, what is something about the current AI wave that genuinely unsettles or alarms you, even as an expert with deep knowledge?
SPEAKER_00: All right, yeah, that's an important question. I think there are quite a few things, and they all kind of tie together. The main point is that it's really hard for us to perceive the pace at which things are happening, right? I think we are in a phase of exponential growth of this technology, and it's really hard for us to grasp what's going to happen with it and the implications, not only in business life, but also in our daily lives, in our political lives. There is a huge potential in AI, for example, for cyber attacks, for spreading disinformation, for accelerating a lot of the trends that we see in society, in politics right now. That's what's unsettling me on the big picture. On a much smaller scale, it is, as you said, the lack of preparedness at the moment in a lot of organizations. And I'm slightly fearful of what it actually does to people when they realize that they need to adapt a lot of things in their lives, their way of thinking, their way of working, maybe the way they actually earn money. That's going to be a big change for us.
SPEAKER_05: I would love to dive a bit more into that, the adoption and how we need to prepare ourselves. So maybe why don't we just continue right there? What do you think people should do right now? What should they prepare themselves for? Do you have any recommendations of where to start if you haven't started? Any sources that you find useful? And by the way, I'm a terrible podcast host. I always ask four or five questions in one.
SPEAKER_00: Yeah, it's an excellent question. Maybe it helps a little to start with where we're actually at right now and what's happening, because I think some people might have been missing the steps, right? So the first question is: how do you even get your information, and how do you find out where AI is at and where the adoption is at? What's been happening in the last few months is that AI systems that we can broadly use as regular users, let's say also as non-programmers, have moved away from the purely chat-driven approach that we've seen in the last year, where you had an assistant that was super smart, but essentially you had to bring everything to the assistant, right? You had to bring all of your files, your data, your ideas. You had to tell your whole story every time again, and then the smart assistant would help you, but in the next interaction things would be forgotten. We've now moved to a new version of these assistants where they can basically be configured to remember. I think memory is a really important bit here: to remember what you want them to do, remember your context, remember the past interactions, and work directly in your data. And they can even help you create that data, manage that data, structure it. That is a huge acceleration, especially in business life, because you can actually use it to control all of the data that you have on your laptop, in your environment. So that's, for example, one of the steps that has happened in the past months. And when I look at typical organizations in Austria, I think a lot of them have not adopted it. In the coding community, you might have people that use it because they are maybe more adept at installing software and trying things out. But in the broader picture, for example for project managers, I hardly see any adoption.
Now, yeah, what is it that we have to do? I think one piece here is really the information. So, where do I even get my information from? And if the only source I have is my LinkedIn feed, it's probably not a good one. Yeah.
SPEAKER_05: That's true. Because everyone is claiming to be an AI expert on LinkedIn, right? True. Whether they are or not, it's a different story.
SPEAKER_00: Absolutely. And you might get your bits and pieces if you follow the right people. So you could probably design your LinkedIn feed to be a good source of information, because there are very authoritative people on LinkedIn publishing good stuff. But those people might be publishing research results where you need a lot of mathematics to understand what they mean, and it might take three, four, or five months to trickle down into something that you can use. So you need to find some good sources of information. I think those might be different for everyone, and it's probably a good exercise to remind yourself every few months to check: where am I getting this information from? How am I setting up my education to stay on top of things with AI? But then the second piece is not just the reading, it's the experimentation. Privately, that's one thing you can, of course, do. But when we're trying to get this into organizations, into enterprises, there are a lot of hurdles to jump through. And I think this is really one of the things that leaders need to have at the top of their list, to ask themselves: what do I need to do so that each and every function in my organization has the ability to safely try out this new technology? That can be in a sandbox, it can have a lot of question marks. It's fine if you try it and you think it's great, and then you have a big question mark over how to roll it out. That's another thing, right? But you need to get people trying the technology and getting excited about it, and then you can figure out the rest. If you first try to handle all of the compliance and the questions about how we would finally adopt this across the organization, you will be stopping your innovation.
SPEAKER_05: 100%. I completely agree. I would like to just personally ask you: where do you get your AI information from? Which sources are you following currently? Knowing that you said you should probably evaluate that every couple of months, but where do you currently get your information from? What are good sources in your experience?
SPEAKER_00: So I do actually follow quite a few of the AI researchers, yeah, Yann LeCun and so on.
SPEAKER_05: Um, share a couple of names, because our listeners might actually know.
SPEAKER_00: Yeah, I think that would probably be something for the show notes. Happy to put in a list with a couple of people who are on the research side, and not so much people like, you know, Sam Altman, frontman of a multi-billion-dollar organization. That's very political, very opinionated. I think it's always interesting to see what's actually happening in research. The second part is that there are some active communities, and for me, being someone from the software industry, it's quite natural to be connected to the coding community. There are simply some people who've been really engaged in trying out all of the code automation tools and who also share when they work and when they don't. But I do actually rely quite a bit on my personal network for that as well. And I think that's essential, as with probably a lot of other professional things: not to just rely on what's out there on the web or in books, but to think of how you could build a network of practitioners outside your own organization where you can get feedback, where you can get experiences with things that you don't have time to test in depth. Just at the beginning of this week, for example, I had a sports meeting with a friend. We went kayaking. He's also an engineer. He has two products that he builds and runs with his one-man organization. And he tried a lot in the past months, for example, around creating visualizations of data with AI tools. It was just super valuable to have that one-and-a-half-hour conversation while we were on the river, because I don't have the six months to try out those tools myself, right? So that's extremely valuable.
SPEAKER_05: Fantastic. Thank you for sharing where you get your information from. And I agree on networks. We're gonna put in the show notes your recommendations of researchers that you follow and recommend, and also our recommendations, because we do follow AI experts and communities quite closely as well. So you're gonna get a number of recommendations. You were saying something before about how companies need to prepare themselves. And you said something really interesting, because just this week I had a call with a large insurance group, and they hired one of the big strategy consulting firms, originally around an AI learning journey; it has now evolved into more of an AI project. And the way they set it up is very modular: first they do an assessment of what use cases we have, how we use our data today, what our problems are. Then they're gonna do some innovation around it, then the learning journey, and in the end it's about decisions and strategy. And I would love to hear your thoughts on that, because my reaction to it is: one, this is very ineffective, because it would take a lot of time. Two, you do not know what you don't know if you don't grasp the full potential of AI. So for me, the best approach is always to use C-suite executives' time to really show them and let them experience hands-on, by actually working with AI applications themselves, and show them the potential and give them the experience so that they can assess what is possible, so that they can challenge consultants, especially around use cases that consultants have developed and are now selling, which are probably already outdated. So I'm really concerned that large strategy firms advocate this very lengthy and of course very costly approach, which is great for their revenue, but I don't think it's the most effective way.
So I would love to hear your expertise, because you are somebody who really knows this stuff, who knows project management as well, especially since you've been in the IT industry and in software for such a long time. I would just love to hear your thoughts: if you were to set up a project like this, how would you approach it?
SPEAKER_00: I think there's probably a variety of reasons why it's happening this way. My feeling is that a lot of people at the moment are really desperate to get something started. There are a lot of memes going around about that, and sometimes this even comes from the C-suite or from the strategy advisors, who basically want to have something to show because they're very desperate for it. There's also a push from the big AI companies, of course, who push consultancies to do these pilots. Some are even funded, so as a consultancy you might actually get money from a hyperscaler to do a project. I've heard that some of these, you know, get done and hardly get used. So it's a questionable approach. And I totally agree with you. I think the big thing there is that when you're starting that way, you're always considering kind of off-the-shelf use cases. You're continuing to think in the way that you've thought before. You think of the same problems and the same types of solutions. First, you need to open yourself to what's possible. And especially in companies, you need to put yourself in a position where you can iterate on what would differentiate us, what would differentiate our solution from the solutions of others to make us more competitive, right? And that's also an aspect where first you would want to learn what the tools can do for you, and then you can start thinking of how to apply them creatively. So I would turn that agenda around and definitely start with the education piece on a broader scale. And in that education, it's important that we don't get stuck in metaphors. That's something that's happening a lot. Obviously, it will be hard to get down to the nitty-gritty mathematics, because that's also totally out of my scope.
I'm not a mathematician who can understand things very deeply, but try to get rid of as many metaphors as possible, get hands-on, and be honest about the limitations of what the models and the systems do. Experiencing hands-on what agentic AI can now do, which is really to have this memory, have a planning phase, have different executors and reasoners that can actually backtrack and try to fix problems for you and operate different tools; getting to know that firsthand, trying it out, experiencing the limitations, but also these amazing effects that you get with it, puts you in a very different position to then think of the use cases, right? And maybe ditch some things that you thought you would need to do and prioritize completely different ones.
SPEAKER_05: There is a French futurist; unfortunately, I don't have the name ready right now. But I love his quote, because he said the biggest challenge of today is not to imagine different futures. The biggest challenge of today is to actually assess the present with different assessments, with different eyes and different conditions. And I love that, because that's exactly what you said, right? It's not so much imagining what we could do in the first place, it's actually: let's challenge our current thinking around it. One of the things about the technological development that I find very frustrating in the discourse around AI is that we make it so incredibly unsexy, because we talk about efficiency all the time. We talk about time saving. When I really feel that at its core, the core questions that we should ask ourselves right now are twofold. One, what is now possible that was just not possible before? So, what did we always want to do, but we didn't have the capabilities to do it? And the second piece is: how much ethics do we want to afford? And how much can we afford?
SPEAKER_00: How much can we afford? That's the bit, right?
SPEAKER_05: And I think these two guiding questions somehow get lost in this whole use-case, efficiency, layoffs-and-restructuring, time-saving discussion around AI. How are you feeling about that?
SPEAKER_00: Yeah, absolutely. Two very good points. I think broadly speaking, whenever you're just talking about savings, it gets incredibly boring. I'm also a salesperson, right? It's very unsexy as well. You're talking about some percentage points of automation. So you want to talk about something where you're creating value in some way, and that can be all kinds of different value. It can be new products, but it can also be freeing up massive amounts of time for people to do something really cool in the organization or with their lives, right? And just one very quick example for this first scenario: what's possible that maybe wasn't possible before. One thing that I've been confronted with a lot in the past 15 years is this question of buy versus build in software. So you're an organization, you're growing, you know you want to become more digital. And I think we are now quite digital, so right now it's more often about maybe wanting to replace three or four applications to better fit what you're actually trying to do, or wanting some kind of enablement for a different line of business or something. And previously, we all know, there's been a lot of standardization going on. So, for example, at some point you get to the point where you think you really need an ERP system, and then, well, you go to SAP, and then essentially you adapt your processes to SAP, because they have really good best practices, right? If what you're doing in your ERP is something that doesn't differentiate you, that's probably good, because it will take away a lot of the pain and it will make it easier to hire people who know the system. But if we're talking about core processes in your organization that do actually differentiate you, and it can be something like being really good at handling your logistics, for example, right?
So you might be a retailer of small metal bolts or something, and for some reason you have a very efficient logistics system that others don't have; then you would probably go more towards the build option. But it used to be a really big decision. Now we have coding automation and a huge amount of acceleration. So this is, for example, something that's now very well possible, something that was a really, really tough decision in the past: to simply go down the build route. That's something that's enabled, I think, with this new technology.
SPEAKER_05: Let's talk about this some more, because you mentioned agentic AI, and you and I were talking about Claude Code, and you were quite excited about Claude Code, as seems to be the case for the community. So educate us a little bit, please. What is agentic AI, and what is Claude Code, and why are people so excited about it?
SPEAKER_00: Good. Yeah, absolutely. So Claude Code is just one example. There are a couple of other products out there as well that you will find when you Google. Of course, all of the big players have some kind of framework. In my network and also at CloudFlight, for example, Claude Code was the product that we loved the most in the last weeks and months. What's behind that? As I mentioned before, we all got used to this chat-window way of interacting with AI, right? So you have a language model that's pre-trained on some kind of data, and it's super smart, but it only knows the stuff it was pre-trained on. Then you can prompt the language model so it gets into a certain role or has a bit of pre-information, and then you start asking your questions. And maybe you had a system where you could upload a couple of documents, like two or three PDFs or so, to get your questions answered. But then you would still be moving the data back and forth, and the system wouldn't really be actionable for you. You could copy-paste some stuff back and forth into your Word document or your code editor or something like that. With agentic AI, things are a bit different. AI has made huge steps in its capability to do planning and to do reasoning. It's moving away from this single state of mind, where you just get a question and you provide an answer, towards something where it can actually imagine something that's further down the road, and it can find different ways to get there, and it can ask you for confirmation, it can help you. And if problems come up on the way there, it can actually react to that on its own. That could be things like fetching things from the internet on its own, extracting data, summarizing it into a document, and, based on that summary, doing something else.
And then if there's a problem there, for example, I don't know, the internet connection is not working, it's not able to get something from one website, it might come up with a different solution, right? And propose that to you. And that can be interactive or it can be on its own. And you can create different types of these agents. So they can be quite specialized. Typically, you would do that by task. So you can create an agent that's super specialized in researching things for you on the web. And you can get another one that is very specialized, for example, on helping you write long articles. And you could get another agent that's very helpful for disseminating that same article, let's say, across three different platforms that have different authoring style, right? And crucially, they will remember what you gave them. They will be able to access all of your past data. So if let's say you already wrote 10 articles, they can read those articles, they will have the style, they will be able to compare structure and the style to the previous articles and use that as a basis. And they will operate directly on the data. And it's the same with coding, right? So your AI will have access to all of the code base, and you can tell it what you're trying to do. It will suggest some steps to get there, and then you can engage in a dialogue, or you can just let it work and oversee the results. And that's very different from the way we interacted previously. So it kind of feels like you're the orchestrator at some point, right? And yeah, that's that's a very different approach, much higher speed, of course, very different way of working because it's more like having an actual collaborator.
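The loop Gregor describes — plan, act through a tool, observe, and re-plan on failure — can be sketched in a few lines. Everything here is illustrative: `Agent`, `plan`, and the tool functions are invented names for this example, not the API of Claude Code or any real framework, and the `plan` method stands in for a call to a hosted language model.

```python
# A minimal sketch of an agent loop: plan, call a tool, record the
# observation in memory, and re-plan on failure. Illustrative only --
# real agent frameworks wrap a hosted language model here.
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str                                    # e.g. "web researcher"
    memory: list = field(default_factory=list)   # persists across turns

    def run(self, task, tools, max_steps=5):
        self.memory.append(("task", task))
        for _ in range(max_steps):
            step = self.plan(task)               # model proposes next action
            tool = tools.get(step["tool"])
            try:
                result = tool(step["input"])
            except Exception as err:             # e.g. network down: backtrack
                self.memory.append(("error", str(err)))
                continue                         # re-plan with the error in memory
            self.memory.append(("observation", result))
            if step.get("done"):
                return result
        return None

    def plan(self, task):
        # Placeholder for the model call: a real agent sends the task plus
        # its memory to the LLM and parses a structured action from the reply.
        return {"tool": "search", "input": task, "done": True}
```

The point of the sketch is the shape, not the details: specialized agents (researcher, writer, distributor) are just instances with different roles, memories, and tool sets.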
SPEAKER_05: So, agentic AI. If I want to access it, if I want to work with it, what do I need to do?
SPEAKER_00: For an individual user, it's rather straightforward. The quickest way of starting out, and it sounds a bit technical, is with command-line tools. Some of you may remember this: it's this box where you type in actual things that the computer is supposed to do, like enter a directory or delete files. But don't worry, because inside that terminal the application, your agent, will start, and that's where you interact with it. So that part again feels quite similar to the chat window. But the difference is that, since this is working on your command line, it will have an actual directory, and it will have access to all of the files and data that are in there. It can do its stuff in that, kind of like in a sandbox. So in the end, you need to download that software and start it up. You probably need a subscription; there are only a few that give you this capability free of charge. Gemini is one of them, by the way. So if you have a standard Google account, you can try it out. You will hit token limits at some point, but you can try it.
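The "sandbox" idea — the agent can touch files inside its working directory and nothing else — can be sketched as a path check before every file operation. This is purely illustrative: real tools such as Claude Code layer their own permission prompts on top, and the class and method names here are invented for the example.

```python
# Sketch of a directory sandbox: resolve every requested path and refuse
# anything that lands outside the agent's working directory.
from pathlib import Path

class SandboxedFiles:
    def __init__(self, root):
        self.root = Path(root).resolve()

    def _check(self, path):
        # Resolve ".." and symlinks, then verify we are still under root.
        full = (self.root / path).resolve()
        if full != self.root and self.root not in full.parents:
            raise PermissionError(f"{path} is outside the sandbox")
        return full

    def read(self, path):
        return self._check(path).read_text()

    def write(self, path, text):
        self._check(path).write_text(text)
```

With this in place, `read("notes.txt")` works, while `read("../secrets.txt")` raises an error — which is the behavior that makes a single project directory a comfortable place to experiment.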
SPEAKER_05: So I downloaded and, oh, by the way, actually, I didn't want to forget: when it comes to Claude, that's a large language model from Anthropic, is that fair to say? It's actually our preferred LLM, because for everything that we need as a small boutique consulting firm, Claude is most educated and gives us the highest-quality responses. I just want to quickly note, because I try to do that in every podcast, that there is actually a European large language model, Le Chat by Mistral. They're based in Paris, and I really recommend using it for ethical reasons, and the performance is not too bad either. So these providers you can access through their websites, and you can actually download the agentic AI and work with it. And you just said, of course, that agentic AI has access to your files, your computer. So what is the downside of that? I remember, in preparing for this podcast, you said something that really struck me: we can't guarantee that it doesn't delete files that you don't want deleted, for example. So what is the safest approach? Without, of course, being under the illusion that anything is 100% safe, what would you recommend when you actually want to start experimenting with agentic AI?
SPEAKER_00: The way it works in this scenario with Claude Code is that you have the large language model running on a cloud platform. So the model stays with Anthropic, who make Claude. They innovate on the model, and you get access to the latest model, hopefully always. That's another thing we talked about a few weeks ago: a potential risk, of course, for differentiation. But for now, the playing field is quite level, because everyone has access to the most powerful model once they have a subscription. Your data resides on your machine or on your system with this approach, which is good, because you're not giving it away to the cloud. It will only be transiently sent to the model, and the model will typically not store it anywhere. So that's quite safe. But on the other hand, to be productive, you're giving the agentic software access to data on your machine or on your network. In the easiest use case, you're constraining this to one directory, and that makes it kind of easy. So if you're doing something like project management or content creation, you can have that in a directory and you can easily make a backup. But now there are more and more agentic tools coming out that have a lot of skills, that can access your calendar, email, maybe your ERP system, maybe your order management, maybe your bank account, right? And there are also AIs coming out now that have skills to buy stuff on the web for you.
SPEAKER_05To book your holiday.
SPEAKER_00To book your holidays, right? Of course, as you said, there's a risk. Basically, even running any software on your machine carries a risk of exposing it. So how do you prepare? What can you do? I think the important bit here is that at some point you will have to trust the software you use. That's true for almost all software, right? But how do you first get there? I think you get there by having good test data, to make sure that the software actually serves your purpose and that nothing goes wrong in these first tests. For an individual, if it's about your content creation, that's probably straightforward. If it gets to trying things with your email account, it already gets difficult, because I don't think we have good mechanisms to reset your email to two days ago. So that's already a bit of a problem. If we get to the organizational level, that's where I think things get exciting. And that's actually one of the things I really recommend to any organization going down this path of experimenting with AI, but also with other types of software. The big question is: how quickly can you verify that this new tool that's coming out is an advantage for you, that it's actually better than what you had before? In my experience, even in traditional software projects this was always a huge problem. For example, I did a project in legal tech, for a large legal department. We tried to replace a tool that was 15 years old and wasn't maintained anymore, and we had to identify the test cases that would let us find out whether the products currently on the market worked for their specific purposes, like protecting their brands and the specific type of work they were doing. It took a couple of months.
But at some point, we had a person with the right test data who was then incredibly fast at evaluating new products as they came out. And this is the kind of situation that you want to be in as an individual, but also as an organization. More and more, I think this test data and these scenarios are almost a little bit of your own IP, because they really reflect what your organization does and where its advantages are. And they will help you adopt new technology quicker than your competitors. Because as we just said, the models are pretty much the same; everyone has access to the same power of a model. So it's two questions: do you use it smarter than the other organization, or are you able to access the new model faster than them? If a new model comes out, can I just plug it in? Will some things break that the other model, for some reason, handled and this one doesn't anymore? Those are the questions that you need to be able to answer. And for that, you need the test data and test cases. You probably also need some sandbox systems and a good backup.
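What Gregor describes, a fixed set of test cases you can re-run against any new model or tool, can be sketched in a few lines. This is a minimal illustration, not a real evaluation framework; the test cases and the `run_model` stub are hypothetical placeholders you would swap for your own data and the model under test.

```python
# Minimal evaluation harness: a reusable set of test cases that can be
# re-run against any new model or tool to check it serves your purpose.
# The cases and the model stub below are hypothetical placeholders.

def run_model(prompt: str) -> str:
    # Placeholder: replace with a call to whatever model you are evaluating.
    return "trademark conflict" if "trademark" in prompt.lower() else "no issue"

TEST_CASES = [
    {"input": "Does 'Acme Glow' conflict with our trademark 'AcmeGlo'?",
     "expected": "trademark conflict"},
    {"input": "Is the office cafeteria menu a legal risk?",
     "expected": "no issue"},
]

def evaluate(model) -> float:
    """Return the fraction of test cases the model answers as expected."""
    passed = sum(1 for case in TEST_CASES
                 if model(case["input"]) == case["expected"])
    return passed / len(TEST_CASES)

score = evaluate(run_model)
print(f"{score:.0%} of test cases passed")
```

The point is the shape, not the scoring: once the cases exist, evaluating the next model that comes out is a one-line change.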
SPEAKER_05After our conversation in preparation for this podcast, I actually decided, since we have a couple of unused laptops in the business, to set one up where I just download a couple of documents from our cloud storage, put them on the desktop, and use agentic AI on just that laptop, without having all the cloud files on there. So in a very safe environment. For me it's quite easy, because, for example, I wrote a book. So it's very easy to test agentic AI by saying: listen, I want to create a platform where my clients can actually ask questions, a sort of chat LLM. Can you help me build one? The source is the book, and so on. I'll try it out and let you know how it goes. You really sparked my creativity around that, being able to just try it out in a very safe way. And Elke will do the same, because we are super users, but we don't come from a tech background. So for us it's really important to just try things out, see what's working and what's not working, and then talk to people like you when we hit the wall. Because you're right, the network is incredibly important.
SPEAKER_00Yeah, absolutely. And maybe, because we also talked about what leaders need to do: I think if you're an IT leader, a CIO, a head of IT, this is what you need to get your people to do, right? You need to give them environments where they can test these things out without fearing that they will break the organization, delete the internet, I don't know. That should be a focus: creating these safe experimentation spaces, and then having a parallel stream where, for whatever works for you, you think about how you could actually roll it out, right? One thing that we often run into is worries about IP with these cloud-based models. It can be a path to also use local language models, right? And that's another thing we can all try on our laptops. You will obviously find out very quickly that for agentic work, a model that runs on your own laptop is much worse than the cloud. But for smaller stuff like summarization, these small models that can run on a small server or a small laptop are actually quite good, also for other tasks. And that should be your second path in the strategy, right? So one is really: experiment somewhere where things can break. And then, for whatever is interesting, figure out how you can make it safe enough, from a regulatory perspective, from an IT security perspective, how you can make it usable. And then maybe it's not 100% of the performance of the cloud model, but maybe 80% is also really good.
SPEAKER_05And I think this is a great bridge into one of my sore topics when it comes to AI adoption in companies, because I actually welcome the Data Protection Act and many of the regulatory ideas in Europe. I know there's a lot of criticism, especially from the American supercompanies, around that. But I can relate to it, it makes a lot of sense to me. And it also makes sense to me that senior leaders of organizations, CXOs, are personally accountable for some decision making. There is, however, how should I put it, a real lack of ownership in this responsibility when I see so many senior leaders going the most secure and safe way in their perception, which is: we block all AI applications, we block everything available that is not right now in our IT infrastructure. We offer something like a company XYZ GPT, we offer something that is Copilot-based, and first, we believe we're safe, which is absolutely not true. There are a couple of plugins that apparently, even within the Microsoft infrastructure, can do a lot of harm, and did do a lot of harm, and are actually a gateway to hacking. The second piece is not really understanding that in doing so, I actually take away an opportunity for upskilling that is really critical for all of my employees. And not knowing the limitations of Copilot itself, of which there are a number. I'll just share what I know, and please correct me if I say something wrong, and please add to it. One of the limitations, of course, is that I don't have version control, so I might not be using the latest model. That's the first thing. The second piece is that these tools very often have a memory function, or they have access to our files, and I have to actively opt in or out. So if I don't remember that, I might actually just be using all the data that is stored in our clouds, in our files.
And all that I do is generate ideas or strategy or input around what I already know and what the model already knows. And that is a big problem, because I basically stay in my own bathtub and just move around water that I already have. So these are just a few of the limitations. Plus, even with Copilot, in most organizations that I work with I don't see a real learning journey or learning program for employees to actually use it effectively. It's pretty much: here it is, use it. I'm simplifying, of course. So, what are your thoughts on how leaders embrace technology right now, how they play it safe, in my words? And then, of course, the limitations around Copilot, for example.
SPEAKER_00Yeah. I agree with your view on simply using something like Copilot for a broad rollout, right? What happens is the tools get used to maybe help you write an email faster, or in a more elaborate style that might not even fit you. At some point, the style gets the same for everyone, right? We're all just using it to adapt to the AI style.
SPEAKER_05I mean, I have to say, when I read LinkedIn posts, I feel I have a detection radar for what has been written with ChatGPT. There are a couple of words that just give it away. There are a couple of framings, right? You always start with the negative, you say: but it's not that.
SPEAKER_00Yeah, and you delve into things.
SPEAKER_05You delve into things, yeah. Absolutely. Absolutely.
SPEAKER_00That's true. So I think a lot of the Copilot use that I've seen is about making emails nicer and then maybe making some fancier PowerPoints.
SPEAKER_05Transcripts of meetings.
SPEAKER_00Yeah, transcripts. That could actually be helpful. But I think the root of the problem is that emails and PowerPoints don't create innovation. And meeting transcripts are nice, but the core of the problem is usually that you have too many meetings. So automating away the transcription of a meeting that you then never read, it's like all these people recording meetings that they never watch. I don't think that solves our problems. You need to break out of that office universe. Office is great, it does its stuff, but if you want to use AI in a more powerful way, you need to get out of the office suite and have a broadly usable tool that works across all of the applications that you have, that you can use to ideate and to create your stuff, and that lives outside of your PowerPoint. So that's important. How do we get there? I agree with you; I can totally sign up for your statement that regulation is incredibly important. Yes, people are complaining about it, and yes, the AI Act was maybe a bit unspecific about some things initially, and that got a lot of lawyers nervous, because they're now waiting for the first court decisions, the first lighthouse decisions, before they can give you answers. And that's obviously a problem, because it slows us down. But you can quite easily see why regulation is important if you look at Minnesota right now, if you look at what's happening in the US with facial recognition and all of that. So clearly we need it. We want to have a safe environment. And I think it's also very important from a competition perspective. You mentioned ethics, right? My understanding is that economists agree you need a level playing field, and that means you need rules that apply to everyone.
And it's the same with social responsibility or environmental responsibility. You might have a brand like Patagonia, who are able to stand out with the things they do; they have a fan base and it works. But if you're just a company that spends more on that, at some point, by economic logic, you will be at the losing end of things, right? So you need to put the things that are important for us as a society into regulation, so that you have a level playing field. And I think that's exactly what needs to happen with AI. We should be adopting that globally, so I'm really hoping that we can get more traction there as well. I also don't think that this will in any way really stop our innovation; the models will keep getting better. And we do have, and I think that's also important to say, huge question marks as we move towards what people now call AGI, artificial general intelligence.
SPEAKER_05It's expected this year.
SPEAKER_00Yes. I think there was an interesting discussion in Davos, I think between DeepMind and Anthropic. Exactly. So you get the different perspectives. I think the term AGI is also used in slightly different ways, depending on whether you're pleasing your investors.
SPEAKER_05We're actually gonna link the recording of that discussion, that's a good one; we're gonna put that in the show notes as well. Thank you very much.
SPEAKER_00So the definition, or the perception, of what AGI is differs between people right now. But in any case, we are in an exponential growth of this technology and we are expecting huge steps forward. And there is definitely a risk there. A lot of people who are much smarter than me have identified existential risks in having an AI that's smarter than us, that's connected to the internet, that's able to basically hack into any computer system, and so on. And that topic is totally unresolved. There is AI safety research going on, but it is mainly pushed by the people who want AGI. That's also an important thing to notice: this AI safety research is not done by a government or the EU or some kind of neutral containment entity. They are biased. So there are big question marks, and that's why I think regulation is good. Of course, we never want the kind of regulation that just prohibits everything, but that's also not what's happening, right? We essentially have the possibility to do a lot of things, but we want to stick to our ethics.
SPEAKER_05Getting back to what executives need to do, and the risk appetite that they should be willing to take: I seriously think that they harm not just themselves and their colleagues. I really think that they harm the survival of their organizations.
unknownYeah.
SPEAKER_00Yeah, I agree with that. The technology is there, and organizations will use it. We've moved ourselves into a space where we treat compliance in the IT scene as something where you create a lot of paperwork but don't really solve the problem. It's the same with IT security. Instead of thinking about what your exposure is, you just install five more systems that track things for you and produce some paper, and then you think you're safe. I think this is the wrong approach. You need to understand what you're dealing with, you need to create the places to experiment, to really evaluate in a solid way, and then really understand the regulation, right? Because in my opinion, the AI Act offers a lot of possibilities to use AI in organizations. There are question marks around where the data moves on a hyperscaler, but you have that when you use Office. If you use Office 365, you have the same problem that you have with using OpenAI. And for some reason everyone ticks their boxes with Office 365, but not with the other stuff. That is weird. So in that sense, I think it's the responsibility of leaders to put this on their own agenda. What happens quite a lot is that you have this chain where no one wants to be responsible, right? If you ask your legal department, they will say: let's wait for the first two or three court decisions, and so on, and then you never move forward. So you need to make it a priority. You need to understand yourself what data is being processed. Where does your IP move? Does it stay with you? Where are the boundaries of these systems, and what is the actual risk exposure? What does an attack vector look like, what does the threat look like? Then you can make decisions and document them. And in my opinion, we have the business judgment rule.
And if you do these things, and you also see a possibility where your organization can innovate and be more valuable, access more customers, and so on, then I think you should be safe to take those decisions, right? But you need to make them on your own.
SPEAKER_05And as well, I think we need to come back to education. One of the big changes that we really need to make is to move away from assuming employees are effectively childlike and need to be managed, toward an adult-to-adult approach. I believe most adults are capable of making solid decisions if we give them a couple of simple rules. For example: your crown jewels stay within your IT infrastructure; you don't put them into an AI app, agent, whatever, right? But there is so much that you can actually do. And I think as long as people understand just a couple of really simple rules, you can actually use it, you can experiment with it, with quite a manageable downside. So what would be your simple rules for employees when it comes to using AI? How would you educate them?
SPEAKER_00It's probably not a question of two or three rules. There's a lot of talk about guardrails, right? Guardrails is this term, and I've heard and read it too many times without it being specific. I also assume that whatever these guardrails are, they will be changing quite a bit. So you need to put people into a state of mind, into a situation, where they feel they can really take decisions on an informed basis when the environment changes. And I think that's getting to the meta-level: I think we need to move from these traditional organizations to post-digital organizations. What do I mean by that? Technology is everywhere, and we need to put the human back into the center. We need to enable humans to take their own decisions and use digital as a tool, and not the other way around. For organizations and for leaders, this is quite often a big change, because it means breaking up the typical command-and-control structure, or predict-and-control structure, and moving towards something much more task-oriented. You want less organization by hierarchy, and more organization centered around tasks, because that's where you can also effectively map these digital tools. And you want to give people enough context about where the organization is going, what your values are, as you said, what your crown jewels are, and so on, that whatever happens, they are fast enough within their team to take those decisions, try things out, or come running to you really quickly, and not through a huge chain of hierarchy where it takes one month if there is a problem. I think that's the big learning that organizations, and especially leaders, have to make.
SPEAKER_05And what would you say is the skills gap nobody talks about, maybe because it's uncomfortable rather than technical?
SPEAKER_00Yeah, I think the skills gap is very much around how you organize as a team and how you perceive yourself. So maybe it's even more a perception thing than a skills thing: how you perceive yourself and your own value to the organization. Is that because of a title and a role? Or is that because of something that you're good at doing, and because you're smart? I think this is something that's really getting flipped around at the moment. I had an interesting meeting a couple of months ago. We were merging two companies, and we were talking about what the value proposition of the merged company would be one or two years ahead. And the agenda started with: first, let's define the roles. So what's going to be the role of every person in the room? And then let's talk about what the company will do, and then the value proposition, and then how we market that and how we brand it. I was quite shocked by that. I thought: okay, but how can we talk about the roles if we don't know what we do? But then another person, who was organizing the meeting, said: everyone needs to know in what role they're contributing to the conversation. For me, that was a huge realization of how differently people can think, right? And I think this needs to be turned around. Wherever you are in the organization, you want to get smart people in a room, and they look at the problem, and then they see how they, or AI, or other tools can contribute to the solution. For a lot of people, this means moving out of their comfort zone, because it's not a predefined role with predefined responsibilities. It's more about finding out what you're actually good at, accepting that maybe some tool might be better than you in one or two months, or maybe already is in certain aspects, and then finding out where you can shift and where you can add that value, as everyone likes to say, or have fun, or be productive in that new setting.
So I think it's really this flexibility of the mind that we need.
SPEAKER_05Absolutely. I subscribe to that 100%. I love that. And you said something quite interesting to me the other day as well: that in the future, a lot of what we need to do actually relies on good old-fashioned data and knowledge management. Could you expand on that a little bit more?
SPEAKER_00It's a very good point. We talked before about this idea of trying to really identify what differentiates your organization, what your IP is, what makes you different. If you are trying to get agents to handle tasks for you, you will need to explain to them how you're doing it and why you're doing it. Ideally, you can give them data and reference cases, right? And if you look at the way knowledge management usually is, it's always been like: yeah, we do that, we have a Confluence, and something is in there, and you search and it doesn't work, it's all outdated. And of course, an AI has that same problem. But you need to think of an AI agent as just like an employee. There's this quote from Jensen Huang of Nvidia, I think from the second half of last year: IT will be the HR of AI agents. And you can think of it in the same way. You will need an onboarding process for your agents, and if they get a good onboarding, they will be really good specialists for these tasks. If they only find reference cases, procedures, and instructions that are totally outdated, they will have the same issues as a human coming in and not finding them, right?
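To make the "onboarding" idea concrete: many agentic tools, Claude Code among them, look for a markdown context file in the project directory and read it before acting. A hypothetical example of such a file (the name, paths, and rules below are purely illustrative, not any product's required format):

```markdown
# Agent onboarding: who we are and how we work
- We are a boutique consulting firm; client names are confidential.
- Reference cases live in ./cases/ — treat anything older than two years as outdated.
- Writing style: concise, no jargon.
- Never modify files outside this project directory.
- If instructions conflict or data is missing, stop and ask instead of guessing.
```

Kept current, a file like this plays the role of the onboarding handbook Gregor describes: the agent's first source of context and constraints.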
SPEAKER_05And this just reminded me of a comic that I saw a few months ago, which went: in order for AI to take over, we need users to be specific and define what it is that they want. Dot dot dot. We're gonna be safe. We really have to be clear about what it is that we want, what the data is, what the quality of the data is, how we warehouse the data, and what we do with it. And then the second piece that you just said, which I really think is important to understand: there is a number of organizations that we know quite well that are actually merging the HR and IT departments into one organization, because that is, in their view, the best setup for the future, for thinking about connectivity between humans and technology, or humans and machines or AI. There's actually a lot of crossover, and I believe that that's a really solid strategy. Not for every system; we looked at it for a number of different companies and decided against it in two cases, but for most it actually does make a lot of sense, because one, one system can nurture the other, and two, it just helps the strategic discourse. Because for me, both HR leaders and CIOs or chief technology officers actually need to step up in the strategic process. They need to start to drive strategic thinking and innovation. And they're not used to that, because they used to be, maybe, as you say, the guardrails, or the guardians, or the entities that execute and maybe give input into strategy, but not the ones that drive strategy. And I think there is a number of roles that really need to change when we think of boards, executive teams, and CXO roles, and I'd be quite interested in your perspective on that.
SPEAKER_00Maybe a final add-on to your statement first. I think this idea of merging the two fits this post-digital paradigm really well, and I have some ideas that I can share on how the roles should evolve. The knowledge management part that you mentioned might actually connect back to your skills question a little bit. We have knowledge management on two levels. One is the organizational level: giving it priority, having systems where you can actually find stuff, version stuff, flag things as outdated. That's important. But also as an individual, it's really crucial to be able to formulate what you need and what the constraints are in an efficient way. And this has been an issue in software forever, right? It's these requirements that everyone talks about, and there are things like user stories. This is something that's definitely not outdated when you're talking to AI agents. Even though you might not be programming the software yourself, you might have an agent that just does stuff for you, the way you need to describe what you want and what the constraints are follows pretty much these patterns. That's also due to the fact that these things were trained on software projects: those agents are all trained on publicly available data, and a lot of that comes from software projects, right? So they know that format, that way of speaking, that way of describing things. So that's a very important skill for the individual, but also for the organization to adopt. When we look towards leaders, I totally share your view. Traditionally, IT departments have often been kind of second row: they make sure the stuff is running, and when the CEO, or whoever in top management or the board, wants something, they deliver, right?
So someone draws up a strategy and then they scramble to deliver. You need to turn this around. I think IT must be a strategic enabler of AI. They need to open the doors so the C-level can get creative and come up with new strategies based on the doors that have been opened, based on the enablement they have. And I think it's similar for CFOs, for example. With the tools that we have now, CFO activity needs to adopt a software mindset. So instead of hiring two more people who do something in Excel, you'd rather hire half a software engineer and a couple of agents and automate things, right? Then you would be in a position as a CFO to work on predictive modeling: where is the organization going, what are some worst-case scenarios that we want to avoid, what are the paths that lead there? Let's not go down those paths; let's set up some alerting systems for when we're moving in that direction, for example. Be in a position where you have all of that data at your fingertips, to actually provide the data for more strategic planning, for input into the strategy. Instead of being in this backward-looking position where you're always trying to catch up with the numbers of the last months to close the month, right? That potential is there, but it also requires a huge mind shift in the finance office.
SPEAKER_05Absolutely. And I think it generally requires a huge mind shift, because we need to move from traditional strategic thinking and strategy drafting to scenario thinking and the application of foresight. And I would like to talk to you about this topic as well, because I find it quite moving. I think the traditional way of crafting a strategy that looks three to five years ahead, maybe incorporating some scenarios like a base case, a worst case, a best case, and some KPIs, is not fit for purpose any longer. And one thing that all futurists and all foresight institutes that I know of have in common is that they say we cannot speak of the future in the singular, because the future is not written. We have to think of the future in futures: there are multiple possible futures, and it's our job to do two things with that. One, to think in these different dimensions and different futures, try to imagine what impact they are going to have on us as a business and how we can prepare for that, and which tracks we may need to start now to avoid negative impact. And the second piece is that we really need to become a lot more engaged in shaping and crafting that future, because otherwise it will happen anyway, but we will not have been in a position of agency to actually try to steer towards one of the futures that we prefer. I do believe that this is quite crucial and quite important. And from my experience in working with senior executives, this is exactly the mindset shift that is needed, and at the same time where I feel that a lot of executives have a number of limiting biases. I'll just share three of those biases. I actually mentioned them in a previous podcast.
But the three biases that get in the way, and this is not just my opinion, it's also the view of the Copenhagen Institute for Futures Studies, which I can really recommend, because it's an NGO that has been around since the 60s, does incredible foresight work, and offers a lot of its data, information, and insights for free, so it's definitely worth checking out. They say that three biases really get in the way of us imagining this unpredictable growth potential. The first is the status quo bias, and this is definitely one that I see in many boards right now, where I try to wake people up. It means that we believe the way things have been is more or less the way they will turn out. So, for example, we create metaphors around the industrial age, or social media, or the development of smartphones, and we believe that because we lived through that, we have some relevant lived experience for what's to come, which is of course a limiting bias. The second is the optimism bias. I hate to say it, but I'm not terribly AI-optimistic anymore. I'm actually quite worried, because in my view we're not prepared. I think we could use AI, just like money, in a very positive way. Money and AI, for example, are neutral at the starting point, and then they become one or the other: a force for good or a force for bad, and it's on us to shape that. But the optimism bias basically says something like: in the end, it will all be fine. And I don't subscribe to that. The third one is confirmation bias. And I think this is the one I see predominantly in boards, because board members are trained to have an opinion, and they have a confirmation bias that their opinion is right. They also use that confirmation bias to scan for news that actually fits them, or, you know, when they attempt to work with an AI. Or in your example, you're absolutely right.
In many organizations, they tried different use cases that were unsuccessful, and then you believe: well, it's not gonna hit us so hard, it will be a while yet, or whatever other excuse you have. And before I really start preaching about why we should park these biases, I'll hand it over to you, because I would love to hear your thoughts on foresight thinking, scenario thinking, imagining different futures, and how to prepare for them.
SPEAKER_00: That is a big and important topic. Absolutely. First off, there's plenty of evidence that forecasting is almost useless; plenty of books by smart people have made that case. So why do we still forecast? Why do we do it?
SPEAKER_03: And I'm a former financial analyst. I used to forecast the shit out of things, sorry. But it's absolutely true: it's useless. Yeah.
SPEAKER_00: So, no one predicted the 2008 crash. Why do we do forecasts? One very easy explanation is that we like to have something we get measured against, and we like to have assumptions, so that when it doesn't work out we have good excuses. We can say this assumption wasn't met, and that's why we're not where we wanted to be, and then everyone is fine because we have good arguments for it. Maybe that's one operational reason. Maybe it's also a generational thing, because if you look at the average age on the boards of larger companies, you will find people who actually lived through a couple of stable decades, who made their fortunes in an age of stability. I'm assuming it might be different if you put a couple of 20-year-olds in those positions, but that's just a guess. Getting back to this prediction and forecasting thing: we are in a very volatile world, and we have mathematical proof that a lot of what we're looking at is governed by non-linear equations. It's impossible to forecast more than about two weeks out. But we do have a way to tackle that, the same one used in weather forecasting, in pandemic modeling, and elsewhere, though hardly in economics, which I think is a structural problem in that science: ensemble forecasting. That means having different scenarios, because you don't even know the parameters of the present correctly. You don't know the buying intent of everyone; you have no insight. So the only way to make a valid prediction is to actually imagine these different scenarios and see where the whole bulk of the scenarios takes you. Does it all go in the same direction, is it very broad, or do you have clusters? That's an interesting thing to look at.
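The ensemble idea Gregor describes can be sketched in a few lines of Python. This is a hypothetical toy model, not anything from the episode: because the "parameters of now" are unknown, each run perturbs both the growth rate and the volatility, and the output is a spread of futures rather than a single point forecast.

```python
import random

def simulate_growth(start, rate, noise, periods=12):
    """One scenario: compound growth with random monthly shocks."""
    value = start
    for _ in range(periods):
        value *= 1 + rate + random.gauss(0, noise)
    return value

def ensemble_forecast(start, runs=1000):
    """Run many scenarios under perturbed assumptions; report the spread."""
    outcomes = []
    for _ in range(runs):
        # We don't know the true parameters of "now", so we perturb them too.
        rate = random.gauss(0.01, 0.005)   # uncertain monthly growth rate
        noise = random.uniform(0.0, 0.03)  # uncertain volatility
        outcomes.append(simulate_growth(start, rate, noise))
    outcomes.sort()
    return {
        "p10": outcomes[int(0.10 * runs)],
        "median": outcomes[runs // 2],
        "p90": outcomes[int(0.90 * runs)],
    }

result = ensemble_forecast(100.0)
print(result)  # a range of futures, not a single number
```

Looking at the gap between the 10th and 90th percentiles, or at whether the outcomes cluster, is exactly the "where does the whole bulk of scenarios take you" question from the conversation.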
But operationally, I totally share your belief that we are now at a point where this three-year, five-year forecast doesn't have much effect. There are so many things happening right now: exponential growth in AI tooling, still a huge push for digitalization, market shifts under way. It will be really hard to do that top-down. So what I really think needs to happen is a shift in mindset. There's an old book by Andy Grove called Only the Paranoid Survive, where he talks about his time at Intel and how the most important piece of information tends to arrive at the C-level months after it was relevant. You need to prepare, but he says: prepare and forecast the way the fire department does. They also don't know where the fire will start, but they make a plan so that wherever it starts, they can be there quickly and they know what to do. And that's the defensive view. There's also the view that you can put yourself in a situation where you actually benefit from uncertainty and from things happening. I think that's the exciting piece: how do you not only make your organization defensively resilient, but put it into a state where it can actually benefit from new things coming in? That can be the cool new AI model, or something different, like a new market opening. I don't know what it is. And I think that is where you need to change your organizational paradigm. I mentioned this Jensen Huang statement about IT being the HR for agents. I looked a little into his views on how he thinks work should be organized, and it's much more about having groups of 40 to 60 people focused on certain tasks than about having roles.
And I really connected that with a view from the book Reinventing Organizations, where there is the teal organization: in the end, an organization that is very autonomous and totally self-organized, where there is no central steering, but each member has a clear picture of where the organization wants to evolve and can thus take decisions within the smaller teams that benefit the whole. I think there's a lot of similarity between those two views, and I really do think this is the paradigm shift we need. And yes, we will still do forecasting, but it's probably more interesting to do it around, for example, the things we absolutely want to avoid. I think that's also something Berkshire Hathaway has been doing quite successfully in its modeling: instead of trying to forecast the good things, the growth path, see where you absolutely don't want to go and look for flags that let you avoid it, because that can be modeled much better. Put yourself into a state where you at least understand the present very well, so everyone can take reasonably good decisions about tomorrow.
SPEAKER_05: That is a beautiful closing statement. Thank you so much for that, Gregor. I would like to add just one more thought. I believe leaders additionally need to embrace the ideas around anti-fragility. One element of anti-fragility is that we have to build redundancy into our processes. It's not about having the most efficient process, where I have just enough supply and just enough product to produce and sell. We actually need to think much more in terms of discretionary capital and savings, so we can invest and pivot. And we need to define these early signals, monitor for them, and actually pick them up as soon as we can. And that is all we have time for today, so I really thank you for joining me in our podcast studio.
SPEAKER_00: Thank you very much for having me. It was a great pleasure talking to you. Thank you for the conversation.
SPEAKER_05: And thank you, everybody, for listening to this episode. We're really looking forward to hearing your feedback and your thoughts. We hope you enjoy the additional resources, and we thank you very much.