Pioneers Podcast by Lyreco
The podcast from Lyreco's innovation team, exploring the Future of Work.
In each episode we talk to a pioneer of the future of work, exploring the themes and trends that will shape the workplaces of tomorrow.
Are We Building Tools or Outsourcing Thinking?
AI is already in our lecture halls and our workplaces, but the real question is whether we are using it to learn faster and build better, or just to cut corners more efficiently. I sit down with Professor Dr Vincent Ginis (VUB Brussels, visiting professor at Harvard) to unpack what he is seeing on the ground as universities rethink assessment, companies scramble for “AI strategy”, and everyone tries to work out what responsible adoption actually looks like.
We get into the messy reality behind the headlines: why focusing only on cheating misses the upside of personalised AI tutoring, why continuous evaluation suddenly becomes feasible, and why some of the healthiest learning environments may be the ones with clear “no screens” time to protect attention. Vincent shares a useful lens: AI puts both good and bad behaviours on steroids, so the challenge is designing systems that reward real understanding rather than outsourced cognition.
From there we move into workplace AI adoption and AI governance. We talk about Goodhart’s law, the traps of measuring the wrong things, the tension inside IT departments between security and experimentation, and why heavy-handed guardrails often produce shadow AI anyway. Vincent makes the case for internal champions, high-variance experiments, and building new products as the gap between idea and execution collapses. If your organisation is stuck rewriting emails, this conversation will help you aim higher.
Subscribe, share the episode with a colleague, and leave us a review, then tell us: what is the most genuinely valuable way you have seen AI used at work?
Find out more about the Future of Work -> www.future-of-work.eu
Welcome And Guest Context
Marc Curtis: Hi, welcome to the Pioneers Podcast. I'm Marc Curtis. Today I'm talking to a guy called Vincent Ginis. He's a professor at VUB, and he's also a visiting professor at Harvard. I won't go on about it, because I'll ask him to introduce himself. He's such an interesting bloke, though, and honestly, I've chatted to him before and we just ended up chatting about so many different topics. So I think there's a very strong possibility that things might go a little bit off topic. But in terms of what he knows about AI, how he thinks about adoption, how he thinks about the impact on education: absolutely fascinating. So I won't rattle on any more. Here's my interview with Vincent Ginis.
Marc Curtis: Vincent, so nice of you to come in today. Thank you so much for joining us. I'm going to introduce you briefly, but given how long your journey to where you are now is, I'm not going to go into it in huge depth; maybe you can explain a little bit. But effectively, you are a professor at VUB in Brussels, and you're the rector's academic commissioner for AI and data. Which I assume is really long on the door, right?
Vincent Ginis: I don't have business cards, so I didn't have to update that.
Marc Curtis: There's not enough ink in the world for that title. But that effectively means you're the senior voice within VUB on policy, governance, and how AI is being deployed across the university. You're also a visiting professor at Harvard, which I think is fantastic, and maybe you can talk a little more about that later. And although you're focusing much more at the moment on AI, how it gets rolled out, and some of the policy side, your background, as we spoke about before, is in mathematics and information theory, and you actually did photonics for your PhD; I remember you telling me about that. As much as it would be lovely to talk about photonics, I think we probably have to stick to one subject. But as I say, really good to have you here. Maybe you could just tell me briefly how you made that transition from physics, effectively, to AI.
Vincent Ginis: Yeah, sure. First of all, a pleasure being here; I'm looking forward to this conversation. So, let's see, where is the most organic point to start this transition? I imagine it was 2015. I had just completed my PhD here in Brussels, which was in photonics indeed, in applied physics. Then I got a postdoc to collaborate at Harvard in the group of Professor Capasso. It was in my first or second week there that some of my very brilliant colleagues told me: you should have a closer look at AI, at what's happening there. I knew AI was a thing, and I kept up with it in the popular media, but I wasn't really digesting the most recent papers and blog posts. That was the time of what I would call the previous AI wave, the deep neural networks wave, which all of a sudden also made huge jumps in capabilities that weren't really predicted. At that time some of my colleagues were telling me that this AI was probably going to have a huge influence on our work in physics and photonics, so it would pay off to pay a lot of attention there and build more expertise. So in my first year there, despite the fact that I had gone to work in a physics lab, I was also picking up a lot of the basics of AI and of computer science. And from then on, I would say, the percentage of my time that I was focusing on AI grew every year, super-linearly. Looking back on it, I have the impression that what happened to me was happening to the entire field of physics.
Obviously, not everyone is working with the same tool, but when I look back now and talk to colleagues, almost everyone is working with these tools in one way or another.
AI’s Biggest Impact On Education
Marc Curtis: Right, and I think we'll probably dive into that a bit more. Clearly it's having a massive effect in education, and it's having a massive effect in the workplace. And this is where, as we talked about before when we met, there are a lot of parallels between the impact it's having on, for example, how people learn and their journey through the education system, and the impact on businesses, because effectively it's a parallel route: you go into work and you learn stuff. I guess maybe it's a little more acute in the educational arena, but where are you seeing the biggest impacts at the moment from an educational perspective, and how does that translate into the workplace?
Vincent Ginis: I think one of the biggest misconceptions out there these days, or it's not entirely a misconception, but it's at least taking too much energy and attention away from people, is that obviously AI can be used in a bad way in education. If you give traditional assignments to students, letting them write a couple of essays, and they don't have ownership of that task and just outsource it, then obviously something's wrong with that evaluation. That's the big elephant in the room, and everybody in academia is thinking about ways to rethink how we evaluate, what we teach, and how we teach it. So that entire process is now in a lot of turmoil. Unfortunately, the attention that goes to the bad effects of AI is taking a lot of attention away from the extremely positive outcomes that these personalised tools can bring to education. I like to spend at least the same amount of my cognitive capacity thinking about those. And one of those is this: historically, in the educational sciences, when you want to have an effect on how people learn, it has been shown that by far the best thing you can do is personalised tutoring. That's the gold standard of teaching, in terms of effect sizes on how people learn something. Obviously, it's just extremely expensive. But now we're coming gradually, and I would even say not gradually, we're coming very, very fast, to the point where that personalised tutoring can become possible for everyone. So imagine that you teach, and everyone who's struggling at some point, or at different points in the story you're telling, can get personalised feedback on it, can get an immediate resolution of a question they have.
Those things will probably push our education to levels that we cannot imagine now, and that makes me very excited. The other thing is that we can now, as teachers, start thinking about more continuous evaluation, things we couldn't do before because the groups are too large, or because you couldn't spend all that time correcting everything. With trusted tools, you can start doing these things. So yes, I do think education is going through a rough time, because those who do not adapt have evaluations that potentially do not reflect what students know or what they can do. But those who do adapt are going to offer educational services that were simply impossible just a few years ago.
Marc Curtis: Yeah, so if I hear what you're saying, it's that if it's seen as an augmentation tool, something to enhance the more traditional methods of education, then it can benefit the students and help the teachers. I'm interested, and a little bit struck, and I don't know whether you saw it, but there's a clip going around on LinkedIn at the moment, and various other channels, of a neuroscientist, Dr Jared Cooney Horvath. I don't know whether you've heard of him. (Vincent: I haven't seen it.) Fascinating thing, and I'll share it with you afterwards. Basically, he appears before a Senate committee in America and he's got this research. His whole thesis is that since the introduction of digital technology in the classroom, so not in higher education necessarily, but in the classroom, since it was introduced around 2010, there's been a correlation between that and cognitive decline. His whole thesis is that you don't learn through digital tools. Now, clearly that links somewhat to the use of AI. And if I had to be critical of linking that to AI, you could say that if it's used as an augmentation rather than the delivery method, then maybe it's not a problem. But do you think there's a danger of cognitive decline with AI?
Screens And The Attention Economy
Vincent Ginis: This might sound very weird, and perhaps a little paradoxical compared to what I just said, but in my lecture hall, for instance, digital devices are not allowed. And the reason is very simple. When we look back on the 2010s, thirty years from now, people will say: how is it possible that we had ourselves and young people immersed in this attention economy, of services constantly trying to hack your brain and hack your attention? And it's an unfair game, because obviously the bot that is trying to get your attention is trained on billions of versions of you, and you're just on your own. So obviously everybody is losing this game, and I see it in the lecture halls as well. Despite the fact that I believe most students come to the lecture hall with the intention of just taking notes and using their computer in an augmenting fashion, it's certain that after a while somebody breaks, or is the first one to break, and that takes attention away from other people. So it's going to be very, very healthy to have clear periods in your day when you're not interacting with screens or in the digital sphere. But that doesn't mean those screens cannot offer added value at other times. Obviously, you don't need them in my lecture hall, you don't need them when you're interacting with other people, you don't need them in a restaurant, for instance. But I compare it a little with this: the internet is both Facebook and Wikipedia. And obviously Wikipedia is helping my research; it's helping me when I'm learning something. It's probably one of the best gifts we ever got from ourselves, actually. It's a gift from humanity.
Marc Curtis: It feels like the purest example of the best of the internet versus, potentially, the purest example of the worst of the internet.
Vincent Ginis: Well, I don't want to single out Facebook, but yeah.
Marc CurtisUm actually Facebook's looking pretty almost quite tame now in terms of but yeah, we'll take your point.
Vincent Ginis: But there you have this beautiful example of screens that, by themselves, are not necessarily bad. I mean, there are obviously also studies about the light they emit, and you shouldn't use them when you're in bed, and so on. But there is a lot of information there, well-curated information, if you look for it. And AI tools actually reflect the same dichotomy. They can be very, very bad at attacking your brain, but they can also be extremely good at helping you out with whatever you're not understanding. I've noticed it with myself: over time, while you get better at talking to them, they also get better at figuring out who you are. If you ask a question and you want to understand something, well, understanding something means something different to many people. Essentially it's always a translation process: you want to translate a concept that you don't know into a set of concepts that you do know, and make that mapping. These tools are just incredibly good at that. And obviously I've seen the examples of bullshitting and hallucinations, and I've been on LinkedIn as well; I've seen people using them in bad ways. But that doesn't mean you cannot use them in good ways. It's unfortunate that this nuance is so often lost.
When AI Becomes Cognitive Outsourcing
Marc Curtis: Yeah, I completely agree. One of the challenges I see, and obviously there's the content creation side, but in terms of learning, or interacting with data: yes, I openly admit I use Claude and various other platforms, and I find it exactly as you say. If you want to understand something, getting it to explain it to you is the equivalent of talking to somebody who knows what they're talking about, and that's a conversation, which for me feels like a more human way of ingesting information. But where I see it failing, and I'll admit I've used it in this way as well, is when you're in a meeting, or you attend an event, and you just record what's going on, and then, without engaging with the content at all, you put it into AI and ask AI to give you something. And then the next step is: ah, do you know what, I don't even need to read the AI summary, just create me some content. At that point you've gone from a useful tool that helps you understand something to effectively outsourcing cognition.
Vincent Ginis: And unfortunately, as often with humanity, it goes like this, right? We first have to figure out the extremely bad use cases before we arrive at the grown-up perspective and realise which ways of using it are good and which are bad. One of the memes that helps me a lot to think about AI is that it puts both the bad things and the good things on steroids. Things that were already happening are just happening on a larger scale: people who were doing good things are now able to do them on a bigger scale, and people who were cutting corners can now cut corners more aggressively. So whenever I hear an example of something outrageous happening when somebody uses AI, where everybody goes, come on, what is happening here, I try to ask myself: okay, but in what perhaps hidden ways were similar things happening before, which we didn't pay attention to because they were on a smaller scale, or we couldn't see them, or they didn't have a name? And I do think that many of these examples of cognitive offloading are not necessarily new. They just have an umbrella term now, and a single way of doing it. To put it bluntly, I'm pretty sure people also used to not prepare for meetings, not do their work the way it should have been done, or ask someone else to do it for them. And a similar thing, sorry for rambling on this, is true for assignments at universities. Obviously, ten years ago people were also cheating. Students were asking their friends, hey, help me out here, or asking family members, or paying for it. There was an entire industry revolving around ghostwriting.
Those things were happening on a smaller scale, but they were happening. So, if you really want to be an optimist about this, you could say that AI is actually bringing all these bad use cases to the forefront, so that we have to figure out a solution to them and cannot just ignore them because they're happening somewhere in the margins.
Workplace Change Beyond Efficiency
Marc Curtis: And I guess the same, if we're being optimists about it, which I try to be sometimes: what we're seeing now, as a response to the last ten years of digital media, the attention economy, social platforms and so forth, is actually a bit of a swing back to people recognising the value of face-to-face, the value of human connection, the value of turning machines off. My wife found an old iPod and ordered a charger for it. It's the old-style iPod with the big wide click wheel at the bottom. All it does is play music. And apparently, and we're not particularly ahead of the curve here, there is a huge market for old iPods, because people want to disconnect; they want to listen to music again without feeling the need for anything else. And I think Spotify is a nightmare for this. You play an album on Spotify, and there's a video playing as well; same with podcasts. I mean, we're recording this on video because Spotify likes to see a video for podcasts, but actually I don't consume podcasts like that; I just want it in my headphones. I feel like we could spend a lot of time talking about the educational impacts of AI, but just to bring it back to the workplace, because obviously that's what we like to talk about: how do you see some of these challenges surfacing in the world of work? One of the things I've spoken about in the past is this idea that we're moving away from pyramid-shaped organisations, where people at the bottom of the pyramid are effectively learning their trade, almost through an apprenticeship, doing the grunt work, the boring work that prepares them for middle management, and so on.
That's now changing, some are saying, to a diamond-shaped organisation, where fewer people need to come in at the bottom because those jobs don't exist any more. Does that chime with what you're seeing as well?
Vincent Ginis: Yeah, so I see the papers coming out, and also the anecdotal evidence that things are indeed changing on the job market. And depending on the sector you're working in, it's happening faster or not so fast. Obviously, over the last couple of months a lot of ink has been spilled about what's happening to the coding industry, right? It's very contradictory. (Marc: Sorry, which industry? Vincent: Coding. Marc: Right, okay, programming.) A lot of that has to do with how Claude Code is also reimagining how people code. So, okay, let me try to give a coherent answer to this. Whenever you talk about the work of the future, people tend to extrapolate, right? You see a thing happening now, and you assume a linear progression: whatever is happening now, there's going to be more of it in the future. I'm not convinced that's how things are going to go. I think we're entering a very weird transient period, and that transient could take a while, because this technology is also not stopping today. Adoption is lagging behind the frontier models, but the frontier models are speeding away, so that's a weird dynamic we're in. But I do think too many people are looking at what is happening from the perspective of: what am I doing today, and what can these tools do, and therefore, I cannot do anything any more. So if at a certain point the tools can do 80 or 90 per cent of what you're doing, a lot of people end up in a kind of, it's not even weird, it's a very natural position where they're afraid of it or trying to ignore it. And then there are some people who turn cynical about it: ah, it's all over, right?
All these internet memes. And I think we're vastly underestimating how much room there is at the top. That's a play on 'There's Plenty of Room at the Bottom' by Richard Feynman, who predicted nanotechnology with that paper. I think we're underestimating how much value can be created in society and how much added value we can bring. It's not a zero-sum game. It's not that we've reached a saturation point of value to be created, and therefore, if there's now another player creating that value, we're out of the game. I've seen many examples, already happening right now, where this is absolutely not true, where, if we had seen what is happening now even a year or two ago, we would not have believed that one human, with just the tools at their disposal, could bring this value. And that is the other perspective, the augmentation perspective, obviously, but the one I appreciate much more: where people are forced, or challenge themselves, to rethink, okay, given that there are so many options at my disposal. And I quite like the tagline, I don't even know whether it's from OpenAI or Anthropic: you can just do things now, you can build things. The limitations we used to have, either on a cognitive level or on a development level, many of those have actually disappeared. So the thing that limits many of us now is creativity, and just wanting to do it: agency.
Marc Curtis: I guess the parallel there is that every time a new medium is created, the previous generation will look at it and say: well, it's not real art, you're not being creative. With music, for example, I remember in the 1980s when synthesizers became popular, or sampling became popular, everybody said: well, they're not really creating anything, they're just rehashing something. But now, nobody would argue that Kraftwerk, for example, and God, that shows my age, nobody would argue that they weren't creative. They were understanding and utilising a new technology in a way that nobody could have imagined. It didn't devalue what had come before, but it lowered the bar for entry. So people who would potentially never have played a violin could actually create an amazing piece of music. And I guess the same is true in other worlds.
Vincent Ginis: Yeah, and I think at some level artists are the profession that is trained to constantly reinvent themselves. So I think the general workforce is going to have to adopt more of an artist's mindset. What is happening, how do I reinvent what I'm doing, what is my purpose, what do I want to bring? Which is also a thing an artist does, right? They don't just work because work needs to be done; they often start from a purpose. And I think humans are going to re-evaluate that on a larger scale.
Goodhart’s Law And AI Metrics
Marc Curtis: I'll try and segue nicely, but I also wanted to talk about strategy within companies, and I guess this speaks to that a little. When companies think of strategy, quite often they're actually looking at compliance and policy; they're setting boundaries and so forth. They're not necessarily looking at where they want to get to and how AI can help them get there. What they're looking at is: what is AI doing in the workplace now, and how can we control it or put guardrails around it? And that links very nicely, I think, to how it's measured in the workplace as well, so it turns into a conversation about productivity and efficiency. And I know you've written a bit about Goodhart's law, the idea that a measure that becomes a target stops being a good measure, and so forth. So there is a question in there, I promise. How does this look for large organisations? Where are the problems for organisations trying to implement a strategy that actually looks more like a policy, and then measuring it on things like productivity and efficiency?
Let Champions Experiment With AI
Vincent Ginis: There are a couple of things that come to mind. The first: I would absolutely refer everyone to a wonderful essay that Ethan Mollick published in The Economist a couple of weeks ago. It was a little polarising, but it was titled 'The IT Department, Where AI Goes to Die'. He beautifully fleshed out the tension, the difficult point, where the IT department on the one hand is obviously the natural place where AI should be embedded and where all these tools should flourish and grow, but at the same time is also the place where people want to cover security, make things measurable, and make plans for how things are going to evolve. So there are contradictory forces within the IT department, which is why it's difficult for an AI programme or an AI policy to actually grow there. Another, higher-level problem we face is this: if you're leading a big organisation, and I've talked to many CEOs about this, many of them actually see the value of AI. It's actually quite surprising that at the highest level of organisations I often see people with a more clear-cut vision on AI than in the places where you would imagine people work with AI day to day. That's an interesting discrepancy. But they immediately feel the urge: we should do something, we should adopt AI today, and we should measure it, because if you're not measuring it, you don't know whether value is being created. And again, there's this weird thing happening: if you set up systems that are very measurable, you're actually limiting the creativity in how people can use them.
So I like to think back to one of these rules, probably also from Ethan Mollick, I'd have to look it up: in this day and age, if you're experimenting with AI and a significant percentage of your experiments aren't failing, then you're just under-targeting. (Marc: You're using tools at that point.) Yeah, you're not experimenting if everything is working 100 per cent of the time. That's a mindset I don't see enough of in people's workplaces. And I've seen the following too often, and it makes me a little sad about AI adoption, honestly. Leadership is excited about AI. They decide the company should urgently adopt AI. They look at what's on the market: where do I get a one-stop shop of people coming in with briefcases and telling me, this is how AI works? Then that one-stop shop comes by and brings something that is useful, but not necessarily the optimal solution everywhere, basically because that optimal solution does not necessarily exist yet. And the other part of the equation, which is much more difficult to solve, is that given we have AI, you probably don't want to keep doing everything the way you've been doing it, and that's also not something that just rolls out of an AI programme. Then, after a couple of months, you notice that people worked with whatever program was installed, but they're not super excited about it, because there was such a promise, and the thing does something, but it's not super well adopted. And it's obviously also no longer the best thing on the market, because that's the other problem: frontier AI models are evolving every two or three months.
So if you just adopted something: imagine you were a pure visionary CEO who said, two years ago, I know this is going to be great, and bought a program that was state of the art in 2023 or 2024. If you just stuck with that program, you probably have the worst AI in the country these days. So there's this early-adopter problem: if you're an early adopter and you don't adopt in a way that lets you keep evolving continuously, you're actually locking yourself into a bad product. Those are things that are also happening, and they leave people unexcited. So, unfortunately, the thing that is very unmeasurable, and almost sounds lazy when I say it, but that does work, is that you find critical adopters throughout your organisation, people who want to work with these tools and experiment with them, and you let them go. You let them experiment. Of course there should be some boundaries, but you don't start from the question: what can go wrong, and where is my data going to disappear to? You start from the question: what can this one person learn about my organisation and about the use of AI, and how can I let that flow through the organisation?
Marc Curtis: It goes back to your earlier point, though, doesn't it? When we think about the future, we just imagine it as being what we have now, but more of it. And if you're looking at your organization, you'll potentially view AI through the lens of: what do we do now, and how can that be made more efficient, or how can I reduce the cost base associated with it? If you're in a traditional sales organization, for example: how can I reduce the amount of legwork that our salespeople do? You're looking at it purely in terms of what we do now and how we can do it better using AI in the future. Which I guess is legitimate, because it is difficult for companies, especially global companies, to say, okay, we're just going to give a bunch of people free rein, and if they fail, fine, whatever. They will want to see benefits. So I guess a more pragmatic perspective would be: okay, you need to understand what tools are on the market and how they can positively impact the work you're already doing, but you also have to pay equal, if not greater, attention to what you were just saying, which is turning it on its head and saying, actually, if I give people access to these systems, we can learn something about how our company works. Which is a very different model, right?
Vincent Ginis: Absolutely. And all the data is pointing towards the second model, despite being higher variance, being extremely successful. This is the year we've seen the first few companies of just a few people getting to a billion dollars, right? Unicorns from one or three people. And that shows you don't need a huge critical mass adopting in synchrony. You do need your high-variance players in the company to reimagine what they're doing and to see what value there is, and that can work as an avalanche throughout your company. So I don't believe it's a competition between everyone having it and just a few people having it, but I do think you shouldn't constrain whatever people are doing with it. And the irony of it all is that this is a kind of organizational software that everybody can just buy for themselves, right?
Shadow AI Inside Big Organisations
Marc Curtis: That's the other thing, and I was going to get on to this: what happens when you start to put guardrails around it. Anecdotally, again, a lot of companies I speak to put in these compliance guardrails, they come up with a policy, maybe even a strategy. And the experience of the actual people working with those systems is, well, okay, I won't name any specific models, but I think we all know which ones people tend to have to use in the workplace. Yes. And then they just bring their own. It's BYOD, but for AI: BYOAI. You bring your own AI to work, and especially if you're doing bring-your-own-device as well, you can keep that relatively separate from work. Absolutely. So you end up with a shadow AI layer, and that's probably where some of the most innovative use of AI is happening, because people are not constrained, as you're saying. Yeah, they're just using it in ways that IT would never have thought of.
Vincent Ginis: Yeah, yeah. And obviously it's a sad situation that people have to do it like this, but I don't think it's necessarily the worst situation. Let me give an example. At our university, there is extremely sensitive data: the grades that students get, right? Those are things you don't want to end up somewhere public. I'm personally also very happy that there is an IT department that is very, very strict about, you know, we're not going to let agents, what is it, OpenClaw, run around in our organization and do stuff for us, because there is obviously that danger. So I'm happy there is an IT department; I'm happy I don't have to do this. But at the same time, it is healthy that on other surfaces, in other places where you know that data is very well protected or not even available, people can start experimenting. And so what you see is actually an interesting thing happening. Over the last 15 to 20 years, many organizations went through a centralized cloud adoption, where everything comes into the cloud and people don't even have their own local storage anymore, it's all cloud storage, right? But as a result of all the shadow AI adoption, you do see more and more work actually coming back to the computer, because it's very difficult to connect all these recent tools to the cloud environment. There are legal and practical difficulties to overcome there. So you do see a slight return to local, non-cloud computer work, and it's not really clear where this is going to go. I also noticed this for myself. I used to be completely free about whether I had my computer with me or not.
I could always get to all my emails and all my documents, and now it's like, oh no, my documents are on that specific computer, and I can get to them from my phone, but it's not as easy as it used to be.
Scaling Pilots Without Killing Innovation
Marc Curtis: Right. So there are some unintended consequences around people using these things as well. I'm interested, too: you mentioned adoption, and the gap between policy, strategy, and what's actually happening. One of the topics you often dive into is the fact that with complex systems you can't scale in a linear fashion. And AI is a great example. Well, we say AI, and even that is a bit of a misnomer, because AI is just an umbrella term for an enormous range of technologies, and even if you're only looking at LLMs, there's a whole range of different ways they work as well. But where do you sit on that? Going back to this idea that you've got people experimenting, you're potentially identifying champions in the workplace who are perhaps a little more curious than other people. Not everybody wants to play with new technology, right? So what's the solution for companies? This is a really big question, actually, so I apologize; it's okay for you to say there is no solution, or that it's ill-defined. But what's the solution for companies who are trying to scale the results of successful pilots or successful experiments, but don't know how to take those next steps, because the systems are so complex that it's not just a question of taking the learnings from one successful pilot and applying them?
Vincent Ginis: Yes. I do think that we as a society had a beautiful, very small preamble of what AI would be, is it already ten years ago, with the dress: the black and blue one. Oh yeah, the white and gold one. I don't know which one you saw, but for me, I didn't know.
Marc Curtis: It was white and gold, and I will die on that hill. Funny side note here: it's weird, I was actually talking about this literal thing the other day, and I find it absolutely fascinating. I'll probably cut this out of the book, but I find it absolutely fascinating that recognizing the colour blue is actually a cultural thing. Really? Yeah. They did some research. So historically, this is a real sidebar, if you go back to ancient Greece and the pre-Iron Age civilizations where there are written histories and so forth, blue is never mentioned. They would have described the sky as bronze, right? And it was thought that maybe they were colour blind, or maybe they just didn't have the language for it. It's more to do with language, but language impacts how you actually see as well. They did a study with a tribe, I think it's a Namibian tribe in Africa, who don't see blue. They showed them lots of coloured squares, different shades of green with one blue square in the middle, and asked which square doesn't fit, which one isn't green. And they couldn't see it. It turns out, the hypothesis anyway, is that language and culture dictate which colours you can actually see. So going back to the white and gold dress, one of the explanations for it might literally be about your cultural context and other things of that nature.
Vincent Ginis: Oh, I thought it was one of those neuroscience things.
Marc Curtis: I think it's all related. The neuroscience thing, because once you start to see a colour, it changes your brain, you know, it changes your neuroscience. Absolutely. So what I've just said is actually filled with, you know, I'm sitting here talking to an academic, and I am not an academic. No, no, I don't know this field either. But it's a fascinating thing. And the reason we talked about it: my wife was saying, because I'm always making stuff up, she thought I was making stuff up, so I had to go and find the research about this Namibian tribe. So anyway.
Vincent Ginis: Well, as a kind of segue, I was pointing to the fact that you can have one object and still have people fundamentally seeing different things in it, and I think the same can be true, at least where we're living now and where we are now in time: the same AI solution can be seen by different people as a wonderful thing in the world, or as, what is this? This is useless. And the nice thing about having your champions working with these things is that there's one perspective on what it is, and they can build it and tune it depending on that perspective. But obviously, once you start rolling out larger solutions, or trying to scale them across your company, you get this multi-perspective view again. And there will be people, for good and bad reasons, not liking what it is. That is definitely one of the things that creates friction in how these tools land, because as they work in a company right now, they are obviously not 100% fail-safe. And they're also not operating like normal software. So anyone who wants to see a bad thing will see a bad thing. They can easily point to places where, look, this is not working, and at that point that sets something. It's easy to criticize as well. Super easy. And so once you're scaling up, the opportunities for criticism, and therefore inertia, kick in again. I've seen this happen a couple of times. As long as these tools stick to a small group of champions who really love to work with them, you see them growing and almost evolving every day: ah, here we can make it better, and here we can make it better.
But once you scale to, let's say, a hundred people, there's always going to be more friction, and all of a sudden you're not in a mindset of here we can make it better, but in a mindset of are we going to do this at all?
Marc Curtis: And why doesn't it do everything that I think it should do? Yes. Funnily enough, I think it's broadly the same as the classic innovator's dilemma. Quite often, when we think about scaling AI in businesses, it's an innovation conversation, primarily because it's something people haven't done, and you're asking them to work and think in new ways. And I think one of the challenges with innovation, not strictly limited to AI, is that, as you say, when it's a small pilot, people are excited, people are passionate about it, they want to see it achieve. I call what comes next death by a thousand questions. When you get to a point where you've got a hundred people, the questions are: oh, this is fantastic, nobody will ever say it's bad, this is fantastic, I can absolutely see it, but does it work in Hungarian? Or: this is fantastic, but will it do this? And you end up in one of two situations, and I think both can be quite destructive. Either it doesn't progress, because it doesn't tick every single box, or you try to progress it by answering every challenge being made. It doesn't work in this language, for example? Okay, fine, we'll add that. And what you end up with is something that suddenly starts to bloat, and you get this slippage of the core functionality. I think this is why startups sometimes succeed where businesses fail: they can keep that clear vision and find their audience. Whereas, and I guess this is the point, with a startup selling a product or a service, people come to you because they recognize the value of what you're doing. In a business, you're telling people what the value is and expecting them to get involved.
Why Startups Beat Big Tech
Vincent Ginis: True. For a couple of months, or maybe even a couple of years after ChatGPT came out, I was constantly telling people: how is it possible that such a small startup like OpenAI, there were just a couple of people, right, could beat the giants of the software industry, the richest companies in the world, the Googles obviously? I was like, this is not going to last long, right? I mean, Google is just going to get its act together and run over OpenAI. And then it took a while before I realized how difficult it must be for a huge company like Google to get into that startup atmosphere that you need, to be young and almost playful. So it's almost a reality of the ecosystem that you would expect these tools to come from very small startups that grew quickly. I mean, Anthropic now, I think, is 2,500 people; that's still very small, right, compared to what Google is. So there you see that you might have all the money in the world, but if you don't have the mindset of change, it's a very difficult situation to operate in when these tools come to the market.
Marc Curtis: I guess the cynical view as well is that actually most startups will fail, and we're seeing survivorship bias. Yes, obviously. Whereas within the corporate environment, the expectation is that everything they do will return something. Yes. So it's very difficult. I have this conversation all the time around corporate innovation: if you go into it thinking that everything you do will result in a new product or a new service that brings revenue into the company, then you shouldn't do it, because it won't. You have to be comfortable with failure. Going back to your point at the beginning: you have to be comfortable with the fact that things are going to fail, and recognize the learnings and the positive cultural impact those failures will have. Whereas with a startup, if it doesn't fail, by definition it's a success. So, you know, for every ChatGPT there are a dozen companies that obviously haven't become unicorns.
Vincent Ginis: Yeah.
Marc Curtis: I'm not trying to undermine your argument, because I still think successful startups are successful because they find a market. They're answering a need.
Vincent Ginis: No, no, I 100% agree. I think there are two periods in the startup scene. There's the point where you go from nothing to a success, and that is obviously what OpenAI became in 2022 or 2023, all of a sudden, world adoption of their tool. At that point they were clearly already a success. And then you have to ask, or then I was wondering, given that the recipe was out there, right? It wasn't a very secret thing how they were making LLMs, at least not at that time; things have changed since then. But at that point the ingredients and the recipe for making a ChatGPT were kind of known, and it wasn't a gazillion-trillion-dollar project back then yet. So I was wondering what was stopping Google from just making a ten times better product. And I think when you compare them at that point, two successful companies, a small one and a big one, I think OpenAI was still at an advantage, because they could still move easily, and they weren't killing the product by a thousand questions. Yeah. And you could also tell: they've done a couple of things that made a lot of customers angry, things that a normal, grown-up company wouldn't do, like killing GPT-4 at some point, taking it off the market from one day to the next. That's something a normal, decent, grown-up software company wouldn't do, and they did it. I'm not going to defend that they did it, but it's still one of those examples where a small company actually... Yeah.
Marc Curtis: Well, I guess small companies are also more personality-driven. True. You've got somebody like Sam Altman, who's a personality, whereas there's more of a collective approach, or a strategic response, in larger companies. So those decisions that in retrospect look crazy or genius don't necessarily happen, because they're not so rooted in the individual.
Vincent Ginis: Yeah, yeah, that's also true.
Which Workplace Uses Will Last
Marc Curtis: Listen, I'm just looking at the timer. We've been talking for almost 50 minutes, and I feel that, in the interest of sanity and of the people listening to this, who have a life to get back to, it would be good to try and wrap it up. AI is here, undeniably. It is having an impact on the workplace; it's having an impact on education. Do you see where we are now as setting the tone for the next five years, or do you think we're all getting a little too hung up on the early-adoption conversation? I suppose I should set the scene by saying that my thought on this is that a lot of companies feel they have to do something, but it's not necessarily rooted in the fact that they should do something, going back to your early-adoption point. But what do you think? I'm not asking for predictions on where AI is going, but how do you see the way companies are responding to this evolving over the medium term?
Vincent Ginis: Yes. So the state we're in right now, I'll try to summarize. Well, I can't speak for the entire world, but I do see Belgium and its surroundings, and I also see a little bit of what is happening in the US. You clearly see that in personal use there are huge adoption numbers, right? But even in the workplace there are good adoption numbers, 40% and up, in most companies and most countries. As for the type of adoption, I'm still wondering whether these will be the use cases that stick. I'm fairly convinced that the number one use case in many companies is still to rewrite emails, or to write an email for you, and then you just have to okay it and it's sent out. And I have to admit that I think there will be a huge renaissance of human-written emails, or even voice messages, just to reconnect with each other on a personal level. So personally, I don't think anything in the realm of rewriting stuff for you, or making things sound more professional and corporate, is going to be the use case that sticks. And I kind of hope it isn't. The use cases that I hope will stick are based on the following question, and this is literally what I tell all my colleagues every day; I hope they don't hear this, because they'll be like, there he is again. Imagine that your company is hiring an extremely, extremely smart person, or even 50 extremely smart people. It would be super sad if you then told them: okay, your task is going to be to check my email and rewrite it so it sounds more corporate. That would just be a waste of talent, right? Nobody would do that.
What you would probably do is say: okay, let's reinvent what we're doing, as we already mentioned during this conversation, but let's also build this thing. And that is a fundamentally new thing that happened over the last three or four months. Even if you had very good ideas in companies over the last couple of years or decades, you still always had this bottleneck: between the idea and the execution of the idea, there are a hundred different steps where something goes wrong, or reasons why you cannot do it, money-wise or development-wise. And it seems that all those hurdles are dramatically disappearing. So I think we're going to a world where all sorts of new products and value are going to come to us. I think we're underestimating that by a lot. I also think we're moving very fast towards a world of extremely many startups. I just see many people, those early adopters, I also see it in my group, who have seen what it can do. They're like, I want to make my own company. And for entrepreneurship and innovation, that might actually plant a lot of seeds. Over the next couple of months and years, I think we're going to get many, many more of those.
Inspiration And Being Human
Marc Curtis: Yeah, I tend to agree. It goes back to the artist paradox thing again, doesn't it? It doesn't remove the passion, the idea; all it does is take out some of the blockers between you and implementing what that could look like. Yeah. Whereas before, especially in Belgium, if you want to start a company, well, that's enough to stop most people. So you're lowering the barrier to entry, effectively. I think that's a good place to stop. I always finish with one final question, and you can answer it any way you like: who in your professional or personal life has been a real inspiration, someone you've looked to as a pioneer that you've either wanted to emulate or who has inspired you in some way? Okay, this is dangerous, right?
Vincent Ginis: Imagine one of them listens.
Marc Curtis: It would be very flattering.
Vincent Ginis: Yes, true. So a couple of people pop to mind. One that I personally worked with, and he probably won't listen to this, because he doesn't listen to podcasts, is Professor Capasso from Harvard. He's a real serious innovator and physicist, from Bell Labs back in the day, and now with his research group at Harvard. He's very deep in innovation, in engineering and physics, but at the same time, when you talk about these very rapidly changing AI tools, he has a very physicist way of looking at them and describing them in just a couple of concepts: okay, these are the tuning points, and this is how the system works. So I think I sometimes emulate the things I learned from him. Worldwide, I have to say, and I'm actually very thankful that these people are doing this, there are a few people out there like Scott Alexander, a blogger who writes a blog called Astral Codex Ten, and he writes about topics in ways that, well, I just have the impression that he had these tools 15 years ago already, and even better than what they are now. He explains things in a way that you don't hear from the traditional pundits. So yeah, I guess those are the people I like to emulate.
Marc Curtis: What I love about that is that you've just described what AI can't do.
Vincent Ginis: True.
Marc Curtis: Which is to synthesize complex things and put an opinion and a spin on them that comes from the person. Whereas AI is very good at reflecting things back to us, or summarizing, but it doesn't give us that perspective, I think.
Vincent Ginis: Yeah. Sorry, I know that was the last question, but this is one of the things I look forward to most. If I could talk to myself five years from now: I think we're going to re-evaluate all those hidden things about what it is to be human and what humans do, and we're going to re-appreciate many of those things as well. Many of them are hidden now, somewhere at the bottom of the pile, but they're going to come to the surface very, very rapidly.
Future Of Work Event Invitation
Marc Curtis: Brilliant. Well, that's a great way to end it, and an optimistic view of the future. Absolutely. Basically, none of us know what's going to happen, but we've just got to stay curious and give people space to experiment. Vincent, thank you so much for coming. It's been a real pleasure chatting to you, and I know we're going to talk again. I should just say that you are participating in the Future of Work event on the 18th of June: you're going to be on a panel, and you're doing a short talk as well, exploring more of these concepts. Absolutely fascinating; looking forward to that. But for now, thank you so much for coming.
Vincent Ginis: Thank you. It was a pleasure.
Marc Curtis: So that was almost an hour of chatting with Vincent. As you can hear, really interesting, and so much thought to get into with him. Honestly, it's a real pleasure and a privilege to sit down and talk to him. I'm always a little starstruck and intimidated by academics, and this guy really is at the forefront of this kind of thinking. And we didn't even really touch on what photonics even is; basically something to do with lasers, I think. Vincent will, as I said at the end, be talking at the Future of Work conference. I urge you to come along; don't miss the opportunity to hear him and the other speakers we've got talking at the Future of Work. But for now, thank you so much for listening. It's been a real joy, and as I say a privilege, to have these conversations with people, and I'm very much looking forward to the next one. See you soon.