In this eye-opening episode, host Ashish Kothari sits down with positive psychology expert Llewellyn van Zyl to confront the double-edged sword of Artificial Intelligence. As AI shifts from a productivity tool to a source of “psychological support,” we explore the thin line between technology that empowers us and technology that erodes our core human faculties. Whether you are a business leader, a mental health professional, or a curious tech user, this conversation matters because it defines the exact moment we must choose to remain the architects of our own flourishing before the algorithms do it for us.
Main Topics Covered
- Hyperpersonalization at Scale: How AI can provide granular, 1:1 care to millions of people at once.
- The “Friendship” Trap: A look at the ethical fallout when users become emotionally addicted to empathic AI models.
- Substitution vs. Support: The critical distinction between tools that help us grow and those that make our cognitive “muscles” atrophy.
- The Erosion of Agency: Why outsourcing our reasoning to AI might lead to a loss of autonomy and critical thinking.
- The AI-IARA Framework: An introduction to Llewellyn’s model for psychologically safe AI design (Awareness, Interpretation, Intention, Action, Relational Agency, and Autonomy).
- The Future of the Psychology Profession: Why practitioners must learn the language of engineers to remain relevant.
- The AI and Future of Well-being Summit: A preview of the upcoming global gathering of experts.
Key Takeaways
- Skills are Muscles: When we outsource basic tasks like writing, spelling, or reasoning to AI, we don't just save time; we gradually lose the cognitive capacity to perform those skills. That is the cost of "frictionless" living.
- Guard Against Engagement-Only Models: Much like social media, AI systems optimized for “eyeballs” rather than “impact” risk creating dependency and addiction rather than genuine flourishing.
- Demand Technological Humility: It is vital for professionals to admit what they don’t know about AI to prevent harm and ensure that human-centric values are built into the “behavioral architecture” of new tools.
- Prioritize Agency: True well-being is the ability to write your own life story. We must ensure AI remains a partner in that process, not the author.
Episode Chapters
00:00 – 00:52 Introduction to Flourishing Edge
00:53 – 02:04 Hyperpersonalization: The Future of Well-being
02:05 – 06:02 The Ethics of AI Friendship and Emotional Dependency
06:03 – 09:05 Moving Beyond the “Engagement” Business Model
09:06 – 15:06 The Continuum: Substitution vs. Support
15:07 – 18:03 Cognitive Atrophy and Technological Humility
18:04 – 24:14 Llewellyn’s Journey: From Clinical Burnout to Precision Well-being
24:15 – 31:42 Why Tech Needs Psychologists (and Vice-Versa)
31:43 – 40:20 Preview: AI and the Future of Well-being Summit
40:21 – 46:22 Deep Dive into the AI-IARA Framework
46:23 – 50:25 Three Actionable Tips for Using AI Mindfully
50:26 – 54:19 Rapid Fire Questions and Closing Remarks
Connect with the Guest
LinkedIn: https://www.linkedin.com/in/llewellynvanzyl/
IPPA’s Virtual Summit on AI and the Future of Wellbeing (Late March): http://aisummit.ippanetwork.org/ (Use code for 10% discount)
Connect with the Host
Website: https://happinesssquad.com/
LinkedIn: https://www.linkedin.com/in/ashishkothari1/
Book: https://happinesssquad.com/book/
If this episode challenged the way you look at your smartphone, don’t keep it to yourself! Follow The Flourishing Edge, like this episode, and share it with a friend who is navigating the world of AI.
LinkedIn: https://www.linkedin.com/company/happiness-squad
Facebook: https://www.facebook.com/myhappinesssquad/
Instagram: https://www.instagram.com/myhappinesssquad
Transcript
Ashish: Llewellyn, I'm so excited for this conversation together with you on flourishing and AI.
Llewellyn Van Zyl (Ph.D): Thank you so much for allowing me to be part of this. I think it's a great opportunity, and it feels like all the worlds are converging at the same time.
Ashish: Yeah, so let me start with a very open question. AI and the future of well-being: it's the name of the summit that you're organizing with so many others, Andrew Sorin among them. So, simple question: what is the future of well-being in the world of AI?
Llewellyn Van Zyl (Ph.D): I think it's about hyper-personalization. It's about finding people, meeting them where they're at, understanding what well-being means for them at a granular level, figuring out how to support them specifically, and being able to scale that to millions and millions of people in a psychologically safe, effective, correct manner. We are literally at a moment in time where we can provide the exact amount of care and support that every single person needs, in the way they need it, which we've never been able to do before. That's where I feel we are at the moment, and that's just the here and now. Imagine where we're going to be five or ten years from now.
Ashish: Yeah. And for me, that is the potential if we do this the right way. What if we don't? What if we predominantly go for eyeballs and solve for engagement rather than actual impact?
Llewellyn Van Zyl (Ph.D): Yeah. Now, Ash, if any question matters, that's the most important question of this podcast: what's the implication if we don't get this right? And before I give you my answer, I want to give you a story. Remember last year, I think it was around November, or a little bit earlier: OpenAI released ChatGPT 5.0, their most revolutionary model to date.
Ashish: Yes.
Llewellyn Van Zyl (Ph.D): Everyone was excited. Except they weren't, because what happened? People were like, my gosh, this model is not the same. It's not 4.0. It's not my friend. You're taking my friend away. How dare you? People became so upset about this new model, because it wasn't the thing they'd become accustomed to, that they'd started loving and appreciating, because the way it engaged with people was more empathic,
Ashish: Mm-hmm.
Llewellyn Van Zyl (Ph.D): in a kind, understanding sort of way, and now all of a sudden it's very robotic. So people felt this entire persona was being taken away from them. Now, how did we get to that point? At the beginning of last year, OpenAI identified two big things. One: people were utilizing the system not in the way we thought they were going to use it. Initially it was to optimize workflows and so on; now it's about psychological support. So,
Ashish: Yeah.
Llewellyn Van Zyl (Ph.D): being the ethical people that we are, we need to build the system in such a way that it can actually be empathic and kind and respond to you in a way that matters. So that's where 4.5, for example, came out. We got psychologists involved; they helped design the system to ensure people got Rogerian principles of active listening and so on. They really sat with psychologists to build out the personality of this thing. Amazing, right? And we did it.
Ashish: Yeah, yeah.
Llewellyn Van Zyl (Ph.D): It did exactly what we planned on it doing. But what happened? It's not the outcome we wanted, because now people were like, that's my friend. We got addicted to it. We became accustomed to this thing. You cannot separate me from this. And the reason I'm saying this: look on Reddit, for example. People started actually grieving, as in sadness, depression, because their friend was going away.
Ashish: Absolutely. Yeah.
Llewellyn Van Zyl (Ph.D): So the principle is that even good intentions, wanting to do something good, will cause a significant amount of harm without the right psychological infrastructure being built into those systems. That, I think, is a very good example of what will happen when we do not get these things right. Now, you said something really cool. You talked about engagement and eyeballs on platforms.
Because that's the fundamental principle on which engineers design systems, right? It's easy to measure and predict. Whether it's Netflix and the amount of time a person spends watching movies, or a chatbot and the amount of time I spend chatting to it: time spent equals "it works." So we optimize and build systems around making engagement easy. Gamify it, make it look beautiful, buttons here, whatever. But the problem is this: if the tool you're creating, the actual thing in the background, is not effective, is not science-based, has no theoretical or psychological model underneath it, and the techniques it provides are not valid or reliable, then you're creating a system that optimizes for engagement on something that doesn't work. Which means you're causing harm, because you're creating addiction, dependency, et cetera.
Ashish: Yeah, and I think it's so powerful, Llewellyn, what you're highlighting here. Let me replay that, friends. Think about this. So many people say, look, I'm using my ChatGPT or Gemini for psychological support. It's my friend, I can ask it questions. That's one of the top uses of these things. Here's the mistake.
And here's something we have to watch for. We don't want to run the next 20 years with the same core thinking that ran the proliferation of social media platforms. Facebook could have been the biggest gift to the world if it had stayed about connecting people, discovering long-lost friends, et cetera.
Ashish: But the core engine behind Facebook became engagement: how can I get more people onto this thing? And what do you do for engagement? I give you content that you resonate with, and I show it to other people, and all of a sudden we started down a path where we have actually lost our ability to focus. We have become more polarized, less connected across countries and within countries.
Ashish: Most of the tech world has grown on ARR models that focus on growth over everything else. And profitability, one of the classic markers of whether we're succeeding in building a business, is out the door for most of these AI players right now. Nobody's even solving for profitability. It's: one, am I getting eyeballs? Two, am I getting engagement, are people sticking around? And three, more and more as OpenAI goes into ads,
can I actually monetize it? Nowhere among those three drivers is: am I actually helping the human flourish? Am I actually solving the problem? Look, if you're struggling with anxiety, alcohol, cocaine, or marijuana will also give you relief. But is that the relief you want? Or do you want to fundamentally rethink
Ashish (:how you, even you're anxious, what's causing anxiety. Luella, you talk about in your work this beautiful differentiation. I want you to bring this to life for our listeners. You talk about the different systems can either substitute or they can support. And I want you to share a little bit about what you mean by that distinction because as you're using this products, I want you to think about that.
are these things substituting something, or are they supporting you in growing and flourishing?
Llewellyn Van Zyl (Ph.D): I see AI systems working on one continuum. At the top, the ten out of ten on the scale: supporting you, helping you, partnering with you. That would be the optimal solution. And at the bottom end of the scale, it's substituting something: taking something I used to be able to do, and now it's doing it for me. And every system, not just AI systems, even a calculator, which I'll use as an example now,
sits somewhere along that line, and where it sits has certain implications, both positive and negative. I don't know how it was in the US or Canada, but for the generation before me, calculators were introduced in primary school. So all of a sudden, in primary school, you unlearn the ability to do basic mathematics. Of course, that creates more space to do more complex things. But
Ashish: Yep. Absolutely.
Llewellyn Van Zyl (Ph.D): now I've lost that ability. Fast forward five or ten years to my generation: spell check. I've unlearned the ability to spell, because I can just quickly click it. So if I have to write something, I have to Google the word to make sure I spelled it correctly, because I no longer need that skill. GPS, same difference. But all of those are tools for very specific, minor tasks, developed to help us do
these things a little more efficiently and better. Each is a very specific task. But now, for the first time in our history, we are able to outsource our cognitive thinking, our critical reasoning, to systems, and we have no idea what effect that will have. The movie Idiocracy is a good example of what that could potentially look like. Okay, so let's get back to this continuum. If we're on the positive end, where we say,
I can still do basic mathematics, and the calculator helps me do things better, easier, and faster, and creates capacity for more complex stuff, that is amazing, because then the system becomes a tool, a partner in something. But when I go to the opposite end of the spectrum: say I create a chatbot aimed at, I don't know, alleviating my loneliness, and I'm
Llewellyn Van Zyl (Ph.D): optimizing it to keep you on that system a lot longer. I feel lonely, I engage with the system, I get my dopamine shot, I feel good. I feel lonely again, I chat to my bot, I feel good again. You keep going through this circle, and circle, and circle. And remember, these chatbots are there to help you, to listen to you, to understand you specifically. But over time I'm losing the
ability to establish and maintain actual relationships with people. There's this very interesting guy in the Netherlands who made worldwide news because he's been in an active relationship with a Replika girlfriend for three years. And it's so interesting to watch the videos of this person: he doesn't understand how to actually engage, how to show actual affection, how to talk to a normal human being, because
of this specific thing that was created. So I think we have to stay towards the support end of that continuum. But we, as a group of psychologists and consultants, need to be aware (and that's also where the opportunity is) of where that line is: where something that's supporting me quickly dips into substituting for something very core to who I am.
Ashish: Yeah. I think this is such a powerful continuum that you highlight. And I think it's a real risk.
Right? I so resonate. I've used the same analogy so many times: our parents could remember all the phone numbers. We can't. I don't know my own son's phone number. It's on my phone, but I don't remember it. I remember my wife's, maybe. I remember my social security number, but not that phone number.
Llewellyn Van Zyl (Ph.D): Yeah.
Ashish: Calculators: I grew up in a world without them, so I could do most of my math in my head, but the next generation struggles with that. Substitution versus support is playing out right now with the current generation, who are growing up with ChatGPT, et cetera, just like we grew up with Google. Nobody goes to libraries anymore. If we outsource our reasoning and thinking, I think we end up accelerating
the Terminator doomsday scenario. Except it's not just that the AI gets smarter; that's going to happen. It's that our capacity to reason and think keeps going down. And instead of being supported, we'll become so dependent that without these tools we can't do anything. And once that happens, think about this, friends: if you get dependent on something, implicitly you are giving away control,
Llewellyn Van Zyl (Ph.D): Exactly.
Ashish: power, autonomy, and agency to somebody else. Because now I can say: if you really need this so much, how about $100 a month? How about $500? How about $1,000? And you're like, oh my God, I don't know how to operate without it. So what's the cost of that? It's really, really important to think about the impact on agency and autonomy, and hence the choice in how to use the tool. That's all these are: tools. The choice is to use the tool in a way that supports you and allows you to keep growing, rather than one that substitutes, where you end up paying a huge price.
Llewellyn Van Zyl (Ph.D): Yeah, I think you summarized it so well, and you highlighted the cognitive implication, right? It makes you dumber; you lose the skill. There was this very nice study, I think published last year, that looked at the effect of writing with AI on actual brain functioning, over the period of one day, doing four tasks. They split people into three groups: one group wrote with ChatGPT, one used a search engine, and one wrote with nothing. They found that efficiency went up for the group that wrote with ChatGPT, but interestingly enough, that group didn't feel a connection to their work. Then they repeated the task four times, and at the end they asked all the groups to write an essay just out of their own minds. One group could not: the ChatGPT group had lost the ability to formulate sentences. In four sessions, on the same day. That was a really interesting realization for me. Even in such a small period of time, you can actually lose a skill. Now imagine constantly outsourcing something, because a skill is a muscle, right? If I don't use it,
Ashish: Absolutely.
Llewellyn Van Zyl (Ph.D): I'm not losing the ability entirely. I'm not saying I'll never be able to think or do mathematics or whatever again. But the brain takes the capacity that's freed up and uses it for something else. So the cognitive implication, I think you highlighted really well. But there are other things to consider too. If we look at psychologists specifically, you see these ideas coming up: ChatGPT addiction, AI psychosis, all that type of stuff. It is a real concern.
Technostress, too: I need to figure out how to do all these things, because my colleague is doing it so much better than I am. And then there's what I call technological humility. We're professionals, and we talk about how effectively we're using AI and so on, using all the buzzwords, but none of us says, dude, I have no idea what you're talking about, help explain it to me, because of this competition thing we feel.
Llewellyn Van Zyl (Ph.D): I mean, things are moving so fast. I'm busy with a major, major digital twin project at the moment, and for six weeks I haven't read any news articles on AI, no papers. I usually set aside an hour a day to read; for six weeks, nothing. Do you know the whole world changed in six weeks? Multibot, OpenClaw, this new idea of a digital swarm that Claude basically implemented as a standard, easy-to-access feature.
Ashish: Yes.
Llewellyn Van Zyl (Ph.D): So much happened, and I'm just like, holy crap, what happened while I was away? The point I'm trying to make is that it's okay to not know, but it's even more important to say that I don't know, because that way we don't do harm.
Ashish: Absolutely. So look, you're a psychologist who now spends so much time consulting and coaching companies that are building and embedding AI into this field of flourishing. So I want to ask for your story: how you got into this field, what the genesis was behind the framework (we'll talk about it a little later), and also what got you so invested that you said, listen, there is real potential for harm here, and hence psychologists need to step up to the plate, to guide and be partners in this AI and well-being arena, to make sure the systems being designed are not doing harm but actually helping us flourish.
Llewellyn Van Zyl (Ph.D): It's an interesting story. I think that in...

Ashish: Wow.
Llewellyn Van Zyl (Ph.D): And then I was in hospital, literally bleeding out because they hadn't closed the veins properly, and it was awful. At that moment I decided: I don't want to do this anymore. The academic work, it's all crap. Nothing matters. So I decided to take a year off and go explore different things. During this time, I was sitting with my therapist, who was trying to help me get through the burnout stuff and this
existential crisis. And again: professor of psychology, multiple publications in this area, well-being. So I know my stuff, and I know the interventions that all work, right? And she told me something that was the biggest change in my life. She said: you know what you should do? I think it's important for you to start taking walks in nature, to connect to the stuff outside so you can start feeling better. And I'm like, okay. Connection to nature:
Ashish: Hmm.
Llewellyn Van Zyl (Ph.D): strongest predictor of meaning in life. Great. Meaning: strongest predictor of well-being. Okay, great, excellent, going to try it. Walked outside. Absolutely hated it. Why? Because I associate being outside with being dirty, and being dirty makes me unhappy, makes me anxious. My house basically looks like a surgeon's office.
So that's the moment I realized what the problem was: our well-being models don't work for everyone. Think about PERMA as an example, or the mental health continuum: mental health as a function of feeling good, functioning well, fitting in; emotional, psychological, and social well-being. So, Ash, according to me, according to my questionnaire, my theory: how well are you? Right? So I thought, okay, easy solution: I will go and create a holistic
Ashish: Yep.
Llewellyn Van Zyl (Ph.D): assessment that captures all of these different things. (It's also important to note that what these assessments measure and what they theoretically claim to measure are two different things. I'll explain in a moment.) So we went and created a questionnaire with eight dimensions of well-being and mental health issues, but also objective data: we had around 500-odd objective indicators of well-being. Where you live,
Ashish: Sure.
Llewellyn Van Zyl (Ph.D): life expectancy at birth, job opportunities, inflation, psychological and physical safety, political stability. You use all of this objective data to measure well-being. So I did the analysis, some fancy machine learning, Bayesian analysis stuff, and it became a little complex. I thought, okay, that's not going to work. But we had all these qualitative questions we'd asked in order to cross-validate. Usually you have the questionnaire, and
the qualitative data to confirm or explain what's in the questionnaire. So I thought, cool, let me go and analyze some of this until I've got the capacity to figure the rest out. We asked questions like: what does well-being mean to you? What's the important stuff in your life, the demands and resources? And there I realized: holy crap, even things that are positive in my case are negative for someone else. And there are things that no other instrument measures. For example, in
Ashish: Yep.
Llewellyn Van Zyl (Ph.D): a specific group of people, animal companionship was one of the strongest elements of well-being, not merely a contributor: in their models it loaded directly on the concept of well-being. And then I started realizing, my gosh, this is amazing. I got very into machine learning and built models to really analyze all of this data, the qualitative stuff, so that we could build hyper-personalized well-being models. We also have a paper on precision well-being, which is the foundation of this.
It basically says we have to meet you where you're at, and we can only do that with artificial intelligence: machine learning, large language models. We fine-tuned BERT, for example, one model to do sentiment analysis and one to do thematic content analysis. We used all of that to create these profiles. Then the next question for me was: okay, now I know you perfectly, but I can't do anything with that. So now I have to figure out how to create hyper-personalized
well-being content and deliver it to you in a way that will work for you, and also to provide the right type of activity and support that will actually help you. Okay, so that's what we figured out. I presented it at a conference, a lot of people contacted me about it, excellent. So I started getting involved in other people's systems. And then the third major thing happened: I started realizing, holy crap,
these people have no idea what they're doing. Not in terms of the design and the build; there they're excellent. But they're building systems around things they have no understanding of. You're building a chatbot to help a person with their loneliness, but you're designing the system to keep the person connected to the thing, so you're not actually helping. You're creating platforms around "this is what well-being is, and we're going to help you with all of it,"
with no theoretical model underneath, no change model underneath, none of what I call the behavioral-psychological framework or architecture of these systems, which we need in order to figure out which levers to pull. And then I started realizing, my gosh, the amount of harm being caused when a system is designed with good intentions, but by people who are optimizing things they know nothing about.
Llewellyn Van Zyl (Ph.D): And that's where I started getting more into the governance and the design side of these systems. That's where I'm at now. But I just want to make one thing clear, Ash: I'm not an AI coach. I want to make that very clear.
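(For listeners who want a concrete picture of the kind of fine-tuned-transformer text analysis Llewellyn describes, here is a minimal sketch using the open-source Hugging Face transformers library. The model named below is a public example; his team's actual fine-tuned BERT models are not public, so treat this as an illustration of the technique, not their pipeline.)

```python
# Minimal sketch: scoring free-text well-being answers with a BERT-family
# sentiment model, in the spirit of the pipeline described above.
# The model ID is a public example, not the one used by Llewellyn's team.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

answers = [
    "Walking in nature makes me anxious, not calm.",
    "Time with my dog is the best part of my day.",
]

for text, result in zip(answers, sentiment(answers)):
    # Each result is a dict like {"label": "NEGATIVE", "score": 0.99}
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```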
Ashish: Yeah, no, you're not. Look, to play it back: what I'm taking away from what you just said is, number one, we've known all along that one size does not fit all. But if we look at several generations of the AI coaches out there, people say: give us your content, and we'll help you build a coach you can sell to people. Implicitly it's using some model, some version of something, and it's going to give the advice. You can't show somebody 25 different things, so you just choose three and give them that. And maybe it works, or not.
Llewellyn Van Zyl (Ph.D): A wrapper.
Ashish: So personalization is fundamentally missing. And even if you think you're personalizing, unless you really know the person, it isn't personal. This notion of hyper-personalization is very real and super important: if we're going to give somebody something, we have to know that person deeply, which psychologists, therapists, and others can get to through conversations,
Llewellyn Van Zyl (Ph.D): Exactly.
Ashish: over long periods of time. But just asking somebody to go fill out a form is not going to give you that. So that's point number one: if you're engaging with one of these things, ask whether it really knows you.
Ashish: I think the second is what you just highlighted. Even if you say, I've taken all of these best-practice interventions and I'm going to deliver them, just because an intervention worked for one group doesn't mean it'll work for you. We run a program called Rewire. We built it fundamentally by taking
a lot of these positive psychology interventions and making them five minutes or less. We took the science of habit formation and combined it with them. We run these programs, typically 12 sessions over a year, and in every session we introduce two or three micro-practices that people can play with. And the whole idea is: I have no idea if this will work for you. Go try it.
So, Llewellyn: go for a walk. You walked outside and said, wait, this doesn't work for me. Then don't go for a walk; do something else. I'll give you an example of just one of those interventions, what we call a mindful wind-down. Five minutes before you go to bed, do one of three things.
Again, a little bit of choice, right? And we see really interesting patterns from it. By the way, you don't have to do a mindful wind-down if you don't want to. If it doesn't work for you, your life situation, et cetera, don't. But if you decide to, try one of three things.
Write down three positive things that happened to you and what contributed to them. Or write down three things you're grateful for; even just between those two, people differ. And the third, I say, is to read something spiritual, something that reminds you of life beyond your work and family. You can read a poem, a Buddhist text, a Bible, a Quran, something
philosophical. Just something that takes you out of "this is my whole world," that can maybe inspire a bit of awe in you. And go try it, right? And in effect, really interesting: we clearly had two groups. One said, you know, gratitude doesn't work for me.
Ashish: "I know it's a very powerful practice, but I just find it, like, come on." I'm like, okay, fine. But she said, you know what does work? The three positive things. "I always have so many things on my list
that I can't get done at the end of the day. But by thinking about the three things I accomplished today, I cannot tell you how much better I feel every day, because that list is growing. That little intervention changed me from feeling like a failure to actually feeling like I'm accomplishing something, because that list keeps growing. Every day there are three more, three more, three more."
Right? So, the notions you talked about. One is personalization: really know the person. Second is interventions: does this actually work for this person or not? And the third piece you highlighted, which I think is really powerful: even if you did all that, governance really, really matters. In the end,
we don't completely understand these systems and where they might go. That's why it's called intelligence; it's not just a tool. So how do we make sure we're setting the right guardrails for it not to do harm? That's a really, really important element. And on that third point: technologists are not leveraging psychologists to build these systems,
Llewellyn Van Zyl (Ph.D): Yeah.
Ashish: Or they're leveraging them only nominally: I have five advisors, I get 1% of their time, and hence I think I'm leveraging them. That's a real mess. If we're going to build systems for human flourishing, I think it truly needs to be a partnership. At least that's the way I feel. I'm curious what you think about that, Llewellyn.
Llewellyn Van Zyl (Ph.D): I think one point that was still missing is that it's about interventions: getting the content that actually fits you. Second, as you said: the more information I have about you, live-stream data, digital twinning, the better I can understand you. Third, you're not going to sell intervention content to a creative person the same way you'd sell it to a person who's highly structured, so it's also about figuring out the way to deliver it to them. And then, when we talk about the compliance stuff, it's really also about:
Ashish: Yes.
Llewellyn Van Zyl (Ph.D): what am I actually using this data for, and is there a reason? Because we can use it for powerful good or bad. But I do think you're 100% right. Whatever we do, we have to start building systems in a way where we see them as partners with us. Because middle management will be replaced in the next year and a half; they're post boxes, and they will be replaced by AI agents, because agents are more efficient at managing people than an actual person is,
Ashish: Absolutely.
Llewellyn Van Zyl (Ph.D): for example. So how do I actually engage, as an employee, with a virtual manager? How do I start working with an AI swarm? The skill of delegation, for example, becomes a lot more important. And the value of the bottlenecks, the places where AI cannot do a certain thing, will skyrocket, because the systems will optimize everything around them. So we need to figure out how to partner with them,
Ashish: Totally.
Llewellyn Van Zyl (Ph.D): to know what is important and where the limitations are, so that we know which skills and capabilities we should use, or can exploit, to make that bottleneck more effective, more valuable, and more important. I think that part is key.
Ashish: I love that. So talk to me a little, Llewellyn, about this AI and the Future of Wellbeing Summit. I think it runs from March 23rd to 27th. Tell me: if you're a technologist, a psychologist, a business leader, or a buyer of these systems, why do you need to come and attend this conference if you care about this field?
Llewellyn Van Zyl (Ph.D): Yes.
Llewellyn Van Zyl (Ph.D): You know, if you want to really help and understand people... let me take a step back. You used a very beautiful metaphor earlier: coaches and consultants and counselors sit with a person holding some mental model of well-being in their mind, and then, based on my story with you, they try to populate that model so they know how to help me more effectively. And AI
scales that; we had to figure out how to scale exactly that. Now, what does scaling it actually involve? If you take a coach and have to create an AI coach, what is every function the coach performs? Diagnostics, some mental model of what well-being is, an empathic way of engaging. There are thousands of different skills for which we have to build unique little agents to compensate. Now, why do you have to come? What does this have to do with the well-being summit? Well,
fundamentally, we have to understand what this thing, AI and well-being, looks like, because it will fundamentally change our field. I wrote a paper recently, and the opening line was: we are the last generation that will get to define what well-being is before systems optimize it for us. Because fundamentally, as psychologists, if we continue the way we're going, oblivious to the effect these things have and will have,
our profession will die out, just like psychometrists died out, just like typists died out. So you have to come to understand the value you can bring to these AI-oriented systems. You have to start learning to speak the language that engineers and designers use, so you can sell your psychological frameworks to them in a way they can actually understand.
Llewellyn Van Zyl (Ph.D): Because currently, and this is how I started, I'll be 100% honest with you: I walked into my very first engagement where a client asked me to help design the behavioral architecture of a system, and I said, this is all wrong. You have to focus on this personal resource, do that thing, use this framework, do this type of assessment. And they just looked at me: what on earth are you talking about? That doesn't work. That's not how we can do it. And for me it was, you just don't understand, right?
Llewellyn Van Zyl (Ph.D) (:You have to, you're closing harder. And then one person asked me a very important question. It's like, okay, you talk about measuring the right thing because that's the important part about personalization, right? But what are these things? How can I, how can I build these things that you're talking about? Because personality doesn't feel like something to me. How can I measure that? And then it was like, I need to package this stuff in a way in which they can understand because the
the moment they understand the value of what you're doing, the light goes on and they totally accept what you say. So that's why you have to come to our conference: to learn what this language is, to understand what your skill set and capabilities are, so that you can start actively contributing. First, of course, by utilizing these systems more effectively and ethically, but more importantly, by learning how to actually contribute to the design of these things, because that's our value.
Ashish: Yeah, look, I love it. Just to give a little overview to our listeners: I'm really excited about it because, first of all, it's well structured around the different people. Think about the people you get a chance to meet:
research academics, clinicians, positive psychology coaches, educators, organizational practitioners. You get all the different people who are touching the field. I really like that, because you get to learn not only from the researchers but from real practitioners. And I love how you've structured the five days, Llewellyn.
Starting with where we are; a lot of people think they know where we are, but as you yourself said, this field is moving so rapidly that six weeks away and you feel completely old school. Then: what is this notion of hyper-personalization? What are the side effects we need to be really conscious of? You talk about substitution rather than support; what are the ethical consequences?
Llewellyn Van Zyl (Ph.D): Yeah. Yeah.
Ashish: And then, frankly, what are the real competencies we need to build? This notion of really building it together, I think, is really powerful. So I'm really excited about participating in it.
Llewellyn Van Zyl (Ph.D): Oh, you are? Well, you've got to be there, that's amazing. I really think we've got such a cool lineup; we really have world-leading experts in these different fields. I've had the opportunity to watch some of the content already. We've got a lot of workshops and things, and there are two that really stand out for me. One was by a guy called Yadrik Stander; he's the head of a company called Fit AI that does talent assessments,
Ashish: Yes.
Llewellyn Van Zyl (Ph.D) (:and Steve Peralta, so from a practitioner perspective, where he runs, he co-founded the second largest well-being company in the UK, right? Unmined. And their stories were just so powerful for me. When you talk about like these competencies and capabilities part. So those are two major highlights for me. And then I really loved like how we can kind of like, how these different stories take a person from like the possibilities of things where we're at, taking them through, my gosh, you know, there's negative effects and the ethics around it.
up to the point of: where are we going, and what are the possibilities, the job opportunities, the business opportunities around it? So I'm really super excited about that. And then we've also built, and we'll launch it in the next two weeks, our own little AI literacy micro-course. Not hyper-personalized, but semi-personalized: what's important for you, et cetera.
Ashish: Hmm.
Llewellyn Van Zyl (Ph.D): We're going to launch that for the participants as well, to get everybody speaking the same language. So I really think we're doing something extraordinary here.
Ashish: I'm really, really excited, and I will definitely share this with as many folks as we can, because it's important for people who are designing in this field to learn from this amazing collection of experts. So look, a question for you: are you going to share a little about your model, the framework you call AI-IARA? Will some of that content be in there too?
Llewellyn Van Zyl (Ph.D): Yeah, my keynote is really about this specific model, and also the story that led up to coming up with these different components, because it was very practical.
Ashish: Can you give us a little preview, a three-minute version of the model, so people can come and learn more about it?
Llewellyn Van Zyl (Ph.D): So we call it AI-IARA; it just happened to come out that way as an acronym. But fundamentally, we started realizing that the future of well-being has nothing to do with figuring out the different components, emotional, psychological, social well-being. It's about maintaining and preserving our ability to write our own life stories: agency, making choices, doing things, feeling in control of stuff. That's where it started. So the framework asks: what are the six capacities
that we actually need in order to maintain our agency as individuals, as people utilizing these systems? And the second part: these capacities become the design criteria against which psychologically safe AI systems should be evaluated. So the first one, for example, is awareness. Am I aware of where an AI system is actually starting to influence my thinking? Am I consciously aware of that? And the design principle is:
Ashish: Hmm.
Llewellyn Van Zyl (Ph.D): do we have checkpoints in place to reality-test things with people, as an example? The next component is interpretation: the whole idea of being able to generate and attach my own understanding and meaning to concepts. Because you know the whole thing about confirmation bias, right? If you get thousands of messages from all the different bots saying the same thing over and over, you're great and you're amazing and you're excellent,
Ashish: Yep.
Llewellyn Van Zyl (Ph.D): then I no longer know what's right and wrong, because something is automatically affirming everything I think. So do I still have the ability to attach my own meaning to something and understand it from my perspective? That's the second part. The design principle underlying it, of course, is to use these systems as a mechanism to test hypotheses: what the system gives you is an idea to be tested, not an instruction that this is the outcome.
Ashish: Yeah. I love that.
Llewellyn Van Zyl (Ph.D): So we've got awareness, we've got interpretation, and the next is this whole idea around intention. Do I still have the ability to choose my own values and priorities before the system tells me what's important to me? This is really important, because when you ask an AI system a general question, which is what most people do (they don't understand concepts like context framing or agentic prompting), it's just going to give you a default response, based on the data it was trained on
and the framework for interpreting that data that some engineers decided was a good idea. So you're not attaching your own values, you're not making your own interpretation: they dictate to you what's important, and we need to understand that. The fourth part is the ability to still take action: to take these intentions and translate them into something I actually do, still having the ability to make effortful choices.
Ashish: Yeah.
Llewellyn Van Zyl (Ph.D): The fifth part, and I keep coming back to the example of the loneliness bot, because there's actual research on this: we lose our capability to establish and maintain good relationships with people. So I call this one relational agency: the ability to actually go into a situation and feel uncomfortable, to say things while knowing I could potentially get hurt by people. That ability to have relational agency, I think, is really important.
Ashish: Yeah.
Llewellyn Van Zyl (Ph.D) (:And they're taking all of this stuff kind of like together is this whole idea around autonomy, the, but the meta capacity of autonomy, right? The idea about like, can still think about my own thinking, but I can still consciously choose when to use AI and when not. So that's basically this framework, awareness, interpretation, intention, action, relational agency, and autonomy.
Ashish: Yeah. Look, as we think about this framework: in our own work we have a model called Pearl. I'll take you through it one of these days. The inspiration for it came about a little like your model; we had the components, and the word actually came from the sea. Pearls get created from irritants that get into oysters, and the model contains a set of interventions that can help turn the daily irritants at work into things that help us flourish rather than burn out.
That's a little bit of the story around it, but I'll take you through it properly. And I really, really like your framework. This notion of becoming aware, on both sides: getting to know the other person, but also becoming truly more aware of what you're solving for, who you are.
And this notion of interpretation: we see the world as we are, not as the world is. That's as true for systems as it is for people. So how do we actually hold these multiple perspectives? What's our bigger intention, the context? What are we trying to do here?
Llewellyn Van Zyl (Ph.D): Yeah.
Ashish: I just love it. And then tying it into action, which doesn't have to be one thing, there can be multiple, but it has to be tied to who you are and how you operate. You mentioned that selling a car to a creative person versus a logical thinker is very different. So: what are the things that relate to you? Keeping the human, the relational element, always at the center of it, and
honoring that by giving them autonomy. It's such a powerful framework. I can't wait to hear your full keynote around it, Llewellyn. But look: what are three actionable tips for our listeners, around the framework, around these risks, or around the opportunities
that this world of AI holds for the future of well-being? What three actionable tips might you suggest our audience think about?
Llewellyn Van Zyl (Ph.D): So let's apply this in practice, starting with awareness. The first thing I would say: look at the three or four AI tools you're using, ChatGPT, Claude, NotebookLM, whatever, and, in your own language, try to write down what you think these things do. Write down how you think they arrive at their recommendations. Remember, this is compliance stuff: if you use a system
to make a recommendation, you should be able to explain how the system got to that outcome. So: how do you think they come up with their recommendations? What do you think they're optimized for? What do you think the biases and limitations are? Just write that down. And then go and actually look at the model cards. Every AI system, whether it's something like ChatGPT or a predictive model,
should have what we call a model card attached to it. You can look on Hugging Face; a model card explains all of this in very plain language. Then compare your understanding of what you think the system does with their description and see whether it overlaps. Becoming aware of how this stuff works, or doesn't, is really important. So is becoming aware of what you don't actually know. And there's no shame in that, because 99.999% of the population doesn't know. So there's no shame.
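(If you would rather pull a model card programmatically than browse for it, here is a minimal sketch using the huggingface_hub Python library; the model ID is just a public example.)

```python
# Minimal sketch: fetching a model card from the Hugging Face Hub
# to compare its stated purpose and limitations with your own notes.
# The model ID is a public example.
from huggingface_hub import ModelCard

card = ModelCard.load("distilbert-base-uncased-finetuned-sst-2-english")

# card.data holds structured metadata (license, datasets, and so on);
# card.text is the plain-language description of intended use and limits.
print(card.data.to_dict().get("license"))
print(card.text[:500])
```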
Ashish: Hmm. Yeah.
Llewellyn Van Zyl (Ph.D): But it is important to understand that those are the things you need to know, because those design decisions affect the information you get, or the support you get, for example. So that's my first tip. The second: take one of these days and deliberately take the more difficult path. Go do something that you would normally
do with AI, but in a different way, without it. If you want to go to work, you would use the GPS to figure out the most optimal route to get there. So, as an example: start your car today and say, I'm not going to take the normal route I usually take; I'm going to try a side road and see if I can get to work. Because the important thing here is to keep building the capacity to use the skills we usually use, without the help of AI,
Ashish: Yes.
Llewellyn Van Zyl (Ph.D): because it helps us become more aware of how these things influence our lives. And then finally, think about something you're using AI for at a general level, whether it's writing or whatever. Say you're going to ask it a question like, I don't know... I can't think of something now, I'm going blank.
Ashish: Yep.
Llewellyn Van Zyl (Ph.D): Okay, say: "I'm a very creative person and I'm suffering from burnout at the moment. What are one, two, or three strategies I can use to help me with that?" Instead of asking the AI for that and taking the AI summary of it, go do the actual research around it. Go read up. Because remember, fundamentally, how do skills develop? Skills develop through friction. You have to struggle with something
for that information or that skill to develop, for that information to be crystallized. If I just ask you a question and you give me the answer, I never actually learned that content, and that content will never turn into, in inverted commas, wisdom. So those are the three things I would suggest.
Ashish: Yeah.
Ashish: I love those. So, my friend, are you open to some rapid-fire questions to close out our amazing conversation? I've learned so much from it.
Llewellyn Van Zyl (Ph.D): Okay, let's go. I'm not good on the spot, but let's try it.
Ashish: Okay. What's a belief you've recently changed your mind on?
Llewellyn Van Zyl (Ph.D): I've always thought there would always be a job for psychologists, no matter what happens. I've started believing that's not the case. And it's not because the systems are taking our jobs; I think it's because of our inability to adapt, and our inability to accept that we don't know the things we don't know. That was the biggest realization this year.
Ashish: What's a habit or a ritual you rely on when life gets hard?
Llewellyn Van Zyl (Ph.D): I phone my supervising psychologist and say, listen, I need to have a conversation. Call a friend, find a friend. I think that's the big one for me.
Ashish: What's a book, a piece of research, or a movie you'd recommend everybody watch or read?
Llewellyn Van Zyl (Ph.D): The movie is called Idiocracy. It's about the dumbest guy on earth falling into a cryogenic freezing chamber and waking up a thousand years later, when all of a sudden he's the smartest person alive. It's satirical, it's funny, it's great, and of course it's a major comedy, but the philosophical question underneath it is very real: what will happen if we outsource everything?
Ashish: I'm gonna watch it this weekend. Idiocracy. Love it. What's a question you wish leaders asked more often?
Llewellyn Van Zyl (Ph.D): Especially with regard to the implementation of AI systems, I really think they should ask: what does the system we're deploying make harder? Every AI system is basically designed to make things easier for us, to optimize our workflows and so on. But nobody ever asks what the system makes harder for us to do. Which capabilities does it erode, for example? I think we should ask that question before deploying
anything, because that question changes everything we know.
Ashish: I love that. Yeah, I love that. And the final one: what's a playlist or a song that turns your frown upside down?
Llewellyn Van Zyl (Ph.D): Linkin Park's "The Emptiness Machine."
Ashish: Love it, love it. My friend, thank you. I know it's late for you, so I really appreciate you coming on our show and sharing the amazing work that you're doing. Thank you for organizing this AI and the Future of Wellbeing Summit. We'll put the dates and how to register in our show notes.
And for all of our listeners: you'll have a chance to get a 10% discount on the summit, so you can go learn from an unbelievable set of experts and practitioners in this field. Thank you, Llewellyn, for joining, for the work you're doing, and for what you're bringing to this field, my friend.
Llewellyn Van Zyl (Ph.D): Thank you so much for the opportunity, and for being so open about all of this. I think the work you're doing is absolutely incredible. So thank you for allowing me to be part of this conversation.
Ashish: Be well.