Dr. Michelle Zhou: Empathic AI is Real and It’s Here – But We Need Everyone Involved!

Much of the AI you hear about these days is about large language models trained to look for commonalities and best next guesses. This causes a lot of fear about how AI might be abused: will the bots take over? Are the inputs unbiased and accurate? Will my teenager cheat on his school essay?

But we can take a more thoughtful and opportunistic view of AI, specifically in areas where we can teach AI empathy. Yes, I said teach AI empathy. My guest today, Dr. Michelle Zhou, and I discuss how cognitive AI is different from large language model AI, how these systems learn empathy, and how they empower both companies and individuals without the resources for expensive solutions. We discuss why empathy is actually even more necessary, not less, in the age of AI. And most importantly, we chat about why everyone needs to get involved in AI – why we need to “democratize it”, as Dr. Zhou states, in order to be more inclusive and learn how to respond to a variety of needs and people. Dr. Zhou also reveals why she believes basic customer service chatbots are one of the worst uses of AI out there!

To access this episode's transcript, please scroll down.

Key Takeaways:

  • AI currently looks for commonalities in people and data; as it learns to be more empathetic, we need to teach it to recognize differences, not just similarities. 
  • The more we all interact with AI, the smarter AI will become at understanding individual differences.
  • There is a time and place for canned chatbot responses, but people often respond better to a reply tailored to their unique questions and needs.

“In order for AI to be inclusive, we need more people to be there. If there are more people participating, then you have more diversity. The more involvement from the human side, the more inclusive AI can be.” —  Dr. Michelle Zhou

About Michelle Zhou, Co-Founder & CEO, Juji Inc.

Dr. Michelle Zhou is Co-Founder and CEO of Juji, Inc., an artificial intelligence (AI) company located in Silicon Valley, specializing in building cognitive conversational AI technologies and solutions that enable the creation and adoption of empathic AI agents. Prior to starting Juji, Michelle led the User Systems and Experience Research (USER) group at IBM Research – Almaden and then the IBM Watson Group. Michelle’s expertise is in the interdisciplinary area of intelligent user interaction (IUI), including conversational AI systems and personality analytics. She is an inventor of IBM Watson Personality Insights and has led the research and development of at least a dozen products in her areas of expertise. Michelle has published over 100 peer-reviewed scientific articles and holds 45+ patents. Michelle is the Editor-in-Chief of ACM Transactions on Interactive Intelligent Systems (TiiS) and an Associate Editor of ACM Transactions on Intelligent Systems and Technology (TIST). She received a Ph.D. in Computer Science from Columbia University and is an ACM Distinguished Scientist.

Dr. Zhou has been featured in Axios, Fortune, The New York Times, and The Christian Science Monitor, and spoke at Fortune Brainstorm Tech last year.

Connect with Michelle Zhou

Website: https://juji.io/

X: https://twitter.com/senseofsnow2011

LinkedIn: https://www.linkedin.com/in/mxzhou/

Join the tribe, download your free guide! Discover what empathy can do for you: http://red-slice.com/business-benefits-empathy

Connect with Maria: 

Get the podcast and book: TheEmpathyEdge.com

Learn more about Maria and her work: Red-Slice.com

Hire Maria to speak at your next event: Red-Slice.com/Speaker-Maria-Ross

Take my LinkedIn Learning Course! Leading with Empathy

LinkedIn: Maria Ross

Instagram: @redslicemaria

X: @redslice

Facebook: Red Slice

Threads: @redslicemaria

FULL TRANSCRIPT BELOW:

Welcome to The Empathy Edge podcast, the show that proves why cash flow, creativity, and compassion are not mutually exclusive. I’m your host, Maria Ross. I’m a speaker, author, mom, facilitator, and empathy advocate. And here you’ll meet trailblazing leaders and executives, authors, and experts who embrace empathy to achieve radical success. We discuss all facets of empathy, from trends and research to the future of work to how to heal societal divisions and collaborate more effectively. Our goal is to redefine success and prove that empathy isn’t just good for society, it’s great for business.

Much of the AI you hear about these days is about large language models that are trained to look for commonalities and best next guesses. This causes a lot of fear around how AI will be abused. Will the bots take over? Are the inputs unbiased and accurate? And will my teenager cheat on his next school essay? But we can take a more thoughtful and opportunistic view of AI, specifically in areas where we can teach AI empathy. Yes, I said teach AI empathy. My guest today, Dr. Michelle Zhou, shares why cognitive AI helps machines analyze individual differences using psychographics and psycholinguistics, versus large language model AI that doesn’t really care about individual characteristics, and how it’s being applied everywhere from education to healthcare. Dr. Zhou is Co-Founder and CEO of Juji Inc., an artificial intelligence company specializing in building cognitive conversational AI technologies and solutions that enable the creation and adoption of empathic AI agents. Prior to starting Juji, she led the User Systems and Experience Research group at IBM Research – Almaden and then the IBM Watson Group. Michelle’s expertise is in the interdisciplinary area of intelligent user interaction, including conversational AI systems and personality analytics. She’s an inventor of IBM Watson Personality Insights and has led the research and development of at least a dozen products in her areas of expertise. Michelle has published over 100 peer-reviewed scientific articles and holds 45-plus patents. She received a PhD in computer science from Columbia University, and her work has been featured in numerous media outlets. Today we discuss how cognitive AI is different from large language model AI, how these systems learn empathy, and how they empower both companies and individuals without the resources for expensive solutions. We discuss why empathy is actually even more necessary, not less, in the age of AI. And most importantly, we chat about why everyone, you and me included, needs to get involved in AI, why we need to democratize it, as she states, in order to be more inclusive and learn how to respond to a variety of needs and people. And Dr. Zhou reveals why she believes basic customer service chatbots are one of the worst uses of AI out there. This was such a great conversation, full of insights. Take a listen.

Let’s get connected. If you’re loving this content, don’t forget to go to TheEmpathyEdge.com and sign up for the email list to get free resources and more empathy-infused success tips, and find out how you can book me as a speaker. I want to hear how empathy is helping you be more successful, so please sign up now at TheEmpathyEdge.com. Oh, and follow me on Instagram, where I’m always posting all the things for you, at @redslicemaria.

Welcome, Dr. Michelle Zhou, to The Empathy Edge podcast, here to talk about all things empathy and AI, which I know is very top of mind for a lot of people these days. Welcome to the show.

Dr. Michelle Zhou  05:01

Thank you, Maria for having me.

Maria Ross  05:04

So tell us a little bit about your story: how you came to this work of working on AI and finding ways to work with it so that we don’t lose our human connection and the human elements that make our interactions with each other so important.

Dr. Michelle Zhou  05:20

Thank you for asking this question. So my area of expertise has always been in an interdisciplinary area known as human-centered AI. The idea is to gain a deeper understanding of individuals, and then use AI to help and guide each individual. You can think of it as a process where the AI really needs to gain a deep understanding of each individual. You’ve probably heard what Cicero said about 2,000 years ago: if you want to persuade me, you have to think my thoughts, feel my feelings, and speak my words, right? So that’s where you’re talking about empathy. If you look up the word empathy in the dictionary, it basically means the machine needs to think what the person is thinking and feel what the person is feeling. And because of this, we believe machines can better help people, especially at an individual level, because every individual is very unique, very different.

Maria Ross  06:32

Absolutely. And tell us a little bit about exactly what Juji does within this context.

Dr. Michelle Zhou  06:37

Okay, so we enable organizations, from educational institutes to healthcare organizations, to create what we call a cognitive AI assistant. What I mean by cognitive AI is AI assistants that not only have language skills, because right now everybody understands large language models like GPT, and they’re very powerful, but also have what we call advanced human cognitive skills, especially the ability to read between the lines and to read people. That’s where empathy actually comes from.

Maria Ross  07:24

And how do we, you know, the big question I get when I’m out speaking is the fear that it’s only as good as the people coding the systems or the inputs going in. We’ve seen there’s so much opportunity with AI, but we’re also seeing some of the flaws in terms of the biases that we have within content and within society. It’s drawing from that content base, and so it’s coming out in the AI. So how do we encode that empathy and that ability to think on the spot as you’re interacting with someone, being able to read someone’s tone, understand someone’s experience? What are some ways that companies like yours are dealing with that challenge?

Dr. Michelle Zhou  08:13

Thank you for asking this question, because there’s a common phenomenon, what I’d call either a misconception or a confusion, especially about what’s generated by large language models versus a machine’s cognitive abilities, right? Let me give an example, and I want to take a step back as well. In order to really be empathetic, as I said earlier, according to the definition of empathy, you really have to understand this person: understand how they think, understand their passions and interests, and understand how they feel about things. Even when we see exactly the same situation, you and I are very different, so you may feel differently than I do, right? From a psychology point of view, they call these individual differences. Current large language models, and the AI generated from them, are actually not trying to understand individual differences. They’re trying to understand the common patterns and structures in public data. It’s the complete opposite of a methodology whose approach is to understand individuals. What Juji does is quite different. We also call it computational psychology. It’s like a psychologist. Remember what psychologists do: you speak with a psychologist, and the psychologist observes, analyzes your behavior, and tries to infer who you really are as a unique individual. That’s why we call it cognitive AI; in our terms, it’s using machines to analyze individual differences instead of common patterns. From there, given the user’s behavior, the machine is trying to infer what’s unique about this person versus another person. It’s about personality analytics, about psychographic analytics. You must get to that level in order to let machines understand who this person is and how this person might feel, right? Just to give a very simple example: at a very loud, crowded parade with a huge number of people, an introverted person will feel very anxious and very uncomfortable, but an extroverted, social person may feel very excited, emotionally very high. That’s why, in the same situation, different people may feel very differently, right? So a machine’s empathy really needs to come from its deep understanding of the psychographic characteristics of each individual. And many people may not even realize that we don’t just analyze the content of the text. There’s actually a line of research called psycholinguistics, which shows that people’s communication behavior indicates a lot about their characteristics, not retrieved from the content but from the form, those very mundane things that large language models sometimes don’t even care about: how they use articles, how they use pronouns, how they use structure, for example passive voice versus active voice. So it’s very different from what large language models are trying to do, which is prediction; it’s almost the opposite. And of course, the two can come together to make an AI assistant much more powerful.
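To make that concrete, here is a minimal sketch, in Python, of the kind of form-based feature extraction psycholinguistics relies on: counting function words such as articles and pronouns rather than content words. The word lists and feature names are illustrative assumptions, not Juji’s actual method; real systems use large, validated lexica and feed rates like these into trained personality models.

```python
import re
from collections import Counter

# Hypothetical function-word lists; validated psycholinguistic
# lexica are far larger and empirically calibrated.
ARTICLES = {"a", "an", "the"}
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
PRONOUNS = FIRST_PERSON | {"we", "you", "he", "she", "they", "it", "us", "them"}

def function_word_profile(text: str) -> dict:
    """Profile the *form* of a message: rates of articles and pronouns
    per word, ignoring what the message is actually about."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    counts = Counter(words)
    return {
        "article_rate": sum(counts[w] for w in ARTICLES) / total,
        "pronoun_rate": sum(counts[w] for w in PRONOUNS) / total,
        "first_person_rate": sum(counts[w] for w in FIRST_PERSON) / total,
    }

# Same topic, different form: the second speaker avoids first-person pronouns.
print(function_word_profile("I think I really love the way my team handled it."))
print(function_word_profile("The project was completed and the results were reviewed."))
```

Rates like these, aggregated over many conversational turns, are the sort of signal a downstream model could associate with traits, which is exactly why the mundane words matter more here than the topic.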

Maria Ross  12:02

So if I’m hearing you correctly, what you’re saying is that many of the models people are exposed to now are analyzing commonalities. Yes. And you are taking things a step further with cognitive AI to actually train the systems to look at differences. Absolutely. Would that be a way to encapsulate it? Yeah, absolutely. Fascinating.

Dr. Michelle Zhou  12:26

Individual differences, very individual differences. So for example, you have your own way of communicating, and I have my own way of communicating. If you use our machine to analyze your communication behavior versus my communication behavior, it will show what is almost like, what do you call it, not your biological DNA but your psychological DNA. It might show that, wow, Maria and Michelle share the most in being very open-minded: they like to listen, they like to embrace new technologies and new solutions. But on the other side, they’re very different. How you handle social relationships versus how I handle social relationships, how you handle life’s challenges and pressure versus how I handle them, might be very different, right? It’s really trying to understand the individual differences, the unique characteristics of each individual. Only at that level can machines be empathetic; otherwise, how could they? Right?

Maria Ross  13:31

And where is that unbiased way of looking at those individual differences coming from? Is it ever impacted by the people programming the machines, or giving them the input to infer these differences in people? Where’s it coming from?

Dr. Michelle Zhou  13:52

Very, very good question, a great question. So actually, of course, we still need training data, right? That’s why this area of research maybe hasn’t caught as much attention as large language models: because of data scarcity. The data needs to come from individuals, so we have to collect it, and of course people have to opt in. Individuals’ data, for example: say we have an interview of you with another person; that’s your data, and an interview with me is my data. But a formal interview isn’t even great-quality data. An informal conversation, for example a very authentic, very open conversation with my AI friend, that kind of data would be the best to illustrate a person’s characteristics, because in those moments I’m most relaxed, I’m most true to myself, right? That’s why we spend a lot of time collaborating with universities to figure out what would be the right data to collect, data that is very authentic and actually helpful for capturing the real characteristics of individuals. So what you see is not public data. For example, this year there was a five-university study led by Auburn University. They collected data for almost three years, using our machine techniques to infer individual differences, and then they wanted to know how those individual differences are associated with people’s real-world behavior, including their GPA scores and third-party evaluations of those folks. They used this to validate whether our inferred individual differences had an impact, because traditional research says individual differences do impact real-world behavior. The study results were amazing: they showed that the personality traits our AI chatbot inferred could predict those people’s real-world behavior on par with or better than traditional psychology assessments.

Maria Ross  16:35

Yeah, and what I’m hearing from you validates something I read in an article somewhere, where they were talking about why we need to not fear AI, because otherwise we’re only going to get a certain type of person interacting with the AI and providing the inputs. It was in response to the question: will bad actors use AI? And it’s like, yeah, if only bad actors are involved in populating the learning that the AI needs. So the more of us who get involved with helping, and maybe I’m not using the terminology the right way, but the more of us who interact with AI and populate it, the smarter AI is going to be about understanding these individual differences, versus just a certain type of person interacting with AI.

Dr. Michelle Zhou  17:27

Actually, you said it very well. Just yesterday I was talking to a person from a university about this, about their concerns around AI inclusion, whether it could actually help people in the whole world, not just a certain population, right? We had this very similar conversation: in order for AI to be inclusive, we need more people to be there. It’s almost like voting in the real world. If only one population votes, they only elect the people from that one population. But if more people participate, then you have more diversity, and more candidates probably represent different populations, right? It’s a very similar idea in AI. For example, psychology studies most of the time recruit students, so our current AI might be best suited for younger people. But now we want to work with healthcare organizations who are actually doing elderly care; when we have more data from there, the AI will be trained even better to cater to the elderly. So you’re absolutely right: the more involvement from the human side, the more inclusive AI can be.

Maria Ross  18:47

I love that. I mean, as an author, this is something a lot of authors are concerned about: are people going to pull our work and claim it for themselves? A part of me reacted very viscerally to that, like, no, I don’t want my information being used and cited without me. But then there was a part of me that said, no, I want my information out there as part of the inputs to these algorithms, because I actually know, trust, and have validated the content that I’ve created. So do I really care if someone doesn’t know that I wrote that particular paragraph? People are still going to buy my books. It’s sort of like a donation for the greater good.

Dr. Michelle Zhou  19:31

Yeah, and you do also imply another side of the story, right? So we have been very careful. First of all, it’s absolutely essential for our team to ask people for their consent, and we always ask. For example, universities always have an IRB approval process, which means the research board has to approve that particular human study. In this case, people know what they’re getting into, how their data will be used, and that all this data can potentially even help them, right? It’s almost like what you said: if a person doesn’t voice their needs, doesn’t express their characteristics, then the machines won’t know. The machines can only serve best the people who showed up, right? Of course, we also want to protect privacy and protect personal data; most of ours is anonymous. That’s why we encourage more and more people to get involved in these kinds of studies, to get their opinions, their side of the story, into the system, while it still stays anonymous, so it won’t reveal your identity or your dark secrets anyway. So it’s a double-edged sword, but I think if people knew about it, it could potentially help them. Just a very simple example: we have started working with financial and healthcare institutes. Think about financial literacy; it’s very important for people, especially over a lifetime. But many people, sometimes even myself, are financially illiterate in areas we don’t know a lot about, and if we hide those shortcomings from the machines, we will never be able to get the kind of help we want. We want to know what kind of financial illiteracy exists, so that the machines, or in this case the humans behind them, can inject the knowledge and help. And having said this, in terms of AI inclusion, there’s an equality side I also want to stress. In the past, especially the past maybe ten years, though these few years things have been changing, and we’re trying to change them too, AI was almost like the early days when cars or computers were just invented: only privileged people could afford them. They had the wealth, so they could enjoy the first car, the first computer, the first TV. What we’re trying to do here is democratize that technology so it can reach as wide a population as possible. And we do that. For example, in the past, if you wanted to create an AI assistant, companies literally had to spend millions of dollars to hire big tech companies like IBM or Microsoft to help them build it and customize it to their solutions. Small businesses, let alone individuals, couldn’t afford any of that. So right now one of our really biggest efforts is to democratize that: we want to make the creation of a custom AI assistant as simple as using PowerPoint, as simple as using a spreadsheet, so that people have control.

Again, this is another kind of inclusion. Yesterday I was telling the story of a girl, I think a teenage girl, and I was looking at what she wrote. She was actually very introverted, and she felt like she wasn’t a great fit with the girls and students her age in her class. She was able to use our technique, our platform, to create herself as an AI she could talk to, and that gave her so much confidence. She said, you know, AI could also be like me, and now I have an AI friend. What was very touching, looking at the comments she wrote, is that she was really talking to herself: it’s okay to be introverted, it’s okay not to be the extra popular one. The reason she could do this is because the tools were there; she couldn’t possibly afford engineers to help her create it, right? That’s what we are really striving to enable. Our company’s mission is to enable the creation of the best human-AI teams and make it accessible to everyone. That’s what it’s about when people talk about inclusion and empathy, and really about humans in the loop. It doesn’t matter whether they’re on the recipient side as users of AI, or creators and designers, what we call the supervisors of AI: they all should have a say in it.

Maria Ross  24:56

So what kind of companies are coming to you? Does it range from all kinds of offerings, customer service and employee engagement? Does it range across industries? What are you seeing? Who are the ones that are really pursuing it?

Dr. Michelle Zhou  25:10

Right. So let’s go back to the topic of today’s podcast, empathy. The companies who have reached out to us are organizations who deeply care about their audience. Educational institutes care about their students, prospective students as well as existing students. Thinking about prospective students, this is, I would say, empathy at its best use. Think about students who came from a family where nobody has ever gone to college, students who are first-generation immigrants and maybe have language issues. They want to get a higher education, they want to acquire a new skill, and they cannot afford to hire a career counselor for 20,000 or 50,000 bucks, right? So they go to the university’s site, where our AI assistant is sitting there 24/7. Remember, those people have to work during the daytime as well; they have full-time jobs, but they’re thinking about their careers. This is why universities use our AI assistant: it sits there to really try to understand their prospective students’ needs and wants, and even helps them find out what might be best suited for them. That’s what I call empathy. If they don’t understand those people’s needs and wants, how would they recommend a program? I was looking at some of the transcripts. One person said, you know, I love my family, I really enjoy taking care of them, and I’m trying to find a program. The AI asked them, could you tell me more about yourself, because we need a certain amount of data to make a recommendation. I was looking at the recommendation made for this particular person: nursing degrees, maybe a counseling job, really matching this person’s unique characteristics, being very compassionate and caring, as you can see from how they treat their family. Then there are other types of people who might be very thoughtful, maybe introverted, and it will recommend slightly different programs. Basically, education is expensive, it’s not cheap. It’s a big investment for the folks who want to get a higher-education degree or maybe a new skill. So this is what we’re talking about with empathy. For student retention, same thing: students may have a full-time job, they’re under pressure, and how can you use an AI assistant to check in on them and help them, right? The same thing applies to healthcare. To be empathetic to a patient, you really have to understand the patient’s pain: not just the physical pain but the mental pressure as well, because they are not well, and they’re thinking about how they’re going to provide for their family and how fast they can recover, all kinds of things. And then we’re starting to see financial companies; it’s very similar. Maybe people are in financial trouble and want to get out of it. Again, they cannot hire help. I guess the more I spoke with potential clients or existing clients, the more I felt that this world is so large, so different. Not everyone is privileged, right? 
There are so many people who actually are...

Maria Ross  29:01

...not. Most of them are not, right? Exactly. And can’t hire that kind of help, including ourselves.

Dr. Michelle Zhou  29:05

Right. I don’t have a financial analyst, I don’t have a financial advisor to advise me on how I should save money for my retirement or something like that. So we really see that empathy needs to be instilled into the machines so they can help people as we expect them to.

Maria Ross  29:27

And what I love about this is that it goes beyond, you know, in my work as a brand strategist, creating connections with customers and with prospects, the image that people have of chatbots, which is: oh, it has a limited number of pre-canned responses, and it’s going to give me what it thinks I need. Sometimes that actually makes you as a customer, especially if you’re already upset, even more upset, because you know it’s a canned response. I’ve gotten that with email responses or chatbots where I’m like, you’re clearly a chatbot; you’re not listening to me. So I love this approach of, actually, there’s a time and a place for that, but there’s also a time and a place to really get AI to be able to react to the specific person or situation, to what they’re specifically typing or saying. And I guess my naive question is: obviously, that’s a lot harder to program, right? So what does that time horizon look like? Is that a really long time away? It’s obviously something that gets better with every interaction, but what are we looking at here in terms of the investment and timing of getting a system to be that responsive to someone?

Dr. Michelle Zhou  30:46

Actually, the time is now; our customers have been using it already. That’s why when I talk to our potential clients and existing clients, it’s always about use cases, right? Most people think about chatbots and AI assistants always in the context of customer service. Customer service, in my opinion, is one of the worst use cases for AI assistants. You know why? Because the people are already very upset, and they normally need urgent help. Have you ever seen a person who calls customer service because they just want to say hello, who wants to have a chitchat? Maybe there are people like that, but very few, right? So the best uses of our AI assistants that we have seen so far, I wouldn’t call them customer service per se. It’s always situations helping people make very high-value, high-stakes decisions, like what I’m talking about in education, or maybe even healthcare. Let’s say in education: I want to get a degree. I don’t have a degree, maybe just a high school diploma, and I’ve found it extremely difficult to find another job, but I have no idea where to start because none of my family members has ever had a college degree. I want help. But remember, from a psychology point of view, people also have what are called social desirability biases: they may not be willing to talk to a human counselor because they feel inferior, they feel uncomfortable while they’re still in an exploring stage. AI would seriously be one of the best approaches here. First, AI doesn’t judge. It doesn’t really care whether you have a degree or don’t have a degree, or what kind of family you come from. It does its best to try to understand what your needs are and to provide that kind of help. This is also where empathy shows up a lot: asking, tell me about yourself, what are your hobbies? Through this very casual and informal conversation, it gathers data, really understands who this person is, and then makes a recommendation, makes a suggestion. That’s the best approach, right? Similarly, in healthcare: let’s say somebody’s grandparent or parent is someone they suspect may have Alzheimer’s disease or Parkinson’s disease, but they don’t have enough evidence, because they are not doctors or physicians themselves. They want to investigate a little bit before talking to a medical doctor. First, the cost is prohibitive; second, they’re just not ready yet, right? So in this case, they go to the website of the healthcare organization wanting to get as much information as possible, and the AI really should chime in and be empathetic at this point: ask, what’s your goal? What information are you trying to find? Whenever we go to a website, have you ever seen one that wants to know about your goal, what has brought you here? I want the AI to serve as the interactive medium there, to ask what I want, right? And when I go to a website, say yours, you’re an author, you sell books. 
If I want to inquire about your book, you never know my purpose. Maybe the reason I’m buying your book is that I felt there’s value in it and I want to recommend it to somebody, versus I want to read certain chapters of the book myself; totally different reasons. I have never seen static media, like a website with one version, do that. The great thing about interactive media like an AI assistant is that it can. Not only can it understand the user’s goal, it can also deliver the information in the best possible way, tailored to the person’s needs. That’s what I call empathy. Another point: I just published an article in University Business talking about the AI tutor, your own learning amigo, right? So think about it. I was talking to a teacher, and this is funny. When you’re talking about a geometry problem: first, math is already a hard subject for most people. Then we present the geometry problem using examples from construction. Think about how many people are familiar with construction examples, containers, walls. It just adds an extra layer of anxiety. I was talking to a teacher who has been teaching math, she was telling me, for 40 years, and I asked her: hey, wouldn’t it be interesting, wouldn’t it be helpful for students, if we used examples the students are most passionate about? A boy or girl who is very interested in playing soccer or baseball is familiar with that, right? Wouldn’t that relieve the anxiety first, like you said, talk to me about what I care about? And you hear the problem nowadays with textbooks: it’s impossible to have millions of versions for millions of students, of course. But now that we have AI, we can really do that, by understanding a person’s passions and interests, and especially how they learn and what their cognitive style is, because some people are very emotional, some people are very example-based. We can really tailor the content to the person individually. That’s what I call empathy at scale.

Maria Ross  36:47

But it is at scale. And I’m laughing at your math example, because I have a nine-year-old, and if they would only use examples from Pokémon, that would be amazing. So as we wrap up, I just want to ask you this last question. I have a perspective, but I would like to hear your perspective from the world of AI: do you think the need for empathy is going away because of AI?

Dr. Michelle Zhou  37:12

Oh, actually, it’s more than that. Not only won’t it go away, we actually need more of it, because we have to teach AI more about empathy, and that has to come from the human side, right?

Maria Ross  37:24

Yeah, we have to figure it out ourselves before we can teach it to a system. And my perspective is also that as we automate a lot of things, that skill of empathy that we can have from a human-to-human perspective is going to be even more valuable, because we will have the machines taking care of the things that, quote unquote, anyone can do at the lowest levels. But then, like you said, we also need to build our empathy so that when we’re encoding these systems, teaching these systems, and providing content and data for them, we’re teaching what we know. In other words, one word of caution: machine empathy versus human empathy might be different. From the empathy point of view, people often confuse two concepts. One is really feeling what the other person feels, affective empathy, versus cognitive empathy, understanding what they feel.

Dr. Michelle Zhou  38:29

So the second part of empathy is acting upon it, which is very different. I might know what you feel, but you know what? I’m not going to act on it. That’s very different. My vision for machines is this: machines are machines, with their electronic parts; maybe in the future they’ll be made with something like human tissue, I don’t know. But I think what’s important for machines today is not necessarily to let them really feel, from the material point of view, but to have them act as if they felt it. And that is an advantage. You know why? You’ve heard about first responders, police, and also nurses and healthcare professionals: they get burned out. They get burned out because they’re humans, they have emotions. When they feel what other people feel, they feel the pain, and they also carry the burden, what we call the emotional burden, on themselves. But the beauty of the machine side is that we can teach machines how people feel and ask them to act as if they actually feel it. That’s the difference. We want to teach machines to act upon it, to behave as if they feel it, because that’s very difficult for humans to sustain; humans accumulate these emotional burdens until they can’t take any more, whereas machines don’t have that.

Maria Ross  40:00

That is such an important point, because I’ve often talked about the different types of empathy: cognitive empathy and affective or emotional empathy. Both of those are great, but if you don’t do anything with that information, it’s kind of a wasted connection in my mind, a wasted anguish, right? Compassion is empathy in action. Now that I have the information about you, what am I going to do next? Am I going to be silent and let you talk? Am I going to communicate in a different way than I was going to before? And this is why we talk about, especially in the business context, how you can still be empathetic and make tough decisions, but your empathy enables you to take action on the tough decision in maybe a different way than you would have if it was just, oh, by the way, 500 of you, we’re laying you off today; good luck, go pick up your last checks. Layoffs are an example: you can do them with empathy, by understanding where people are coming from. What do they need? How do you need to communicate the decision you’re making? And I love that little nugget you just shared about the power of AI and making sure we do teach it, quote unquote, empathy, because it’ll never quit. It won’t suffer from empathy fatigue or compassion fatigue, right? It can just keep going and be there when we as humans really need it to.

Dr. Michelle Zhou  41:31

Absolutely. You said it so well about compassion, which means it requires human teaching, because the machine doesn’t have that knowledge. That’s why I said we work so hard because we want the domain experts to teach machines; we don’t want just the IT people to teach machines. That doesn’t make sense, right? Because the domain experts you mentioned, the counselors, the human advisors, really know how to act upon empathy, the action part of it. We want them to teach machines. That’s why democratizing the tool becomes even more important; we keep saying that. You cannot just have the IT people go set it up. That’s not going to happen, that’s not going to work. We really want domain experts in the loop of creating AI, right?

Maria Ross  42:23

Oh my gosh, this is so great. Thank you, Michelle, for all of these insights, and what a wonderful conversation. Like I say with many of my guests, I could probably talk to you for another 45 minutes, but we’re going to wrap up. I will have all your links in the show notes for folks to learn more about you and about Juji. But for folks that are on the go, where’s one of the best places they can go to find out more about you and your work? Oh, juji.io? Okay, J-U-J-I dot I-O.

Dr. Michelle Zhou  42:49

Right. Yeah, and you can also find us on LinkedIn, or at juji.io.

Maria Ross  42:55

Perfect. Thank you, Michelle, for your time today. Thank you, Maria. Bye. And thank you, everyone, for listening to another episode of The Empathy Edge podcast. If you like what you heard, you know what to do: please rate, review, or share it with a friend or colleague. And until next time, always remember that cash flow, creativity, and compassion are not mutually exclusive. Take care and be kind. For more on how to achieve radical success through empathy, visit TheEmpathyEdge.com. There you can listen to past episodes, access show notes and free resources, book me for a keynote or workshop, and sign up for our email list to get new episodes, insights, news, and events. Please follow me on Instagram at @redslicemaria. Never forget: empathy is your superpower. Use it to make your work and the world a better place.

Learn More With Maria

Ready to join the revolution?

Find out how empathetic your brand is RIGHT NOW, and join our newsletter to start shifting your perspective and transforming your impact.
