Using AI in Ministry
In this episode, Kasey Olander, Bill Hendricks, John Dyer, and Drew Dickens explore the rapidly growing phenomenon of artificial intelligence and its implications for humanity and the Christian faith.
Timecodes
- 05:32 - Experiences and Background in A.I.
- 13:00 - What Role Does A.I. Have in Spiritual Formation?
- 22:07 - Impact of A.I. on Education
- 33:18 - Potential Concerns Regarding A.I.
- 40:28 - A.I.'s Impact on Media
Transcript
Kasey Olander:
Welcome to the Table Podcast where we discuss issues of God and culture to show the relevance of theology to everyday life. I'm Kasey Olander and I'm the web content specialist here at Dallas Theological Seminary.
Bill Hendricks:
And I'm Bill Hendricks, executive director for Christian Leadership at the Hendricks Center. We want to welcome you to the Table Podcast today. Today, we're going to talk about artificial intelligence, and particularly its use, and frankly its potential misuse, in what we might call ministry contexts. So if you're a pastor or a youth worker, you work for a parachurch ministry, you're a missionary, perhaps you're a teacher at a private Christian school, all of these would be examples of what might typically be called ministry contexts. But of course, all Christians are called to ministry no matter where you work. If you're a doctor, a lawyer, a homemaker, a dental hygienist, you're going to find some relevance here because we're going to talk about AI, which is everywhere now.
And you're probably thinking, well, how can we have one more conversation about AI? Hasn't everything been said that could be said? And the answer is absolutely not, because this is a massively erupting technology that, by all appearances, will radically transform life as we know it in the future. But I want Kasey to introduce the two gentlemen who really have the expertise to bring to this.
Kasey Olander:
We have two esteemed guests with us here today. We're joined by Dr. John Dyer, vice president for enrollment and educational technology, and assistant professor of theological studies here at DTS. John, thank you for being here.
John Dyer:
Thanks for having me.
Kasey Olander:
We're also joined by Dr. Drew Dickens, founder of a ministry called Encountering Peace. His doctorate is in theological anthropology, and he studied generative AI for his dissertation. Thanks for being here, Drew.
Drew Dickens:
It's an honor. Thanks for having me.
Kasey Olander:
I appreciate it.
Bill Hendricks:
So I want to roll the clock back, and both of you are younger than me, so we don't have to roll it back as far as we would if I was talking about me, but about you. I'm thinking two little boys are growing up in this world. How could anybody have ever predicted that they would be talking about something called artificial intelligence? That sounds like a science fiction sort of thing. John, why don't you start: how did you come, first of all, to be a part of Dallas Seminary, but also, even before then, to your interest in technology and this whole direction?
John Dyer:
Yeah, I grew up in the eighties, and so there was a lot of technology, I think, in that era, and dad would bring home computer parts and tell me not to touch them, but I couldn't help it, and I would touch them and make things and try to build whatever card he had brought for the computer. So I was always messing with those things, but I think in that era it wasn't very popular to do that, so it was always a little hidden thing. I was trying to act cool at school and never-
Bill Hendricks:
You didn't want to be a geek.
John Dyer:
Exactly. Yeah. Yeah, I couldn't help it though. So I've been playing with those things for a while, and I think in that era you're watching shows, growing up with Star Wars, and there's all kinds of AI in there. And so, I think the assumption was that we would eventually get there, but it just maybe feels a little bit different to be on the path of getting there where we are today.
Bill Hendricks:
How about you, Drew?
Drew Dickens:
I'm still floored you think I'm younger than you, so thank you. I'm still processing. And thank you for that, by the way. I suppose because I am 65, so when you go back to me as a little boy, we're definitely predating a lot of this. But actually not really, because I was born in '59, but in 1955, Alan Turing was sort of one of the many grandfathers, it's quite a lineage, but one of the many grandfathers of artificial intelligence, and John McCarthy, actually out of Stanford, about the same time coined the term artificial intelligence. So this does predate me, but barely. I suppose as a boy, though, growing up, there was just a fascination with science fiction: Star Wars, Star Trek, robots, and, "Boy, wouldn't it be great if I could …" And as a rather reclusive introvert growing up, wanting that imaginary friend, whether that's a robot or not, but somebody I could engage and talk with, that kind of thing.
I had an early love for research in the library and studying, always wanting to find that one card in the card catalog that, if I found it, would unlock something else. And so, always feeling as though there's just one more question to ask that might reveal things. And so I guess growing up with that, I've always had an interest in technology and a fascination with it. And then merging that, having an opportunity to merge that with some of the work that John's done when I was at seminary here, that crossroads of theology and technology seemed real natural for where I've always had an interest. Fascinating.
Kasey Olander:
So bringing us back to the present day, could each of you talk briefly about what your experience is with AI now?
Drew Dickens:
Of course. So when I started working on my doctorate in this field, I always joked that I couldn't get a return phone call five years ago, but now everybody wants me on their podcast. So the timing has been phenomenal. I need to send Sam Altman a thank-you note. The timing has been amazing. But it really exploded. It's been here since 1955, but the advent of generative AI and transformers, that's the T of GPT, and of language models kind of teaching it to speak, has really just happened over the last four or five years. And I mean speak metaphorically, although not anymore. Now literally, as of a couple of weeks ago, or better.
But how I use that in my research, and I have trouble with pronouns, calling it an it or an entity or a being or a system, but I'm able to engage with it sometimes just on an academic level, asking questions, coming up with two disparate points: help me connect Psalm 23 with Ephesians 4, and it can find that. It's unable to not answer, so it will give you an answer to that. And so, I use it every day, multiple times a day, just as creativity sparks. Hadn't thought of this before; show me a list of five things, that kind of thing. It's just a way to start a conversation. The days of staring at a white page are gone, so give me five things. What are five suggestions? What are 10? Give me 50 verses that apply to this. So I use it for that a lot. And then my interest is the spiritual impact: as I'm able to engage with it as some sort of divine entity, what effect might that have on us in the future?
Kasey Olander:
Yeah, that's fascinating. You're talking about it as a starting point, a jumping off point for you to help you in doing your work.
Drew Dickens:
Absolutely, absolutely.
Kasey Olander:
How about you, John?
John Dyer:
In my own case, I think there's a number of timelines you might look at. I got to do an AI chapel in 2016, and it was for our arts week. And so, at that point, generative AI was something that was out there, but it was more in the arts space, generating kind of funky pictures, and they often had a real abstract look to them, not the hyper-realistic things of today. So it was really this question of, is that creativity? That was the hypothetical question at the time. And then 2022 is when ChatGPT comes out and more of just the common person is experimenting with this. Early on in my programming career, I was using forms of artificial intelligence to either create really complex rule sets or to analyze data in ways that are really more statistical.
And then only in the last couple of years, more of that generative sense. For my own generative use, I've been trying different things: using it as the conversation partner and starting point Drew's talking about, but then also trying things where I say, I really want to continue to develop my skill of starting things, and then using AI more on the back end: what am I missing? Having it look at what I've done, what points am I still not getting? And then seeing what difference that makes to me and what changes I see in my own brain and thinking, depending on where I put AI in that. So I'm thinking about my own learning and my own development, and not just what I can produce, but who I can become.
Kasey Olander:
Right.
Drew Dickens:
Can I ask you a question?
Kasey Olander:
Sure.
Drew Dickens:
So there's GPT-1, 2, 3, we're now at 4, and they recently released 4o, which is a turbo version, and I'm talking about OpenAI and their GPT platform, prior to them releasing what everybody's kind of on the edge of their seats for, GPT-5, and what is that going to be? So they did GPT-4o, and now they've released o1. That has more human-like capability in reasoning. Have you played with that yet at all?
John Dyer:
I haven't, no.
Drew Dickens:
So what's really fascinating is its ability to just take a bunch of data points, and it's much more, instead of it being more subjective, more of an objective look at data: what am I missing in this? What are patterns within this that I don't recognize? And asking it for a conclusion, sort of a yes-no binary response. And so, I think for a pastor, that's really interesting. Looking at just putting in attendance data, baptism data, zip code data, and cross-matching all of that, but asking it to make a reasonable expectation or suggestions based upon data is something that just came out a few weeks ago.
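For readers who want to try the kind of analysis Drew describes, here is a minimal sketch in Python using the OpenAI client. The file name, column names, and model name are illustrative assumptions for this sketch, not details from the episode.

```python
# A hedged sketch of the pattern-finding exercise Drew describes: hand a
# reasoning model some congregational data and ask for patterns plus a
# yes/no conclusion. File, columns, and model name are assumptions.
import pandas as pd
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical export: one row per week, with attendance, baptisms, zip code.
df = pd.read_csv("church_stats.csv")

prompt = (
    "Here is anonymized church data in CSV form:\n"
    + df.to_csv(index=False)
    + "\nCross-match the attendance, baptism, and zip code data. "
    "What patterns am I missing? Finish with a yes/no answer: does the "
    "data suggest attendance growth is associated with baptisms?"
)

# o1-style reasoning models take a plain user message here (no system prompt).
response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```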
Kasey Olander:
And John, it's fascinating that you highlighted not just what am I executing, but who am I becoming. Can you speak a little bit more to how you're working with AI and what that does to you? Maybe if it suggests something for you, you're like, "Wow, I never thought of that. But next time I'm in a similar situation, maybe I could be the one to come up with it." Can you speak a little bit more to that?
John Dyer:
Yeah, I think in this whole discussion, that probably is to me one of the most important parts of it, because there will always be new features coming out where we can ask what's a good use and a bad use of this thing, and those are important discussions for us to have. But I do think, as we've seen new technology come out, we sort of cede an ability to it, and then we realize, so who have I become? And the example I always give is GPS. There are so many places I don't know how to get to, even places I've been multiple times, but my mind always goes, "I'm going to be able to rely on the GPS in this case to do that." And so, I'm thinking, even here at the seminary and with my own kids and in all sorts of areas, what skills do I want to develop, and what skills do I want to keep?
It's not that I don't want to ever use AI to do that, but I think there is a difference between someone, say for example in the coding sphere, who's been developing the skill of coding for 5 or 10 years and can then use AI as an accelerant to that skill, versus someone who's starting from that point and doesn't develop the skill at the beginning. And it's sort of like math tables or something like that. You learn all those math facts and it seems sort of silly, but it forms some basic tools in your mind so that later on, when you're doing calculus, you do use the calculator at that point, but you've been able to build up that skill set. So it's sort of, what skills do I want to still have fundamentally under the hood?
And those are hard questions to answer, because there are some things, like GPS, where maybe I really don't need to know how to get to where my sister lives, but it sounds terrible to say that I don't know how to get to her house. So I think that's the evaluation that we want to keep talking about: who do I want to become, not just what do I want to produce, and then thinking about what the balance of those things is.
Bill Hendricks:
Right. Drew, I know "who do I want to become" is a real question that you've thought a lot about. In fact, I think you said you worked on that in your dissertation, the use of AI relative to spiritual formation and discipleship. Tell us more: how might one consider using AI as a part of those functions?
Drew Dickens:
Well, I'll bridge off of what John just said, and I concur; everything he said was right on point. I think it's a broader sociological question of the imago dei: what does it mean to be human? That's the foundational question.
Bill Hendricks:
That's key.
Drew Dickens:
And how is this affecting that? Of course it's going to affect it, but how? And how might we let it? You mentioned in your introduction that we use it every day without even realizing it now. If we go home tonight and you want to watch something on Netflix, Netflix has an amazing recommendation engine that is driven by AI, so it knows what I've watched in the past and it's making those recommendations, and so does Spotify and whatever else. It's the way the Facebook or Meta algorithm works as well, algorithms in general.
And so, it's making us, I don't want to use the word lazy, and I'm not sure it's appropriate, but I don't have to get on my bike and ride down to the library to find a list of movies that came out or whatever. I heard an interview with one of the executives of a Hollywood studio, and their vision for the use of AI is my wife and I going home tonight, sitting there, and going, I'd like to watch a movie. I've only got about 30 minutes, and I really like Tom Cruise and Scarlett Johansson. So a movie with them in it, where there's some tension and a resolution, where they're at a hotel in Italy or whatever; you just completely make this up.
Bill Hendricks:
And it makes it up.
Drew Dickens:
There it is.
Kasey Olander:
Wow.
Drew Dickens:
So what does that do? But again, back to the imago dei and what it means to be human: what does this do for our creativity? And again, theologically, I can hear my professors echoing in my head right now, but do I really have to memorize these verses and such? Meditate on the word both day and night? Because now it can do that for me. So I think it has tremendous potential to affect what it means to be human. With respect to your question about some of the work that I did: is there a follower of Christ in the world who hasn't asked, "God, why aren't you answering me right now?" Why am I not hearing an answer to this prayer? Or tried to get someone from the church to return a phone call because they're struggling with this or that?
Now we have access to this system that will listen to me 24 hours a day, that will respond without judgment, that has access to the world's writings on whatever theological view I agree with or embrace, hold to. And so, how does this affect me as a follower, as a believer? What does this do to pastoral training? And we mentioned it earlier, but I love the fact that we're discussing AI in an embodied group of humans around a table, because that's not where this is going. In fact, it's celebrating virtual relationships, and in a world that has this epidemic of loneliness, I think it's only going to drive that further and further. So I think that's how it will affect us.
Bill Hendricks:
Well, it's fascinating. I mean, the whole thing's fascinating, but here's what's fascinating. A lot of the early, and still current, fears about AI are, "Oh, someday the machines are going to be so smart and they're going to say, 'Let's wipe out humans.' And that's the end of us." And that is a prospect, I guess. But you're describing something that could be much more subtle: we get so accustomed to these, well, you called it an it. We don't yet really know what to call something that has at least faint stirrings of a person, but there's no body there. We are hesitant to say, "No, that's a person." So it's this algorithm. But we get so accustomed to engaging with the it, we're having conversations, it's doing a lot of creative things for us, solving a lot of problems for us, that slowly but surely, we just kind of become a nothing. All we do is just speak into a machine and everything happens.
Drew Dickens:
To add to something John mentioned, again, back to what do I want to be: I think we have to be mindful that it's happening. This won't be one day; this is happening. We're already utilizing it every day, whether it's watching a movie on Netflix or even with GPS finding a faster way to your sister's house. And the next time you get in the car, it will ask you, "Do you want to go to your sister's house?" So I think we need to be mindful that it already exists. It's only getting faster and smarter. And eventually, the goal of every company out there right now is artificial general intelligence, which is the point where it exceeds human cognition.
And so, I think we have to be mindful that it's happening and have an honest conversation with ourselves: how is this affecting me today? And be aware of that, be aware of that within our community, have this conversation with other pastors, with your own family members. We have two grandsons, and I get chills thinking about it. They're four and six; you can talk about the future when they're 10 or 12. Heck, even next year: how is this going to affect them in school and things like that? So I think having the conversations now is so important.
Bill Hendricks:
And let me press into that a bit. What do you think those conversations with our children should be about?
Drew Dickens:
I'm a grandparent, so that's not fair. I think "spend more time with your grandparents" is the answer to everything. I think it's being mindful of who their friends are and the influence that they're getting outside the home. I think it's being aware of how they're utilizing AI in the classroom, in their classes. I don't mean for this to be a throwaway comment, but it's being aware of how they're using it on their phones and on their devices; I mean, our four-year-old is quite comfortable using an iPad. So just monitoring what they're using and how they're using it, because it's all about collecting data; it's all about learning.
I don't care if you're four years old or 65, it's collecting data always. And so, just be aware that every time it's asking a question, is today the day you want to go see your sister, it's using that as a way to learn more about our habits. And so, just be mindful that even if they're playing a simple little game on mom's iPhone, it's collecting data and learning how to speak and interact. That's what these language models have done and are doing: they're predictive agents. "If you liked this last time, I'll give you more of that." And so, I think it's just being mindful and aware of what they're doing.
Bill Hendricks:
John, you have kids.
John Dyer:
Yeah, I mean, I don't think you should have any restrictions. I had my five-year-old driving our car early on just because I thought these kinds of restrictions are ridiculous, and we should let them use the latest vehicles possible. No, I think we intuit that we shouldn't have young kids driving 3,000-pound vehicles because we know there's a lot of damage that could be caused to other people with those. It's a lot harder to see what the wrong would be with devices. And as a parent, you realize, when you're at a restaurant, how powerful it is to hand a kid a device, just how that shuts everything down and makes your life easier. There's a lot of friction involved in not doing that. So I think we want to be really aware of that and sometimes be counter-cultural there.
But I do think it gets to the purpose of education. Kids always have a love-hate relationship with school. "Why do I have to do this? Why do I have to learn facts that I'm never going to use again?" That refrain that we all did when we were kids too. And so I think with education, you can see that there was at least this informational aspect, and maybe this skills part, that we're trying to do: having information you can draw on, knowing about history and society, not making mistakes. But those are also intended to form your mind in a particular way, so that you're learning how to memorize and learning how to store things and learning how to do skills.
If I think about, say, Maslow's hierarchy of needs, there are those basic needs at the bottom, just food, and then you've got shelter, and then you have relationships, and work, and kind of transcendent things. As tools and technology fill those needs and mean that we don't have to do them anymore, it keeps moving up the hierarchy, and it puts more and more pressure up at the top, and there's less desire to work on those lower things, even if those could be important for us. And so I think with the industrial age, we got enough food that we no longer need to forage, but now we really struggle with that overabundance. So we're always eating too much, and our bodies are really hard for us to work with.
And then I think with the internet age, it was that information question: now we have all the information we need, but we are battling misinformation all the time. And I think we're in the skills era now with AI tools: what skills do we want to have, and what skills do we want to cede to a machine? And so, I think with our kids, that's just harder to conceive of: what is this education thing supposed to do for you? And where do you want to be in 10 or 15 years? Those are always hard questions to ask kids, but I think they're important to ask.
And so, just like we would do with sports, we'd say, "It's fine for you to be able to look up the length of a marathon, but if you want to run a marathon, you're going to have to do particular things over time." And so, I think it's the same thing with skills: learning the basic things, learning the math facts or whatever they're called, so that you can get to calculus. And it'll be the same thing with these other skills, but it's moving so fast that I think a lot of the jobs will change.
So it's keeping on those fundamentals and then at least having some sense, going back to Drew's question, of what it means to be human. How do I be human in this particular world? Because we can't ask our kids to go back to 1960 or 1980 or something like that. They're going to grow up in this world. And so, there's that balance of preparing them for this world and being in that world, but not of it in some sense. So there are a lot of conversations to have there, and I probably just shotgunned a million different things rather than looking at it straightforwardly, but it's encouraging them to do hard things and develop grit in a world that wants to make everything easier, which tends to not always actually be better for us.
Drew Dickens:
I love what you just said about hard things. Part of education is doing the hard work, and I think that's one area we're seeing affected by AI: it's easy to skip over a lot of the hard work. And what you said about the hierarchy of needs, it's so interesting looking at articles about the jobs this will affect. I still don't think we know yet. It's easy to point to anybody doing more transactional kinds of work, but even with robotics, it can lay bricks, so we don't quite know yet. But again, going back to what it means to be human, I think about the fundamental ability to communicate with other humans. One of the things expanding very quickly right now is OpenAI's new advanced voice mode, where I'm tempted to pull it out right now as the fifth guest on our podcast, because, down to breathing and saying "um," it's incredible how it's able to sound-
John Dyer:
Mimic.
Drew Dickens:
Mimic is the word. What are the applications of that for someone in a nursing home who never has anybody visiting them? Imagine the effects on those who are homebound and the elderly. Well, the negative side of it is you can actually interrupt it. So if you ask, what are your views on this, and a couple of seconds in you don't like where it's going, you say, well, no, no, no, that's not what I meant; I meant more this direction. It'll start again. So all of a sudden, I'm the center of that universe. I can interrupt it in a way that would just be rude otherwise; you're not going to continue a conversation if I stop you every five seconds and say, "That's not what I was talking about." And so, again, I think it will affect our ability to communicate in an empathetic, loving, caring way very soon. It's easy to go the dystopian route with killer robots and Schwarzenegger, but I think these are some of the more subtle areas where it will affect us very soon.
Kasey Olander:
If I get accustomed to demanding whatever I want instantly and taking it, then I come to expect that even from the people around me.
Drew Dickens:
And without judgment. So I can ask the dumbest questions and it's going to say, "Boy Drew, great idea. Let's explore." And so, I become the center of that universe, and that's not biblical. It's not healthy. So I think that'll be fascinating to watch very soon.
John Dyer:
It's interesting to think about trying to flip that and have us imitate the device. Here's what I mean by that. Most of us are not particularly good at listening to someone; we just don't have that skill of listening very well, whereas our AI tools are very good at it. They're just infinitely patient. And that can be really good in some ways. But I think if we could develop that skill, that would actually be really, really powerful: to practice that skill of noticing someone. And I think we see some of that even in the little details in the Gospels. You'll see these little throwaway comments where it'll say, "And Jesus noticed somebody." The guy at the Pool of Siloam. I don't think we're particularly attuned to being able to do that; we're always listening in order to speak back instead of listening in order to listen. So I wish I was as good as AI at listening to different people.
Bill Hendricks:
Well, there's actually a psychologist out of Biola, I can't recall his name, but he has a wonderful quote in his book. And the thought here is that most people feel like nobody ever pays attention to them. People see them, interact with them, but it's like furniture: I transact with you, but I don't see you, your personhood, who you are. And then when somebody stops and attends and listens, something happens, something very profound happens. And this guy says, "For most people, the difference between being attended to and being loved is so slight that they can't tell the difference." And I love that phrase, because I think you both have raised the real issue that AI poses for our world, and within that the church: what does it mean to be human?
Even apart from AI, our technologically driven, transactional culture forces us to begin to lose our humanity. And so, we are not alive anymore; that's where it ultimately goes. And you talked about being sort of counter-cultural; AI does sort of create a counter-cultural question. Here are people who don't necessarily believe in God and don't necessarily have any deep moral commitments or values-driven ways of engaging the world. And then here are followers of Christ, who by definition are supposed to be bringing love, bringing the fruit of the Spirit, into all that they do. And so both groups work with this technology, but one group does it differently than the other, simply because of what drives the way they use all other technologies, the way they use all other resources in the world. It seems to me that has to be thrown in here.
Drew Dickens:
I'm tempted to add the word should. These two groups that you refer to, I don't know that they're always that distinct from each other.
Bill Hendricks:
They're not. That's right.
Drew Dickens:
And again, it goes back to what I was saying earlier about being mindful of the effect that it's having on me. I created this platform called The Digital Shepherd, which allows you to engage with this it. And it's been well-trained in Protestant dispensational theology. I have a survey attached to it asking just, what was your experience? And one of the questions I added was, "Did you feel it was showing you empathy?" Now, of course, it can't show empathy. It's not human. But it's read every book on how to be empathetic. So one of the things I had it do, I don't know if you remember whether it did or not, was ask your name early in the conversation, and I had it coded to mention your name every third or fourth response. And the empathetic scores went through the roof.
So if a response is, "Boy, Drew, thanks for sharing that. I appreciate your vulnerability in sharing that request," all of a sudden I feel like it knows me, it heard me. Again, it heard me. And so, I think that subtle line between being known and being loved applies to everyone, not just believers. And I think we're all easy pushovers for just someone knowing our name. I can't tell you how many times I've been in a Starbucks and just referenced the barista's name emblazoned on the front of their apron.
Bill Hendricks:
Right? It's powerful.
Drew Dickens:
But they'll come to a dead stop. "How do I know you? How'd you know my name?" "Well, it's on the giant name tag you're wearing." They'll say, "No one ever calls me by name. No one ever says that." So it's just the power of the name. And by me adding that into the training, the empathetic scores went through the roof. So I think we're all easy pushovers for being noticed.
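As an aside for builders, the personalization pattern Drew describes, asking for a name early and then weaving it into every few replies, can be prototyped in a few lines. This is a hedged sketch under assumptions, not the actual Digital Shepherd code; the model name and prompt wording are illustrative.

```python
# A hedged sketch of the name-personalization pattern Drew describes:
# ask for the user's name early, then address them by name every third
# or fourth reply. Model name and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [{
    "role": "system",
    "content": (
        "You are a pastoral conversation partner. Early in the conversation, "
        "warmly ask for the user's first name. Respond without judgment."
    ),
}]

def reply(user_message: str, turn: int) -> str:
    """Send one user turn; periodically nudge the model to use the name."""
    history.append({"role": "user", "content": user_message})
    if turn % 3 == 0:  # every third response, per Drew's description
        history.append({
            "role": "system",
            "content": "If you know the user's name, address them by it now.",
        })
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(reply("I'm struggling to pray lately.", turn=1))
```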
Kasey Olander:
Right. So as we think further about what it means to be human, what it means to desire love and companionship, we know that with AI, the A is for artificial. If it were to move toward physical robots being able to make decisions and move around, would that be a step forward or a step backward? Because in some ways, wouldn't that mimic embodiment? We know that being embodied creatures is such a crucial part of being human. Would a robot with that sort of embodiment be a danger to be aware of, just because it would more easily simulate community without actually providing it?
John Dyer:
I think the short answer is yes, of course. But I do think there are some fun examples there. I did see a computer scientist, whose name I forget, who says something like, "I don't want AI to write my poetry and novels so I have more time to do my dishes. I want it to do my dishes so I have more time to write my poetry and my novels." And so, I do think offloading some of that physical labor would be nice. Of course, that means you can't do the physical labor anymore, right? So whatever you're ceding, you're choosing to not become a person who can do those things. But we all do that when we have someone who mows our lawn versus choosing to do it ourselves, and so forth.
And then there's been some work done in elderly care with some type of cat-looking robot that you can interact with, that reminds you about your meds and that you can talk to. They might do a six-month experiment, and then, once the experiment's over, take the robot away. And I think most of the people who were interacting with it really liked it, because it was the most engaging thing they had. So I think people do develop attachments very quickly to these physical things. We're probably all aware of the Spike Jonze movie with Joaquin Phoenix, Her, where it was a disembodied Scarlett Johansson. Now, with embodied forms, you're seeing a million movies where people are going, "I think this is going to matter." And usually sci-fi, or now horror, is the place where you get those "I think this is going to matter" questions. So I do think that it will.
And it seems like people form very quick attachments. There are also a lot of biases, toward female or lighter-skinned robots, built into these things. So a lot of those are coming out. I don't know exactly how it'll work out, though. That's the real challenge for us to figure out. With the smart revolution, having a smart dishwasher doesn't really seem like it's that helpful to us. But I do think a machine that can simulate more human things will be helpful. And then all the other things that you can imagine will be there, and already are there, all the really dark things.
Drew Dickens:
Meta, formerly known as Facebook, is coming out with their augmented reality glasses, called Orion, I believe. And I think that's where we'll start to see some of the initial effects. It's not a robot, but with these glasses, with the advent of some kind of physical device that I can wear and carry with me, it's an extension of what the phone was. If I'm able to view the world through it, even if it's an augmented view of the world, I think we'll see that fairly soon. And the effects: being able, as I'm looking at you, to see a scroll of verses over here would be an easy application of that, or it making suggestions as I walk down the street trying to find my sister's house.
So I think wearable devices will come before we get into the movies and Her, which is a distant future, though it could be, I guess, next month. And it's interesting when you hear the effect that movie had on the current AI engineers who saw it as kids. I guess one of the things OpenAI was accused of is using her voice on one of the initial releases of the voice model. I think we'll see some wearable devices, around your neck, something in your pocket, that you're able to engage with and that guide you, that will be able to constantly listen to conversations around you and make suggestions and recommendations on whatever. All those things I wish I had known: "I'd forgotten it was your birthday today." Those kinds of things. So I think those devices will happen fairly soon, and they'll have a positive impact if they can help me remember someone's birthday, for sure.
Bill Hendricks:
I can think of many of our listeners who are hearing all of this and recognizing that it gives them back all this information, but they have a strong distrust of what they're getting back, because they feel that at the other end of it, somewhere, are people with a whole different set of worldviews and politics, and that somehow they're being played by all of this. Speak to that. How do we know what to trust? And maybe we don't?
John Dyer:
Yeah, and I think the big things that were in the news over the last couple of years have been really when someone was over-correcting in a direction. So for example, if you have an AI read a bunch of stuff, it's going to learn that most nurses are women and most doctors are men. It's going to learn that bias that we have, even though that shouldn't be so, but that's just the way all human writing has been. And when you try to over-correct for that, sometimes you can go in the wrong direction. So I think Google famously had an image generator where they were trying to over-correct from assuming that all presidents were men or something like that. But if you said, "Give me an Irish person," it would try to give you four different races at the same time. And so, people got up in arms about that, and they pretty quickly corrected it. So it's correction and over-correction.
I think really most of those engineers are trying to remove the bias that's been built into human civilization, and yet they then introduce other ones. So you can ask about various presidential candidates or whatever, and you might get two different answers depending on the background. So there is some of that in there. I don't think it's quite as extreme as we make it out to be, but I do think we should be aware of it. And this is one of those areas where there are some Christian groups trying to build a fundamentally Christian LLM so that it would have better answers. I don't know how those projects will go.
I do think this is where machines are good at a certain level of information. And then I think having a trusted network of people is really important for you. And having a diverse network of people is really important, so you don't allow an AI to filter-bubble you, and you don't filter-bubble yourself with your own relationships; you need to create breaks in both of those. And I think, in theory, the church of Jesus Christ is the best place to do that because-
Bill Hendricks:
Well, it's a global church.
John Dyer:
So it should be Jew and Greek, slave and free, male and female, old and young there, and hopefully your church has some representation of that, and you're able to meet with people who don't shed their identity but hold it and yet are in Christ when we're together. And I think that should help.
Drew Dickens:
It's interesting. I heard a quote several months ago that AI will achieve adolescence during this presidential election cycle with misinformation. I haven't seen that dramatically yet; we're still a month or so away. But where I'm starting to see it, which is really interesting, is in the phrase, AI has become a verb: "That looks AI'd." And I think about its ability to immediately cast doubt on truth. So what does that mean for us, again, theologically? Now I can immediately dismiss a fact as, "Boy, it looks to me like that's AI." Or, "That conversation was generated by AI." Or, "That image was generated by AI."
I think we're seeing a lot of people politically use it that way, just being able to cast doubt. A friend of mine who's an attorney and I were talking just the other day about how you might use AI to plant a seed of doubt in front of a jury. Could it be? Is it possible? But is that reasonable doubt? So I think, again, we need to be aware that that is a thing. Occasionally it's called the beast in the box. The narrative that's easy for us to understand is that it's some kid in a basement at a cubicle hard-coding biases into the language model, and that's not the case. But it's a narrative that's easy to grab hold of, that they're against my worldview. I don't think that's inherent in the programming of these language models.
In the beginning, I think they were just trying to find as much data as they could. They were downloading conversations off Reddit initially and things like that, which, again, have a built-in bias; Reddit was just teenage white kids. And so, early on it spoke like a teenage white kid. So as they're able to acquire more and more data, and now it's generating its own data, I agree with John a hundred percent: I think we'll see less and less of that. But I think the other question is, how will we use it as a tool against truth?
Because now I'm so easily able to dismiss something: "You say that, but I've heard that AI can …" Now all of a sudden, you've dismissed an image. You've dismissed a … I could have AI create a granite stone showing something about Jesus and whatnot, and I guarantee, if I did and posted it on Facebook, someone's going to say, "Wow, sure enough, look what's going on."
John Dyer:
Oh yeah.
Drew Dickens:
And so, I think we need to be inherently curious about the things we see, but also to keep an understanding, again, of absolute truth, and to ask what all of this will do to us.
Kasey Olander:
You guys have highlighted a number of different dimensions, some positive, some negative, of how AI might form us as humans. So as we wrap up here, how would you encourage our listener to cultivate a heart of discernment when it comes to new technologies? I've noticed you haven't come out and said, "Every kid needs to limit their screen time to one minute." But how do we help, whether parents or ministers or whoever, how do we help our listener cultivate discernment when it comes to things like this?
Drew Dickens:
Well, don't ask John because he has no limits with his kids.
Kasey Olander:
That's right. That's right.
Drew Dickens:
Actually, for the first time the other night, I had AI speak a bedtime story, and it was so interesting watching them listen to it. They were able to create it; they came up with the prompt. Have it include zombies, have it include this and that, and swords. So they had created the story, and it was able to feed back a pretty exciting story. But I'm astounded, and again, I have a bias here, because it's my research, but I'm astounded at how little people know about it. They're being exposed to it every day on Facebook and Netflix and Instagram; they're being exposed to it constantly, but they've spent so little time reading about it and what's behind it.
What is a language model? What is natural language processing? What is generative AI? These tools are in our faces constantly, but we haven't spent any time really researching and learning about them. So that's what I would recommend: take a moment, find a couple of podcasts like this one, find a couple of blogs to read, and there's no shortage of articles, and just read up on what's going on.
Kasey Olander:
Sure, yeah, you're encouraging our listener to cultivate more understanding so that they can decide what's appropriate for them.
Drew Dickens:
Absolutely. Absolutely.
John Dyer:
Yeah, Drew, that's great. I think learning about it some is good. And I would say some negative things and some positive things. On the negative side, I would say I'm skeptical of myself and my own desire for ease, for something that will just benefit me. And I need to cultivate that skepticism: my instincts are usually wrong, and what I want God to do is to reform my desires. That's the deepest thing that I want, to want the right things. And then I also need to be skeptical because, with anything that's free, I'm the product, and so my data, my desires, my purchases, my advertisements, that's probably what's on the other end of it. So I want to cultivate that skepticism. On the other hand, I think I probably need to learn something about this in a tacit way, so I'm going to need to experiment with it a little bit.
So learning about it, like listening to podcasts, is important, but so is doing some level of experimentation. Even in my own classes right now, I have one assignment where I'm asking the students to use this in a way that helps them learn over the course of the class, but I made it optional. So we've got some, "Here's what you can't do, here's what you can do, here are some gray areas, here are people you can go to in those gray areas so you can be developing relationships." That way you're having opportunities to engage and learn, not just in your head, but in a practical way.
Drew Dickens:
I love what you said about fighting against ease. Something else I was just thinking of when you said that: find those areas of your life where you are relying on it to make something easier, and be intentional about doing the hard version of that, doing the hard work. So instead of finding a recipe on Instagram or whatever, actually sit down with someone and share a recipe that's handwritten. Do the hard work of studying, do the hard work of writing with pen and paper. Recognize those areas where you're falling into ease, and be intentional about doing the hard work.
Bill Hendricks:
Well, these guys have given us some great calls to action here, some real practical things that we can do around this topic. By the way, I want to let you know we recorded another podcast on artificial intelligence, with Fuz Rana, back in July 2023. So you can also access that at The Table Podcast and add it to what we've learned today. But I want to thank both of you guys, Drew, John, for being here.
Drew Dickens:
Thanks.
Bill Hendricks:
And we just obviously just got a thimble full of all that we can talk about, so we're going to have to have you back.
John Dyer:
Sounds great.
Drew Dickens:
Love that.
Kasey Olander:
We want to thank you too for listening. If you like our show, make sure to leave us a rating or review on your favorite podcast app so that others can discover us. We hope that you'll join us next time when we discuss issues of God and culture to show the relevance of theology to everyday life.
About the Contributors
Bill Hendricks
Drew Dickens
Drew Dickens is a creative and visionary leader with extensive experience in non-profit organizations and media. Throughout his career, he has developed growth strategies, led international communication efforts, and founded the Encountering Peace Podcast, which has reached millions globally. With a Doctorate in Theological Anthropology, he is passionate about exploring the impact of Generative AI on spiritual direction and divine inquiry. Married for 42 years, he cherishes his family, including his two grown sons and two grandsons. His journey is shaped by a deep commitment to faith, innovation, and the intersection of technology with spiritual growth.
John C. Dyer
Channeling Eric Liddell, John likes to say, “When I code, I can feel God’s pleasure.” This desire to glorify God by showing how our creativity is an important aspect of our role as image bearers drives John’s work and teaching. A former youth pastor, he enjoys working with students to see how the biblical story brings insight and clarity to the ideas found in science, sociology, and culture. John is married to Amber, a literature and philosophy professor, and has two lovely children.
Kasey Olander
Kasey Olander works as the Web Content Specialist at The Hendricks Center at DTS. Originally from the Houston area, she graduated from The University of Texas at Dallas with a bachelor’s degree in Arts & Technology. She served on staff with the Baptist Student Ministry, working with college students at UT Dallas and Rice University, particularly focusing on discipleship and evangelism training. In her spare time, she enjoys reading, having interesting conversations, and spending time with her husband.