Pierce Berke sits down with Michael A. Elliott, the 20th President of Amherst College, for an exclusive conversation about AI in liberal arts education, the future of higher education, and what it means to lead an institution through unprecedented technological change.
All right, hi guys, welcome to Tech and Business. Today we have a very special guest. We're going to be interviewing President Michael Elliott about Amherst, AI, higher education, and much more. So please introduce yourself.
Hi, Pierce. I'm Michael Elliott. I'm President of Amherst College. What I know about AI comes mostly secondhand, but I'm really excited to have this conversation.
Yeah, awesome. Let's get right into it. Thank you for sitting down with us again. Last episode we spoke about many different things in terms of current events. This conversation is going to be a little bit different.
Tech and Business covers AI, technology, and the business landscape from a technically grounded perspective. Today I want to talk about how AI is reshaping higher education and how a humanities scholar reads this moment. These questions are intended to give our audience and the Amherst community a substantive picture of where the college stands and where it may be headed. All right, so question one. In Custerology, you analyzed how different groups construct competing narratives around the same historical event, and how those narratives shaped American identity for a century.
AI is being narrated right now as both utopian liberation and existential threat, with each narrative serving the interests of the people telling it. As someone who has spent his career studying how stories shape American self-understanding, how do you read the narratives being constructed around AI?
That's a terrifically researched question, first of all. I love the fact that you go back to my book, which is about how competing narratives of warfare in the American West shaped the way that people both experienced events at the time and then remembered those historical events. Just to put in a plug for the book. And you're absolutely right. We are now at this moment, when we talk about technology, specifically about artificial intelligence, where people are drawing on and articulating a variety of cultural narratives, most of which precede artificial intelligence itself, as a way of understanding and arguing for how we should understand what's going on.
And there are lots of different cultural references that get pulled out for this. And it's interesting, the techno-utopian strain of this really doesn't have as much to draw on in terms of popular culture or historical memory. The closest thing maybe is something like the Jetsons, right, where the vision of a future or a technology is that it is simply there for us and makes our lives dramatically easier. You know, I remember, I feel like 10 or 15 years ago, you started seeing this meme like, dude, where's my jet car? And some of what you're reading about AI is the promise that the jet car is coming, a version of that, and driverless cars are literally one version of that.
More to the point, I think, is the utopian fantasy that technology will finally deliver something that was promised long ago in contemporary society, which is a reduction of labor. And there are different, really interesting cultural texts that fantasize about that possibility. There's a great utopian novel from the 1880s called Looking Backward. But the person I think about is an economist named John Maynard Keynes, who in the 30s predicted that technology would mean we could radically reduce the work week, something that's never come to be. But you hear that in the kind of utopian strain.
This is not only going to improve the quality of your lives, it'll also deliver new technological solutions to problems like climate change and cancer, and you may have the opportunity to work less. The other side, the pessimistic version of this, has a much greater science fiction literature to pull from. Dystopia, I guess, always sells a little bit better. To my mind it goes back to Frankenstein, the story that we will create something that we will no longer be able to control and that will become a direct threat to us.
Of course, that's a popular science fiction trope of all kinds. I've heard a lot of references to 80s movies lately, War Games. Did you watch War Games?
Yeah, okay. I'm dating myself here. Terminator. You hear a lot about Skynet: are we in the midst of building Skynet? So, you know, how do I read those narratives?
I'm not surprised by them. I think that in a moment of uncertainty, we pull from whatever we can find available to us. And, speaking just as a kind of cultural historian, I still see them as contested and emerging right now. I think at some point somebody will go back to this moment, and probably to the entire period that begins with the introduction of the internet and continues through this moment, and they will talk about the uncertainty that these technologies created and the ways that people navigated that uncertainty.
And you're right, they can certainly be turned into arguments for how to understand this moment. What I think is really interesting, as I read and absorb things, is how much more powerful and widespread the pessimistic narratives are, at least in the circles that I travel in. The fear seems real. And yet, at the same time, what is advancing in terms of technology also seems inevitable.
And I think that that's a key part of the whole way we're understanding this moment. We may not like it, but we think it's coming. And we feel a sense of powerlessness about what might be transpiring. That's a hard cultural condition to measure and to inhabit, but it feels a lot like where we are in 2026.
Definitely, yeah. And I feel like a lot of people can sometimes feel very overwhelmed with figuring out what may be objectively true and what's maybe swung in a certain way to satisfy certain…
Well, when you read surveys of people working on AI, what's interesting, from the way I read them, is that some of the same people who think that this could be the most important technology since the printing press, in terms of delivering material abundance and security and health and all of those things, also think there's a non-zero chance that it could be an existential threat. And so when you have the people working on it who feel both of those things, that is really hard. And then, of course, we reach for all kinds of metaphors when we describe what it actually is. Even this term: a lot of people spend time talking about whether intelligence is actually the right term for what this technology is or does.
And then we get into the ways that we anthropomorphize it. We call it by names. We talk to chatbots like people. The technology has clearly been developed so that the large language models resemble people; they mimic human speech and personality, and we respond to that.
And now there are really interesting questions arising about whether or not AI could have rights, and if so, what would that look like? So, there's a whole field called science and technology studies that I've not immersed myself in. There are better people to get on your show to talk about this. But, you know, that perspective points out that science never takes place outside of the human realm; it happens inside the realm of the human condition. And we always understand science through the tools that are available to us as humans.
And that means that the technology in this case is also shaped by those human understandings of what it can and should do.
Yeah. And moving into a similar type of question: you wrote a very insightful iRobot column, and you stated that AI is falling into a lot of predetermined cultural narratives, similar to what we just spoke about. Margaret O'Mara and Fred Turner have both documented how Silicon Valley adopted frontier mythology from the American West to naturalize its technological expansion. You spent your academic career analyzing how these frontier narratives justified violence and erasure.
Do you think the liberal arts education that Amherst provides equips students to see through those narratives? Or are we still teaching the old version of critical thinking while the new mythology is being written in real time?
That's an elegantly put question. So hopefully the critical thinking skills that we develop here, the reason we develop them, is so that we have the tools to analyze that mythology and understand it as mythology. Understanding something to be a mythology doesn't mean that it has no truth value. And it doesn't mean that it isn't powerful. Myths are powerful.
We live by myths. I mean, there's a whole field devoted to that. And the stories that we tell ourselves become true in some ways because of the ways that we tell them. So it's not always the case that seeing through the mythology is like, you know, the old Marxist idea of throwing off false consciousness. But to the point of the scholars that you referenced, I think this idea of inevitability is really important: one of the things that deep cultural myths do is create a sense that what's happening right now is part of a long march of human history, of natural history, and therefore is both inevitable and maybe even really can't be shaped.
And of course, that's not true. That doesn't mean we should stop it. But absolutely, if we wanted to, we could, as humans, slow down the progress of this technology. Absolutely, we could shape how this technology is developed and used. And we have choices to make as a society.
As individuals, we'll have choices to make about how we use it, how we think about it. But most importantly, the choices that we have to make at a social level are very real. For instance, so far, the United States government has made a decision not to regulate the development of AI. It has also made some interesting decisions about AI and defense lately. Those are real decisions.
I'm not sure I'm in the position to be a great judge of whether the first one, especially, was the right one. I have both some desire to see things under regulatory regimes and some real concern that regulation could actually crystallize the current standings of the AI companies. I'm sure you've read the articles that say the big AI companies want regulation because it means they all get to stay on top. And then, you know, I have real fears, as a democrat with a small d, not the Democratic Party, about the idea that China might outstrip the United States in terms of the development of artificial intelligence. So my point being, these are choices.
And they're made through politics. They're made through society. And we don't get to opt out of the consequences of these choices. And so it's best to educate ourselves and try to participate in them.
And moving on to more Amherst-related questions: I wanted to ask about our free campus-wide access to Gemini, NotebookLM, and the Zoom AI Companion this past January. We have the AI in the Liberal Arts Initiative, which runs book clubs, film screenings, and cafe sessions, and the AI Working Group, which has been meeting since fall 2025. I would love to hear some more of the specifics of where Amherst actually is right now regarding AI and where you want the college to try to be by this time next year. What has changed, or what is being planned to change, in terms of curriculum, policy, or potentially investment since you became president?
Well, that's an enormous question. So let me start by saying you're right to identify a number of different developments related to AI, because this is a technology that seems to affect virtually everything we do in the college. One of the things that makes AI distinct is that it supplements and tries to simulate thinking. We're in the business of thinking. That's what we do.
And so this does feel different to me than some of the previous introductions of new communications technology. Maybe we can talk more about how it is different. Is it like the growth of the internet, which is probably the most recent comparison? You mentioned a couple of things. Yes, we did make Google Workspace tools available to everybody on campus starting in January.
Why did we make that decision? In part, it was a recognition that students are using versions of these tools. And a large part of the decision was made to incentivize students to use the Google Workspace tools, because they provide a different level of security for our students. So when I say this, in case any Amherst students are listening out there: if you're an Amherst student and you're using Google Workspace through Amherst, that means your privacy is being protected in a way that's different than if you use a commercial version of Google Gemini or if you go and, you know, create your own Claude account and such. And so we really do want you to think about using our tools, especially for any kind of classroom materials or materials related to other students.
So that was the decision there. It was a decision that took us a while to arrive at. We were actually a little bit slower than some of our peer schools to make these tools available. You mentioned that we do have a working group on artificial intelligence, and we have an initiative called AI in the Liberal Arts. Let me start with AI in the Liberal Arts, which has actually been going on for quite some time.
It's led by Professor Lee Spector: book clubs, as you said, podcasts, screenings, and speakers. I mean, you're nodding, so it looks like you're very familiar with it.
It really is. You know, that's meant to be an intellectual arm to think about the ways that AI intersects with a variety of liberal arts disciplines. And I think it's already had a significant effect. Students participate, but it also seeds other things. It reaches faculty, who then think about what they're doing in different ways.
And I think it's also generating the development of some curriculum beyond computer science courses, which I think is interesting; of course, we have computer science courses about the development of artificial intelligence. The AI Working Group is really, in some ways, two working groups. One piece of it is focused much more on the curricular side and the learning side. How do our curriculum and the way that we learn need to change because of AI? What steps do we need to take in the near future?
What decisions do we think we're going to be facing down the road? And then there is actually another group adjacent to it that is asking questions about the fact that, listen, we are a large organization that runs processes. We're not a business, but we have a lot of business practices; we run payroll and HR. What are the ways that AI can help us do those things better? I think every large organization is asking that.
On that piece, I'll say, so far we have not really found very much. And this is, from what I can gather, consistent with our peers and actually most businesses right now: AI is not quite ready to do some of the things that we think it might do in the future. But it's something we want to be asking. The most interesting set of questions for me is around the curriculum and what the future of learning looks like at Amherst. And so where I want to be in, let's say, a year is, first of all, to have more classes in the curriculum that deal with AI as a subject.
You know, we have a class on AI and the law, for instance. I can imagine classes on AI and the arts, on the environmental impact of artificial intelligence, and a variety of different things. You know, I'm an English professor, so I only know so much about what could go on in other disciplines. But I do think that it's an important object of inquiry. And that's what we do in a college like this: we look at objects of inquiry from a variety of different perspectives.
I would not be surprised if pretty soon there's a class on the economics of AI. I mean, some of the challenge of treating AI as an object of inquiry is going to be that the field is moving so quickly. For the economics of AI, you'd probably have to rewrite the syllabus every two weeks. The other piece is, I think, that we need to create opportunities for students who are not in computer science to understand these tools: what they do, what they can't do, how to use them responsibly, how to use them ethically, and frankly, when not to use them. And that, I think, may take place to a certain extent in the curriculum, but may also be other kinds of learning opportunities.
The January term courses, for instance, or things that are offered as online modules. And this is the kind of thing that we're asking about and developing, actually, in partnership with other liberal arts colleges, because we're all thinking about these same things together. And then a third thing where I think we need to be in a better place in a year is probably a tighter set of policies about how we're going to handle academic integrity related to artificial intelligence. And I don't want to preview anything, because this is something that really has to be talked about by the faculty who are involved in creating these kinds of policies. But we need a better solution for moments when a faculty member has good reason to think that a student has used artificial intelligence but there's no actual evidence.
Definitely. Yeah, and I know currently there are different philosophies department by department. I know, for example, LJST has moved away a little bit from essays completed strictly at home toward in-class essays. But I was curious to see what general faculty opinions are like collectively.
I mean, the beautiful, wonderful thing about the Amherst College faculty is that they are not of one mind about very many things. And artificial intelligence is clearly one of those things. So, you know, you mentioned some faculty taking active steps to make it harder to use artificial intelligence in their classroom. We definitely have some faculty who are taking that strategy, who worry about it, and we should talk more about this, and who want to make sure that their classrooms are spaces where the thinking that goes on is not significantly augmented by artificial intelligence and where the work that's being evaluated is not augmented by artificial intelligence. We have others, and I'm sure you've experienced some of these, who are fully embracing it and incorporating it into their assignments, sometimes actually interrogating what AI can do and not do in the course of those assignments.
And I've heard some inventive things that people have done, where, you know, you write a paper, then you ask AI to write a paper and you compare them. Or you give AI some questions and see the limits of what it can do. And I think there's a lot of experimentation out there. And then we have some faculty, let's call them the middle, though I don't think it's quite a range on a spectrum that way, who say, well, listen, my job is to teach a subject and to evaluate the work.
And if my students use AI, they're just cheating themselves. It's not my problem, and I'm just going to grade the work that they turn in. As a college, we have said all of those approaches are acceptable.
And that's, first of all, the kind of place that Amherst is, where we give faculty a lot of autonomy. I also think we're at a moment where we don't want to curtail any experimentation that's going on. We want people to think through and think together, including faculty thinking together, about the different kinds of relationships that they're going to have to AI in the classroom and the curriculum. And letting those experiments play out also lets us come to some, I think, more interesting ideas than if we were to say, this is the one way to go.
The one thing that we have tried to insist on with faculty is that they need to be clear with their students about what their approach is, even assignment by assignment. So for instance, I taught in the fall. And I had three essays. It wasn't that much work.
Don't tell anybody. But there was a lot of reading. Three essays: for the first one I said, you can't use AI. For the second two I said, you can use AI if you want, but you have to tell me how you used it. And if I think you're using it and you don't tell me, I'm going to get you.
And that was one way to handle it. I don't know if it's exactly what I'll do next fall when I teach the course again, but the kind of thing to make sure of is that at least students understand what the expectations are. We might even move to a place, probably next year might be a little bit too soon, but not too far in the future, where certain courses are marked as AI-free courses, so that students know and can intentionally seek out those courses if that's something they want.
Yeah, and that ties in perfectly with that announcement. So I know Provost Umphrey had just announced that two-track vision, in which money is allocated for courses that intentionally use AI and for courses that are explicitly AI-free, to preserve deep reading, writing, voice development, and slower but more intentional thinking. And I find that genuinely fascinating. What does it mean to declare that kind of space? Is this the preservation of something essential, the slow struggle of writing a first draft or the discomfort of not knowing, or is there a risk if it becomes a rearguard action?
So that's an excellent question. Let me start with kind of a philosophy, and it really ties to where I think we need to be as a college in terms of what we want the cumulative experience of an Amherst education to be for every student. And of course, as you know, we have an open curriculum, so that presents some challenges. But to the point of why we would want some classrooms to be AI-free: I believe strongly, and I've talked to a lot of people who work in AI and they've strengthened my conviction in this, that the people who have the capacity to lead in the AI economy are the people who have built the kind of careful thinking skills, the slow reasoning skills, who have the creativity and judgment, and the interpersonal and emotional skills, the EQ, which is going to become even more important,
that have hopefully been the hallmark of an Amherst education for at least a century. The kind of thing that I experienced as a student, that you are experiencing as a student. If we lose that, our students will no longer have the advantage that they have, because everybody's going to be able to use AI as a crutch and a shortcut for thinking. And so who's going to lead them? The people who can think, who do have judgment, who can be good in the room. A lot of the things you learn in an Amherst education: how to sit around a small table and talk to people who don't know you at all, and make eye contact, and speak in coherent sentences.
Yes, it matters what you're talking about. But with some of those skills, it just matters that you can do that, right? Because someday you're going to be in front of a room, and I'm guessing, I can see maybe where you're going, you're going to need to pitch a group of people on an idea, and you're not going to be able to rely on your AI to do it for you. Because they want to know that you, Pierce, are going to be able to lead them, and not your Claude bot, or whatever we're calling it by that point, the chip in your brain.
So we have to make spaces in the curriculum to do that. I agree with you, it can't be entirely a rearguard action. Although I do think there's a way that new technology often helps us figure out what's valuable about the thing that we left behind. Notice how vinyl records are back now, right?
Vinyl records are great. They sound brilliant. They have this beautiful artwork. We miss the physical artifacts. In a way, I think residential colleges may be a throwback to that.
And in higher education writ large, I think you're going to see an increasing differentiation between colleges like ours and some bigger universities that do become much more integrated with artificial intelligence as a mode of instruction and offer something very different. So we need to hold on to what's special. What is going to distinguish the liberal arts is that we are still going to be a human-centered education. So that's why we need those kinds of spaces and need to develop them. At the same time, we do need to teach students how to use AI tools, as I was saying,
to use them effectively and to use them responsibly. We don't want them to be disadvantaged. We know there's going to be an expectation, increasingly, in the market, whatever they do, that they understand how to use these tools effectively. Right now, that might seem daunting.
It is a little daunting, because the tools are developing so quickly. I suspect at some moment that development might plateau a little bit, and it'll get easier to do. Just like, you know, right now a lot of our students, I'm guessing, learn how to use Excel really effectively by taking some kind of online class through YouTube. There will be versions of that for how to think with this technology. And then the third thing for us is that we need more classes where people can think about AI as an object of inquiry, because it is going to be part of their lives.
I don't know how long it's going to be a separate subject from other subjects. There have been other moments in the academy when different ways of thinking have moved from being specialized subjects to being so ubiquitous that they don't have to exist as separate bodies of knowledge anymore. So what the Provost is trying to do is incentivize those three tracks in different ways. And that is where we want to get to as a college.
Yeah, very important for sure. And in a similar light, juxtaposing this more well-reasoned, thought-out, maybe comparatively slower approach against some other universities, I wanted to talk about some other moves, like Colby recently opening a $30 million AI institute and participating in a $20 million NSF fund, and Bowdoin hiring 10 new AI faculty. As a student here, I was definitely genuinely curious about the thinking behind this kind of more cautious and hesitant approach, whether it's more intentional, or whether there's maybe a larger investment plan for the future that hasn't been articulated yet.
So I do think our approach is more intentional and more bottom-up. Those kinds of moves, you know, are often gift-generated, and often lead to really hasty, kind of top-down decisions. And what's interesting is that we are actually partnering with both Bowdoin and Colby on things that they're doing, because they are looking for good ideas for how to use these resources that they have. Will we make a big splash like that? I don't think that's something that feels necessary to us at this time.
Obviously, we are always interested in having more faculty, but going out and looking for faculty right now who work on AI also feels like going to a saturated market, and you may not necessarily get the most original thinkers. In some ways, pretty soon, as with so many subjects, everybody is going to be thinking about AI in their different fields. So it is a more intentional approach. It is a more bottom-up approach. And I'll say again, we're in partnership with these other schools.
But one thing that distinguishes Amherst, and you mentioned somebody participating in a research initiative, is the research component. Obviously, we also have faculty who are doing research on artificial intelligence in computer science. We're actually, I think, very well positioned in terms of our science faculty on AI, and generally just in the strength of our science faculty. We're a college that has invested more in research than most liberal arts colleges have. And so I feel good about our competitive advantage.
I am not somebody who likes to try to grab a headline. And I even think, again, that what we think of as AI now may, in five years, start to feel really outdated. And you don't want to overinvest in one vision of something that's evolving rapidly.
And think about it. You're a junior. Yeah. So basically you started, and in, like, November of your freshman year, all of a sudden everybody was talking about ChatGPT.
Yeah. The seniors who are graduating had never heard of ChatGPT when they started. Maybe there were one or two of them who were, you know, super, super nerdy, on the frontier. But basically they had not.
That's how quickly this is evolving. And in four years, will we still be talking about these large language models in the same way? I really don't know. I don't think artificial intelligence is going anywhere. But I think right now, most people in their minds equate artificial intelligence with the large language models of basically OpenAI and Anthropic, and then maybe some chatbots.
And my guess is that in five years, those will have gone away, but what artificial intelligence is and the ways that we think about it will be very different.
Yeah, yeah, definitely. And it's important to examine all the possible ways that it could impact education before jumping into something significant like those initiatives.
You know, one of the things that's interesting, and I'm sure you've read these articles too, is all of the tech executives in Silicon Valley who limit the exposure of their children to technology and smartphones. It's not the same thing as what we're talking about here, but there's something similar. They know something. They know something about how this technology can actually short-change and shortcut the development of cognitive skills and abilities and capacities that are just essential to being human.
Yeah. And I mean, there's a lot that is yet to be discovered about how it may truly impact us. That's another thing we had spoken about before: I feel like sometimes there can be a big lack of general awareness of its current capabilities compared to the ways the technology is being portrayed in the media, and a lot of people can jump straight into expecting that the technology will be able to do all of their reasoning for them or replace certain previously critical ways of thinking in terms of completing an assignment. And it's important for people outside of the CS landscape to have general awareness and training regarding how capable it truly is right now, rather than expecting the technology to
be this kind of paradise-like, super-advanced tool that can just replace all the hard work that came before. It's definitely not there yet, but I think a lot of fellow students who maybe aren't as up to date with the headlines expect it to be a lot more advanced than it is. That can definitely be a big negative in terms of developing those critical thinking skills and using it properly as a way to augment your learning rather than to replace certain critical ways of thinking. As for the next question: the Lumina Foundation and Gallup survey from this past week captures something I think a lot about as a student entering the workforce soon.
75% of employers say a degree will be as or more important in five years. And yet 69% say graduates need moderate to significant retraining just to function. That's a fascinating contradiction. At the same time, AI is getting better at the things college traditionally teaches people to do, such as writing, research, analysis, and synthesis. You've been asked the question about the value of college in political terms at AEI and Aspen, and I've heard those answers, but I wanted to ask you a version of that question that I think is maybe a little bit harder and more interesting.
When certain tools can maybe do what the graduate can do, what is the education developing that the tool cannot replicate? What is the thing that the student walks away from Amherst with that is not…
I'm happy to elaborate. The biggest thing, and there are others, that AI cannot replicate is judgment. It's a nebulous quality. It is central, though, to all of what we do in education. It's the ability to reach a judgment with usually incomplete and often contradictory evidence.
And then executing that judgment, questioning it, and revising it as one goes forward. I mean, that's essential to so much of working life, and the skills that are underneath that, the writing skills, the research skills, the analytic skills, whether quantitative or qualitative, all culminate in that moment of judgment. So one of the important things is going to be that we have to continue to cultivate that judgment, and at the same time cultivate enough of those other underpinning abilities and capacities. It's easy to say, well, AI can do the writing for you. It can do the analysis for you.
And it can, to a certain extent. You can use AI at this point to produce serviceable writing, maybe even improved writing, if you know what good writing is. You can use it to express your thinking if you know what you think already. Right now we're at a curious moment, because the people who are using AI have largely not been educated with AI. They brought AI into their education at the end. Like you: you went through junior high, high school, and six-plus weeks of Amherst before working with AI.
Before everything changed. So that's going to change, and that is something that worries me. But it will be essential, right? So if AI is an equalizer, if it's a democratizing force, if the effect of the technology is to democratize knowledge and content, then what does differentiate success? It is about judgment. It's about communication.
It is, in some ways, going to be about vision and creativity. Those are things that the liberal arts are very good at cultivating and have been for a very long time. In a way, the liberal arts have never really been about content transference. We certainly do that, and a measure of that will still go on in our education.
I still want my surgeon to know the human body quite well, and not to have just aced his or her tests because you can download the right module from Claude or whatever, right? And, you know, I want them to have the feel of the brain that you can only get, before they're digging into mine, by digging into a lot of them. Some of these things are truly irreplaceable. I don't worry a lot about the employability of our graduates.
I don't want to say I'm not concerned about it. But am I anxious that all of a sudden liberal arts graduates are going to be kicked to the curb? Actually, I'm not. I think they're going to be more valuable and more employable. I don't know if all the jobs that they currently are moving into are going to be the same jobs in the future.
And this is where I am a little bit stuck as I try to talk to people and imagine what that looks like: the future of working in different so-called "knowledge jobs." I made air quotes with my fingers there when I said that. But think about the kinds of things that a lot of Amherst graduates do. What happens when an investment bank no longer needs 100 entering recent college graduates and only needs 25? They still need those people at levels two and three. So how are they going to get there?
And how are they going to get them? That's something I haven't quite seen an answer to. My guess is the kind of work that Amherst College graduates do may change in title and in industry. I wouldn't be surprised if we have a lot more students going into startups and more entrepreneurial pathways, which I think could be great. That's not going away.
I think a lot of the kinds of creativity that our students have may find expression in different kinds of industries in the future. This will not be painless. There will be a moment when people are expecting one kind of job and will just find out that it's not available. I don't want to minimize that.
But I do have confidence that our graduates, more than most, have both the capacity and the education that will enable them to thrive in this moment. I do think it's a very dangerous moment to be educating yourself for one particular job, no matter what that job is, unless you want to repair elevators, which we desperately need here at Amherst College. Seriously, we need all those jobs.
HVAC, yeah. And listen, we need those. That's an incredibly important part of the economy. I do not mean to minimize that. It's just not our portion of it.
It is a bad moment to be saying, this is the job that I'm fixated on, because I do think there will be some shifts in employment. I don't know what the timescale is. I've read, and I'm sure you've probably read some of the same things too, that the reports that AI is already shaping the entry-level job market are kind of exaggerated, that it's really more about the economy and tariffs and things like that.
And then for our next question, I wanted to talk a little bit about Alexander Meiklejohn. I've read in the past that you take a lot of inspiration from former president Alexander Meiklejohn, and you often quote his phrase, "thinking independently together." He believed education must create democratic citizens through unrestricted inquiry. And he was eventually forced out of Amherst for being too radical.
If Meiklejohn were president today, what do you think he would push for that institutions sometimes may resist?
That's an excellent question. So I'll think about this. It's true. Meiklejohn is kind of one of my heroes in the history of Amherst College. He was president in the early 20th century.
By the way, he was pushed out not just for being too radical. There were some other personal matters that put him at loggerheads with the trustees. It's a really interesting history, but we can save that for another podcast. I guess one of the things I would say about Meiklejohn is that he insisted that the life of the scholar, the thing that the college man, and that phrase was a product of his time, there were no college women at Amherst at that point, had to be devoted to, was this kind of relentless self-questioning and pursuit of truth. And so I think one of the things he would be doing is looking for thinkers who are really pushing the boundaries of what this means.
And he would be very skeptical, I'd like to think, of both the "we are doomed because of technology" camp and the "we are saved because of technology" camp, and he would be trying to make sure that both points of view are expressed in the faculty. He was somebody who liked to hire unorthodox thinkers; he defended hiring a communist, which was a pretty big deal in the 1910s. And he wanted this to be a serious place, full of the contest of ideas. And so I think he would press on that.
And then I would like to think that he would press on the idea that thinking about these questions is really essential for thinking about the future of a democratic society. So, you know, AI has the potential to really reshape our democratic life. You could think it will make knowledge more democratic, again with a small d; more people could have access to it. It also has the possibility to increase inequality, depending on how it's developed, how it's used, and some of the environmental impacts. And then the thing that I worry about is the erosion of trust in institutions and in each other.
And I've heard people even say that they feel like the trust in relationships between students and teachers is being undermined by artificial intelligence, because you can't trust what each other says anymore. And that's obviously concerning for us as a college, but even more concerning for us as a society.
Definitely, yeah. And the question of what originality will mean. A more personal question that I wanted to ask, looking forward: I graduate in 2027. The students arriving this fall, unbelievably, will be graduating in 2030. It's true.
But what does Amherst look like for them? What is the version of this college that you are building toward, where AI is not a challenge you are managing, but something the institution has a clear position on?
So I think, you know, I tried to articulate this earlier. I think in some ways there are places where it looks the same, but it's going to feel like there's a level of intentionality built into that. It's going to feel a little bit more like a refuge from AI, and a kind of training ground. I would like to think the vision of the college I'm building toward is one that preserves the things that have made an Amherst College education truly valuable: the kind of what we call close colloquy, the intimate relationships that we have among students, faculty, and staff, and that allows for the kind of pleasure of testing and shifting ideas just like we're doing here, right?
This is an intellectual exercise that we're going through, but it's also kind of fun. And we need to hold on to that and cultivate it. And at the same time, students are learning how this technology can augment their thinking and can allow them to do things in effective, efficient, and responsible ways. I think from the outside, it actually looks very similar. By 2030, we should have the geothermal work done.
The student center will be a couple of years old. Chapin should be renovated, is my hope, or rebuilt. And maybe we'll be getting ready to tear down… Anyhow, my point being, from the outside, I don't think it looks that different. One of the things that might be different, though, by 2030: let's imagine in 2030 those students are graduating and, my God, we are meeting the class of 2034.
It will happen. I think the class of 2034 may be choosing Amherst for slightly different reasons than the class of 2027 chose it, and maybe even than the class of 2030 chose it. They may be coming here because they believe not just that Amherst will prepare them for a job, but that it will help them understand what kind of people they want to be and how they're going to be in service to the world. They will have chosen Amherst over, say, other fine universities because it offers something distinct as a liberal arts college. Like I said, I'm sensing that we are potentially on the precipice of a kind of increasing differentiation in higher education between liberal arts colleges and larger universities, where we really lean into
the fact that we are providing a human-centered education in ways that larger institutions may not be doing any longer.
Yeah, definitely. Very insightful. And then for our final question, to zoom out a little bit. At the start, we talked about how you study the way Americans construct narratives about history. That's what Custerology is about at its core.
What is the narrative Americans are constructing right now about AI and education, and what are we getting wrong?
Let's zoom out a little bit, and I hinted at this earlier: think about artificial intelligence not just on its own, but as the sequel to the growth of the internet and networked knowledge. What has happened over the last 30 years, let's call it that, has been the erosion of expertise, the democratization of knowledge, in terms of both content and the ways people can reach each other, and the kind of undermining of authority and trust in institutions. And you can look at all of that and see both positive effects and negative effects, and maybe argue that some of our populist energies right now are actually a result of that, that the anxiety people feel about economic precarity is a result of that.
I think when historians look back on this moment, cultural historians, they will see artificial intelligence as part of this era when technology destabilized authority of all kinds, creating new forms of wealth and different kinds of politics. What will it mean for education when people look back? I think they will say that education had to reinvent itself, away from its traditional role in society as the sole source of knowledge. And in order to do that, it first lost trust and then, hopefully, historians will say, it rebuilt trust. That's my hope, my optimistic hope, and we are at a moment where we need to really work on that, all of us who are leaders in higher education.
But that trust will be in something slightly different than where we started in the 1990s. And in order to get there, we have to make sure both that people feel that education is accessible and serving society, and that we, as educational institutions, are making good on and delivering on those goals. So my hope is that we can do that, and I think we can, but it will take a lot of hard work. And, you know, one thing I think Amherst College can and will lead in doing is rethinking what it means to build trust in institutions like ours.
And do you think that some of that comes with more of that general education we were talking about, specifically here at Amherst outside of the CS department: having a more diverse range of disciplines and thinkers really inquiring into what AI will mean for society, and into the many different ways to communicate that?
I don't see how it cannot, unless we're both wrong and AI does not have the kind of social impact that we're talking about. And to back up, I don't think anybody at this point really knows what AI is going to mean for human society. My own guess, which is worth less than the paper it's printed on, is that it will be nothing exactly like any of the visions anybody is sketching out, but it will still be enormously consequential.
Probably it is neither as terrible nor as wonderful as some people are describing. But if it is as consequential as we think, then, just like today, when you can take a lot of different classes that talk about technology and its effects on the world, yes, you will definitely be thinking and talking about artificial intelligence. And I'll point out that there have been different disciplines talking about artificial intelligence for a very long time. The very first person I ever heard use the phrase "artificial intelligence" was an editor at The Amherst Student, who was my editor when I was coming up.
And she was a philosophy major. And I believe she wrote a thesis about artificial intelligence. Because artificial intelligence was, and still is, of great interest to philosophy departments. And so it is not an entirely new subject. Yes, large language models are an entirely new subject.
And maybe we know much more about neural networks than we ever knew before. But this question of what is intelligence? What can technology do and not do? What is human, and not human? These are old questions, and artificial intelligence is just bringing them to life in new ways.
Yeah, yeah, definitely. I mean, in my machine learning class alone, taught by Professor Spector, whom we mentioned earlier, he likes to talk a lot about Piaget and different philosophers who have analyzed the acquisition of knowledge, because a lot of AI is about emulating our brains and how we think. There are a lot of interesting interdisciplinary questions there to be brought up. But I just wanted to conclude and say thank you, President Elliott, for your time and for the depth of this wonderful conversation. And for our listeners, you can find this episode and all of our coverage at techandbusiness.org. Stay curious.

Thank you. Stay curious, I love that. Awesome. That was so interesting.
Guest
Michael A. Elliott is the 20th President of Amherst College. He graduated summa cum laude from Amherst in 1992, earned his Ph.D. with distinction in English and Comparative Literature from Columbia University, and spent 24 years at Emory University, where he served as Dean of Emory College of Arts and Sciences. A scholar of American literature, Native American literature, and public history, he is the author of Custerology (University of Chicago Press). Under his presidency, Amherst launched the AI in the Liberal Arts Initiative (AILA), a Generative AI Task Force, and a college-wide AI Working Group.