Effective Altruism provides a popular—and potentially dangerous—approach to modern ethics.
By Stephen Dames
“Could you guys keep it down?” the girl at the neighboring table in ButCaf walked over to ask. “This conversation is insufferable.” After a moment of stunned silence and a quick apology from both of us, my interviewee Nicholas Hazard, CC ’25, resumed speaking almost immediately, apparently unfazed. He’s used to the criticism.
Hazard is a member of Effective Altruism. Although the organization’s identity and aims remain the subject of heated debates, its members generally seek to answer the question: How can we do the most good with finite time and resources?
The Centre for Effective Altruism was founded in 2011, but the movement predates the institution: EA has existed in some form since the mid-2000s. Still, most people who have not attended one of its workshops or read one of its books know of it through one of two personalities: Sam Bankman-Fried or Elon Musk. Bombastic and controversial billionaires both, Musk and Bankman-Fried have, at one point or another, given money to or been associated with EA. Bankman-Fried in particular was identified with the movement until his fall from grace last December, and he was reportedly an active participant in EA circles for years before his indictment. Once his alleged crimes came to light, however, the movement distanced itself from him as much as it realistically could.
But these billionaires are not the whole picture; in fact, they consume an outsized share of the spotlight. Their celebrity obscures the growth of a movement that has only gained power, money, and notoriety, both at Columbia and in the world at large.
…
A cross between a French salon, a professor’s apartment, and the office of a start-up, the “EA apartment,” as it’s known, is unadorned but well-furnished, with plush chairs, couches, and a fluffy carpet offering plenty of seating. With its high, bare ceilings, the space is ample yet strangely sterile.
Two Columbia student EA members live there, and the student group uses the space for events and seminars. I visited recently for a meeting of the Arete Fellowship, an eight-week reading group and seminar exploring the core tenets of EA, most notably the question of how one can do the most good. Participants meet once a week for an hour, having read or watched an hour’s worth of material in advance. The weeks progress like a typical seminar, with discussion leaders, a syllabus, and a designated focus for each class.
However, this is no dorm-room bull session—the discussion is not free-form but well moderated. Designated discussion leaders serve as proto-professors. The readings are mostly, if not exclusively, produced by EA-affiliated organizations and individuals, consisting of reports from nonprofits, think tanks, and charities, as well as books and journal articles. A selection of notable EA-affiliated books rests on the windowsill behind my seat, among them William MacAskill’s canonical What We Owe the Future, accompanied by Peter Thiel and Blake Masters’ Zero to One.
The moments before the meeting felt like a spell in a doctor’s office waiting room. I was expectant, slightly hesitant, but also keen to figure out what this was all about.
The two discussion leaders, Hazard and Columbia EA president Dave Banerjee, CC ’25, sat in the room with me, along with the three members of the night’s reading group. Rohan Subramini, CC ’24, the group’s former president and current co-VP of the AI Safety Program, later estimated that over the past three years, that room (or one like it) has hosted three to four hundred Columbia students, all of whom chose to take part in one of EA’s many fellowships or programs. This semester alone, Columbia EA is offering at least five separate fellowships, some with multiple sections per week.
Written in brown marker on a whiteboard in front of the group were the letters “I-T-N.” They stand for Importance, Tractability, and Neglectedness, a framework EA uses to evaluate what it calls “causes,” or global issues in need of attention. By calculating how important, tractable, and neglected a cause is, an EA adherent identifies the optimal causes to support in order to “maximize” the amount of good one can do. This “maximization of good” often takes the form of figuring out how best to divide and donate one’s capital, a strategy that, to me, feels if not skewed toward the wealthy, then at least targeted at them.
Following this framework, today’s discussion is on “existential risk,” a key idea in EA and a major point on the syllabus. The Stanford Existential Risk Initiative, itself an EA affiliate, gives examples: “extreme climate change, nuclear winter, global pandemics (and other risks from synthetic biology), and risks from advanced artificial intelligence.” As Hazard said in our interrupted interview, “existential risks have increasingly become the focus of most people who are using EA to guide their professional choices.” It’s easy to see why.
While the likelihood of any one of these “existential threats” occurring is fairly small, EA-affiliated philosopher Toby Ord estimates in his book The Precipice: Existential Risk and the Future of Humanity that the chance of some existential catastrophe occurring in the next century is a terrifying 1 in 6. In other words, it’s a worldwide game of Russian roulette. While many doubt Ord’s dire estimate, few if any in EA would question the existence and importance of existential risk. Many within the movement pledge to work to mitigate these risks or to stop them from occurring altogether. During the discussion of these existential risks, I noticed a slight but distinct apocalyptic undertone—a focus on the end of the world that made it seem not only possible, but nigh.
The strategy many EAs adopt to do good in the world “effectively” is to pick a career in which one makes a targeted, positive, and, to them, quantifiable impact. Driven by concerns about these “existential risks,” many EAs pursue careers in AI research, medicine, or the nonprofit world. For example, Subramini, a physics-turned-CS major, hopes to pursue a career studying how to align AI with human values: “I’ve switched to being a computer science major, in large part, because I think the influence of AI on the world is likely to be very large and could go very well or very poorly.”
Besides existential risk, nearly all EAs consider issues such as global poverty, disease prevention, and animal welfare to be highly important, tractable, and neglected causes. Banerjee told me that he has taken a pledge (common among EAs) to donate at least 10% of his income to targeted effective causes throughout his life. Currently, most of his donations go to animal welfare charities; indeed, a seemingly disproportionate percentage of EAs are vegans, and most care deeply about animal rights. Banerjee sees it as an issue he can directly impact: “When I think about those causes [existential risks], and I compare them to animal welfare, it’s just so hard for me to seriously compare these two issues because on one hand, I’m seeing animals being tortured for their entire lives, in conditions worse than the worst forms of torture we’ve ever inflicted on humans, and I think this is a really hard question to reconcile.”
Upon hearing Banerjee’s reply, one thing stood out to me besides his impassioned plea: the varied—and seemingly contradictory—frameworks of thought that inform EA’s program. While the movement does not purport to have a single credo or ideology, two discordant strands are developing within EA regarding its strategy for achieving “the good”: one emphasizing donations to select groups focused on solving present-day “causes,” the other emphasizing careers that can best help solve “existential” problems.
These approaches are best exemplified by two separate and equally influential EA-adjacent nonprofits: GiveWell and 80,000 Hours. While GiveWell focuses on the “cost-effectiveness” of donations to certain charities—attempting to quantify or “maximize” the “effectiveness” of one’s donation—80,000 Hours devotes itself to advising “people with an undergraduate or postgraduate degree” and “who live in rich, English-speaking countries: especially the U.S. and U.K.” on how to pick a career with the most “impact.”
Although their approaches are not mutually exclusive, these organizations do appear to represent a real schism in EA. Yet it is also in this schism that we see a first essential component of the EA movement: to whom it pitches itself.
While the two approaches do appear quite different at first (and are different to an extent), both are philosophies for the global 1%: those with significant education, money, and privilege. It seems that these people, and these people alone, are EA’s changemakers.
…
Within this “schism,” one concept drives the discussion more than any other and seems to matter profoundly to most of these EA “changemakers.”
Uncomfortably shifting in my seat, I listened as Banerjee and Hazard explained the basic EA program in the seminar, with both of them coming back regularly to a seemingly central concept that I didn’t quite recognize: longtermism. Trying in vain to remember the concept from the EA literature I’d read to prepare, I jostled the Peanuts Christmas mug that sat on the table in front of me, Snoopy smiling along with my confusion.
“Longtermism is the argument that not only do humans living right now have moral value, not only do animals right now have moral value,” Banerjee expanded later in his interview, “but perhaps beings that have yet to exist also have moral value.”
Moreover, many longtermists would say that we should place the same or similar moral value on someone yet to be born as on someone alive right now. A contentious subject among EAs more generally, the debate certainly rages at Columbia: Subramini is a longtermist, Banerjee is at least sympathetic, and Hazard is ambivalent. But few, if any, EA members have no opinion on longtermism and the implications the idea could have for the movement and the world.
Longtermism, as an idea, struck me almost immediately as something different—something radical. The proposition that all future beings have innate moral value is not self-evident, and is, at its core, controversial.
When one concerns oneself with the horizon of future possibilities, it is easy to lose track of present suffering. Moreover, a philosophy preoccupied with “existential” risk has the potential to momentarily forget, or even shrug off, “lesser” risks. Mass exploitation and death seem like small potatoes compared to the end of everything.
It was also at this point that I couldn’t help but feel the essential maleness of the room, and how this philosophy seemed to grant agency to those it deemed worthy and withhold it from everyone else. In a room where nobody was capable of carrying a child, I wondered what longtermists think about abortion, and whether the rights of a woman’s theoretical descendants supersede her right to have or not have them.
Soham Mehta ’24, a former Arete fellowship participant, told me that longtermists “see the horizon of our moral concern as essentially infinite.” Mehta, however, is no longer formally involved in EA, having grown skeptical of both longtermist philosophy and the movement at large. “With longtermism, there’s the paradox that way more people are always going to live in the future,” he said, “than live presently. When are you going to care about people who live now? You’ll always have an excuse to not care about poverty now.”
While still discussing longtermism, Mehta pointed out the mostly unspoken class dimensions of EA: “It’s a really good excuse for rich people to donate their money to really eccentric causes that you can rationalize as being more effective, but aren't actually more effective.”
…
During that first EA meeting of mine—sitting on the folding chair in that airy, pale living room—I found myself thinking of a quote from Scott Turow’s 1977 memoir One L, an account of the trials and tribulations of his first year at Harvard Law School. The author describes how, in the process of “learning to love the law,” one learns a second language he dubs “Legal.” “Of course, Legal bore some relation to English—it was more a dialect than a second tongue—but it was very particular,” he writes. “Moreover, throughout Legal I noted an effort to avoid the normal ambiguities of language and to restrict the meaning of the word.”
Like Turow at the beginning of his legal education, I found myself an outsider looking in on a tight-knit, rarefied world that spoke a language that seemed like my own but that, for all intents and purposes, I could not speak. The terms its members used felt like a motley mix of Silicon Valley buzzwords, philosophical idioms, and phrases that belonged in a nonprofit meeting room, collected in a dialect I hastily labeled in my notes as “EAish.”
After several weeks of immersion in the EA milieu, I caught myself slipping more than once into the lingo: thinking in terms of “causes,” approaching problems through the “ITN framework,” and working phrases such as “capital allocation” and “global governance” into my writing. The language is pervasive, and to me that seems like the point: In redefining language, one can redefine the world.
But to what extent does EA actually want to redefine the world, and how do we know it will go about doing so ethically?
Many EAs identify either themselves or the movement at large with utilitarian philosophy. Though some don’t accept this label willingly, it’s easy to see how the movement may be stuck with it. While EA and utilitarianism are hardly ideologically identical, it would be hard to argue that they don’t share many views, or that utilitarians and EAs couldn’t often find common cause.
Theoretically, many EAs would view two lives saved anywhere as more valuable than a single life saved in one’s own community—a pretty boilerplate utilitarian position, but radical nonetheless. In her much-publicized talk at an EA conference at Berkeley in 2016, Ajeya Cotra put a classic EA position quite succinctly: “Choosing from our heart is unfair.” Moral intuition is rarely at home in EA, and the unquantifiable is seldom welcomed.
Utilitarianism, as a philosophy, is not intuitive to me. Mehta agreed, objecting further: “What’s the point of mechanically doing good if you're losing your capacity to connect with your community—for example, caring about small-scale change, like working in a soup kitchen in the US?” he asked. “That is objectively less effective than becoming a quant and then donating your money to fund vaccinations in a foreign country, but you should still work in a soup kitchen because that’s part of what it means to lead a full, grounded life.”
The notion of a “full, grounded life” does seem foreign to most EA circles. The inherent value of one’s own life—not merely as a vehicle for good—is not often emphasized. Instead, humanity is treated as an abstraction: a cause to be advanced and a group to be saved. While noble, this crusade seems to come at the cost of our individual humanity and the role that humanity plays in our local communities.
Furthermore, Mehta found fault with EA’s lack of engagement with the local community and its focus on educated professionals, or, in this case, college students: “I think if EA ever did anything beyond education in Morningside Heights,” he said, “that would be anathema to their own philosophy.”
Carol Chen, GS ’23, another former Arete fellow, also felt uneasy about EA’s quantitative and utilitarian side: “The whole idea that we should allocate resources purely out of a maximization of measurable goals seems to me to be anti-humanistic, and also not aligned with projects I personally wanted to take on.”
The conflict between the quantifiable and the unquantifiable is a difficult one, and it seems to leave humanities majors like Chen caught in a bind: “That brought me into kind of a cognitive dissonance,” she said, “because on one hand, I think I am committed to trying to do good in the world beyond my selfish or hedonistic pursuits, but [on] the other I just couldn’t see myself being an AI researcher, or being someone who just dedicated myself to these concrete goals.”
Given the opportunity to get anything across to me at the end of our interview, Hazard made a counter-argument worth considering: “Effective Altruism shouldn’t cancel out our sort of intuitive moral associations with people—treating people around you well, but also showing concern for our local communities,” he said. “If you think of it, empathy is the foundation for Effective Altruism.”
Hazard made his case compellingly, and, to me, the idea of empathy as the foundation for EA seems exactly right. That’s the problem. In empathy, there is an inherent separation between the self and the other: one doesn’t form bonds based on common cause—which would be solidarity—but instead attempts to view the world from the perspective of the person being “empathized with.” This is an altruistic worldview.
This idea of an inherent separation between the empathizer and the empathized is something Mehta discussed with some EAs when he was in the fellowship: “They told me verbatim that human progress is ‘heavy tailed.’ What that means is a few people—you could call a cognitive elite—produce the bulk of things and move humanity forward. The everyday person doesn't, but a Columbia student who's already been filtered to an extent fits the bill. It’s much more worth their time to focus on Columbia students who are producing that ‘heavy tailed’ progress.”
This is not solidaristic companionship with those around you but something fundamentally high-minded: a ‘cognitive elite’ empathizing with, and potentially helping, the non-elite.
…
Later in his book, Turow describes how by learning “Legal” he “had the perpetual and elated sense that I was moving toward the solution of riddles which had tempted me for years.” To be honest, at the end of my time in the EA world, I didn’t have the feeling Turow describes. I didn’t feel elated, moved, or even all that convinced. But what I did feel from all of the EAs around me was an incredible sense of passion, honesty, and dedication—one that did light a spark in me.
Mehta put it best: “They’re very kind people. I think one thing you find in EA is that there’s not much artifice on the level of individual members—I think the people really do care … They really sacrifice because they really believe in what they do. And that's refreshing.”
The EAs I met were always trying to be effective in their altruism; honestly evaluating their own actions was an essential part of their lives. When I asked the EAs I interviewed whether EA had influenced their way of life, all said some version of “yes, definitely.” They are decidedly not hypocritical, and all act on their beliefs with a conviction that is, frankly, inspirational to a non-believer. Coming from the sarcastic and pessimistic political world I seem to inhabit, I found this honest idealism refreshing.
While not necessarily rooted in any EA doctrine, this idealism reinforced in me the idea that passion and belief are not features to be embarrassed about—they ought to be striven for. Change doesn’t come about through incoherent pessimism (a plague on any college campus) but through actionable and passionate belief.
We all too often fall into the trap of unactionable abstraction—and I would argue that EA sometimes does as well—so instead of being armchair elitists, we must embrace the embarrassment of true belief, of faith, and start to believe again.