Competence and practicality are severely underrated among many people who believe they are doing good. This belief often seems to short-circuit critical thinking, and the person ends up diminishing their impact or even having an adverse effect.
Examples:
Habitat for Humanity - for the cost of sending one incompetent American to build a house in the third world, an army of local builders can be hired. If you really want to make an impact, donate rather than going in person.
Talented and capable people who end up working low-impact non-profit jobs at low pay. If you had a real job and donated half your income to the cause, both you and the cause would be better off.
Stupid advocacy. People who get excited about feel-good slogans like "cancel rent" without considering the impact on the medium- and long-term supply of housing, and thus end up amplifying the problem they believe they are fixing.
These are just a few examples where not doing the seemingly good/moral thing can be much better than doing it.
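The first example rests on a back-of-envelope comparison that's easy to make explicit. A minimal sketch, where every figure (trip cost, wage, productivity) is a hypothetical placeholder, not an actual Habitat for Humanity number:

```python
# All figures are invented for illustration -- not real charity data.
volunteer_trip_cost = 2000      # USD: flights, lodging, coordination for one volunteer
volunteer_days = 7              # length of the trip
volunteer_output_per_day = 0.5  # fraction of a skilled builder's daily output

local_builder_daily_wage = 15   # USD/day in a hypothetical low-wage economy

# Labor the volunteer actually contributes, in skilled-builder-day equivalents
volunteer_labor = volunteer_days * volunteer_output_per_day

# Skilled-builder-days the same money would buy if donated instead
local_labor = volunteer_trip_cost / local_builder_daily_wage

print(f"Volunteer contributes ~{volunteer_labor:.1f} builder-days")
print(f"Donating the trip cost buys ~{local_labor:.0f} builder-days")
```

Under these assumed numbers the donated money buys roughly forty times the labor, which is the gap the comment is gesturing at; the later replies in this thread argue about whether first-order labor output is even the right thing to measure.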
I think this analysis is a bit oversimplified. If your goal is to build houses, then local labour is clearly better. But if it's to recruit lifetime donors, then perhaps shipping out incompetent (but fabulously wealthy) Americans is the way forward.
Similarly for advocacy, our model should perhaps be that seemingly extreme proposals serve to stretch the Overton window before the middle-ground wins out.
Naively disregarding approaches that don't myopically optimise for first-order impact seems likely to have harmful repercussions.
That said, lots of donations are deeply ineffective, and if you want to prioritize I'd suggest the excellent guides from https://www.givewell.org/ , who reach many of the same conclusions as you do, after more detailed reasoning.
But there are definitely dangers when charities try to maximise revenue:
1. They are (to some extent) competing for donors. If it's zero sum, then more marketing spending means the least efficient charities "win".
2. It might even be negative sum, as donors turn away from charity in general.
3. A focus on revenue changes the way charities operate, and might change the type of people who work there.
I think there's a sweet spot to engagement. You don't have to send the donor out there, but sending them photos does help. They get the personal connection and the feeling that their money was well spent without needing to waste a lot of money. Obviously it's not 100% efficient, but it brings a lot to the table - a certain level of accountability, engagement, and not too much inefficiency.
“Similarly for advocacy, our model should perhaps be that seemingly extreme proposals serve to stretch the Overton window before the middle-ground wins out.”
This seems like an argument in favor of openly advocating positions that are good but too far from the current consensus to be adopted. Dragging the Overton window toward stupid policies or bad behavior is a different issue.
Which is the goal of Habitat for Humanity, correct?
> if it's to recruit lifetime donors
Which is not the goal of Habitat for Humanity, right? It's only a means to an end. The end is building homes for people. If that end can be accomplished by a means more efficient than "recruit lifetime donors", shouldn't it be done that way?
But if sending the incompetent American gets you a lifetime donor who pays for many houses to be built (assuming he would not have without the feeling of personal connection he got from his experience incompetently homebuilding), then isn't that the right thing to do if your goal is to build as many houses as possible?
You mentioned efficiency, but the goal was never efficiency - it's to build houses. Would it be better if people were entirely rational and just gave money instead of incompetently homebuilding and then also became lifetime donors? Of course. That's not reality, though.
> if sending the incompetent American gets you a lifetime donor who pays for many houses to be built (assuming he would not have without the feeling of personal connection he got from his experience incompetently homebuilding)
If that is the case, yes. Basically you are saying that, because of a quirk in human psychology, the only way to get people to donate, over the long term, a portion of their income, derived from them doing jobs they are much more productive at than building houses, to building houses, is to engage them by having them build a house themselves first.
The question then becomes, does this actually happen? Do people who volunteer to build homes end up becoming lifetime donors? Or are those two separate sets of people?
Donor engagement is a primary driver of donor loyalty and retention. This is pretty fundamental stuff in the nonprofit world. If you can build a deep personal and emotional connection, such as "helping build a house with their own hands", that is a big boost to average lifetime donor value. (Nonprofit donor development can run on decades-long timeframes - engage a high schooler and you're more likely to be able to solicit funds from them in their 50s.)
On a smaller level, this is why many charities will get an initial $5 donation and then spend more than $5 on further solicitation of the same donor. The best indicator that you're going to give money to a charity (or political campaign) is that you've already done so. Lots of people feel this doesn't apply to them personally, but data-driven fundraising bears the point out.
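The "spend more than $5 chasing a $5 donor" logic is an expected-lifetime-value calculation. A minimal sketch with entirely hypothetical retention numbers (no real fundraising data is implied):

```python
# Hypothetical donor-retention model -- all numbers invented for illustration.
initial_gift = 5.0          # the first small donation
solicitation_cost = 8.0     # spent re-soliciting this known donor
repeat_probability = 0.4    # prior donors give again far more often than cold prospects
expected_repeat_gifts = 3   # gifts over the donor's lifetime, if retained
average_repeat_gift = 50.0  # typical size of a later gift

expected_lifetime_value = (
    initial_gift
    + repeat_probability * expected_repeat_gifts * average_repeat_gift
)
net = expected_lifetime_value - solicitation_cost

print(f"Expected lifetime value: ${expected_lifetime_value:.2f}, net ${net:.2f}")
```

With these assumed inputs the expected lifetime value ($65) comfortably exceeds the $8 spent on follow-up, which is why data-driven fundraisers treat a past gift as the strongest signal, even when any individual donor (like the one in the reply below) never gives again.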
> The best indicator that you're going to give money to a charity (or political campaign) is that you've already done so.
This ignores why you gave the money, though. For example, I donated to a particular children's hospital because it was the dying wish of a friend that donations be made to that specific charity instead of sending flowers to the funeral. I don't care about or have a connection to that charity in any other way, and I'm not going to donate to them again, but they hound me incessantly for more money.
> Basically you are saying that, because of a quirk in human psychology, the only way to get people to donate, over the long term, a portion of their income, derived from them doing jobs they are much more productive at than building houses, to building houses, is to engage them by having them build a house themselves first.
Having people engage in a process isn't just for personal fulfillment of "morally incompetent" people: it also serves as due diligence.
Trust is a big factor of deciding where you want to donate your money. Having walked through the process as a volunteer, you have a much clearer picture of how your donations are spent, and yes you are more likely to donate.
Trust has to be earned one way or another.
Piggybacking on this home building example with an extreme example: how would you feel if you donated for years to some non profit which builds houses, but later learned that child labor is used to build those as well as cheap, structurally unsound materials, with the non profit CEO pocketing millions?
Yes, this is a good point. But note that it's a different point from the one that I was responding to. If the purpose of having volunteers build the houses before becoming lifetime donors is due diligence--them gaining trust in the process--then it doesn't matter how efficient they are at building the houses, because the tradeoff is not them building houses vs. whatever more productive work they could have been doing with that time and energy. Nor is it about having to overcome any irrational quirks of human psychology, giving the donors "personal connection", etc. The tradeoff is the donors having trust in the process vs. not--i.e., them gaining the information they need to rationally believe that their donations will accomplish the end goal, vs. not. In short, it's a perfectly rational investment of time and energy for the donors, even if they are terrible house builders.
I used to donate a lot, but have lost trust when I've seen reports on how much money is wasted by groups. Now I tend to donate directly to someone or group that I personally know and/or have worked with.
> [building houses] is the goal of Habitat for Humanity, correct?
I think that's PART of their goal. They're a Christian organization. "Service" is also a part of their mission, that means having people physically work for/with the needy (and not just give money).
Moreover, much of their work is done domestically where construction labor is NOT cheap. Yes, some things are going to be shoddy, but it's really hard to get a house built and renovated for low cost in the USA. There are challenges not only in finding labor but also general contractors and materials.
Could all the "building of houses" be done more efficiently if Habitat for Humanity was all just finance and management operations? Perhaps, but the service aspect would be missing.
The point of the article under discussion is that "service", if it just means "doing stuff yourself" without regard to how productive you actually are--how much you actually help--is a morally incompetent goal. If you have a feeling of service after productively helping someone, that's great. But if you have a feeling of service while the actual impact of what you did was negligible (or even negative), that's bad. So "service" can't be a goal just by itself; it has to be something like "service that actually does help".
"service" definitely can be a goal just by itself - perhaps you can argue that it shouldn't be a goal by itself, but I think that it's undeniable that in certain cases it is a truthful, accurate description of a goal that some people have.
And IMHO the "should" discussion is even not that relevant there because it does not change the values that people actually have. As people say regarding decision theory, "the utility function is not up for grabs"; people's goals are what they are, and if it turns out that someone's goal actually is more about personal service than good results, well, you can't force them to change so let's just facilitate them to get good results while still achieving their core goal, to get a win-win outcome.
It's plausible that their goals might also be achieved just as well without actually productively helping someone - so you might want to nudge them away from that and towards more productive means; but these nudges won't be effective without taking into account what their actual goals are and what types of "service that actually does help" are not valid options because they don't satisfy the goals of the potential helper.
Habitat for Humanity was never just "doing stuff yourself". It is an organized effort with decades of history. They also take donations and hire enough experienced contractors who ensure the work is done to code (I've volunteered before).
The volunteer aspect is there so that people who don't have money but have time can give something. More fundamentally, there's a religious element to the "service" (that's why I put it in quotes). In Christianity, being near the poor and literally serving the poor is very much a spiritual act which is done not ONLY for the benefit of the poor but for the benefit of those who are serving them. This is decidedly NOT morally incompetent. I admit this isn't an ROI which would be seen as valid in the project management book of knowledge, but it's real for the people who believe it.
The OP essay was just a sloppy way for a founder to justify his pivot-- which never actually needed justification.
The actual justification for the pivot was simply that the original idea, "Quirk", wasn't ever going to get VC funding and make a 100x - 1000x return. I think it's disingenuous to invoke the concept of moral competence for what is ultimately a materialistic decision.
"morally incompetent" yes, but morals are not the primary motivator in that situation, only a secondary factor. People want to feel good about themselves, moral behaviour leads to feeling good. But physical work, pride in your work, connection with your work and the people who benefit from it also make one feel good.
It is basically like Mother's Day cake: there is cake on sale that will be far better than what you can bake. But quality isn't the point; connection, pride and individualisation are.
> Which is the goal of Habitat for Humanity, correct?
>> if it's to recruit lifetime donors
> Which is not the goal of Habitat for Humanity, right? It's only a means to an end. The end is building homes for people. If that end can be accomplished by a means more efficient than "recruit lifetime donors", shouldn't it be done that way?
But it can't. You can't hire these hypothetical teams of cheaper local builders when you have no money.
Also, people are forgetting that Habitat for Humanity also builds homes domestically. I've volunteered for one of those jobs, once. My understanding is that the labor was split between 1) future occupant sweat equity, 2) unskilled volunteer labor (me), 3) volunteer skilled labor (people in building trades).
I knew someone who worked for a similar charity, that sent emergency food to areas where children were starving. They have volunteers run little assembly lines for two hour stretches where they manually mix and pack the food. That work could be easily automated, but if they did that, they'd have no money to buy food for the machines to pack, or to ship it where it's needed. That's because the corporations that sponsor the packing events (where they get most of their revenue) are far more interested in feel-good team-building exercises than actually feeding starving children.
I don't think it's quite so bad with individual donors (who don't have pockets as deep as corporations), but it's quite clear that getting the donors involved increases their engagement significantly. There's a big difference between being there in person and just sending a check.
The focus on first order efficiency is fatally ignorant of actual human psychology.
Which is not the goal of Habitat for Humanity, right?
To build more houses they do really need to recruit the lifetime donors. The whole point of the donor's doing some of the work is to get the donations in the first place. It generally is pitched as a team building exercise.
Deliberately stretching the Overton window to extreme positions is how things go to hell - you might think you are just applying leverage, but plenty of others will happily occupy the space you created and push things farther out still.
Overton creep is how the French Revolution progressed from Enlightenment liberalism to the guillotine and indiscriminate mass executions.
> seemingly extreme proposals serve to stretch the Overton window before the middle-ground wins out
I've not seen any evidence that this is true, nor that "Overton"-thinking has any evidence as such.
OP's comment is precisely that backreactions to this kind of thinking are often larger than the alleged shifts that are assumed to occur.
NB. The "Overton window" is a description of a consensus, not a model; i.e., there is nothing of the consensus which is like a dial to be moved.
The reason that people view X as possible is the latent preferences around X, incentives and institutional practices, and their (essentially rational) judgement about the practicalities and possibilities of X.
E.g., Marxism isn't "outside the Overton window" by some weird dialogue exclusion. Ironically, it is only in the conversation because of "Overton thinking", i.e., hype.
It is outside the window because people's preferences, incentives, and institutional practices are overwhelmingly oriented away from Marxism, which has no practical or evidentiary basis. No social mechanism of implementation. No group of people has the incentives, reasons and power to implement Marxism.
Activism cannot "move the overton window". It provides no reasons, incentives, or power. It changes no institutional practices; it offers no evidence.
"Hype activism" frequently does more harm than good, by reinforcing the status quo.
The “Overton Window” is a concept developed initially and further elaborated subsequently by a policy think tank, largely as a tool for activists (specifically, that think tank itself) to understand how they can move the social consensus to drive policy.
Your link even, largely, makes this point, to quote:
Sometimes politicians can move the Overton Window themselves by courageously endorsing a policy lying outside the window, but this is rare. More often, the window moves based on a much more complex and dynamic phenomenon, one that is not easily controlled from on high: the slow evolution of societal values and norms.
It's a description of why politicians will advocate for some policies; it's not a theory of change. (Such a theory would be unevidenced and obviously false).
The quote says "rare", but I'd say "essentially never". The "slow evolution" is also not really of norms. The public do not "Accept" ideas within some "Window" that you move by "Advocacy".
Almost all "beliefs" are grounded in technological, incentive, preference and practice structures. They are symptoms of much larger mechanistic forces that regulate behaviour (, belief, etc.).
The significant majority of "belief change" is caused by technological and economic change.
If you read more carefully, you will recognize that it's saying it's rare for elected politicians because of the short-term constraints of needing to win elections, and that change in the Window is driven by actors who aren't concerned with losing their job if they advocate outside of it.
> It's a description of why politicians will advocate for some policies; it's not a theory of change.
It's both.
> (Such a theory would be unevidenced and obviously false).
This seems to be based on your own (“unevidenced and obviously false”, IMO) naive rational choice theory of change. Yes, broad forces are responsible for what would be the long-term equilibrium if conditions were static. Conditions change fast enough that the conditions which set the long-term equilibrium don't determine the actual Overton Window, or even consistently the short-term direction of change, though they are part of the context which influences change.
Overton window is pretty effective in some ways. Just look at how much of what is taken for granted nowadays as the moral high ground was considered grossly immoral and unthinkable just a generation ago, e.g. gay marriage. This was largely achieved intentionally through media and other forms of social programming. Seems like a win for the Overton window to me.
Attitudes towards same-sex coupling have much to do with birth rate, health and population shocks.
Much of the regulation of sex comes down to population & birth rate fears (this is the history of abortion in America: initially only discouraged because immigrants were outbreeding white people).
The religious origin of anti-samesex prohibitions is just a part of techniques to keep birthrates up (since, historically, 20% of children died before 10).
With massive death rates up until 1940s, it is only in the very modern era that we have the pill & tiny childhood mortality rates.
In this environment our sensitivity to "reproductive freedom" has decreased dramatically, drastically tapering off these prohibitions.
If we were in a pre-40s world of high infant mortality, etc. MANY would regard homosexuality as immoral, precisely because it would seem to limit the size of the future generation (seen, here, as a "misuse of the body to serve the needs of pleasure vs. society").
Activists haven't "moved the window"; they are precisely a symptom of forces they are profoundly ignorant of, here largely technological. Once these have taken hold, activists have only to prod at the now meaningless tradition.
You don't consider the pill part of the Overton window? It was introduced largely due to an ulterior agenda of population control and eugenics, and seems to have furthered that goal effectively.
The perfect is the enemy of the good. Many of these charities are technically irrational, but realistically people are likely not to get involved at all otherwise. For example, many people donate food to collection bins for food banks, despite the food banks being able to buy food wholesale for a tiny fraction of the price. Being aware of this, I smugly refrain from donating overpriced retail food, but I haven't yet gotten round to actually sending them some cash instead, and I suppose not many others have either.
> Talented and capable people who end up working low impact non profit jobs at low pay
I currently volunteer at a raptor conservation charity. For no pay at all. They cannot afford to pay anyone but their trained bird team and a small office staff. Volunteers like myself are a force multiplier for the trained permanent staff. By doing mundane jobs for them, we free them to apply their expertise where it matters. Pre-COVID, these mundane tasks included basic animal husbandry, grounds work and infrastructure maintenance. Good skill fits for these roles include carpentry, building, 'roady' type knowledge (how to build and wire up outdoor structures) and many others. Post-COVID, new tasks include cleaning, sanitization and crowd management (facilitating social distancing).
> If you had a real job and donated half your income to the cause, both you and the cause would be better off.
I used to provide financial support to them before I retired. Now I only work intermittently, and I'm using my income to fund my own time working for them, for free.
This is genuinely the best way I can help them. Especially now that their income has been decimated by COVID lockdowns.
[Edit] The flaw in the parent's line of reasoning is assuming that all a non-profit organisation needs is money. Sometimes they need actual stuff doing (such as building a new aviary) and volunteers let them reduce or even eliminate the not-insignificant labour and contracting costs.
I think what you're describing is "moral competence" as the article frames it. That is to say: your efforts are not directed to the end of "how can I be seen as working to support raptor conservation" but rather "how can I support raptor conservation?"
Or, in other words: how can you genuinely further the cause, rather than simply seem to?
> If you had a real job and donated half your income to the cause, both you and the cause would be better off.
This argument takes as an unstated axiom that the correct way to organize society is by having everyone in a role that strictly maximizes that system's assessment of their economic output. It then concludes that the correct way to effect change is to embrace that system and after the results are in, redirect those economic outputs to desired goals.
As the reader may have noticed in examining that unstated premise, this presents quite a dilemma when it becomes clear that this way of organizing society is in fact itself the main cause of the human misery you're trying to eliminate.
This is such an underrated point. A lot of the lucrative “real” jobs are quite destructive and counter to the problems that many charities and NGOs try to solve. Also, if everyone thinks this way it’s going to be very hard for these charities and NGOs to find the skills they need at rates that they can manage to fund. A skilled worker can potentially generate a lot of value that might not directly flow back to the organization they work for but to other parts of society - that means the organization has no business case for hiring them (the economics don’t exactly work out) but in the bigger picture it can still be a net benefit.
> This argument takes as an unstated axiom that the correct way to organize society is by having everyone in a role that strictly maximizes that system's assessment of their economic output.
No, that's not the axiom. The axiom is that scarce resources should be allocated to their most productive uses. If the scarce resource of your time and energy can be more productive at building houses if you have a real job and donate a portion of your income to house building, than if you built houses directly, then that's how that scarce resource should be allocated.
Note that it is you doing the resource allocation here, not "the system". It is true that "the system" is very inefficient, but what it's inefficient at is providing people with opportunities to choose from to be more productive with the scarce resource of their time and energy. The way to make that more efficient is more free markets and less government regulation; but our society tends to be the other way around.
The problem comes when you try to define this phrase in a way that’s not a tautology.
Our current economic system defines economic productivity to be the thing that generates the highest economic returns according to the rules of the system.
Without an independently derived set of values the phrase doesn’t have any descriptive value.
You could just as well be describing Bostrom’s paperclip maximizer, and I would suggest, in fact, that this is exactly what you’re doing.
> The problem comes when you try to define this phrase in a way that’s not a tautology.
Economists solved that one a long time ago: revealed preference.
> Our current economic system defines economic productivity to be the thing that generates the highest economic returns according to the rules of the system.
That's because our current economic system is not a free market, so instead of reflecting the revealed preferences of people making choices in a free market, it reflects the revealed preferences of politicians and financial institutions.
There's no reason an economy has to work that way. Ours does because it's driven by politics, not economics.
> Without an independently derived set of values
What people want is an independently derived set of values.
> Economists solved that one along time ago: revealed preference.
I'm quite certain they didn't solve the problem of determining the correct long-term value system for the human race. For context, I have looked into the idea; I have a degree in Economics.
We don't even have a perfect optimal strategy for the completely bounded game of chess yet. But somehow we've solved the problem of determining optimal strategy for the unbounded game of social and economic interactions between millions of independent actors? It's a pleasant thought, but no, we haven't.
In real life what people are doing is making economic tradeoffs within a narrowly constrained section of a complex adaptive system within which no actor can do much more than guess at the prospective outcomes of their decisions.
Or put another way, in economic terms how would an economic actor express a clear preference for a sequence of events that leads to a personal connection with someone who doesn't have any money, when they are not aware that's even a thing they can do?
> I'm quite certain they didn't solve the problem of determining the correct long term value system for the human race.
Having a non-tautological way of capturing what "more productive uses" means, which is what I was responding to, in no way requires solving this much more difficult (and quite possibly unsolvable in that there might not be a single answer) problem.
> in economic terms how would an economic actor express a clear preference for a sequence of events that leads to a personal connection with someone who doesn't have any money, when they are not aware that's even a thing they can do?
If they're not even aware of the option, then the obvious thing to do is to make them aware of the option. Economists call this "creating a market". Enabling transactions to take place that couldn't take place before is one of the primary ways that new wealth is created. If this isn't happening, it's a sign that "the system" is, once again, not a free market; and the way to fix it is to make it more of a free market.
> If this isn't happening, it's a sign that "the system" is, once again, not a free market; and the way to fix it is to make it more of a free market.
Indeed. In our back and forth we've now teased out the basic "no-true-scotsman" argument used to advocate for free markets, also known as the Efficient Markets Hypothesis.
Why, the free market will solve the problem. What's that you say? The market is producing disastrous outcomes, making life unlivable for huge swaths of humanity? Well, clearly you should add more free market then, as the cause is clearly not having enough of it.
The reality is that not all social goods and goals can be priced, or measured, or even understood and articulated in a way we all agree on. And, the really core and subtle point that often gets missed is that it's literally impossible to base your normative system on the idea that we'll all just optimize for outcomes when that system is non-deterministic.
All models are false, some are useful. For sure the economic canon has some analytic and descriptive value, that's not in doubt. But it has very serious and severe limits, and that's where all the actually interesting public policy discussion happens.
> the basic "no-true-scotsman" argument used to advocate for free markets, also known as the Efficient Markets Hypothesis
No, that's not the basis for wanting more free markets. The basis for wanting more free markets is that, under conditions where a free market is not efficient (and yes, such conditions often exist), the failure modes of a free market are still better than the failure modes of the alternatives. In other words, free markets are not "best" so much as "least worst" (the worst except for all the others, as Churchill would have said).
In the particular case we were discussing, you were assuming that a transaction was possible (a personal connection) that both parties would value. That means, in a free market, both parties would agree to the transaction, so it would happen. The problem, as you stated it, was that a market did not even exist for such transactions--the parties weren't even aware of each other. And, as I pointed out, the obvious way to fix that is to allow someone to create such a market. Nobody needs to make any decisions about "how social goods are priced" to do that--the market participants themselves will take care of that, by deciding which transactions to engage in and which not. See further comments below.
> The market is producing disastrous outcomes making life unlivable for huge swaths of humanity?
No, it's not. What's producing those disastrous outcomes is the failure modes of the alternatives to a free market. See below.
> The reality is that not all social goods and goals can be priced, or measured, or even understood and articulated in a way we all agree on.
Yes. And in a free market, we don't have to agree on all those things. All we have to agree on is whether or not to engage in particular transactions. If you and I can have a transaction that both of us voluntarily agree on (and that's the definition of a free market, that all transactions are voluntary--nobody is ever forced to make a transaction they don't want to make), then it doesn't matter what the rest of our values or goals are; all that matters is that we agree to make the transaction.
And that is why the failure modes of free markets are better than the failure modes of the alternatives--because the alternatives force people to do things that they don't want to do, or that they even think are very bad ideas, just because someone else in a position of power says so. That is what makes "life unlivable for huge swaths of humanity"--people being forced to do what some idiot in power says, instead of what their own intelligence and common sense would suggest.
> it's literally impossible to base your normative system on the idea that we'll all just optimize for outcomes when that system is non-deterministic
You're assuming that there has to be a single "normative system" that everyone is forced to abide by. And that, as above, is precisely what a free market does not do, and why our current system, which does do that, causes so much misery. Your so-called "public policy discussion" is all about what normative system to impose on people. Your public policy "experts" never even consider the idea of not imposing things on people at all.
> Our current economic system defines economic productivity to be the thing that generates the highest economic returns according to the rules of the system.
If by "the rules of the system", you mean the idea that things are worth what people will voluntarily pay for them, then I agree with your definition. But that's not tautological. It is a distributed price-setting process that everyone contributes to through their own economic choices.
When it comes to charity, people contribute time and money based on their perceived impact. As a donor, I would value 100 new houses higher than 10 new houses, but that's just me. Other people are free to make different choices.
What is clean air worth? What do you pay for it? In the US, you pay taxes which are distributed under the purview of a democratic republic, whose officials may decide to create an EPA, which may enforce rules, etc. Tautologically, clean air is now worth something in the US, because the system pays for it. But it wasn't the case 100 years ago. Or in another timeline where Nixon didn't create the agency. It's worth something because the conditions of the system aligned its interest with that outcome, not because of some direct preference of the populace. In many countries, clean air isn't "worth much" because you can't pay for it, even though the people want it badly.
> It's worth something because the conditions of the system aligned its interest with that outcome, not because of some direct preference of the populace.
You make it sound as though "the system" is a sentient agent that acts independently of the preference of the populace. Democracy in the US is far from perfect, but don't you think that the creation of the EPA was at least somewhat related to the preference of the populace?
Sure, the system may be loosely correlated with the long-term preferences of the populace, depending on the government, to the extent that people don't revolt. But the system is designed to keep itself in power or maximize its own goals. It doesn't decide any more than water decides to take the path of least resistance downhill.
Did the populace decide they like bread and milk in the back corner of the grocery store, or does the capital-maximizing system decide it's best if you walk through the aisles to get there? Are impulse purchases of gum at the checkout aisle there because people really wanted gum when they walked into the store? On and on it goes.
It actually isn't. I'll talk from "virtual" experience. It's food production in a virtual economy. I have the option of overcharging people and expanding my business faster or charging less so that even the poor players can afford it and not having enough money to expand. If I go with the "charity" route it will take much longer for the underlying problem to be solved.
By doing economic charity (not to be confused with donation based charity that charges nothing to the people receiving help) you're also ruining the price signal for other people. They see that the market has been saturated with cheap products but a few weeks later there is a shortage again because the "charity dude" is not producing nearly enough.
This is also one of the reasons price gouging laws don't make sense. People will scalp and sell at market rate anyway. The one losing out is the original manufacturer of the product. They could have used that money to fund extra production, or, you know, just been rewarded for producing the valuable product in the first place. It would also mean less money going to scalpers.
If you think the product should be provided at a lower price, then involve the government and let it sell the product at subsidized prices. That way it can fulfill an obligation to the people while also bearing the cost itself. If the manufacturer is overcharging that much, then cut out the middleman and produce it yourself.
Transfer this to someone who isn't running a business but instead just tries to get a better job. That person still has the moral "obligation" (it's obviously optional) to do effective charity or at least not make things worse.
See "Effective Altruism" as a topic and the book "Doing Good Better" as a more in-depth exploration of what's being said here [1]. It's really interesting to see how psychology and outcomes are interrelated.
> Talented and capable people who end up working low impact non profit jobs at low pay. If you had a real job
These are real jobs, as real as any other job. It is absurd to consider them not real. It's also not as if people are lining up for those positions.
And third, the people who make that choice do it because this is what they want to do. They might or might not be successful or happy in a corporation. If only stupid people worked in those positions, those non-profits would never be a good place to send money to.
I'm glad someone pointed this out, it irks me. Also, when you work at a for-profit job and donate money, a lot of that money goes to pay salaries. Wouldn't we want those nonprofit workers to be talented and capable?
If you keep thinking these dissident thoughts, you may eventually come to the conclusion that a huge portion of do-gooding is done for the benefit of the do-gooders, primarily the feel-good benefit. A few are trying hard to build a link between their own feel-good and the benefit of the supposed beneficiaries, but wishful thinking is rampant.
I often think of this SMBC comic [1]. If Superman wanted to maximise good, he could just turn a crank really fast.
However, maximizing utility is not the only goal. People want to feel good about their actions, and dumping money in a black hole just isn't that satisfying.
"Dumping money in a black hole" is a very misleading choice of words. The black hole analogy suggests that the money disappears without doing any good.
Donating to cost-effective charities directly and significantly improves the quality of life for numerous individuals. Just because you don't "solve the whole problem" with your donations doesn't mean it shouldn't be done. Just like you don't choose to stop eating because eating a single meal won't satisfy your hunger for the next 30 years.
It's a deliberate choice of words. Yes, your money is going towards a good cause, but if you accidentally sent the money to the wrong place, you couldn't possibly tell. Your donation is never mapped to a specific result. It's just added to a much larger pool. You will never get a message saying "your donation paid Paul's first month of rent" or something to that effect.
By comparison, volunteering at a soup kitchen feels real, regardless of effectiveness. You see the people you help, and I suspect that's why people prefer being charitable in less efficient ways.
This isn't true. There are charities that do exactly what you claim they don't do. For example, https://www.againstmalaria.com/Default.aspx maps each donation to a specific bed net distribution, then gives you a status page (that you can share with others!), which tells you exactly what stage your distribution is at.
For example, I can see that one of my donations is currently in the manufacturing stage, one is on a boat travelling to the country, and one is being distributed to households right now.
After the distribution is complete, they'll post pictures from that distribution, as well as conduct follow up surveys in a few months, to ensure effective use. All this information is shared with the donor directly.
It sounds like charities have an accountability problem that could be solved with technology. If everything the charity did was public then people could rest assured that their money is well spent.
There are charities with extreme transparency. For example, all the money sent to the highly-rated (by GiveWell) charity the Against Malaria Foundation (AMF) gets tied to specific distributions (specific villages). You get to see the photos of your money at work and updates about the quality of the malaria-protecting bednets years after they have been distributed.
Evolution fucked us again. As a result, we feel drawn towards performing acts where we can be seen as doing good, not towards acts that do good. Being in a soup kitchen gives you witnesses, writing a check does not (especially with our shitty culture that often discourages people from talking about their philanthropic deeds).
This is very true. I just want to point out one under-appreciated side effect of hands-on altruism: it does often change the person for the better.
It's often better to give money, and there's no better way to see how little you can do with your hands vs how much you can do with a dollar than to go and see it yourself. Even raising awareness about the problems with a visceral experience of being there can make a person more prone to donate money, advocate for causes, and be contemplative in their policy choices.
But money is not a sure-fire solution.
Even when donating money, there is a cognitive barrier that must be overcome.
In major philanthropic work, a huge problem is matching donors' expectations of targeted "this person saved 1000 people" investments against "general fund" investments in, say, a global vaccine initiative. There's no traceability when you dump your dollars into a pool to buy a billion vaccines, but there is enormous positive impact. Contrast that with, say, building a well in a village (something that is usually poorly done, poorly maintained, and not very helpful compared with a municipal water project or sanitation education program). But the well has immediate impact that can be photographed, and the donors probably got their name etched into it.
These aren't made-up examples; I've heard these stories from policy people and fundraisers. They just shake their heads and agree that the world is slightly better, even if it could have been better still.
A colleague in my extended network had a pithy way of describing this: "Many distribution problems look like moral problems from far away." Not saying I agree with it completely, but it did change how I started looking at framings of societal problems.
I’m personally sympathetic to your concerns, but to play devil’s advocate: Who’s to say that typical high-paying “real jobs” are a net good for society, (even) assuming that their occupants donate half of their earnings to worthy causes?
I’d imagine that plenty of “talented and capable” people working in (what might be considered) “low impact non profit jobs at low pay” are aware of effective altruism, or similar movements that seem to reflect your perspective, but would consider working in high finance/Big Tech/management consulting/etc counterproductive to their ultimate goals.
Gentrification is often seen as bad, but it's merely a matter of how it's managed. Displacement is not a problem if you build the newcomers enough housing. They are willing to pay for expensive high rises and live there. Be happy that they don't actually want your house. But for the NIMBY that's the worst outcome: he doesn't want his neighbor to sell the plot, because he wants to sell his own plot for profit. The lack of housing eventually results in displacement.
The opposite of gentrification is far worse. Stagnating communities feel the same effects: the local economy transforms and local culture gets lost. Except all of it happens for no one's benefit.
People get the wrong impression and start "fighting" gentrification and then introduce real problems through misguided policy.
A lot of social ills are rooted in "the world doesn't work." And then we try to find the right feel-good pill or the right talk therapy when what we really need is a jobs program, or a bridge, or a store that sells a thing that works. And if you come up with a real solution, no one will connect the dots and say "homelessness is down because someone built a better mousetrap." Instead they will turn their baleful eye to the latest earthquake or the latest revolt or the latest political drama and continue to complain that the world is broken.
Doing things super well is hard. Heroics make headlines and headlines have something of a tendency to actively interfere with problem solving. Problem solving tends to be done quietly at your desk, in your lab, in some back room and people who love good press tend to be better at playing to the crowd than at actually solving anything.
If you want to make the world a better place, make a business that solves a real problem and be decent in how you deal with people. Don't be a hero. Don't focus so much on the social and emotional stuff. Build a better widget instead, and build it with an awareness of the social and emotional stuff and a sensitivity to the current state of the world, which is always high drama and lots of pain points.
Doing anything well is really hard. There are lots of ways to do things badly while getting lauded for it in certain circles.
Isn't this just the typical oscillation between virtue ethics (mindset is most important), consequentialism (results are most important), and sometimes deontological ethics (acting itself is most important)? This seems to swing like a pendulum into the "mainstream" every few decades. Depending on how/where/when you grew up, one of these is your "moral compass" (which leads to preferential world views like individualism, utilitarianism, ...). This "moral compass" often changes over time, and your own kids will probably have a different starting point.
"Moral" is just an ordered set of values that feels "obvious" or "innate" to you, but other people (especially those starting from different ethical axioms, see the three above) have other ordered sets.
All of them have their downsides and have been en vogue since Aristotle; still, societies (or political parties) have argued over basically the exact same stuff for millennia without any progress whatsoever.
Why the long intro? I basically consider the concept of, or even the discussion about, "morals", especially "which would be better", not only pointless but actually harmful.
(Not a perfect analogy: asking "what was before the (original) big bang" makes no sense if our concept of time started with the big bang. But that question has probably killed/tortured fewer people in history than confrontations resulting from conflicting sets of moral values.)
Escape hatch from here? Nietzsche actually saw this coming; don't just stay at nihilism/absurdism, but maybe read "Beyond Good and Evil"... :)
Part of the "trap" is the implicit assumption of the subject/object distinction, which we are stuck with so long as we use language to think and communicate. But all human expression is generated from an initial non-verbal intention. So I think "trying to do good" is actually fundamentally important, because it's the precursor to good things happening.
The author is making a fantastically insightful point that more people need to understand, but the whole thing is ruined by the strange choice to use a massively loaded and judgmental term like "morally incompetent". I know it's cast as self analysis and so is supposed to be somewhat self-effacing, but it also means that folks who really need to hear it won't.
Personally, I love the judgmental tone of 'morally incompetent'. It's what the term needs.
Those behaving in morally incompetent systems deserve some judgment. They may deserve a pat on the back for their efforts, but just because you're pursuing a feel-good dream doesn't mean you get to ignore the reality around you. I've seen nonprofits ruined by such mindsets. Working on the mission at all is the moral component. Being effective, neutral or making things worse through naivety is the competency component that needs to be weighed in any effort -- regardless of the effort's moral imperative.
I wish he related it to the product(s) in question, how it changed from before to after, and maybe say if there's 'good' being done with the new, more morally-neutral, product.
Reading their documentation of both projects, it's not clear how they relate to each other. It went from an app that doesn't seem like it'd need a multi-player infrastructure (some kind of CBT self-help app) to a library for multiplayer apps? It seems less like a pivot and more like a full startover.
> It seems less like a pivot and more like a full startover.
This is a pretty good description of what we did. The path went something like CBT Journal => Regular Journal => Notion Clone => Notion Clone with an API => API for "Notion Features" => Real time collaboration as a service, within the course of a month or so.
I love the idea, but competence isn't the right word here. Competence doesn't have an intrinsic ethical dimension. It's not unethical to sincerely try and fail, while the author does ascribe a pretty clear ethical dimension to moral incompetence.
I think humility is the better idea here. The examples of moral incompetence all center around egoism. Moral competence seems to follow from removing oneself and focusing on outcomes over personal validation. It's a call for helpers to serve and be humble.
I've had this thought before many times. Having gone down this path a bit, I would caution the author: deciding "I'm going to solve a big problem" is at once incredibly important (to get to the solution) and deeply arrogant (what, _you're_ gonna solve poverty?). Don't let the necessity for arrogance get in the way of your willingness to do good.
There's another aspect of it that the author doesn't address, which is that for a lot of people working on these problems, they're also looking for a way to pay their bills.
> The signature move of the morally incompetent is to be told about existing solutions that they were previously unaware of and then soldier on without any critical examination of any added value they're providing. Others working on the problem are ignored entirely or seen as a threat to their own solution.
If a person isn't already rich, then they're often balancing doing good for others with making at least enough money that they don't have to work a side job. In that sense, others working on the same problem absolutely are a threat to one's livelihood. It's important to be aware that helping others can sometimes conflict with helping oneself, and to figure out how to resolve those conflicts without reducing one's effectiveness in helping others. But it seems to me that not working with other people who have different approaches to the problem might not be an ethical failure so much as an economic one. Working with someone else and doing it their way might not be an option if there isn't a viable route to funding one's own work.
On the other hand, I do agree that humility is valuable. If you're in it mostly to feel good about yourself rather than to do the most good even if it isn't personally rewarding, then the end results might not be as good as they would be otherwise. But then I'd rather have more people trying to help others even for bad reasons than not trying at all.
I think you're on to something, but "humility" doesn't seem to quite get at it. Maybe "focus"? It seems that the author is saying that to be moral and morally effective, one needs to be ends-focused, as opposed to means-focused. IOW, if a morally focused person hears of a way to provide the same results that they are, at half the cost, they should rush to implement that method, or join that effort. A means-focused person might tend to recruit more people or money to their cause, seeing the better method as competition.
Competence does have a moral dimension insofar as ethics are relevant, specifically the importance of work ethic and a modicum of self-awareness. There is a point where repeated negligence, and the opportunity cost of its resource drain, becomes a moral failure.
However sound your epistemology may be, this feels like unhelpful gatekeeping. A one-line disclaimer that "that doesn't mean that trying to help people is bad" falls flat when you spend the entire article constructing a special category of personal failure for people who fail to effect change: "moral incompetence".
This doesn't help anyone learn anything except that, apparently, if you fail it might be because you didn't actually care about succeeding in the first place! Your failure likely involved concrete issues that can be learned from and changed; chalking it up to "oh well, it turns out I was the problem" is an unproductive takeaway and only serves to discourage people who can't or won't have the right intentions as you define them. Let's leave the mental purity tests aside; modern society works because people have the space and incentive to do good regardless of their intention.
Why such an extreme dichotomy between the morally competent and morally incompetent? The article makes it seem like having an ego _at all_ makes one morally incompetent. We can genuinely strive to help, and we can also want to glean an egotistical sense of self-worth/importance at the same time.
I think that one is morally incompetent when they _only_ strive to advance themselves through acts of kindness/helping, or when their shallow act of do-gooding harms those who need help more than it helps them. But certainly we should be allowed to like helping because we feel good about ourselves, right?
Jeez, this strikes me as such an unnecessarily harsh and critical self analysis.
The biggest issue I have with this take is that the author's distinction between moral "competence" and "incompetence", from what I've read, doesn't really have anything to do with a moral or ethical system. It seems like his morally incompetent have admirable moral intentions (help depressed people with CBT) but suffer basically from implementation failures. They don't know how to help effectively, are taking all the wrong lessons from working in tech (searching for direct and measurable impact, searching for technological solutions, etc.), and could probably be gently taught how to help more effectively. Dividing the world of people who want to help into binary 'competent' and 'incompetent' groups will just serve to discourage people who want to help but don't know how.
I also don't feel like I have a sense of what a 'morally competent person' actually does, or how they design strategies to help effectively. The article feels like a lot of unfortunate self-flagellation for being 'morally incompetent' and not as much constructive dialog about how to actually help better.
Edit: To OP - maybe there is a perspective where you don't have to be so down on yourself here? Maybe this idea 'failed' from a cashflow & user acquisition perspective, but if it helped all of the people who downloaded and used the app, didn't you do good in the world? Maybe you didn't do good at billion dollar scale, but does that matter if you improved people's lives?
> The article feels like a lot of unfortunate self-flagellation for being 'morally incompetent'
That's an acceptable thing to do for catharsis, but not so much fun when your diary entry lands on hacker news.
The problem is that the person reading the blog (like me), reads it in our own voice and the division between "working to cure cancer" vs "curing cancer" seems to be one of consequence not intention or effort (merely "wanting to" is weird).
Because being congratulated for trying-so-hard sucks, when you know you are failing instead. So much harder on your ego to tell people "I can't do this", while they're patting your back for almost doing it. Or maybe, I read it like that because as a parent of a newborn + working from home, I'm going through this somewhat.
I read the "morally incompetent" as just short hand for "not being honest about current state of mind".
But y'know what, that sort of honesty can kill things because everyone who succeeded knows the dark times where everything seems hopeless until something out of your control drops in to give you a push forward. To broadcast "I don't have it in me" usually prevents such serendipity.
> I read the "morally incompetent" as just short hand for "not being honest about current state of mind".
That’s a great way to put it. I think “incompetent” is the correct term because aside from not being honest with one’s self on these “do good” issues, people can also just be very ignorant about it.
Being ignorant about the actual, measurable, demonstrable humanitarian side-effect of your beliefs and actions is rightfully seen as moral incompetence and should be harshly criticized.
I like a quote from Robin Hanson about this (paraphrased):
> “I wish people felt more of a social obligation to believe accurately and less of a social entitlement to believe whatever they want.”
Many, many in this space care more about developing careers, brands, technologies, etc than helping people. Helping people is a side effect that's published as a purpose.
It's not really a fault. It's actually quite like open source. Most open source contributions are ultimately for selfish-but-harmless reasons, and that's completely ok.
This really wasn't supposed to come across as harsh, and I'm really not down on myself, though thank you for the concern! I think the context of the word "incompetent" is what's doing it here. Maybe "effective" and "ineffective" would be better.
> So in order to continue Quirk, Quirk needed to make people feel worse for longer.
You really ought to say something else. I know you don't mean it, but some mental health professionals read that a little sideways. I get it -- the folks who have a need for sustained engagement with therapy were difficult to reach or require more intensive efforts or something. So why not say that?
Hmm I agree, this isn't really what I meant. I was concerned we had a business model that treated success stories as failures (folks feeling better unsubscribing) and failures as successes. While we had some early success with folks, the overall direction we were going was getting worse, not better. It felt naive to assume that the forces of the business model wouldn't drive future us (or potentially people we hired) towards doing more bad than good, even though that wasn't what we wanted to do then. And so while we still had absolute control of the company and weren't blinded by a survival instinct, we pivoted.
Your app can succeed. Maybe you just didn't know the secret of successful behavioral health businesses: if half your clients are not indigent, then you're doing it wrong. I know -- this doesn't seem to make sense. Oh but it does, because the economic value of helping a bipolar person hold down a job, or helping some 12-year-old show up to school, is much, much higher than getting a suburban family to ratchet their anxiety down a notch, and value eventually drives revenues in various ways. Would be happy to chat about it sometime. Your new project is cool too, but honestly the impact of gathering healthcare data heretofore unavailable would be much higher.
> doesn't really have anything to do with a moral or ethical system
Philosophical ethics typically divide into utility-based, rule-based and virtue-based ethics. OP is clearly lamenting some utility-based failures and claiming that virtue without any utility isn't a whole lot of good (but also no harm, so hey).
Great essay. The key is to judge yourself on visible outcomes that make the world better for others rather than progress metrics like delivering releases, passing laws, holding conferences, etc.
i believe this is not specific to social good, but instead is an implicit trait in human nature. an excellent piece was run by the wapo recently that examined this very issue within the us cdc in the early days of the pandemic. other countries turned around testing programs in short order based on the specific testing methodology distributed by the who, where the us cdc wasted 41 precious days trying to improve on it.
where it gets interesting is trying to tease out where it went wrong. is it simple vanity? do people feel obligated to live up to some standard created by the environment they operate within? is good enough ignored in light of exceptionalism? where does that come from and how do you (and should you) dispel it?
> is good enough ignored in light of exceptionalism?
I think this is the larger issue.
The FDA/CDC standards seem way too high, hindering provably useful solutions, and are actively harmful especially in the context of a fast-moving crisis.
Watch Lex Fridman interview Michael Mina about cheap rapid testing that's not available because of regulatory roadblocks. It's an infuriating listen.
But not only is it a very real substantive issue, it's a characteristic issue of the academic/expert institutional class. An obsession with the theoretical that ignores pragmatic reality. These are the experts that suggest a plan that requires 100% compliance and works against every human instinct. That's intelligent?
It's a group of people divorced from the experience of average people. E.g. all these advisors forcing everyone into isolation but breaking the rules for themselves and their own family (look at Birx, Newsom, etc., the list is horrendously long).
i think the ivory tower argument is related but somewhat separate. i think in the cdc case, you have scientists of status who feel compelled to justify their status and in doing so they skipped over simpler solutions. in the OP case, you have people chasing status perhaps at the detriment to their stated goal.
i think there may also be a flavor of this in the vaccine race. there are tried and proven vaccine technologies but it seems most of the attention is going to the more experimental (and difficult to deliver!) approach that has potential nobel prizes and wall st upside tied to it.
An organization that intends to "help people" or "make a difference" needs to specify how it plans to do so, how it defines change in the first place, and how it plans to measure change.
> The signature move of the morally incompetent is to be told about existing solutions that they were previously unaware of and then soldier on without any critical examination of any added value they're providing. Others working on the problem are ignored entirely or seen as a threat to their own solution.
Really hit the nail on the head for the non-profit industry (of which I am a part). A lot of non-profit leaders are totally insufferable because they take anything less than fawning over them to be an attack on their identity as savior.
You get some of these people everywhere, of course, but there's a way higher concentration in non-profits.
What is most interesting to me is that the business model he rejected[1] is not just the one of his app, but essentially the one used by almost all therapists.
[1] https://github.com/Flaque/quirk:
"Unfortunately, in order for the business to work and for us to pay ourselves, we needed folks to be subscribed for a fair amount of time. But that wasn't the case and we honestly should have predicted it given my own experience: as people did better, they unsubscribed. Unfortunately, the opposite was true as well, if folks weren't doing better, but were giving it a good shot, they would stay subscribed longer.
So in order to continue Quirk, a future Quirk would need to make people feel worse for longer, or otherwise not help the people we signed up to help. If the incentives of the business weren't aligned with the people, it would have been naive to assume that we could easily fix it as the organization grew. We didn't want to go down that path, so we pivoted the company."
Not at all! Therapists make more in a single session than a consumer subscription app would in an entire year, and they're limited to the amount a single therapist can accomplish. Most therapists have an entirely booked schedule; they don't have nearly the incentive to keep people longer that mobile apps do. They literally cannot handle more clients, so from a business perspective, they're much better off helping people, getting good reviews, and then charging more for the limited time they have.
I really appreciate you openly talking about these struggles. I built an app back in 2012 called iFeelio for emotional micro-journaling, and one of the goals I set for it was for me (and then others) to get better at expressing how I felt so I didn't have to use the app anymore. As you expressed on Quirk's Github readme, that doesn't jibe with a subscription-based model: if people improve, they stop paying. I also didn't want to make the app addictive, another thing that seems to clash with the subscription model.
One thing I contemplated but didn't have the courage to do was to charge a very high initial price to use the app. It was on Android at the time and I wanted to charge the max price, which I think was like ~$200, to download the app, one time.
Did you think about doing something similar? What alternatives did you contemplate to the subscription model? Is there a way to price it with an anticipated 3-month retention or other time-limited retention?
Although you are probably right that the situation is worse for apps, in my experience the incentive of actual therapists is still sufficient to prevent them admitting that their client would be better served elsewhere. Limited sample size, of course.
The author casts this in moral terms which is useful in grabbing attention (moral competence vs moral incompetence!), but I'm not sure how those two labels and concepts as he describes them are any different than the neither-moral-nor-immoral standard management practice of trying to focus oneself and others on outcomes rather than effort.
I am not sure "moral" is truly the right word for what the author is discussing... morality is not just about the outcome, it's about what you are aiming for. If you aim for the wrong thing, that is a moral failure, moral incompetence if you will. But that is not what the author is talking about. True, it is good to be effective and obtain a desired outcome, particularly one that you or others think is good, but I'd point out that this "moral competence" and what another early poster mentioned, "effective altruism", are not that dissimilar from earlier concepts and discussions of ... "wisdom".
I'd summarize his essay by saying that true service is about doing what's best for the situation, regardless of ego, while shiny silicon valley "service" is about getting paid in praise and adulation rather than the usual currency, money.
When you're doing real service it's usually largely thankless, anonymous for a good long while, and you work really hard and maybe move things 1 millimeter towards the goal. If you're traveling around doing photo ops, then except in rare cases (like, you're a celebrity bringing attention to an under-recognized problem), you're not doing the right work.
Thought this was going to be about how competence is a moral virtue, which is a more common use of the phrase. However, if you aren't helping anyone, you aren't doing any good, so it's tangentially related.
I've thought about this in terms of heroes and helpers, how we often have a hero worship problem in society and jobs, and how it's usually better to try instead to be a helper rather than a hero.
If you want to do good, you actually have to help people.
Yep. It's a hard thing to look at the actual data coming out of something and see that your good idea didn't translate into actually fixing a problem. Sadly, a lot of people, particularly in government, would rather die on that hill than move on.
There is a fine line to walk though. Just because you didn't fix everything doesn't mean chuck it all in the trash. Being able to see the positives is important and will help with refining your strategy.
The term competence seems to be throwing people off, which I can understand but I think the point here is very insightful. It's important to separate those with a hero complex from those that are more morally sincere. Greater moral sincerity means you're more concerned with solving the problem than being the one to solve the problem. It's not about you, it's about the problem and seeing it solved. I think this is an especially relevant point in today's culture.
There have been a lot of "water from air" dehumidifier projects that basically amount to scams whether intentional or not.
No, people in arid locations have water problems, precisely because there is not enough water in the air. Nature already built a large scale dehumidifier called rain.
The only reason these "projects" are done is that they appear to be directly producing the solution (water in this case) without having to fix the underlying problem. The most perverse thing about them is that they never work out in the end. It's just a money pit and conversation starter.
The reason why even the poorest members of first world nations are wealthy relative to developing countries is because of infrastructure. Without roads, schools, electricity and internet we would be doing just as badly as them. Imagine living without indoor lighting. We take so many easy things for granted.
Everyone benefits from infrastructure. It doesn't matter if you are poor or rich. But that's the crux. People only want to help the poor. Once they cease to be poor nobody wants to help anymore but how is that supposed to work? If you remove support too early those people will fall back to poverty. It becomes a privately funded welfare trap in the end.
This reminds me of the concept of "karuna" in Buddhism.
To me, what the author is discussing is the difference between the direct translation of the word, often taken as "compassion", vs. the intention to seek truth and deeply act upon it. What Jay Garfield (Smith College) referred to as "sloppy sympathy" is what I think the author here is referring to as moral incompetence.
I have been thinking about this basic idea a bit, trying to reason how to design durable institutions that can help function as crisis mitigation organizations while also working on problems in a community, or put in terms of the article, how to structure an institution that is morally competent.
One problem with this is that many institutions that have a morally competent start soon struggle with the problems that plague all institutions: nepotism, political players attempting to co-opt the organization for their own ends, institutional survival winning out over problem solving, and the like. While these problems can be mitigated by strong boards of directors who adhere to a larger mission, I think the better idea is to have many small organizations that promote problem solving while being part of a larger network whose goal is solving the problem, and hope that the morally competent people can be spread about the network enough to encourage the solving of the problem, instead of the working on the problem.
Travelling to Haiti to build homes may, on the surface, seem like a very inefficient way to build homes, but perhaps the experience helps people truly understand the local needs and come up with a much more efficient solution. At the very least they could become lifetime advocates who donate to more efficient programs.
You are entirely correct that this is helpful. But you don't need to go to Haiti.
3000 W. Madison in Chicago is two miles from the nation's second most important financial district and it is still burned out exactly as it was three days after Dr. King was assassinated.
OP's use of the term incompetence is correct. It certainly rises to that level when kids in Cambridge, MA and their senator think 50k student loan forgiveness is somehow progressive and moral. I mean compare the per capita benefit in Cambridge to that block of Chicago or to rural WV, MS, or some Reservations.
It’s so easy to chase busy work that feels like real work: get praised for being in meetings, write fancy docs, ship code, fight self-inflicted incidents, work late nights and weekends.
The thing that ultimately matters is adding value to customers. Understanding what they want to get done and removing enough friction that they’d rather pay to your product than go to a competitor or do it themselves.
Sure, docs and meetings are necessary, but you have to ask yourself “how does this add value to the customer? Could we have avoided this and still achieved the same outcome?” Ask that week over week.
I spent 2 years of my life working insanely hard but at the end, customers didn’t really get value out of it and it was a dud. Then I spent another 6 months deprecating and killing the feature. It was a very hard lesson.
I'm unsure I've understood the premise: does it all boil down to "the morally incompetent considers problems as personal challenges, while the morally competent considers them as a societal responsibility" (which of course includes him/herself)?
Thank you for sharing! Hadn’t heard of Effective Altruism or 80,000 Hours before.
This takes a tangent from the article, but it surprised me that “Privacy” was absent from the primary paths[1], though the inverse, “Surveillance”, made the list and might be related. Likewise, Facebook and Google (along with Open AI and many others) are recommended for careers[2]. One might argue that the progress these giants already have means those are the best places for folks to plug in and have an impact. I’m skeptical that centralizing all our efforts into giants that seem to run on ad revenue will continue to have good outcomes for humanity.
Thank you for posting this! This is something that has been mulling in the back of my head for years now and I didn’t know how to express it, but others have already fleshed it out. I have a lot of reading to do.
I'm not yrimaxi, and I'm very much in the EA camp, but I suspect a lot of people bounce off the implication that there isn't really such a thing as supererogation.
A lot of EA messaging tries to work around this by trying to avoid guilt trips or demanding that you do everything you possibly can, since asking too much is a good way to end up with nothing. See Giving What We Can and similar.
But a lot of the justifications for this kind of approach can very easily imply a much higher bar than anyone can meet. Taking global health as an example- millions of preventable deaths a year, and if you work yourself to the bone for years, you can only hope to partially mitigate it.
Even if the messaging says, "hey, just do what you can sustainably and don't worry about being perfect, because that's way better than nothing," the logic behind it is based on nothing more than observations of actions and their consequences. There's not a specific threshold where your job is actually done, and 'failure' to meet the impossible bar generally means mass death.
For someone outside EA, it is very easy to go from that observation to "these people are telling me I am basically a mass murderer for not helping, and even if I help, I'll still in practice be a mass murderer because I'll help suboptimally, gee thanks."
Even if very few identifying with the EA community would endorse that phrasing or the implied moral judgment, I can see how people could end up feeling that way. Not sure what to do about it.
Maybe you are pursuing a different point than mine, but my own point was about alienation, not supererogation. I don’t put EA proponents on such a lofty pedestal.
Yup, started writing my response before I saw yours, definitely a different point.
I do think that the implied burden is still alienating in a sense and that's what motivated my post (the philosophy can just feel really bad in some ways when you get into the nitty gritty, and many people don't respond well to that), but you were clearly talking about a different kind of alienation.
I guess it’s affirming to think that people reject one’s ethical outlook because it would necessitate such an awe-inspiring commitment or burden. Rather than for more analytic reasons.
It doesn't actually require awe-inspiring commitment and dramatic personal sacrifice, and such grand gestures would likely end up unsustainable. I just suspect that the ideas themselves will tend to sound really hoity toity and holier than thou, which isn't helpful in persuasion. Case in point- I was deliberately trying to avoid that, and it appears I failed badly.
(Edit: reading my first post, it's definitely the case that I did not make the connection here explicit- the rejection isn't merely 'oh no I can't handle the TRUTH', but more like, 'ugh, these guys'. A big part of it is social in nature, stacked on top of the underlying problem of asking too much.)
A huge part of EA is driven by analysis of effectiveness. If I want to be effective, I have to figure out how to be effective. If someone provides a valuable insight on how to be more effective, it's not a rejection of my ethical outlook, but rather useful information.
A scenario: you want to help people. You have the opportunity to get a very well-paying job on Wall Street (finance or whatever). You reckon that you would optimize your good-output by working more rather than helping directly (you don’t have the skills or know-how to help directly, you also reckon). So you get a well-paying job, work 60 hours a week in order to earn more money and advance your career, live frugally and donate 70% of your income to charity. At this point you are spent: you cannot do more good than working those hours without burning out and thus hurting your career long-term.
Now you are alienated from your charity: your work in finance indirectly helps other people, but you have no way of experiencing or connecting to that other than looking at statistics and numbers; the lifeworld of your charitable work is just some numbers that you get every quarter from the various charities that you donate to. You have no direct involvement in them.
That is the point though... would you rather “feel good” doing charity, or have a positive effect on society? Donating cash instead of physically participating in charity can be alienating, sure... but certainly more effective. Note that many food charities (for decades now, and especially in the past year) have been saying: don’t send cans of food, send cash, because it has the most effect.
> Donating cash instead of physically participating in charity can be alienating, sure... but certainly more effective.
It can be more effective sure, especially if the physical participation is flying long haul to offer considerably less skilled construction labour than locally-based hungry people. But a division of labour between smart, driven people working outside the third sector to fund it and only people incapable of securing well paid jobs left inside the sector to spend it is also unlikely to lead to better resource allocation. There are certainly initiatives that could use a good software lead more than they could use a generous portion of a software lead's FAANG salary to hire some mediocre contractors, for example.
> Donating cash instead of physically participating in charity can be alienating, sure... but certainly more effective.
Do you have proof? Because whether it “feels good” (or rather, feels like you are actually doing something, rather than trying to motivate yourself by convincing yourself that the numbers are correct and you are (indirectly) doing something) will influence how much good you can do.
Some things—like only eating Soylent or maximizing your goodiness-output by working on Wall Street (or wherever else)—might only work well on paper.
In any case my point all along was the alienation factor. That’s the point, to me. Raise whatever other point you would like. (Of course an EA-enthusiast would only care about the supposed numbers.)
What I read in this is essentially an observation that "A scenario: you want to help people" does not necessarily (and possibly not even in most cases) imply "you want to maximize your good-output".
In many (perhaps most?) cases the feeling and motivation people have is the intuitive desire to feel good about helping others in their society personally, which is in many ways different from "true altruism". And it makes some sense (from e.g. an evolutionary psychology perspective) to consider that our innate desire to "do good" is closely related to building social ties and status, and to helping people close to us in various ways (kinship, common "tribe-in-the-wide-sense-of-that-word", previous relationship history, expected future interactions) as opposed to simply helping abstract people as much as you can. True altruism is the exception (IMHO) rather than the norm; it certainly exists in some cases, but most helping others and charity and doing good is driven by "ordinary goodwill" that includes a mix of other motivations.
And, of course, if someone's goal is not really altruism, then doing more effective altruism won't help them achieve their goals. If we ("abstract we") want to optimize effective altruism, then instead we should help optimize people's efforts within the area where true altruism overlaps with their actual goals to "do good" (whatever they mean by "do good", since that likely isn't exactly the same thing as true altruism); once we get to the proposals which are more altruistic but contrary to their true personal goals/values, people are just going to reject the whole thing (as most do).
I keep getting impressed by the EA-proponents ability and eagerness to frame EA as “true altruism” (an actual quote in this case!), as if EA is some purely-rational approach with no philosophical baggage or assumptions. How utterly self-congratulatory.
Another problem I have with EA is how incredibly fragile it is: because it is so reductive and narrowly-focused, you are likely to optimize for the wrong thing (the map is never the territory) and might even do more harm than good. In the best case scenario you might do a lot of good, though.
Ah, I'm not really an EA-proponent, so please don't use my arguments as bad examples of their position, that wouldn't be fair. My use of "true altruism" isn't a proper term, I just needed to somehow contrast two different aspects of "altruism-as-understood-in-common-language", to differentiate the theoretical concept of fully unselfish concern for the welfare of others from the (IMHO more popular/realistic) concept that's somewhat like "the general habit/desire/concern/action of doing good for others with limited (but still some) selfishness, because of a mix of motivations only part of which is actual altruism".
I mean, IMHO framing "effective altruism" as "true altruism" isn't that inaccurate as far as philosophy and definitions are concerned - my main criticism of EA is that actual altruism (according to a strict definition of altruism) is quite rare, so for most people effective altruism isn't personally relevant because most people (including me) simply aren't truly altruistic; it provides a guide on how to maximize something that most people (myself included) don't really want to maximize, they want to maximize other things which may have some overlap with altruism but diverge from it as you leave commonly accepted charity practices and approach various maximums.
I do concede that it's definitely good according to most value systems (including purely selfish ones) to have everyone else in your society to be a bit more altruistic, everything just works better that way, so facilitating various nudges towards altruism is generally a Good Thing no matter how altruistic you or I personally are.
Thank you for a concrete example. I think you will agree though, that the outcome you describe isn't inevitable or most-likely.
Over time you will likely figure out a good life-work-philanthropy balance. Much of the conversation within EA is about self-care and long-term planning. If giving 70% of your income to charity works for you, that's magnificent. But if you realize you need to give less and take more care of yourself so you don't burn out, that's the appropriate decision.
I gave 50% one year but it didn't work out long term. I've been giving 10% for almost a decade and intend to ramp up to 20% in the future. My wife does 10% too - it works out well for us.
Burnout wasn’t the point. I explicitly disregarded that by putting in the premise that the hypothetical person is not and will not get burned out with that philanthropy schedule.
I could just as well have written that they made a million a year and only donated 5%; that’s beside (my/the) point.
I guess we are all so alienated these days that we find working on Wall Street (or Main Street) in order to indirectly help other people with our money, instead of helping directly, not the least bit weird at all. Or helping people by working as a programmer and donating a lot of your salary to some school instead of just working there as a teacher, helping people directly (to use another example). (Oh, that reminds me. I need to take my vitamin D supplements right about now. I calculated that being in the Sun is not worth my time, so I have to compensate a bit, you see.)
Well, in the end, do you want to avoid feeling weird, or to help people? Both of those are valid desires, but they don't always go together. Sometimes you can find something that does both and that's great, but not everyone can.
Describing non-alienation as “feeling good” (as another commenter did) or “not feeling weird” is a great way to pathologize my observation of how weird it is to work a six-figure, ad/surveillance-optimizing job at Google in order for there to be more malaria nets in Africa. Trust me: I don’t believe that EA is sound in any way (or “effective”, if you like), but in this thread I chose to focus on just one aspect of it, indeed its most bizarre feature, which apparently isn’t bizarre at all to all of the Soylent-drinking life-optimizers out there, so my point has been like, as they say, seeds on barren ground.
Sorry, I was using your own terms from the post I replied to. I didn't mean to sound patronizing (well, only a little bit :)). My point stands though: "non-alienation" and "actually helping" are orthogonal. But not usually treated as such.
You have a point about working in a net-negative occupation. You might be doing more harm overall.
I wonder what alternative you would propose. Presumably, not doing philanthropy is not an alternative, because that would be the most alienating approach.
If it's connection to people you want, why not "Purchase Fuzzies and Utilons Separately"? [0] Give enough to charities that make you not feel alienated, and then donate the rest to charities that are making a greater positive impact on the lives of others.
What alternative? How is that even a question? The answer is obvious: do good directly, with your own mind and hands, not indirectly. That’s the obvious alternative. Maybe not everyone has the opportunity to do that, just like not everyone has the opportunity to get a non-alienating job.
> Give enough to charities that make you not feel alienated, and then donate the rest to charities that are making a greater positive impact on the lives of others.
You see? Both of these things are still indirect do-gooding. Donating to charity? How about being good, doing good? Or are you only able to assess the moral weight of something if you can read about them in some spreadsheet?
I don't see how this is any different. Where does the proposition, "do good directly, with your own mind and hands", lead? Should the first step not be "consider who is most in need"? Should the next stop not be "consider how best to help them"? And should the last step not be "help them in that way"?
It almost seems like you're arguing against examining the problem at all. What would the world look like if everybody just quit their jobs so they could do good "directly"? They'd realize pretty quickly that they need planes and ships to move people and things, and farmers to grow food. Not to mention doctors and chemists to develop and administer medicine. If they were smart about it, they'd end up allocating their own time to the things they were best at, and then liquidate and donate any excess they produce.
If you just follow your heart, in the most basal sense, you will probably do some good and you will likely feel very good about it. Which is great for you, and good for those you help. There's nothing wrong with that. But that approach will never help those afflicted with malaria, because your heart doesn't know about them. Your head has to hear about them. And then your head has to tell you not to fly down there yourself, because if everybody did that then there'd be nobody back here running air traffic control or formulating medicine.
You’ve set up a convenient dichotomy where one, through pure reason alone, arrives at the inevitable conclusion that one should “liquidate and donate any excess they produce”, or else one is merely being driven by pure sentiment/feel-goodiness. For some reason, though, it is only these tunnel-vision engineer types who seem to be sentimentally drawn towards this oh-so-obvious conclusion.
(As an example: a socialist will probably not think that making the most money possible and then giving a lot of it away is the most ethical thing to do.)
And yes, of course my obvious point is that everyone should just quit their jobs and travel to Africa.
I'm just trying to give examples, not set up a dichotomy. There's certainly a very wide range of ways to be charitable. For example, you could be charitable in this discussion by not lampooning those who disagree with you as "tunnel-vision engineer types."
I'd like to hear more specifically how you think people should approach charity. Surely some people actually should travel to Africa? And some should not.
I'm not sure why you're being sarcastic about quitting one's job and traveling to Africa in a conversation about charity. It's not an insane thing to do. The point I was trying to make was that not everybody can do it, and that some people can actually do more good by just being excellent at their current job.
There are many ways to look at the problem we're discussing. One is to think about what the effect on the world would be if people follow one strategy over another.
The strategy of "help those around you" results in rich people who live in rich areas with multi-million-dollar homes helping those that live in merely million-dollar homes (because that's what's around). And if they venture too far geographically, they end up feeling alienated, apparently. Furthermore, these people, rather than wielding the tremendous power of their wealth, do something "with their own minds and hands" - which is presumably serving some soup in a soup kitchen.
My observation is that it's really unfortunate that many people feel the need for a personal connection, and therefore do less good than they could otherwise. My response is to ignore the ill-fitting kluge that is the evolution-installed software I have.
When you know that a $3 donation protects 2 people from malaria for about 3 years, can you really think you can do more good with your hands and mind than to just protect those people with $3 you have?
You can get your warm feeling of having done good through doing something, and then use your money to give to cost-effective charities regardless of how "alienated" that makes you feel.
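The cost claim above reduces to a simple unit rate. A quick back-of-envelope check, taking the commenter's $3 / 2 people / ~3 years figures at face value (real charity-evaluator estimates vary and are updated regularly):

```python
# Back-of-envelope check of the claim: a $3 donation protects
# 2 people from malaria for about 3 years.
donation_usd = 3.00
people_protected = 2
years_protected = 3

# Cost per person-year of protection.
cost_per_person_year = donation_usd / (people_protected * years_protected)
print(f"${cost_per_person_year:.2f} per person-year")  # $0.50 per person-year
```

At that rate, even a modest recurring donation buys a striking amount of protection, which is the force of the commenter's question.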
Perhaps the disconnect here is between fundamentally different ways of measuring "good" or "doing good".
Your argument here and EA arguments in general are based on an axiomatic assumption that good done to anyone is equally valuable, that all people worldwide now (and in some analyses, all hypothetical future people) have an equal claim on your help.
IMHO this axiom does not match the "built-in moral system" of most people. To start with an illustrative example (obviously you can imagine many less extreme comparisons), for most people, the welfare of their child is unquestionably much, much more important than the welfare of some other child across the globe. For most people, saving the life of another child across the globe at the cost of the life of their child would not be a neutral exchange of things of equal value; it would be a horrifically unbalanced "trade". This is completely understandable even for undeniably good people doing lots of good. So this extreme establishes a baseline: the axiom of "saving every life is equally valuable" cannot be accepted by most people (and accepting that axiom is not a requirement for "being good" or "doing good"), there is some difference, and the only question is about the scale of that difference, what factors apply, etc.
And coming from an (incompatible) axiomatic assumption that it's plausible that helping someone in your community can be more valuable than protecting two people with no connection to you, all these other strategies start making some sense.
Looking at this from a Kantian 'moral duty' perspective, some people (perhaps including you) have an implied moral duty to care about everyone, equally. And some people have an implied moral duty to care about their community more than "strangers". Obviously those two approaches are incompatible, but IMHO both are frequently encountered, and I don't believe that "good people" and "people who do lots of good" always subscribe to the first concept of moral duty, there seems to be a lot of good works done based on the latter understanding.
Thank you for a beautiful description of what is likely happening. I think you're right about how many people think about morality.
I humbly try to encourage everyone (if the conversation strays this way) to reflect on the built-in moral system, and how wildly unprepared it is for dealing with the modern world. Given how much the world has changed (I can literally help people across the globe; people are interconnected deeply with others on the other side of the planet through everyday objects they use, etc) it is important to use reason and careful thinking when it comes to morality, not just our gut feelings.
The best writeup of all this comes from Moral Tribes by Joshua Greene, where he uses a photo camera analogy for morality: the automatic setting (gut feelings, great for the usual scenarios evolution prepared us for, like day-to-day courtesy with others), and the manual setting (slow and deliberate reasoning, essential for our complex world: anything to do with society beyond what evolution prepared us for; I'm thinking of technology [mechanical and social] invented after the 1500s).
> do good directly, with your own mind and hands, not indirectly.
How? I live in North America. The people who need the most help in the world don't live in North America. How do I help them directly "with my own mind and hands"?
I could help those in my city/state/country, which isn't unreasonable by any means, and is definitely commendable. But this will leave the global poor in just as bad a state as they are right now. Who will do good directly for them?
As a side note, maybe this is an issue of framing. We've been calling EA charity, but another way to view it is wealth distribution from rich countries to poor countries. I think from that lens, it becomes obvious (to me) that this is not only good, but necessary, because I don't think the current global inequality in wealth is fair at all. Telling people to stop donating to EA charities is effectively telling them to keep the wealth in rich countries, rather than having it flow to poor countries (who really need it).
It's odd that we're proxying competency by way of intent.
Would it make more sense to consider the actual actions of individuals?
If I work on a problem with the intention to solve it, I've done both, and thus (by this essay) am competent for my intent to solve the problem, but also incompetent for my intent to _work_ on solving the problem.
I don't know that I entirely disagree with the author's core points, but I don't think this was a very effective piece. Mostly because I have to take a wild guess at what those core points might be, and then try to tease them out myself.
I exchanged some 20 e-mails with Evan over the course of just 2 hours over an unfortunate bug with Quirk that nearly cost me everything I'd entered into the app up until that point. This was some 18 months ago.
He struck me as absolutely forthcoming and genuine. He fixed the issue and gave me a year's subscription for free for reporting the bug and helping him figure it out.
I really wish Evan wouldn't be so hard on himself. His product actually helped me a lot, and he seemed to me like he was both empathetically capable and morally competent.
I'm glad you recognized that your business model (VC-backed startup) wasn't compatible with your goals. For-profit health has a lot of the same issues, from top to bottom.
But it seems like you built something that did good for some users. I see you've open sourced it, but did you consider pivoting the business model instead, to a non-profit or delivering the app through therapists and health providers?
Therapists have the same incentives, and yet they seem to make a business of it. They are happy when clients don't need them any more.
Entrepreneurs, and not just those doing things for social benefit, often find themselves in a situation where they are doing things but not delivering value. Sometimes it's because they are building something new and don't know if the value is there; sometimes the solution just doesn't work correctly... yet. The problem all entrepreneurs have is knowing when to stop working on a dead end. There's a line between business failure and Theranos-like fraud. Realizing you are crossing it is really hard.
When I click on a website and the first thing I see is a claim that, by using their product, I can build something in 15 minutes, I consider that company socially negative, not socially neutral.
I don't have a problem with the article itself, but this kind of stuff is really annoying. Instead of making me think "Wow!! How??" it just made me think your product is useless garbage and close the page.
I liked the disclaimer/preamble because it really did help me to read it as kinder and less harsh than I would have.
I would like to know what, specifically, led to this insight. Not a generalized "people who..." description, but how Quirk itself was an example of moral incompetence.
i tend to think that the approach here described as "incompetent" isn't necessarily bad as long as you maintain honesty about it.
for instance, it might be more effective to just donate money but there's nothing wrong with wanting to volunteer hands on, as long as you don't fool yourself.
of course from a very, very strict utilitarian perspective, making a somewhat less effective choice is in fact morally evil. but I don't think that viewpoint actually holds up to scrutiny -- few people would agree that volunteering at a food bank rather than donating money is morally equivalent to taking food away from hungry children.
Are you kidding? Habitat builds in the US - local people helping others in their same town to build a house. It's an Amish barn raising for our disconnected atomized individualistic society.
This article is wrong. Moral good comes from moral incompetence as much as competence (as per the author's definitions) because morality is intention, not result.
Fantastic essay. Articulates a thought I've long (in some oblique sense) had but never found the words for.
I hope this term enters the mainstream, because I find that the mere existence of a term for something helps to anchor the concept in society's mind and helps people be mindful of it. Because we have a term for the Dunning-Kruger effect, we know to be aware of it, etc.
Of course only a small selection of society (the sort that like reading and digesting concepts) ever learn these terms, but that slice of society is very high-impact and arguably the most important slice for where such concepts need to take hold.
With the exception of regressive products like drugs and junk food, making a profit from a product or service is a good sign that you are helping people and improving their lives. Otherwise they wouldn't pay you for it. You've hit on the reason why capitalism has successfully raised billions out of poverty over the last century.
So if I gain a monopoly on a nation's water supply, and then charge the population exorbitant but enormously profitable prices for a litre of water, I must be "helping people and improving their lives" because without my activities they'd literally die?
Water certainly can't be in the category of "regressive products".
People paying someone for things they need isn't an indication that the seller is a moral/ethical actor. It's just an indication that they own valuable goods, and ownership has really nothing to do with personal ethics, unless you subscribe to a seriously flawed ethical belief system like Prosperity Theology.
Ok right, read that now. So you’ve added further exceptions, like no monopolies, no cronyism, and even more exceptions could be added, for example no market externalities.
I feel like if we kept going with the exceptions to find a form of theoretical capitalism with no exploitation, only mutual benefit from exchange, we'd end up in a weird, not really capitalist place.
Without competition it's easy to end up in a situation where a monopoly can accrue too much power and exploit its customers or its workforce. With sufficient economic freedom, competition can arise naturally. Unfortunately, when a company becomes very powerful you will start to see behaviors like rent-seeking, regulatory capture, and cronyism. These artificially raise the barriers to entry for plucky startups. This is more a reflection of poor or corrupt governance than of capitalism, though.
One reason may be that there is sort of an existential question hanging at the edge of moral actions, namely does anything really matter? Moral action is the epitome of the belief that at least something matters, and matters a whole lot. And perhaps it is precisely through moral action that we get an experiential understanding of moral value. So, if through 'moral competence' we disengage from moral action, then we may lose the very reason we seek to make a moral difference in the first place.
this was hard to read; nothing of substance. the author is defining terms to attempt some philosophical justification for their pivot and falls flat. good for you, one of many.