Perhaps the disconnect here is between fundamentally different ways of measuring "good" or "doing good".
Your argument here and EA arguments in general are based on an axiomatic assumption that good done to anyone is equally valuable, that all people worldwide now (and in some analyses, all hypothetical future people) have an equal claim on your help.
IMHO this axiom does not match the "built-in moral system" of most people. To start with an illustrative example (obviously you can imagine many less extreme comparisons): for most people, the welfare of their own child is unquestionably much, much more important than the welfare of some other child across the globe. For most people, saving the life of another child across the globe at the cost of the life of their own child would not be a neutral exchange of things of equal value; it would be a horrifically unbalanced "trade". This is completely understandable, even for undeniably good people doing lots of good. This extreme case establishes a baseline: the axiom that "saving every life is equally valuable" cannot be accepted by most people (and accepting that axiom is not a requirement for "being good" or "doing good"). There is some difference in value, and the only questions are about the scale of that difference, what factors apply, etc.
And starting from the (incompatible) axiomatic assumption that helping someone in your community can plausibly be more valuable than protecting two people with no connection to you, all these other strategies start to make some sense.
Looking at this from a Kantian "moral duty" perspective: some people (perhaps including you) have an implied moral duty to care about everyone equally, while other people have an implied moral duty to care about their community more than about "strangers". Obviously those two approaches are incompatible, but IMHO both are frequently encountered, and I don't believe that "good people" and "people who do lots of good" always subscribe to the first concept of moral duty; there seems to be a lot of good work done based on the latter understanding.
Thank you for a beautiful description of what is likely happening. I think you're right about how many people think about morality.
I humbly try to encourage everyone (if the conversation strays this way) to reflect on the built-in moral system and on how wildly unprepared it is for dealing with the modern world. Given how much the world has changed (I can literally help people across the globe; people are deeply interconnected with others on the other side of the planet through the everyday objects they use; etc.), it is important to use reason and careful thinking when it comes to morality, not just our gut feelings.
The best writeup of all this comes from Moral Tribes by Joshua Greene, where he uses a camera analogy for morality: the automatic setting (gut feelings, great for the usual scenarios evolution prepared us for, like day-to-day courtesy with others) and the manual setting (slow and deliberate reasoning, essential for our complex world, for anything to do with society beyond what our evolution prepared us for; I'm thinking of technology, mechanical and social, invented after the 1500s).