<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<!--<?xml-stylesheet type="text/xsl" href="article.xsl"?>-->
<article article-type="research-article" dtd-version="1.2" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id journal-id-type="issn">2977-5930</journal-id>
<journal-title-group>
<journal-title>Free &amp; Equal: A Journal of Ethics and Public Affairs</journal-title>
</journal-title-group>
<issn pub-type="epub">2977-5930</issn>
<publisher>
<publisher-name>Open Library of Humanities</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.16995/fe.18681</article-id>
<article-categories>
<subj-group>
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Impartiality, Anonymity, and Caring Who</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0001-6175-5675</contrib-id>
<name>
<surname>Mu&#241;oz</surname>
<given-names>Daniel</given-names>
</name>
<email>daniel.munoz@unc.edu</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
</contrib-group>
<aff id="aff-1"><label>1</label>Philosophy, University of North Carolina at Chapel Hill</aff>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2025-12-26">
<day>26</day>
<month>12</month>
<year>2025</year>
</pub-date>
<pub-date pub-type="collection">
<year>2025</year>
</pub-date>
<volume>1</volume>
<issue>2</issue>
<fpage>472</fpage>
<lpage>499</lpage>
<permissions>
<copyright-statement>Copyright: &#x00A9; 2025 The Author(s)</copyright-statement>
<copyright-year>2025</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See <uri xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</uri>.</license-p>
</license>
</permissions>
<self-uri xlink:href="https://freeandequaljournal.org/articles/10.16995/fe.18681/"/>
<abstract>
<p>In the last 30 years, a range of powerful arguments have pushed ethics in a utilitarian direction by invoking the principle of Outcome Anonymity, which holds that two outcomes are equally good if they involve the same distribution of welfare, differing only in who is at which level. This principle is often presented as a minimal requirement of impartiality. I argue that it is not. Outcome Anonymity forbids more than partiality: it forbids caring who is who in a welfare distribution. After illustrating this point with four examples, I present a theorem that suggests that it is not Outcome Anonymity but a weaker principle, which May and Sen call simply &#8220;Anonymity,&#8221; that is the essence of impartial concern.</p>
</abstract>
</article-meta>
</front>
<body>
<sec>
<title>I. Introduction</title>
<p>Most of us, most of the time, are partial: we care more about some people just because of who they are. But most ethicists think we should sometimes be impartial. Classical utilitarians take this to the extreme, holding that we should always maximize expected global happiness without giving special weight to ourselves or those close to us. &#8220;Each counts for one,&#8221; as the slogan goes.<xref ref-type="fn" rid="n1">1</xref></p>
<p>I have no objection to impartiality in itself. But I believe it has become a Trojan horse for classical utilitarians. In a range of arguments over the last 30 years, leading ethicists and economists have invoked impartiality in the guise of a principle that Richard Bradley calls:</p>
<p><italic>Outcome Anonymity</italic> If two outcomes <italic>A</italic> and <italic>B</italic> involve the same welfare distribution (that is, the number of people at each level of welfare), and they differ only in <italic>who</italic> is at which level of welfare, then <italic>A</italic> and <italic>B</italic> are equally good.<xref ref-type="fn" rid="n2">2</xref></p>
<p>To illustrate, suppose <italic>A</italic> is good for Alex and bad for Beth, while <italic>B</italic> is the reverse:</p>
<table-wrap id="T1">
<caption>
<p>Table 1: A two-person case</p>
</caption>
<table>
<tbody>
<tr>
<td align="right" valign="top"></td>
<td align="center" valign="top"><italic>A</italic></td>
<td align="center" valign="top"><italic>B</italic></td>
</tr>
<tr>
<td align="right" valign="top">Alex</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">&#8211;1</td>
</tr>
<tr>
<td align="right" valign="top">Beth</td>
<td align="center" valign="top">&#8211;1</td>
<td align="center" valign="top">1</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Because <italic>A</italic> and <italic>B</italic> involve the same welfare distribution, Outcome Anonymity entails that they are equally good. There is also a useful way to visualize this principle, given a matrix like the one above: two outcomes must be equally good if the same numbers appear underneath them, only reordered.</p>
<p>Outcome Anonymity, though only one part of classical utilitarianism, has nevertheless been used to push ethics in a utilitarian direction. Frances Kamm and Iwao Hirose invoke Outcome Anonymity to argue for &#8220;counting the numbers&#8221; when choosing whether to rescue a large or small group from harm.<xref ref-type="fn" rid="n3">3</xref> Jacob Nebel appeals to a minimal version of the principle to argue against the &#8220;Person-Affecting Restriction,&#8221; which holds that <italic>A</italic> is better than <italic>B</italic> only if <italic>A</italic> is better than <italic>B</italic> for some individual person.<xref ref-type="fn" rid="n4">4</xref> John Broome uses Outcome Anonymity to establish his &#8220;same-number addition theorem,&#8221; which holds that <italic>A</italic> is better than <italic>B</italic> &#8220;if and only if it has a greater total of people&#8217;s wellbeing,&#8221; supposing <italic>B</italic> has the same number of people.<xref ref-type="fn" rid="n5">5</xref> Matthew Clark and Theron Pummer use Outcome Anonymity to rebut Larry Temkin&#8217;s &#8220;Disperse Additional Burdens&#8221; principle in favor of &#8220;effective altruism&#8221;&#8212;an approach to charity that, per William MacAskill, obeys Outcome Anonymity by definition.<xref ref-type="fn" rid="n6">6</xref> Zach Barnett uses Outcome Anonymity to argue more generally that a quantity of harm does not matter less if it is diffused across many people.<xref ref-type="fn" rid="n7">7</xref> Finally, Simon Blessenohl uses Outcome Anonymity to construct a dilemma for theories on which individuals may be differently risk-hungry (or risk-averse).<xref ref-type="fn" rid="n8">8</xref></p>
<p>Despite these profound implications, Outcome Anonymity is typically presented not as a controversial utilitarian doctrine, but as a straightforward consequence of impartiality.<xref ref-type="fn" rid="n9">9</xref> Nebel calls the principle a &#8220;requirement of impartiality,&#8221; explaining it as follows:</p>
<disp-quote>
<p>The intuition behind [Outcome Anonymity] is that, from an impartial perspective, we should care equally about each person, and therefore ought to be indifferent between distributions that are permutations of each other&#8230;<xref ref-type="fn" rid="n10">10</xref></p>
</disp-quote>
<p>Kamm implicitly relies on the principle and links it to the &#8220;impartial&#8221; point of view.<xref ref-type="fn" rid="n11">11</xref> Hirose simply calls it &#8220;impartiality.&#8221;<xref ref-type="fn" rid="n12">12</xref> MacAskill calls it &#8220;tentative impartiality.&#8221;<xref ref-type="fn" rid="n13">13</xref> Blessenohl considers it &#8220;such a plausible constraint on an impartial observer&#8221; that he treats it &#8220;as a fixed point,&#8221; at least in certain cases.<xref ref-type="fn" rid="n14">14</xref> Susumu Cato and Ken Oshani describe it as &#8220;a rather uncontroversial property for moral judgments,&#8221; which many regard as a &#8220;basic axiom.&#8221;<xref ref-type="fn" rid="n15">15</xref> Broome, who calls the principle &#8220;Impartiality Between People,&#8221; says &#8220;it is hard to think of any doubts that can plausibly be raised&#8221; against it, so long as &#8220;[e]veryone must count equally.&#8221;<xref ref-type="fn" rid="n16">16</xref></p>
<p>This attitude towards Outcome Anonymity has also been influential in public policy. In a leading policy textbook, Matthew Adler says that Outcome Anonymity &#8220;formalizes the ethical attitude of impartiality,&#8221; giving &#8220;equal weight&#8221; to &#8220;each person&#8217;s well-being.&#8221;<xref ref-type="fn" rid="n17">17</xref> He later calls Outcome Anonymity &#8220;the formal expression of ethical impartiality used in the [social welfare function] literature,&#8221;<xref ref-type="fn" rid="n18">18</xref> remarking that the principle is &#8220;very hard to dispute,&#8221; as it &#8220;crystallizes the ethical commitment to impartiality.&#8221;<xref ref-type="fn" rid="n19">19</xref></p>
<p>But is Outcome Anonymity truly a requirement of impartial concern?</p>
<p>I argue that it is <italic>not</italic>. Some impartial principles entail that permuting who is who can make things better (&#167;&#167;III, IV, VIII). Others entail that permuting who is who can lead to value incomparability (&#167;VI). From these two sorts of counterexamples, I think we should conclude that Outcome Anonymity demands something more than impartiality: in particular, it demands that we not care who is who in a distribution of welfare. Being partial is one way of &#8220;caring who,&#8221; but another way is to care about benefits and burdens to particular people, as some impartial moralists do.</p>
<p>To avoid the counterexamples, we could weaken Outcome Anonymity, as Campbell Brown suggests in an important article.<xref ref-type="fn" rid="n20">20</xref> But this risks making the principle redundant. There is a more minimal principle, originally known as &#8220;Anonymity,&#8221; which says that no person&#8217;s welfare <italic>as such</italic> should have a special influence on which outcomes are better.<xref ref-type="fn" rid="n21">21</xref> I prove that Anonymity entails the relevant weakening of Outcome Anonymity (&#167;IX).</p>
<p>My conclusion is that, fundamentally, impartial concern for individual welfare requires only Anonymity (&#167;X). Arguments that invoke Outcome Anonymity may <italic>seem</italic> like they are tracing out the demands of impartiality, but in fact they are describing what happens to ethics, economics, and public policy if we do not care who is who in any distribution, treating individuals&#8217; welfares as fungible goodies rather than distinct dimensions of value.</p>
<p>There may yet be sound reasons for adopting a more utilitarian social policy. For example, each member of society might rationally prefer a standing policy of &#8220;counting the numbers&#8221; in an emergency&#8212;say, a shortage of hospital beds in a pandemic&#8212;because such a policy has a higher expected value for each of them, given the lower chances of winding up among the few.<xref ref-type="fn" rid="n22">22</xref> A similar argument could justify a policy of providing a bigger total of diffuse benefits rather than a smaller sum of concentrated ones<xref ref-type="fn" rid="n23">23</xref>&#8212;when building a train line, for instance, a city might destroy one neighborhood to shave time off of millions of commutes.</p>
<p>But whatever the merits of the utilitarian approach, it cannot be derived from our ordinary, uncontroversial ideal of impartiality, since this ideal does not by itself entail Outcome Anonymity. Or so I shall argue.</p>
</sec>
<sec>
<title>II. Anonymity and Impartiality</title>
<p>Anonymity and Outcome Anonymity come to ethics from social choice theory, the subfield of economics that studies how individual votes or welfares can be aggregated. Confusingly, <italic>both</italic> principles have standardly gone by the name &#8220;Anonymity,&#8221;<xref ref-type="fn" rid="n24">24</xref> and sometimes the two principles are mixed up or combined in citations.<xref ref-type="fn" rid="n25">25</xref></p>
<p>But these principles are different, and the difference matters.</p>
<p>Anonymity was introduced by Kenneth May<xref ref-type="fn" rid="n26">26</xref> in his axiomatization of Majority Rule, a &#8220;minor classic&#8221; in social choice theory.<xref ref-type="fn" rid="n27">27</xref> The gist of May&#8217;s result is that, when people are voting on a pair of options, Majority Rule is the only rule that always delivers a result, counts every vote positively, and treats both voters and candidates equally. Anonymity is what guarantees the equal treatment of each voter.<xref ref-type="fn" rid="n28">28</xref></p>
<p>The basic idea of Anonymity is that, however much people&#8217;s preference orderings might matter, it shouldn&#8217;t matter <italic>who</italic> has which ordering. To illustrate, imagine an election where each voter reports their preferences on a ballot with their name on the top. Anonymity would say: the outcome of the election cannot change if we merely shuffle the names on the ballots. (So we cannot use a rule that ignores the ballots of people whose names begin with &#8220;T,&#8221; or a rule that gives complete priority to the ballot of a lucky &#8220;dictator.&#8221;) Since the names make no difference, we can just as well erase them&#8212;the voting is, in this way, &#8220;anonymous.&#8221;</p>
<p>There are several ways to formalize Anonymity depending on our choice of framework. Different frameworks use different objects to model individual preferences&#8212;orderings, choice functions, welfare functions, and so forth.<xref ref-type="fn" rid="n29">29</xref> Here, we will use welfare functions, and we will be concerned with ethical value rather than collective choice.</p>
<p>Let <bold>O</bold> be a finite set of outcomes, and let <bold>P</bold> be a finite set of individuals denoted 1, 2, &#8230;, <italic>n</italic>. For any <italic>i</italic> in <bold>P</bold>, we denote by <bold>W</bold><sub><italic>i</italic></sub> a welfare function&#8212;that is, a function from each <italic>x</italic> in <bold>O</bold> to a real number. We then let value relations be given by a reflexive relation &#8831; on <bold>O</bold> that is a function of an <italic>n</italic>-tuple of individual welfare functions (a &#8220;profile&#8221; <bold>U</bold>): in symbols, &#8831; = <italic>g</italic> (<bold>W</bold><sub>1</sub>, <bold>W</bold><sub>2</sub>, &#8230;, <bold>W</bold><italic><sub>n</sub></italic>), where the <italic>i</italic>th argument denotes the welfare function of person <italic>i</italic>. We call &#8831; &#8220;at least as good as&#8221; (in light of <bold>U</bold>). We define betterness (&#8827;) and equal goodness (&#8764;) as follows:</p>
<list list-type="simple">
<list-item><p><italic>x &#8827; y =<sub>df</sub> x &#8831; y</italic> and not <italic>y &#8831; x</italic>.</p></list-item>
<list-item><p><italic>x &#8764; y =<sub>df</sub> x &#8831; y</italic> and <italic>y &#8831; x</italic>.</p></list-item>
</list>
<p>We do <italic>not</italic> assume that &#8831; is transitive (<italic>x &#8831; y &#8831; z</italic> implies <italic>x &#8831; z</italic>) or complete (<italic>x &#8831; y</italic> or <italic>y &#8831; x</italic>).<xref ref-type="fn" rid="n30">30</xref></p>
<p>Now we can state Anonymity precisely:</p>
<p><italic>Anonymity</italic> If <italic>g</italic> maps a certain <italic>n</italic>-tuple of individual welfare functions to &#8831;, then <italic>g</italic> also maps any reordering of that <italic>n</italic>-tuple to &#8831;.</p>
<p>For example, if <italic>g</italic> maps (<bold>W</bold><sub>1</sub>, <bold>W</bold><sub>2</sub>, &#8230;, <bold>W</bold><italic><sub>n</sub></italic>) to &#8831;, then <italic>g</italic> must also map the reversed <italic>n</italic>-tuple to &#8831;&#8212;that is, <italic>g</italic>(<bold>W</bold><italic><sub>n</sub></italic>, <bold>W</bold><italic><sub>n &#8211; 1</sub></italic>, &#8230;, <bold>W</bold><sub>1</sub>) = &#8831;.</p>
<p>This definition is a bit technical, but there is an easy way to visualize Anonymity. Recall our matrix from above, with people in rows, outcomes in columns, and welfares in cells:</p>
<table-wrap id="T2">
<caption>
<p>Table 2: Numbered individuals</p>
</caption>
<table>
<tbody>
<tr>
<td align="right" valign="top"></td>
<td align="center" valign="top"><italic>A</italic></td>
<td align="center" valign="top"><italic>B</italic></td>
</tr>
<tr>
<td align="right" valign="top">Alex (1)</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">&#8211;1</td>
</tr>
<tr>
<td align="right" valign="top">Beth (2)</td>
<td align="center" valign="top">&#8211;1</td>
<td align="center" valign="top">1</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Anonymity states that, whatever the value relation between <italic>A</italic> and <italic>B</italic> in this case, the same relation must hold if we <italic>switch the rows of numbers</italic>, as in the following:</p>
<table-wrap id="T3">
<caption>
<p>Table 3: Permuted rows</p>
</caption>
<table>
<tbody>
<tr>
<td align="right" valign="top"></td>
<td align="center" valign="top"><italic>A</italic></td>
<td align="center" valign="top"><italic>B</italic></td>
</tr>
<tr>
<td align="right" valign="top">Alex (1)</td>
<td align="center" valign="top">&#8211;1</td>
<td align="center" valign="top">1</td>
</tr>
<tr>
<td align="right" valign="top">Beth (2)</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">&#8211;1</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Equivalently, we could have switched the names of Alex and Beth, leaving the columns in place.</p>
<p>Outcome Anonymity, by contrast, has to do with permuting individuals&#8217; welfares in a certain pair of outcomes (rather than permuting welfare functions, which are defined over all outcomes):</p>
<p><italic>Outcome Anonymity</italic> If (<bold>W</bold><sub>1</sub>(<italic>x</italic>), <bold>W</bold><sub>2</sub>(<italic>x</italic>), &#8230;, <bold>W</bold><italic><sub>n</sub></italic>(<italic>x</italic>)) is a reordering of (<bold>W</bold><sub>1</sub>(<italic>y</italic>), <bold>W</bold><sub>2</sub>(<italic>y</italic>), &#8230;, <bold>W</bold><italic><sub>n</sub></italic>(<italic>y</italic>)), then <italic>x</italic> &#8764; <italic>y</italic>.</p>
<p>Thus <italic>A</italic> and <italic>B</italic>, which merely swap Alex&#8217;s welfare for Beth&#8217;s, must be equally good. Clearly, Anonymity and Outcome Anonymity are related. But they are not the same.<xref ref-type="fn" rid="n31">31</xref></p>
<p>Anonymity tells us that no one&#8217;s welfare has a special effect on what&#8217;s at least as good as what. Outcome Anonymity tells us that nobody&#8217;s welfare <italic>in a particular outcome</italic> can have any special effect on that outcome&#8217;s value. Thus we cannot say, &#8220;Alex is at welfare level &#8211;1 in <italic>B</italic>, so her being at level 1 in <italic>A</italic> has a special significance for <italic>A</italic>&#8217;s value.&#8221; Her being at that level in <italic>A</italic> must count the same as anybody else&#8217;s being at that level in <italic>A</italic>&#8212;thus we cannot care <italic>who</italic> is at any particular position of the distribution. Each person in an outcome is just a &#8220;somebody.&#8221;</p>
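<p>The contrast between the two principles can be made concrete. The following is a minimal sketch of my own, not part of the article's formal apparatus: welfare functions are modeled as mappings from outcomes to levels, <monospace>same_distribution</monospace> tests Outcome Anonymity's antecedent (the welfares in one outcome are a reordering of those in the other), and <monospace>is_anonymous</monospace> tests whether a rule <italic>g</italic> is invariant under reordering the profile, here for an illustrative total-welfare rule.</p>

```python
# A minimal sketch (not from the article) contrasting the two principles.
# Welfare functions are dicts from outcomes to welfare levels; a rule g
# maps a profile (an n-tuple of welfare functions) to a betterness test.
from itertools import permutations

W1 = {"A": 1, "B": -1}   # Alex's welfare function
W2 = {"A": -1, "B": 1}   # Beth's welfare function
profile = (W1, W2)

def same_distribution(profile, x, y):
    # Outcome Anonymity's antecedent: the welfares in x are a
    # reordering of the welfares in y.
    return sorted(W[x] for W in profile) == sorted(W[y] for W in profile)

def is_anonymous(g, profile, outcomes):
    # Anonymity: g assigns the same relation to every reordering
    # of the profile of welfare functions.
    base = {(x, y): g(profile)(x, y) for x in outcomes for y in outcomes}
    return all(
        {(x, y): g(p)(x, y) for x in outcomes for y in outcomes} == base
        for p in permutations(profile)
    )

# An illustrative anonymous rule: x is at least as good as y iff
# total welfare in x is at least total welfare in y.
def total_rule(profile):
    return lambda x, y: sum(W[x] for W in profile) >= sum(W[y] for W in profile)

print(same_distribution(profile, "A", "B"))           # True
print(is_anonymous(total_rule, profile, ("A", "B")))  # True
```

<p>On the two-person case, both checks come out true: <italic>A</italic> and <italic>B</italic> share a distribution, so Outcome Anonymity would force <italic>A</italic> &#8764; <italic>B</italic>, and the total-welfare rule satisfies Anonymity since reordering its arguments cannot change the totals.</p>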
<p>One immediate problem for Outcome Anonymity, familiar from social choice theory,<xref ref-type="fn" rid="n32">32</xref> is that it allows us to derive facts about value merely from facts about individual welfare. This is problematic if value can depend on other factors besides welfare. For example, many would say that an outcome is better (all else equal) if it features more intrinsic beauty or fewer violations of rights. Imagine, then, that two outcomes have the same welfare distribution, but one brims with beautiful landscapes while the other teems with rights violations. Outcome Anonymity says the outcomes must be equally good. But isn&#8217;t the pretty one better?</p>
<p>There are two ways around this problem. First, we might make Outcome Anonymity more general, so that it says two outcomes are equally good if the <italic>only</italic> difference between them consists in the identities of those who feature in the outcome (see fn. 12, above). Beauty, rights, and everything else must be held fixed. But this is difficult to make precise, since on many views of personal identity, it is not possible to freely permute identities while holding <italic>everything</italic> else fixed: who I am may depend on where I come from and what I am like.<xref ref-type="fn" rid="n33">33</xref> Second, and more promising, we can just restrict our focus to the value of individual welfare. This becomes easier if we use examples where other dimensions of value are held fixed, as I shall do in this paper.</p>
<p>With all that in place, we can ask our central question. What is required for impartial concern for the welfare of each person: Anonymity, Outcome Anonymity, neither, or both?</p>
<p>Impartiality, I believe, clearly requires Anonymity. Any ethical theory that violates Anonymity must privilege or denigrate somebody&#8217;s welfare in some way. Consider, for example, a theory on which self-interest carries more weight than the interests of strangers,<xref ref-type="fn" rid="n34">34</xref> or consider ethical egoism, on which only one&#8217;s own welfare matters.</p>
<p>What about Outcome Anonymity? The &#8220;intuition&#8221; here is that, &#8220;from an impartial perspective, we should care equally about each person, and therefore ought to be indifferent between distributions that are permutations of each other&#8230;&#8221;<xref ref-type="fn" rid="n35">35</xref> This sounds reasonable. But let&#8217;s now turn to some potential counterexamples, starting with the first kind of case, where permutations make things better.<xref ref-type="fn" rid="n36">36</xref></p>
</sec>
<sec>
<title>III. Better Permutations: Majority Rule</title>
<p>Could there be an impartial rule that violates Outcome Anonymity? As it happens, there is one right under our noses: <italic>Majority Rule</italic>, the principle for which Anonymity was invented.<xref ref-type="fn" rid="n37">37</xref></p>
<p><italic>Majority Rule x</italic> &#8827; <italic>y</italic> if and only if the number of people better off in <italic>x</italic> than in <italic>y</italic> is higher than the number of people better off in <italic>y</italic> than in <italic>x</italic>.</p>
<p>Traditionally, Majority Rule has been conceived as a method for collective decisions. But it can also be viewed as an ethical principle,<xref ref-type="fn" rid="n38">38</xref> and that is how I propose to treat it here&#8212;not because it is especially plausible (it clearly isn&#8217;t), but because its formal properties make it a useful counterexample.</p>
<p>Suppose there are three outcomes <italic>C, D</italic>, and <italic>E</italic> which result in the following distributions of welfare for Cath, David, and Elise:</p>
<table-wrap id="T4">
<caption>
<p>Table 4: A three-person case</p>
</caption>
<table>
<tbody>
<tr>
<td align="right" valign="top"></td>
<td align="center" valign="top"><italic>C</italic></td>
<td align="center" valign="top"><italic>D</italic></td>
<td align="center" valign="top"><italic>E</italic></td>
</tr>
<tr>
<td align="right" valign="top">Cath (1)</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">2</td>
</tr>
<tr>
<td align="right" valign="top">David (2)</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">1</td>
</tr>
<tr>
<td align="right" valign="top">Elise (3)</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">3</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Since these outcomes involve the same distribution of welfare, Outcome Anonymity entails that they must all be equally good. But Majority Rule says that no distinct pair is equally good&#8212;for example, <italic>D</italic> is deemed better than <italic>C</italic>, since <italic>D</italic> is better for both David and Elise.</p>
<p>Again, I concede that Majority Rule for ethics isn&#8217;t plausible. Not only does it ignore the sizes of benefits and losses to each individual: it allows for cycles of betterness. Look again to the example above. Just as a majority prefers <italic>D</italic> to <italic>C</italic>, so does a majority prefer <italic>E</italic> to <italic>D</italic> and <italic>C</italic> to <italic>E</italic>. This result, known as Condorcet&#8217;s Paradox, violates the transitivity of betterness: <italic>C</italic> &#8827; <italic>E</italic> &#8827; <italic>D</italic> &#8827; <italic>C</italic>. Most ethicists find such cycles intolerably implausible.<xref ref-type="fn" rid="n39">39</xref></p>
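<p>The cycle can be verified mechanically. Here is a small sketch, my own illustration rather than the article's, that recomputes the pairwise majority comparisons from Table 4:</p>

```python
# A check (my illustration) that Majority Rule yields the cycle
# C > E > D > C on the welfare profile of Table 4.
welfare = {
    "Cath":  {"C": 3, "D": 1, "E": 2},
    "David": {"C": 2, "D": 3, "E": 1},
    "Elise": {"C": 1, "D": 2, "E": 3},
}

def majority_better(x, y):
    # x > y iff more people are better off in x than in y.
    wins_x = sum(1 for W in welfare.values() if W[x] > W[y])
    wins_y = sum(1 for W in welfare.values() if W[y] > W[x])
    return wins_x > wins_y

# Each step of the Condorcet cycle is a 2-to-1 majority.
print(majority_better("C", "E"))  # True
print(majority_better("E", "D"))  # True
print(majority_better("D", "C"))  # True
```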
<p>But I&#8217;m not saying that Majority Rule is plausible, only that it&#8217;s <italic>impartial</italic>. Since Majority Rule is anonymous, it doesn&#8217;t give anybody&#8217;s welfare special weight: there is no privileged &#8220;dictator,&#8221; nor any &#8220;disenfranchised.&#8221; Majority Rule does, admittedly, dispense with certain bits of information that might plausibly be thought relevant. But the same complaint can be leveled against many other clearly impartial principles, including classical utilitarianism.<xref ref-type="fn" rid="n40">40</xref> Just as our criterion for deductive validity should apply to all arguments, not just the plausible ones, our criteria for impartiality should apply to all principles.</p>
<p>Majority Rule is our first counterexample: it violates Outcome Anonymity without being partial. The second counterexample can be illustrated using the very same case&#8212;just in reverse!</p>
</sec>
<sec>
<title>IV. Better Permutations: Weak Anti-Aggregation</title>
<p>Classical utilitarians always aggregate the welfares of different individuals. But some philosophers don&#8217;t aggregate small benefits (or harms) so as to outweigh large ones. Better to save one life than to cure one billion headaches, they say, and better to save one person from electrocution even if this means denying many others fifteen minutes of amusement.<xref ref-type="fn" rid="n41">41</xref> &#8220;Limited aggregationists&#8221; believe we can aggregate benefits and harms only when they are &#8220;close enough&#8221; in size to the largest benefit or harm with which anyone is faced.</p>
<p>To simplify, let&#8217;s suppose that being &#8220;close enough&#8221; for the purposes of aggregation means being less than one unit of welfare apart. On this view, we can aggregate benefits of size 1 to outweigh a benefit of size 1.5, but not to outweigh a benefit of size 2 or greater. In symbols, we get:</p>
<p><italic>Weak Anti-Aggregation x</italic> &#8827; <italic>y</italic> if, for some <italic>i</italic> in <bold>P</bold>, <bold>W</bold><sub><italic>i</italic></sub>(<italic>x</italic>) &#8211; <bold>W</bold><sub><italic>i</italic></sub>(<italic>y</italic>) &#8805; <bold>W</bold><italic><sub>j</sub></italic>(<italic>y</italic>) &#8211; <bold>W</bold><italic><sub>j</sub></italic>(<italic>x</italic>) + 1 for any <italic>j</italic> in <bold>P</bold>.<xref ref-type="fn" rid="n42">42</xref></p>
<p>Now apply this view to the Condorcet-style example above. Outcome Anonymity says that <italic>C</italic> &#8764; <italic>D</italic> &#8764; <italic>E</italic> &#8764; <italic>C</italic>. But Weak Anti-Aggregation entails that <italic>C</italic> &#8827; <italic>D</italic> &#8827; <italic>E</italic> &#8827; <italic>C</italic>. For Cath, <italic>C</italic> is better than <italic>D</italic> by a margin of 2 units of welfare, while <italic>D</italic> is better for David and Elise by only 1 unit each. Since the difference between the two degrees of benefit is at least 1 unit, we do not aggregate, and Cath&#8217;s bigger benefit wins out. By the same reasoning, David&#8217;s 2-unit benefit makes <italic>D</italic> better than <italic>E</italic>, and Elise&#8217;s 2-unit benefit makes <italic>E</italic> better than <italic>C</italic>.</p>
<p>(Visually, we can imagine the three people rotating right-to-left along an Olympic platform, with the silver medalist on the left, gold medalist in the center, and bronze medalist on the right, with medals corresponding to welfare levels. The rise from bronze to gold outweighs the drops from gold to silver and silver to bronze, given Weak Anti-Aggregation. This is the reverse of Majority Rule, which holds that two drops always outweigh a single rise!)<xref ref-type="fn" rid="n43">43</xref></p>
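<p>The reversed cycle can likewise be checked directly. The sketch below is my own, using the simplifying assumption from above that aggregation is permitted only for benefits less than one unit apart; it implements Weak Anti-Aggregation&#8217;s betterness condition on Table 4:</p>

```python
# A sketch (my own) verifying that Weak Anti-Aggregation yields the
# opposite cycle C > D > E > C on the welfare profile of Table 4.
welfare = {
    "Cath":  {"C": 3, "D": 1, "E": 2},
    "David": {"C": 2, "D": 3, "E": 1},
    "Elise": {"C": 1, "D": 2, "E": 3},
}

def waa_better(x, y):
    # x > y if some individual's gain in x exceeds every individual's
    # gain in y by at least 1 unit of welfare (so aggregation fails).
    return any(
        all(Wi[x] - Wi[y] >= Wj[y] - Wj[x] + 1 for Wj in welfare.values())
        for Wi in welfare.values()
    )

print(waa_better("C", "D"))  # True: Cath's 2-unit gain beats two 1-unit gains
print(waa_better("D", "E"))  # True
print(waa_better("E", "C"))  # True
```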
<p>So Weak Anti-Aggregation conflicts with Outcome Anonymity. And yet it seems perfectly impartial, precisely because it obeys Anonymity. The first to notice this, to my knowledge, was Campbell Brown. As Brown puts it, those who accept a version of Weak Anti-Aggregation &#8220;may instead reject [Outcome Anonymity].&#8221;<xref ref-type="fn" rid="n44">44</xref> He continues:</p>
<disp-quote>
<p>[Outcome Anonymity] is intended to capture the ideal of &#8220;moral equality&#8221;: all individuals&#8217; interests should be given equal weight, and no one should be favoured simply because of who she is. But [Outcome Anonymity] actually requires more than this.<xref ref-type="fn" rid="n45">45</xref></p>
</disp-quote>
<p>As Brown explains, Weak Anti-Aggregation ought to count as impartial: the bigger benefit to Cath matters more &#8220;not because of who she is, but because of her situation.&#8221;<xref ref-type="fn" rid="n46">46</xref> Or, as Nebel would put it, the Weak Anti-Aggregationist &#8220;need not care more&#8221; about Cath, but &#8220;may simply care more about preventing severe harms&#8230;than preventing minor ones.&#8221;<xref ref-type="fn" rid="n47">47</xref></p>
<p>Again, I am not saying that Weak Anti-Aggregation is plausible, or that we should believe in cycles of betterness.<xref ref-type="fn" rid="n48">48</xref> I am just saying that this principle is <italic>impartial</italic>.</p>
<p>To sum up, we have seen two counterexamples to the widespread claim that impartiality requires Outcome Anonymity. Majority Rule and Weak Anti-Aggregation both entail that we can make things better or worse just by permuting who is at which welfare level. And both principles appear to be impartial precisely because they respect Anonymity, which suggests that Anonymity may be not only necessary but sufficient for impartiality.</p>
</sec>
<sec>
<title>V. Outcome Anonymity for Transpositions</title>
<p>Brown suggests a quite different requirement of impartiality: &#8220;transposing&#8221; people in the welfare distribution always results in something equally good.</p>
<p>To generate the last two counterexamples, we had to rotate the three people through the various welfare levels. On each step from <italic>C</italic> to <italic>D</italic> to <italic>E</italic> to <italic>C</italic>, the person at level 3 drops to level 2, the person at level 2 drops to level 1, and the person at level 1 ascends to level 3. It takes three such rotations for people to end up back where they started. By contrast, some permutations, called <italic>transpositions</italic>, return people to their starting positions if performed twice in a row. We saw this earlier in our simple two-person case.</p>
<table-wrap id="T5">
<caption>
<p>Table 5: The two-person case, transposed</p>
</caption>
<table>
<tbody>
<tr>
<td align="right" valign="top"></td>
<td align="center" valign="top"><italic>A</italic></td>
<td align="center" valign="top"><italic>B</italic></td>
</tr>
<tr>
<td align="right" valign="top">Alex (1)</td>
<td align="center" valign="top">&#8211;1</td>
<td align="center" valign="top">1</td>
</tr>
<tr>
<td align="right" valign="top">Beth (2)</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">&#8211;1</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>If we start with <italic>A</italic> and permute who is at 1 and who is at &#8211;1, the result is <italic>B</italic>. Do the same permutation again, and we return to <italic>A</italic>&#8212;so it&#8217;s a transposition.</p>
<p>The interesting thing about transpositions is that losses and gains are perfectly matched. Unlike rotations, a transposition merely swaps the positions of a single pair&#8212;and whatever one member of the pair gains (or loses), the other member loses (or gains).</p>
<p>Brown&#8217;s proposal, then, is that impartiality requires:</p>
<p><italic>Outcome Anonymity for Transpositions</italic> If (<bold>W</bold><sub>1</sub>(<italic>x</italic>), <bold>W</bold><sub>2</sub>(<italic>x</italic>), &#8230;, <bold>W</bold><italic><sub>n</sub></italic>(<italic>x</italic>)) is a transposition of (<bold>W</bold><sub>1</sub>(<italic>y</italic>), <bold>W</bold><sub>2</sub>(<italic>y</italic>), &#8230;, <bold>W</bold><italic><sub>n</sub></italic>(<italic>y</italic>)), then <italic>x</italic> &#8764; <italic>y</italic>.</p>
<p>As Brown notes, Weak Anti-Aggregation is consistent with Outcome Anonymity for Transpositions. So is Majority Rule.</p>
<p>But is Brown&#8217;s principle a requirement of impartiality? To settle the question, let&#8217;s turn to counterexamples of the second kind, which involve transpositions of just two people.</p>
</sec>
<sec>
<title>VI. Incomparable Transpositions: Kamm&#8217;s Argument</title>
<p>We can start with Kamm&#8217;s &#8220;argument for best outcomes,&#8221; which tries to show that &#8220;the numbers count&#8221; when giving benefits to more or fewer people.<xref ref-type="fn" rid="n49">49</xref></p>
<p>Suppose Fumi, Gerry, and Heidi are dying from a rare illness, and there is only one dose left of the drug that can save them. We can give the dose to Fumi, who needs the whole dose to survive, or split it between Gerry and Heidi, who each need only one half. We could also, perversely, give Gerry one half and let Heidi die pointlessly. So, we have three possible outcomes:</p>
<table-wrap id="T6">
<caption>
<p>Table 6: A numbers case</p>
</caption>
<table>
<tbody>
<tr>
<td align="right" valign="top"></td>
<td align="center" valign="top"><italic>F</italic></td>
<td align="center" valign="top"><italic>G</italic></td>
<td align="center" valign="top"><italic>GH</italic></td>
</tr>
<tr>
<td align="right" valign="top">Fumi (1)</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">0</td>
<td align="center" valign="top">0</td>
</tr>
<tr>
<td align="right" valign="top">Gerry (2)</td>
<td align="center" valign="top">0</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">1</td>
</tr>
<tr>
<td align="right" valign="top">Heidi (3)</td>
<td align="center" valign="top">0</td>
<td align="center" valign="top">0</td>
<td align="center" valign="top">1</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Taurek would say that, in such a case, it is not better if both Gerry and Heidi survive than if only Fumi survives.<xref ref-type="fn" rid="n50">50</xref> But most ethicists disagree. They would insist that &#8220;the numbers should count,&#8221; so that two lives outweigh one.<xref ref-type="fn" rid="n51">51</xref> This common view certainly seems intuitive.</p>
<p>Kamm&#8217;s insight is that we can <italic>derive</italic> this intuitive view from deeper premises. In particular, we just need Outcome Anonymity and two other principles:</p>
<p><italic>Strong Pareto</italic> If <italic>x</italic> is better than <italic>y</italic> for someone, and <italic>x</italic> is at least as good as <italic>y</italic> for everyone else, then <italic>x</italic> is better than <italic>y</italic>.</p>
<p><italic>Substitution of Equals</italic> If <italic>x</italic> is better than <italic>y</italic>, and <italic>y</italic> is equal in goodness to <italic>z</italic>, then <italic>x</italic> is better than <italic>z</italic>.</p>
<p>Given Strong Pareto, we can say that it is better if Heidi survives in addition to Gerry: <italic>GH</italic> &#8827; <italic>G</italic>.<xref ref-type="fn" rid="n52">52</xref> Given Outcome Anonymity, we can say that Gerry&#8217;s survival and Fumi&#8217;s are equally good: <italic>F</italic> &#8764; <italic>G</italic>. And given Substitution of Equals, we can conclude that it is better for Gerry and Heidi to survive than only Fumi: <italic>GH</italic> &#8827; <italic>F</italic>. The same reasoning shows that, in general, the numbers count, in the sense that it&#8217;s better if more people receive benefits of a given size.</p>
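<p>The two premises of this derivation can be checked mechanically. The following sketch (illustrative only; the function name is my own label, not Kamm&#8217;s) verifies that <italic>GH</italic> Pareto-dominates <italic>G</italic>, and that <italic>F</italic> and <italic>G</italic> are permutations of one another, which is all Outcome Anonymity needs in order to deliver <italic>F</italic> &#8764; <italic>G</italic>:</p>

```python
# Illustrative sketch: checking the premises of Kamm's argument on
# Table 6's welfare vectors. The function name is a hypothetical label.

def strong_pareto_better(x, y):
    """x is at least as good as y for everyone, and better for someone."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

# Welfare vectors over (Fumi, Gerry, Heidi), as in Table 6.
F, G, GH = [1, 0, 0], [0, 1, 0], [0, 1, 1]

print(strong_pareto_better(GH, G))  # True: Strong Pareto gives GH > G
print(sorted(F) == sorted(G))       # True: F permutes G, so Outcome
                                    # Anonymity gives F ~ G
# Substitution of Equals then yields GH > F: the numbers count.
```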
<p>This argument crucially relies on Outcome Anonymity, which Kamm and others support by appealing to the &#8220;impartial point of view.&#8221; As we have seen (&#167;&#167;III&#8211;IV), impartiality does not in fact require Outcome Anonymity. But perhaps it requires a weaker form of the principle, such as Outcome Anonymity for Transpositions. Even if <italic>some</italic> permutations make things better, we might wonder how merely transposing two people in a distribution&#8212;Fumi and Gerry&#8212;could matter. Certainly, an impartial observer should not care more about Fumi&#8217;s welfare than Gerry&#8217;s, or vice versa. Does it follow that, on an impartial view, <italic>F</italic> and <italic>G</italic> are equally good?</p>
<p>Not necessarily. They might be <italic>incomparable</italic> in value, where this means that neither is better or worse than the other, nor are they equally good.<xref ref-type="fn" rid="n53">53</xref></p>
<p>Incomparability can arise when two dimensions of value conflict and neither determinately takes precedence.<xref ref-type="fn" rid="n54">54</xref> We can illustrate using an example from Chang.<xref ref-type="fn" rid="n55">55</xref></p>
<p>Who is the better artist&#8212;Michelangelo or Mozart? Though neither seems better overall, each is better in some respects: one is a visual genius, the other a brilliant composer of melodies and harmonies. So, it may seem inapt to call the two artists equally good, which would imply that they do not relevantly differ in value. Given this, the only option left is that the two artists are incomparable.</p>
<p>Being incomparable is a bit like being equally good. But equals can be substituted for one another in value relations, and the relation will still hold. We cannot say the same for incomparables. Sometimes, <italic>x</italic> is better than <italic>y</italic>, and <italic>y</italic> and <italic>z</italic> are incomparable, and yet <italic>x</italic> is <italic>not</italic> better than <italic>z</italic>.<xref ref-type="fn" rid="n56">56</xref></p>
<p>Imagine a slightly improved version of Mozart&#8212;call him Mozart+&#8212;who is just like the original, except slightly more creative. Since Mozart+ is better than the original in one way, and at least as good in all others, it seems clear that he is indeed a better artist than the original. But is he better than Michelangelo? Not obviously. If the improvement is small enough, it may not break the tie. Mozart+ may have more advantages over Michelangelo than did the original composer, but Michelangelo remains superior in some respects&#8212;Mozart+ couldn&#8217;t paint the Sistine Chapel! Given that we still have nearly as much conflict between the various competing dimensions of value, the two may be incomparable.</p>
<p>We can apply this same reasoning to Kamm&#8217;s argument. Saving Fumi and saving Gerry might be good in different ways, even if the two people are qualitatively alike, simply because the welfare of one person is not fungible with that of another.<xref ref-type="fn" rid="n57">57</xref> You can reasonably care <italic>who</italic> suffers the loss even if you do not care <italic>more</italic> about either person. That is why, depending on who is saved, you may have different reasonable regrets. (It is not just that you regret that <italic>someone or other</italic> died; you regret that <italic>this person</italic> died.) This suggests that <italic>F</italic> and <italic>G</italic> are not equally good but incomparable, which would immediately block Kamm&#8217;s argument. If <italic>F</italic> and <italic>G</italic> are not equally good, we cannot freely substitute them. In particular, we cannot infer from <italic>GH</italic> &#8827; <italic>G</italic> that <italic>GH</italic> &#8827; <italic>F</italic>. When two things are incomparable, an improvement on one might still be incomparable to the other. And so Taurek could insist that <italic>GH</italic> is incomparable to <italic>F</italic>, even though it is better than <italic>G</italic>.</p>
<p>Importantly, Taurek could say all of this without being <italic>partial</italic>. The point here is not that Fumi matters more than Gerry. The point is that a loss to <italic>any</italic> one person is incomparable in value to any number of equal-sized losses to others. The principle here, perfectly anonymous, is this:</p>
<p><italic>Taurekian Anti-Aggregation</italic> If max<sub><italic>i</italic></sub>(<bold>W</bold><sub><italic>i</italic></sub>(<italic>x</italic>) &#8211; <bold>W</bold><sub><italic>i</italic></sub>(<italic>y</italic>)) = max<sub><italic>i</italic></sub>(<bold>W</bold><sub><italic>i</italic></sub>(<italic>y</italic>) &#8211; <bold>W</bold><sub><italic>i</italic></sub>(<italic>x</italic>)), then <italic>x</italic> and <italic>y</italic> are incomparable in value.</p>
<p>More simply: <italic>x</italic> and <italic>y</italic> are incomparable if the biggest gain anyone receives from <italic>x</italic> over <italic>y</italic> is equal to the biggest gain anyone receives from the reverse. This principle, like Majority Rule, is dubious. But since it obeys Anonymity, we cannot accuse it of playing favorites with any person or group.</p>
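<p>Stated as a condition on welfare vectors, the principle&#8217;s antecedent is easy to compute. A minimal sketch (illustrative only; the function name is mine, not Taurek&#8217;s):</p>

```python
# Illustrative sketch of Taurekian Anti-Aggregation's antecedent:
# x and y are incomparable when the biggest gain anyone receives from
# x over y equals the biggest gain anyone receives from y over x.

def taurek_incomparable(x, y):
    biggest_gain_from_x = max(a - b for a, b in zip(x, y))
    biggest_gain_from_y = max(b - a for a, b in zip(x, y))
    return biggest_gain_from_x == biggest_gain_from_y

# Welfare vectors over (Fumi, Gerry, Heidi), as in Table 6.
F, G, GH = [1, 0, 0], [0, 1, 0], [0, 1, 1]

print(taurek_incomparable(F, G))   # True: Fumi's gain matches Gerry's
print(taurek_incomparable(F, GH))  # True: GH comes out incomparable to F
print(taurek_incomparable(G, GH))  # False: no one gains from G over GH
```

<p>As the last line shows, the principle stays silent exactly where Strong Pareto speaks: <italic>GH</italic> can still be better than <italic>G</italic>.</p>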
</sec>
<sec>
<title>VII. Weakening Outcome Anonymity</title>
<p>We have found a new kind of counterexample. Taurekian Anti-Aggregation is impartial, and yet it conflicts with Outcome Anonymity: changing who is who might result in an <italic>incomparable</italic> option, rather than one that is equally good. This counterexample, moreover, does not require us to rotate three people&#8217;s positions. We need only transpose the welfares of two individuals&#8212;Fumi and Gerry, in our case. So, the example works even against weaker forms of Outcome Anonymity that are restricted to transpositions of a pair of people.</p>
<p>There is, however, another way to weaken Outcome Anonymity to avoid this counterexample. We could say that changing who is who must leave things equally good <italic>or incomparable</italic>. But this will not help with the earlier counterexamples of Majority Rule and Weak Anti-Aggregation, where changing who is who can make things better or worse.</p>
<p>What if we weaken both ways at once? What if transposing people&#8212;that is, swapping out pairs of individuals in the distribution of welfare&#8212;must leave things either equally good or incomparable? The result would be a very weak principle that avoids all of our counterexamples:</p>
<p><italic>Weak Outcome Anonymity for Transpositions</italic> If (<bold>W</bold><sub>1</sub>(<italic>x</italic>), <bold>W</bold><sub>2</sub>(<italic>x</italic>), &#8230;, <bold>W</bold><italic><sub>n</sub></italic>(<italic>x</italic>)) is a transposition of (<bold>W</bold><sub>1</sub>(<italic>y</italic>), <bold>W</bold><sub>2</sub>(<italic>y</italic>), &#8230;, <bold>W</bold><italic><sub>n</sub></italic>(<italic>y</italic>)), then either <italic>x</italic> &#8764; <italic>y</italic> or <italic>x</italic> and <italic>y</italic> are incomparable.</p>
<p>But there is a final counterexample that makes trouble for even this principle.</p>
</sec>
<sec>
<title>VIII. Better Transpositions: Paretian Dependence (on &#8220;Irrelevant Alternatives&#8221;)</title>
<p>I argued that Taurek can block Kamm&#8217;s argument by holding that different lives are incomparable in value, rather than being equally valuable in a sense that implies fungibility.</p>
<p>But there is another way of defending Taurek&#8212;and rejecting Outcome Anonymity&#8212;that does not invoke incomparability. In a choice between saving only Fumi (<italic>F</italic>), only Gerry (<italic>G</italic>), and both Gerry and Heidi (<italic>GH</italic>), Outcome Anonymity tells us that <italic>F</italic> is just as good as <italic>G</italic>. But some say that <italic>G</italic> is <italic>worse</italic> than <italic>F</italic>, because <italic>G</italic> involves letting Heidi die <italic>gratuitously</italic>.<xref ref-type="fn" rid="n58">58</xref> To be clear, the point is not just that <italic>G</italic> is worse than <italic>GH</italic>&#8212;which follows immediately from Strong Pareto. The point is rather that <italic>G</italic> becomes worse when <italic>GH</italic> is on the menu&#8212;and so <italic>G</italic> becomes worse relative to <italic>F</italic>.</p>
<p>This violates the aptly named:</p>
<p><italic>Independence of Irrelevant Alternatives</italic> For any outcomes <italic>x</italic> and <italic>y</italic> and any profiles <bold>U</bold> = (<bold>W</bold><sub>1</sub>, <bold>W</bold><sub>2</sub>, &#8230;, <bold>W</bold><italic><sub>n</sub></italic>) and <bold>U*</bold> = (<bold>W*</bold><sub>1</sub>, <bold>W*</bold><sub>2</sub>, &#8230;, <bold>W*</bold><italic><sub>n</sub></italic>) where &#8831; = <italic>g</italic>(<bold>U</bold>) and &#8831;* = <italic>g</italic>(<bold>U*</bold>), if <bold>W</bold><sub><italic>i</italic></sub>(<italic>x</italic>) = <bold>W*</bold><sub><italic>i</italic></sub>(<italic>x</italic>) and <bold>W</bold><sub><italic>i</italic></sub>(<italic>y</italic>) = <bold>W*</bold><sub><italic>i</italic></sub>(<italic>y</italic>) for <italic>i</italic> = 1, 2, &#8230;, <italic>n</italic>, then <italic>x</italic> &#8831; <italic>y</italic> if and only if <italic>x</italic> &#8831;* <italic>y</italic>.</p>
<p>While this is a mathematical mouthful, the idea behind &#8220;Independence&#8221; is simple: how two outcomes compare in value should only depend on people&#8217;s welfares <italic>in those two outcomes</italic>. We cannot say that <italic>F</italic> is better than <italic>G</italic> merely because of the presence of <italic>GH</italic>&#8212;an &#8220;irrelevant&#8221; alternative.</p>
<p>To be clear, this does not quite say that people&#8217;s welfares must matter in the same way given any pair of outcomes. To establish that, we need a stronger principle:</p>
<p><italic>Strong Neutrality</italic> For any outcomes <italic>w, x, y</italic>, and <italic>z</italic> and any profiles <bold>U</bold> = (<bold>W</bold><sub>1</sub>, <bold>W</bold><sub>2</sub>, &#8230;, <bold>W</bold><italic><sub>n</sub></italic>) and <bold>U*</bold> = (<bold>W*</bold><sub>1</sub>, <bold>W*</bold><sub>2</sub>, &#8230;, <bold>W*</bold><italic><sub>n</sub></italic>) where &#8831; = <italic>g</italic>(<bold>U</bold>) and &#8831;* = <italic>g</italic>(<bold>U*</bold>), if <bold>W</bold><sub><italic>i</italic></sub>(<italic>x</italic>) = <bold>W*</bold><sub><italic>i</italic></sub>(<italic>w</italic>) and <bold>W</bold><sub><italic>i</italic></sub>(<italic>y</italic>) = <bold>W*</bold><sub><italic>i</italic></sub>(<italic>z</italic>) for <italic>i</italic> = 1, 2, &#8230;, <italic>n</italic>, then <italic>x</italic> &#8831; <italic>y</italic> if and only if <italic>w</italic> &#8831;* <italic>z</italic>.<xref ref-type="fn" rid="n59">59</xref></p>
<p>This adds a kind of impartiality towards outcomes, ruling out the possibility that a welfare distribution might matter differently because it occurs in, say, <italic>G</italic> rather than in <italic>F</italic>.</p>
<p>If Taurek gives up Strong Neutrality, he can reject Outcome Anonymity, thereby blocking Kamm&#8217;s argument. This breaks with &#8220;Taurekian Anti-Aggregation,&#8221; as I called it, in favor of a quite different principle:</p>
<p><italic>Paretian Dependence (on &#8220;Irrelevant Alternatives&#8221;)</italic> If (<bold>W</bold><sub>1</sub>(<italic>x</italic>), <bold>W</bold><sub>2</sub>(<italic>x</italic>), &#8230;, <bold>W</bold><italic><sub>n</sub></italic>(<italic>x</italic>)) is a reordering of (<bold>W</bold><sub>1</sub>(<italic>y</italic>), <bold>W</bold><sub>2</sub>(<italic>y</italic>), &#8230;, <bold>W</bold><italic><sub>n</sub></italic>(<italic>y</italic>)), and there is some <italic>z</italic> such that <italic>z</italic> &#8827; <italic>y</italic> by Strong Pareto but no <italic>z</italic> such that <italic>z</italic> &#8827; <italic>x</italic> by Strong Pareto, then <italic>x</italic> &#8827; <italic>y</italic>.</p>
<p>In words, where Outcome Anonymity normally <italic>would</italic> create a tie, one outcome comes out worse if it alone is Pareto-dominated by an &#8220;irrelevant alternative&#8221; in the set of outcomes <bold>O</bold>&#8212;an alternative that is better for someone and at least as good for everyone else.</p>
<p>Taurek&#8217;s critics might reply that Paretian Dependence, which violates Outcome Anonymity (given a rich enough domain), must be partial. But why think that? Paretian Dependence is anonymous. No one is being denigrated or prioritized because of who they are. The principle just deprecates options that are gratuitously bad for someone, no matter the &#8220;someone.&#8221; In our example, <italic>G</italic> is worse than <italic>F</italic> given the presence of <italic>GH</italic>. But this is not an insult to Gerry. If the only alternative to <italic>F</italic> and <italic>G</italic> had been <italic>FH</italic>&#8212;that is, saving Fumi and Heidi&#8212;then <italic>G</italic> would have been better than <italic>F</italic>.</p>
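<p>The menu-dependence in this example can be made explicit with a small sketch (illustrative only; the function names are my own labels, not the article&#8217;s):</p>

```python
# Illustrative sketch: how Paretian Dependence ranks F and G depending
# on which "irrelevant alternative" appears in the option set O.

def pareto_better(x, y):
    """x is at least as good as y for everyone, and better for someone."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def gratuitously_bad(x, menu):
    """x loses to some alternative better for someone, worse for no one."""
    return any(pareto_better(z, x) for z in menu)

# Welfare vectors over (Fumi, Gerry, Heidi).
F, G, GH, FH = [1, 0, 0], [0, 1, 0], [0, 1, 1], [1, 0, 1]

# Menu {F, G, GH}: only G is Pareto-dominated, so F comes out better.
print(gratuitously_bad(G, [F, G, GH]), gratuitously_bad(F, [F, G, GH]))  # True False

# Menu {F, G, FH}: now only F is dominated, so G comes out better.
print(gratuitously_bad(F, [F, G, FH]), gratuitously_bad(G, [F, G, FH]))  # True False
```

<p>Which of <italic>F</italic> and <italic>G</italic> is deprecated depends entirely on the menu, never on who Fumi or Gerry happens to be.</p>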
<p>This is our final counterexample to the inference from impartiality to Outcome Anonymity. The example, moreover, violates even Weak Outcome Anonymity for Transpositions. Merely swapping Fumi and Gerry (a transposition) results in <italic>G</italic> instead of <italic>F</italic>, and <italic>G</italic> is worse, not equal or incomparable, in light of <italic>GH</italic>.</p>
</sec>
<sec>
<title>IX. Why Anonymity is More Fundamental</title>
<p>Could we weaken Outcome Anonymity a third time to deal with Paretian Dependence? This final counterexample relies on a violation of Strong Neutrality. So, a natural option is to add a condition to our doubly weakened principle: <italic>if</italic> Strong Neutrality holds, <italic>then</italic> so does Weak Outcome Anonymity for Transpositions.<xref ref-type="fn" rid="n60">60</xref></p>
<p>I believe this principle is weak enough to be a true requirement of impartiality. But it is too weak to be an <italic>independent</italic> requirement, because we can derive it from Anonymity, along with the minimal assumption that our function <italic>g</italic> has a Universal Domain&#8212;we do not, in other words, rule out any logically possible profiles as inadmissible.</p>
<p><italic>Theorem 1</italic> Given Strong Neutrality and Universal Domain, Anonymity implies Weak Outcome Anonymity for Transpositions. (See the Appendix for a proof.)</p>
<p>Outcome Anonymity, sufficiently weakened, turns out to be a shadow cast by Anonymity.</p>
<p>In light of this result, Anonymity seems to be the more fundamental requirement of impartiality, especially because Theorem 1&#8217;s converse does not hold (see Theorem 2 in the Appendix).</p>
<p>But this is just to say that Anonymity is the more basic necessary condition for impartiality. Is it sufficient? I don&#8217;t think Anonymity suffices for <italic>all</italic> kinds of impartiality. For example, Anonymity has little to do with the sort of &#8220;impartial justice&#8221; we expect from judges, who must treat like claims alike in the course of legal reasoning. Maybe legal impartiality is governed by an analogue of Anonymity. (E.g., no one&#8217;s claims should be accorded special weight.) But here I am endorsing only a more limited thesis: Anonymity guarantees impartial concern <italic>for individual welfare</italic>.</p>
<p>Nevertheless, as one reviewer urges, Anonymity may still be &#8220;much too weak&#8221; to support even this limited species of impartiality, since it does not say anything about &#8220;indifference to race.&#8221; Anonymity is defined relative to a framework that does not operate on information about race, gender, or other sorts of group characteristics. Don&#8217;t we need to add more to rule out bigotry?</p>
<p>Surprisingly, we might not: even by itself, Anonymity can block a racist ordering. To illustrate, suppose Theory R is like classical utilitarianism, except that it gives White people&#8217;s welfare double the weight of Black people&#8217;s welfare. Now consider a profile featuring one White and one Black person:</p>
<table-wrap id="T7">
<caption>
<p>Table 7: A case of bias</p>
</caption>
<table>
<tbody>
<tr>
<td align="right" valign="top"></td>
<td align="center" valign="top"><italic>A</italic></td>
<td align="center" valign="top"><italic>B</italic></td>
</tr>
<tr>
<td align="right" valign="top">White (1)</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">2</td>
</tr>
<tr>
<td align="right" valign="top">Black (2)</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">3</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Theory R says that <italic>A</italic> is better than <italic>B</italic>, since 8 &gt; 7. But if we permute the rows (or the names in front of them), Theory R says the opposite: suddenly <italic>B</italic> is better than <italic>A</italic>&#8212;again, 8 &gt; 7. The racist Theory R thus violates Anonymity, which therefore manages to rule out a racist judgment without having to mention race explicitly.<xref ref-type="fn" rid="n61">61</xref></p>
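<p>The arithmetic behind Theory R&#8217;s reversal can be laid out in a minimal sketch (illustrative only; the weights and function name are my own rendering of the example):</p>

```python
# Illustrative sketch: Theory R weights White welfare at 2 and Black
# welfare at 1, then sums. Anonymity requires the ranking of A and B
# to survive permuting who occupies which row of Table 7.

def theory_r_value(rows):
    weight = {"White": 2, "Black": 1}
    return sum(weight[group] * welfare for group, welfare in rows)

# Table 7's profile: outcomes A and B.
A = [("White", 3), ("Black", 2)]
B = [("White", 2), ("Black", 3)]
print(theory_r_value(A), theory_r_value(B))  # 8 7: A ranked better

# Permute the rows: each person now has the other's welfare levels.
A_swapped = [("White", 2), ("Black", 3)]
B_swapped = [("White", 3), ("Black", 2)]
print(theory_r_value(A_swapped), theory_r_value(B_swapped))  # 7 8: now B ranked better
```

<p>The ranking flips under a mere permutation of persons, so Theory R fails Anonymity.</p>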
<p>There may be other cases where Anonymity fails to guarantee impartial concern for welfare. I myself am inclined to think that Anonymity guarantees such impartiality even in more complex cases, including ones with infinite populations. But I won&#8217;t insist on that here. The important point for this paper is that Anonymity is <italic>necessary</italic> for impartiality, which is the kernel of truth in the too-strong claim that impartiality requires Outcome Anonymity.</p>
</sec>
<sec>
<title>X. Conclusion</title>
<p>The &#8220;intuition behind [Outcome Anonymity],&#8221; to quote Nebel once more, &#8220;is that, from an impartial perspective, we should care equally about each person, and therefore ought to be indifferent between distributions that are permutations of each other&#8230;&#8221;<xref ref-type="fn" rid="n62">62</xref> I have argued that this intuition has less force than many have thought.</p>
<p>In its usual form, Outcome Anonymity is too strong to be a requirement of impartiality: we can care about people&#8217;s welfares equally even if we think that some permutations are incomparable (as in the case of Taurekian Anti-Aggregation), or that some are better than others (as in the cases of Majority Rule, Weak Anti-Aggregation, and Paretian Dependence). Suitably weakened, however, Outcome Anonymity follows from little more than Anonymity, which thus has a claim to being the more fundamental requirement of impartiality.</p>
<p>To be impartial, we cannot care more about one person than another. But even an impartial moralist can care who is who in a welfare distribution. The majoritarian can ask, &#8220;How many people are better off in the alternative?&#8221; The weak anti-aggregationist can ask, &#8220;Does anyone suffer a loss greater than anybody&#8217;s gain?&#8221; The Taurekians can ask, &#8220;Does the alternative save a bigger set of people or a superset?&#8221; These theorists do not just care how many people are at which welfare level. They care <italic>who</italic> is at each level, because they think a person&#8217;s welfare level in an outcome has a different moral impact depending on how well off that person is in the alternative. They do not just care that somebody is at level 1. They care that <italic>somebody is at level 1 here who is at level 0 there</italic>.</p>
<p>Such &#8220;caring who&#8221; is what we lose sight of when we replace Anonymity with Outcome Anonymity; it is what distinguishes our concepts of benefit and complaint from those of absolute weal and woe;<xref ref-type="fn" rid="n63">63</xref> and it is part of what it means to care about persons&#8212;not as containers of welfare or abstract statistics, but as separate persons, who stand to gain or lose.</p>
<p>As I conceded at the start, there may well be strong reasons to take a more utilitarian approach to social policy, even beyond the reasons in favor of utilitarianism itself. Perhaps, in the end, we should &#8220;count the numbers,&#8221; sum up utilities, ignore individual risk profiles (Blessenohl, &#8220;Risk Attitudes&#8221;), and remain indifferent to the concentration or diffusion of harm. Perhaps we should not worry about how people are &#8220;affected&#8221; and should merely focus on total utility. I am happy to concede that such policies may enjoy strong support from various sources.</p>
<p>But I do <italic>not</italic> concede that we should implement these policies on the grounds that they are demanded by <italic>impartiality</italic>. The mere invocation of impartiality, in this context, has no force. Views that satisfy Anonymity, though not Outcome Anonymity, are perfectly impartial in the familiar and uncontroversial sense that they do not give priority to any individual&#8217;s welfare simply on the grounds of who that individual is.</p>
<p>There is nothing wrong if the utilitarians and sympathizers wish to develop their own, more demanding conception of impartiality, enriched with plausible though controversial assumptions (see the Appendix on &#8220;formal welfarism&#8221;). The trouble only starts when a utilitarian concoction is passed off as unadulterated common sense.</p>
</sec>
</body>
<back>
<fn-group>
<fn id="n1"><p>See Derek Parfit, &#8220;Innumerate Ethics,&#8221; <italic>Philosophy &amp; Public Affairs</italic> 7, no. 4 (1978): 301 and Derek Parfit, &#8220;Justifiability to Each Person,&#8221; <italic>Ratio</italic> 16, no. 4 (2003): 378, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1046/j.1467-9329.2003.00229.x">https://doi.org/10.1046/j.1467-9329.2003.00229.x</ext-link>, echoing Jeremy Bentham, <italic>The Works of Jeremy Bentham</italic>, ed. John Bowring (William Tait, 1838), 7:334.</p></fn>
<fn id="n2"><p>Richard Bradley, &#8220;Impartial Evaluation under Ambiguity,&#8221; <italic>Ethics</italic> 132, no. 3 (2022): 548, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1086/718081">http://doi.org/10.1086/718081</ext-link>.</p></fn>
<fn id="n3"><p>Frances Kamm, <italic>Morality, Mortality Volume I: Death and Whom to Save from It</italic> (Oxford University Press, 1993), 83, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1093/0195119118.001.0001">http://doi.org/10.1093/0195119118.001.0001</ext-link>; Frances Kamm, &#8220;Aggregation and Two Moral Methods,&#8221; <italic>Utilitas</italic> 17, no. 1 (2005): 4, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1017/S0953820804001372">http://doi.org/10.1017/S0953820804001372</ext-link>; Iwao Hirose, &#8220;Saving the Greater Number Without Combining Claims,&#8221; <italic>Analysis</italic> 61, no. 4 (2001): 341, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1111/1467-8284.00318">http://doi.org/10.1111/1467-8284.00318</ext-link>.</p></fn>
<fn id="n4"><p>Jacob Nebel, &#8220;A Fixed-Population Problem for the Person-Affecting Restriction,&#8221; <italic>Philosophical Studies</italic> 177, no. 9 (2020): 2779&#8211;2787, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1007/s11098-019-01338-5">http://doi.org/10.1007/s11098-019-01338-5</ext-link>. See also Jamie Dreier, &#8220;Blessed Lives, Bright Prospects, and Incomplete Orderings,&#8221; in <italic>Oxford Studies in Normative Ethics, Volume 12</italic>, ed. Mark Timmons (Oxford University Press, 2022), 105&#8211;126, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1093/oso/9780192868886.003.0006">http://doi.org/10.1093/oso/9780192868886.003.0006</ext-link>.</p></fn>
<fn id="n5"><p>John Broome, <italic>Weighing Lives</italic> (Oxford University Press, 2004), 136, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1093/019924376X.001.0001">http://doi.org/10.1093/019924376X.001.0001</ext-link>. See also Johan Gustafsson, &#8220;Utilitarianism without Moral Aggregation,&#8221; <italic>Canadian Journal of Philosophy</italic> 51, no. 4 (2021): 256&#8211;269, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1017/can.2021.20">http://doi.org/10.1017/can.2021.20</ext-link>.</p></fn>
<fn id="n6"><p>Matthew Clark and Theron Pummer, &#8220;Each-We Dilemmas and Effective Altruism,&#8221; <italic>Journal of Practical Ethics</italic> 7, no. 1 (2019): 24&#8211;32, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.jpe.ox.ac.uk/papers/each-we-dilemmas-and-effective-altruism/">https://www.jpe.ox.ac.uk/papers/each-we-dilemmas-and-effective-altruism/</ext-link>; Larry Temkin, &#8220;Being Good in a World of Need: Some Empirical Worries and an Uncomfortable Philosophical Possibility,&#8221; <italic>Journal of Practical Ethics</italic> 7, no. 1 (2019): 1&#8211;24, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.jpe.ox.ac.uk/papers/being-good-in-a-world-of-need-some-empirical-worries-and-an-uncomfortable-philosophical-possibility/">https://www.jpe.ox.ac.uk/papers/being-good-in-a-world-of-need-some-empirical-worries-and-an-uncomfortable-philosophical-possibility/</ext-link>; William MacAskill, &#8220;Aid Scepticism and Effective Altruism,&#8221; <italic>Journal of Practical Ethics</italic> 7, no. 1 (2019): 56, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.jpe.ox.ac.uk/papers/aid-scepticism-and-effective-altruism/">https://www.jpe.ox.ac.uk/papers/aid-scepticism-and-effective-altruism/</ext-link>; William MacAskill, &#8220;The Definition of Effective Altruism,&#8221; in <italic>Effective Altruism: Philosophical Issues</italic>, eds. H. Greaves and T. Pummer (Oxford University Press, 2019), 14, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1093/oso/9780198841364.003.0001">http://doi.org/10.1093/oso/9780198841364.003.0001</ext-link>.</p></fn>
<fn id="n7"><p>Zach Barnett, &#8220;Diffuse Harm and Fortuna&#8217;s Wheel&#8221; (unpublished manuscript).</p></fn>
<fn id="n8"><p>Simon Blessenohl, &#8220;Risk Attitudes and Social Choice,&#8221; <italic>Ethics</italic> 130, no. 4 (2020): 485&#8211;513, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1086/708011">http://doi.org/10.1086/708011</ext-link>.</p></fn>
<fn id="n9"><p>There are exceptions. Tomi Francis, &#8220;Anonymity and Non-Identity Cases,&#8221; <italic>Analysis</italic> 81, no. 4 (2021): 632, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1093/analys/anab031">http://doi.org/10.1093/analys/anab031</ext-link> calls Outcome Anonymity &#8220;substantive and powerful,&#8221; though &#8220;uncontroversial&#8221; in fixed-population cases (like all cases discussed below). See also the defense of Temkin, &#8220;Being Good&#8221; in David O&#8217;Brien, &#8220;Review of Larry S. Temkin, <italic>Being Good in a World of Need</italic>,&#8221; <italic>Notre Dame Philosophical Reviews</italic> (2025), <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://ndpr.nd.edu/reviews/being-good-in-a-world-of-need/">https://ndpr.nd.edu/reviews/being-good-in-a-world-of-need/</ext-link> and the following footnote on Nebel and Brown.</p></fn>
<fn id="n10"><p>Nebel, &#8220;Fixed-Population Problem,&#8221; 2783. Here Nebel is talking about a &#8220;minimal&#8221; version of Outcome Anonymity, which applies only in &#8220;fixed population cases in which no one would be better off than anyone else&#8221;&#8212;though he is &#8220;inclined&#8221; to accept Outcome Anonymity in general (ibid., note 6). In a recent paper, however, Nebel draws on Brown to argue that impartiality doesn&#8217;t require Outcome Anonymity (or rather, doesn&#8217;t require a choice-functional analogue). Jacob Nebel, &#8220;A Choice-Functional Characterization of Welfarism,&#8221; <italic>Journal of Economic Theory</italic> 222 (2024): 1&#8211;13, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1016/j.jet.2024.105918">http://doi.org/10.1016/j.jet.2024.105918</ext-link>; Campbell Brown, &#8220;Is Close Enough Good Enough?&#8221; <italic>Economics &amp; Philosophy</italic> 36, no. 1 (2020): 29&#8211;59, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1017/S0266267119000099">http://doi.org/10.1017/S0266267119000099</ext-link>; see &#167;3, below.</p></fn>
<fn id="n11"><p>Kamm, <italic>Morality, Mortality</italic>, 83&#8211;4; Kamm, &#8220;Aggregation,&#8221; 4.</p></fn>
<fn id="n12"><p>Hirose, &#8220;Saving the Greater Number,&#8221; 341; Iwao Hirose, <italic>Moral Aggregation</italic> (Oxford University Press, 2014), 162, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1093/acprof:oso/9780199933686.001.0001">https://doi.org/10.1093/acprof:oso/9780199933686.001.0001</ext-link>. Hirose also calls Outcome Anonymity &#8220;symmetry.&#8221; Iwao Hirose, &#8220;Aggregation and Numbers,&#8221; <italic>Utilitas</italic> 16, no. 1 (2004): 68, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1017/S0953820803001067">http://doi.org/10.1017/S0953820803001067</ext-link>. Elsewhere, he calls it &#8220;impartiality*,&#8221; reserving &#8220;impartiality&#8221; for a weaker principle, which says that <italic>A</italic> and <italic>B</italic> are &#8220;morally indifferent&#8221; if they differ only with respect to &#8220;the identities of people.&#8221; Hirose, <italic>Moral Aggregation</italic>, 36. Since this principle requires holding fixed <italic>all</italic> other factors (not just welfare distributions), Hirose remarks that he &#8220;cannot think of any philosophers who object&#8221; to it. Ibid., 37.</p></fn>
<fn id="n13"><p>MacAskill, &#8220;Definition of Effective Altruism.&#8221;</p></fn>
<fn id="n14"><p>Blessenohl, &#8220;Risk Attitudes,&#8221; 494.</p></fn>
<fn id="n15"><p>Susumu Cato and Ken Oshitani, &#8220;Positional Conditional Egalitarianism,&#8221; <italic>Inquiry</italic> (forthcoming): 6, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1080/0020174X.2025.2469255">https://doi.org/10.1080/0020174X.2025.2469255</ext-link>.</p></fn>
<fn id="n16"><p>Broome, <italic>Weighing Lives</italic>, 135.</p></fn>
<fn id="n17"><p>Matthew Adler, <italic>Measuring Social Welfare: An Introduction</italic> (Oxford University Press, 2019), 21, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1093/oso/9780190643027.001.0001">https://doi.org/10.1093/oso/9780190643027.001.0001</ext-link>.</p></fn>
<fn id="n18"><p>Adler, <italic>Measuring Social Welfare</italic>, 45, note 2.</p></fn>
<fn id="n19"><p>Adler, <italic>Measuring Social Welfare</italic>, 97. Outcome Anonymity is also considered &#8220;a basic fairness norm&#8221; by many welfare economists, such as Geir Asheim, Wolfgang Buchholz, and Bertil Tungodden, &#8220;Justifying Sustainability,&#8221; <italic>Journal of Environmental Economics and Management</italic> 41, no. 3 (2001): 255, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1006/jeem.2000.1137">http://doi.org/10.1006/jeem.2000.1137</ext-link>. They write that &#8220;[i]nvoking impartiality in this way is the cornerstone of ethical social choice theory reaching far beyond intergenerational comparisons.&#8221; Ibid., note 6.</p></fn>
<fn id="n20"><p>Brown, &#8220;Is Close Enough Good Enough?&#8221;</p></fn>
<fn id="n21"><p>Kenneth May, &#8220;A Set of Independent Necessary and Sufficient Conditions for Simple Majority Decision,&#8221; <italic>Econometrica</italic> 20 (1952): 680&#8211;84, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.2307/1907651">https://doi.org/10.2307/1907651</ext-link>; Peter Hammond, &#8220;Equity, Arrow&#8217;s Conditions, and Rawls&#8217; Difference Principle,&#8221; <italic>Econometrica</italic> 44, no. 4 (1976): 793&#8211;804, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.2307/1913445">https://doi.org/10.2307/1913445</ext-link>; Amartya Sen, &#8220;On Weights and Measures: Informational Constraints in Social Welfare Analysis,&#8221; <italic>Econometrica</italic> 45, no. 7 (1977): 1539&#8211;1572, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.2307/1913949">https://doi.org/10.2307/1913949</ext-link>; Amartya Sen, <italic>Collective Choice and Social Welfare: An Expanded Edition</italic> (Harvard University Press, 2017), <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.2307/j.ctv2sp3dqx">http://doi.org/10.2307/j.ctv2sp3dqx</ext-link>.</p></fn>
<fn id="n22"><p>John Taurek, &#8220;Should the Numbers Count?&#8221; <italic>Philosophy &amp; Public Affairs</italic> 6, no. 4 (1977): 312.</p></fn>
<fn id="n23"><p>Thomas Schelling, <italic>Choice and Consequence</italic> (Harvard University Press, 1984), 10.</p></fn>
<fn id="n24"><p>By the 1970s, Anonymity was &#8220;well known&#8221; by that name (Claude d&#8217;Aspremont and Louis Gevers, &#8220;Equity and the Informational Basis of Collective Choice,&#8221; <italic>Review of Economic Studies</italic> 44, no. 2 (1977): 202, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.2307/2297061">https://doi.org/10.2307/2297061</ext-link>), while Outcome Anonymity was known as &#8220;Suppes Indifference&#8221; (Sen, &#8220;Weights and Measures,&#8221; 1554; Peter Hammond, &#8220;Equity in Two-Person Situations: Some Consequences,&#8221; <italic>Econometrica</italic> 47, no. 5 [1979]: 1127&#8211;1135, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.2307/1911953">https://doi.org/10.2307/1911953</ext-link>), or &#8220;Condition S&#8221; (Hammond, &#8220;Equity, Arrow&#8217;s Conditions, and Rawls&#8217; Difference Principle,&#8221; 797), a reference to Patrick Suppes, &#8220;Some Formal Models of Grading Principles,&#8221; <italic>Synthese</italic> 16, no. 3/4 (1966): 284&#8211;306, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1007/BF00485084">http://doi.org/10.1007/BF00485084</ext-link>. More recently, &#8220;Anonymity&#8221; has become a standard name for Outcome Anonymity (Hirose, <italic>Moral Aggregation</italic>, 37; John Weymark, &#8220;Social Welfare Functions,&#8221; in <italic>The Oxford Handbook of Well-Being and Public Policy</italic>, eds.
Matthew Adler and Mark Fleurbaey [Oxford University Press, 2016], &#167;5.11, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1093/oxfordhb/9780199325818.013.5">http://doi.org/10.1093/oxfordhb/9780199325818.013.5</ext-link>; Gustafsson, &#8220;Utilitarianism,&#8221; 258), though often there are variations, such as Blessenohl&#8217;s (&#8220;Risk Attitudes&#8221;) &#8220;Constant Anonymity.&#8221; See also Luc Van Liedekerke, &#8220;Should Utilitarians be Cautious About an Infinite Future?&#8221; <italic>Australasian Journal of Philosophy</italic> 73, no. 3 (1995): 405&#8211;407, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1080/00048409512346741">http://doi.org/10.1080/00048409512346741</ext-link> and David McCarthy et al., &#8220;Utilitarianism with and without Expected Utility,&#8221; <italic>Journal of Mathematical Economics</italic> 87 (2020): 77&#8211;113, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1016/j.jmateco.2020.01.001">http://doi.org/10.1016/j.jmateco.2020.01.001</ext-link>. (My thanks to Jake Nebel for helpful comments on these points.)</p></fn>
<fn id="n25"><p>For example, Jamie Dreier, &#8220;Blessed Lives,&#8221; 112, note 11 writes that Outcome Anonymity (which he calls &#8220;neutrality for distributions&#8221;) is known as &#8220;Anonymity&#8221; in social choice theory, citing Christian List, &#8220;Social Choice Theory,&#8221; in <italic>The Stanford Encyclopedia of Philosophy</italic> (Winter 2022 Edition), Edward N. Zalta &amp; Uri Nodelman (eds.), <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://plato.stanford.edu/archives/win2022/entries/social-choice/">https://plato.stanford.edu/archives/win2022/entries/social-choice/</ext-link>. But List calls Outcome Anonymity &#8220;optionwise anonymity&#8221; (following Robert Goodin and Christian List, &#8220;A Conditional Defense of Plurality Rule: Generalizing May&#8217;s Theorem in a Restricted Informational Environment,&#8221; <italic>American Journal of Political Science</italic> 50, no. 4 [2006]: 940&#8211;949, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1111/j.1540-5907.2006.00225.x">http://doi.org/10.1111/j.1540-5907.2006.00225.x</ext-link>), reserving &#8220;anonymity&#8221; for May&#8217;s axiom. Asheim et al. say that an intergenerational version of Outcome Anonymity is &#8220;sometimes also called <italic>weak anonymity</italic>&#8221; (Asheim et al., &#8220;Justifying Sustainability,&#8221; 255, emphasis original), citing, among others, Sen, <italic>Collective Choice</italic>, ch. 5. But in that reference, Sen discusses only May&#8217;s Anonymity, not Outcome Anonymity. The same is true for Gustafsson&#8217;s (&#8220;Utilitarianism,&#8221; 258) citation of Sen (Amartya Sen, &#8220;Informational Bases of Alternative Welfare Approaches,&#8221; <italic>Journal of Public Economics</italic> 3, no.
4 (1974): 391, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1016/0047-2727(74)90006-1">https://doi.org/10.1016/0047-2727(74)90006-1</ext-link>). Finally, Basu and Mitra (Kaushik Basu and Tapan Mitra, &#8220;Aggregating Infinite Utility Streams with Intergenerational Equity: The Impossibility of Being Paretian,&#8221; <italic>Econometrica</italic> 71, no. 5 (2003): 1559, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1111/1468-0262.00458">https://doi.org/10.1111/1468-0262.00458</ext-link>) dub a version of Outcome Anonymity the &#8220;Anonymity Axiom&#8221; and remark that the axiom &#8220;is stated&#8221; differently (i.e. as Anonymity) in the social choice theory literature, citing May, &#8220;A Set of Necessary and Sufficient Conditions&#8221; and Sen, &#8220;Weights and Measures.&#8221;</p></fn>
<fn id="n26"><p>May, &#8220;A Set of Necessary and Sufficient Conditions.&#8221;</p></fn>
<fn id="n27"><p>Brian Barry and Russell Hardin, <italic>Rational Man and Irrational Society?: An Introduction and Sourcebook</italic> (Sage Publications, 1982), 298.</p></fn>
<fn id="n28"><p>For a full statement of the axioms, see May, &#8220;A Set of Necessary and Sufficient Conditions&#8221; or Goodin and List, &#8220;A Conditional Defense,&#8221; 943.</p></fn>
<fn id="n29"><p>Sen, &#8220;Informational Bases,&#8221; 389, 391 distinguishes two versions of Anonymity depending on whether individual preferences are modeled as orderings or welfare functions.</p></fn>
<fn id="n30"><p>Because our function <italic>g</italic> has a domain of multiple profiles, this counts as a &#8220;multi-profile&#8221; framework. But as Harvey Lederman and Jake Nebel have suggested to me, it might be more natural in this paper to use a &#8220;single-profile&#8221; framework, where we take as given a set of welfare distributions and ask how these relate in value. (That is to say, instead of ranking outcomes in accordance with how welfare is distributed in them, we could just rank the distributions directly.) While I feel the pull towards this approach, I stick with the multi-profile framework to emphasize the historical continuity with May&#8217;s original Anonymity axiom. (In a single-profile framework, Anonymity itself is trivial, but an analogous principle of impartiality is &#8220;permutation invariance,&#8221; which holds that two pairs of outcomes must be similarly related if their distributions are related by a common permutation of individuals. Jacob Nebel, &#8220;Infinite Ethics and the Limits of Impartiality,&#8221; <italic>No&#251;s</italic> [forthcoming]: 6, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1111/nous.70010">http://doi.org/10.1111/nous.70010</ext-link>). Note that some authors I cite&#8212;such as Broome, <italic>Weighing Lives</italic>, Nebel, &#8220;Fixed-Population Problem,&#8221; and Bradley, &#8220;Impartial Evaluation&#8221;&#8212;work in a single-profile framework, so when they invoke Outcome Anonymity it is technically <italic>distributions</italic>, not <italic>outcomes</italic>, that are at issue.</p></fn>
<fn id="n31"><p>Goodin and List, &#8220;A Conditional Defense,&#8221; 946.</p></fn>
<fn id="n32"><p>See, for example, Sen, &#8220;Weights and Measures,&#8221; 1559&#8211;62 on the restrictiveness of &#8220;welfarism.&#8221; See also the discussion of Strong Neutrality in &#167;VIII, below.</p></fn>
<fn id="n33"><p>For example, on Kripke&#8217;s view, you essentially come from a certain pair of gametes. Saul Kripke, <italic>Naming and Necessity</italic> (Harvard University Press, 1980).</p></fn>
<fn id="n34"><p>Derek Parfit, <italic>On What Matters, Vol. 1</italic> (Oxford University Press, 2011), <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1093/acprof:osobl/9780199572809.001.0001">https://doi.org/10.1093/acprof:osobl/9780199572809.001.0001</ext-link>.</p></fn>
<fn id="n35"><p>Nebel, &#8220;Fixed-Population Problem,&#8221; 2783.</p></fn>
<fn id="n36"><p>Though my focus is on finite cases, here is a bonus infinitary counterexample. Amanda Askell, &#8220;Pareto Principles in Infinite Ethics&#8221; (PhD diss., New York University, 2018), 24. Suppose in <italic>A</italic> there is one person at each integer welfare level, and in <italic>A</italic>+ everyone is one level higher. Given Outcome Anonymity, <italic>A</italic>+ and <italic>A</italic> should be equally good. But Weak Pareto says that <italic>A</italic>+ is better, since it&#8217;s better for everyone. Does it follow that Weak Pareto is partial? Clearly not. Weak Pareto violates Outcome Anonymity only because it tells us to care about making everybody better off, which requires caring <italic>who</italic> is at each welfare level. Specifically, Weak Pareto tells us to care that the person at level <italic>n</italic> + 1 in <italic>A</italic>+ is the person who was at level <italic>n</italic> in <italic>A</italic>. For more on Outcome Anonymity in infinite populations, see Hong and Russell, whose &#8220;Finite Anonymity&#8221; is Outcome Anonymity restricted to finite permutations (note that Finite Anonymity is consistent with Weak Pareto). Frank Hong and Jeffrey Russell, &#8220;Paradoxes of Infinite Aggregation,&#8221; <italic>No&#251;s</italic> 59, no. 3 (2025): 809&#8211;827, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1111/nous.12535">https://doi.org/10.1111/nous.12535</ext-link>, &#167;3.3. Goodman proves that, even in the infinite context, Outcome Anonymity is equivalent to permutation invariance (a single-profile analogue of Anonymity), given transitivity and completeness. Jeremy Goodman, &#8220;Permutation-invariant Social Welfare Orders are Anonymous,&#8221; <italic>Journal of Mathematical Economics</italic> 120 (2025): 103153, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1016/j.jmateco.2025.103153">http://doi.org/10.1016/j.jmateco.2025.103153</ext-link>; see also Jeremy Goodman and Harvey Lederman, &#8220;Maximal Social Welfare Relations on Infinite Populations Satisfying Permutation Invariance,&#8221; preprint, arXiv, August 11, 2024, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.48550/arXiv.2408.05851">https://doi.org/10.48550/arXiv.2408.05851</ext-link> and Nebel, &#8220;Infinite Ethics.&#8221;</p></fn>
<fn id="n37"><p>Sen notes that Majority Rule conflicts with Outcome Anonymity, though he does not mention the issue of impartiality. Amartya Sen, &#8220;Welfare Inequalities and Rawlsian Axiomatics,&#8221; <italic>Theory and Decision</italic> 7 (1976): 253, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1007/BF00135080">https://doi.org/10.1007/BF00135080</ext-link>. (Strictly speaking he makes the point regarding the &#8220;extended grading principle.&#8221; See fn. 24 and references therein.) Elsewhere, Sen observes that Anonymity permits what I call caring who: &#8220;Anonymity as such does not rule out the use of information of the type that the same person (whoever he may be) who will enjoy welfare level <italic>u</italic> in the state <italic>x</italic> will also enjoy welfare level <italic>v</italic> in the state <italic>y</italic>.&#8221; Sen, &#8220;Weights and Measures,&#8221; 1560. See the Appendix for more.</p></fn>
<fn id="n38"><p>E.g. Susan Hurley, &#8220;Supervenience and the Possibility of Coherence,&#8221; <italic>Mind</italic> 94, no. 376 (1985): 505.</p></fn>
<fn id="n39"><p>Some exceptions include Larry Temkin, <italic>Rethinking the Good: Moral Ideals and the Nature of Practical Reasoning</italic> (Oxford University Press, 2012), <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1093/acprof:oso/9780199759446.001.0001">http://doi.org/10.1093/acprof:oso/9780199759446.001.0001</ext-link> and Stuart Rachels, &#8220;Counterexamples to the Transitivity of Better Than,&#8221; <italic>Australasian Journal of Philosophy</italic> 76, no. 1 (1998): 71&#8211;83, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1080/00048409812348201">http://doi.org/10.1080/00048409812348201</ext-link>. Hurley, &#8220;Supervenience,&#8221; 505 cites cycles in her objection to Majority Rule for ethical &#8220;criteria.&#8221; See fn. 48 for a way to avoid such cycles.</p></fn>
<fn id="n40"><p>The prioritarian, for example, might complain that classical utilitarianism treats a benefit as no more important when given to someone worse off. Derek Parfit, &#8220;Equality and Priority,&#8221; <italic>Ratio</italic> 10, no. 3 (1997): 202&#8211;221, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1111/1467-9329.00041">http://doi.org/10.1111/1467-9329.00041</ext-link>.</p></fn>
<fn id="n41"><p>Thomas Scanlon, <italic>What We Owe to Each Other</italic> (Harvard University Press, 1998), 235.</p></fn>
<fn id="n42"><p>Weak Anti-Aggregation is only the negative side of &#8220;limited aggregation.&#8221; The positive side is that we <italic>do</italic> aggregate when benefits and harms are &#8220;close enough,&#8221; in Parfit&#8217;s phrase. Parfit, &#8220;Justifiability,&#8221; 78.</p></fn>
<fn id="n43"><p>For a more dramatic version of this example, we could use Parfit&#8217;s &#8220;Musical Chairs,&#8221; which features 100 people rotating through welfare levels 1&#8211;100. Parfit, &#8220;Justifiability,&#8221; 16. We could then suppose that benefits at most 98 units apart in size are &#8220;close enough&#8221; to aggregate. See also Alex Voorhoeve, &#8220;How Should We Aggregate Competing Claims?&#8221; <italic>Ethics</italic> 125, no. 1 (2014): 64&#8211;87, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1086/677022">http://doi.org/10.1086/677022</ext-link>.</p></fn>
<fn id="n44"><p>Brown, &#8220;Is Close Enough Good Enough?&#8221; 39.</p></fn>
<fn id="n45"><p>Ibid.</p></fn>
<fn id="n46"><p>Ibid.</p></fn>
<fn id="n47"><p>Nebel, &#8220;A Choice-Functional Characterization,&#8221; 8. Brown makes his point using an example rather than explicitly invoking what I call Anonymity. Nebel calls his (choice-functional) version of Anonymity &#8220;Anonymous Invariance,&#8221; saying that it &#8220;really does seem a requirement of impartiality,&#8221; and he proves several novel results with it. Ibid.</p></fn>
<fn id="n48"><p>Cycles are not, in fact, essential to the two counterexamples. We could easily modify Weak Anti-Aggregation so that it converts all links in a would-be cycle into equal goodness. The same could be done for Majority Rule, as noted by Duncan Luce and Howard Raiffa, <italic>Games and Decisions</italic> (Wiley, 1957), 333. These modified rules would still be impartial counterexamples to Outcome Anonymity, but instead of violating transitivity, they would violate the Independence of Irrelevant Alternatives (&#167;VII).</p></fn>
<fn id="n49"><p>Kamm, <italic>Morality, Mortality</italic>, 83; Kamm, &#8220;Aggregation,&#8221; 4.</p></fn>
<fn id="n50"><p>Taurek, &#8220;Should the Numbers Count?&#8221;</p></fn>
<fn id="n51"><p>Since these benefits are all equal in size, even &#8220;limited aggregationists,&#8221; who accept Weak Anti-Aggregation (&#167;IV), would aggregate in this case.</p></fn>
<fn id="n52"><p>Robert Lawlor, &#8220;Taurek, Numbers and Probabilities,&#8221; <italic>Ethical Theory and Moral Practice</italic> 9, no. 2 (2006): 152, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1007/s10677-005-9004-4">http://doi.org/10.1007/s10677-005-9004-4</ext-link> says Taurek would reject Strong Pareto. But see Kamm, <italic>Morality, Mortality</italic>, 81.</p></fn>
<fn id="n53"><p>Related concepts include incommensurability (Henrik Andersson and Anders Herlitz, &#8220;Introduction&#8221; in <italic>Value Incommensurability: Ethics, Risk, and Decision-Making</italic>, eds. Henrik Andersson and Anders Herlitz [Routledge, 2022]: 1&#8211;25, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.4324/9781003148012-1">http://doi.org/10.4324/9781003148012-1</ext-link>) and parity (Ruth Chang, &#8220;The Possibility of Parity,&#8221; <italic>Ethics</italic> 112, no. 4 [2002]: 659&#8211;688, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1086/339673">http://doi.org/10.1086/339673</ext-link>; Chrisoula Andreou, &#8220;Parity Without Imprecise Equality&#8221; in <italic>Value Incommensurability: Ethics, Risk, and Decision-Making</italic>, eds. Henrik Andersson and Anders Herlitz [Routledge, 2022], <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.4324/9781003148012-5">http://doi.org/10.4324/9781003148012-5</ext-link>). Nebel, &#8220;Fixed-Population Problem&#8221; uses incomparable welfare levels to run an argument like Kamm&#8217;s; Dreier, &#8220;Blessed Lives&#8221; does the same, using incomparability between life and nonexistence. Nebel and Dreier&#8217;s arguments share the same structure as Kamm&#8217;s (including both Strong Pareto and Substitution of Equals). My objection to Nebel and Dreier is essentially the same as my objection to Kamm. I focus on her argument purely for the sake of keeping the formalism simple.</p></fn>
<fn id="n54"><p>Ronald De Sousa, &#8220;The Good and the True,&#8221; <italic>Mind</italic> 83, no. 332 (1974): 534&#8211;551, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1093/mind/LXXXIII.332.534">https://doi.org/10.1093/mind/LXXXIII.332.534</ext-link>; Walter Sinnott-Armstrong, &#8220;Moral Dilemmas and Incomparability,&#8221; <italic>American Philosophical Quarterly</italic> 22, no. 4 (1985): 321&#8211;329, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.jstor.org/stable/20014112">https://www.jstor.org/stable/20014112</ext-link>.</p></fn>
<fn id="n55"><p>Chang, &#8220;The Possibility of Parity,&#8221; 659.</p></fn>
<fn id="n56"><p>Raz calls this the &#8220;mark of incommensurability.&#8221; Joseph Raz, <italic>The Morality of Freedom</italic> (Oxford University Press, 1986), 324&#8211;26, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1093/0198248075.001.0001">https://doi.org/10.1093/0198248075.001.0001</ext-link>.</p></fn>
<fn id="n57"><p>Such nonfungibility, reminiscent of &#8220;the separateness of persons&#8221; (John Rawls, <italic>A Theory of Justice</italic> [Harvard University Press, 1971], <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.2307/j.ctvjf9z6v">https://doi.org/10.2307/j.ctvjf9z6v</ext-link>), has become a touchstone in the ethics of rescue. Richard Yetter Chappell, &#8220;Value Receptacles,&#8221; <italic>No&#251;s</italic> 49, no. 2 (2015): 322&#8211;332, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1111/nous.12023">http://doi.org/10.1111/nous.12023</ext-link>; Kerah Gordon-Solmon and Theron Pummer, &#8220;Lesser-Evil Justifications: A Reply to Frowe,&#8221; <italic>Law and Philosophy</italic> 41 (2022): 639&#8211;646, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1007/s10982-022-09454-w">http://doi.org/10.1007/s10982-022-09454-w</ext-link>; Michael Rabenberg, &#8220;Imprecision in the Ethics of Rescue,&#8221; <italic>Analytic Philosophy</italic> 64, no. 3 (2023): 277&#8211;317, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1111/phib.12260">http://doi.org/10.1111/phib.12260</ext-link>; Theron Pummer, <italic>The Rules of Rescue: Cost, Distance, and Effective Altruism</italic> (Oxford University Press, 2023), <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1093/oso/9780190884147.001.0001">https://doi.org/10.1093/oso/9780190884147.001.0001</ext-link>.</p></fn>
<fn id="n58"><p>This move has been suggested by several of Taurek&#8217;s critics and defenders. Alexander Friedman, &#8220;Minimizing Harm: Three Problems in Moral Theory&#8221; (PhD diss., MIT, 2002), chap. 2; Alexander Friedman, &#8220;Intransitive Ethics,&#8221; <italic>Journal of Moral Philosophy</italic> 6, no. 3 (2009): 279, note 8, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1163/174552409X433391">https://doi.org/10.1163/174552409X433391</ext-link>; David Wasserman and Alan Strudler, &#8220;Can a Nonconsequentialist Count Lives?&#8221; <italic>Philosophy &amp; Public Affairs</italic> 31, no. 1 (2003): 74, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1111/j.1088-4963.2003.00071.x">http://doi.org/10.1111/j.1088-4963.2003.00071.x</ext-link>; Michael Otsuka, &#8220;Skepticism about Saving the Greater Number,&#8221; <italic>Philosophy &amp; Public Affairs</italic> 32, no. 4 (2004): 420, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1111/j.1088-4963.2004.00020.x">http://doi.org/10.1111/j.1088-4963.2004.00020.x</ext-link>; Weyma L&#252;bbe, &#8220;Taurek&#8217;s No Worse Claim,&#8221; <italic>Philosophy &amp; Public Affairs</italic> 36, no. 1 (2008): 69&#8211;85, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1111/j.1088-4963.2008.00124.x">http://doi.org/10.1111/j.1088-4963.2008.00124.x</ext-link>. But the version I give below will differ slightly. Rather than holding that an agent&#8217;s <italic>options</italic> may vary in value depending on what else the agent can do, the version below holds that some <italic>outcomes</italic> may compare differently in value depending on what welfares people have in other outcomes.</p></fn>
<fn id="n59"><p>My definition mostly follows John Weymark, &#8220;Welfarism on Economic Domains,&#8221; <italic>Mathematical Social Sciences</italic> 36, no. 3 (1998): 254, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1016/S0165-4896(98)00042-0">https://doi.org/10.1016/S0165-4896(98)00042-0</ext-link>. See May, &#8220;A Set of Necessary and Sufficient Conditions&#8221; for the original &#8220;neutrality&#8221; axiom.</p></fn>
<fn id="n60"><p>Why do we need Strong Neutrality? Because if we have only the Independence of Irrelevant Alternatives, we could treat some <italic>outcomes</italic> as special, which can violate Weak Outcome Anonymity for Transpositions without partiality towards people. Suppose we select a &#8220;status quo&#8221; outcome that wins if no alternative enjoys unanimous support. Or consider Sen&#8217;s Liberal Paradox, where each individual has the right to be decisive over a pair of outcomes (their private sphere). Sen, <italic>Collective Choice</italic>, ch. 6.</p></fn>
<fn id="n61"><p>More carefully: since Anonymity blocks <italic>any</italic> rule that treats some individual&#8217;s welfare as more important, it also blocks rules that treat the welfares of a <italic>subgroup</italic> of individuals as being more important. As for other manifestations of racial bias, Anonymity is silent. As a reviewer notes, it may be more natural to explicitly include information about race in the framework and define bias in terms of sensitivity to permutation of races. Partly for that reason&#8212;and partly due to excellent comments from Jonas Hertel and Jake Nebel&#8212;I suspect that this issue requires further treatment.</p></fn>
<fn id="n62"><p>Nebel, &#8220;Fixed-Population Problem,&#8221; 2783.</p></fn>
<fn id="n63"><p>Pietro Cibinel suggests a complaint-based version of Outcome Anonymity, according to which an impartial observer &#8220;is indifferent between outcomes that involve exactly the same legitimate complaints and only differ in who makes them.&#8221; Pietro Cibinel, &#8220;Risk Attitudes and Justifiability to Each,&#8221; <italic>Ethics</italic> 133, no. 1 (2022): 118, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1086/720777">http://doi.org/10.1086/720777</ext-link>. If we care about complaints, we care who is who in a welfare distribution&#8212;we have to know that the complainant is someone who would be better off in an alternative. This makes Cibinel&#8217;s principle a useful spin on Outcome Anonymity consistent with at least one kind of &#8220;caring who.&#8221;</p></fn>
<fn id="n64"><p>Sen, &#8220;Weights and Measures,&#8221; 1540&#8211;43.</p></fn>
<fn id="n65"><p>D&#8217;Aspremont and Gevers, &#8220;Equity,&#8221; 203; Robert Deschamps and Louis Gevers, &#8220;Leximin and Utilitarian Rules: A Joint Characterization,&#8221; <italic>Journal of Economic Theory</italic> 17 (1978): 149, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://doi.org/10.1016/0022-0531(78)90068-6">http://doi.org/10.1016/0022-0531(78)90068-6</ext-link>; Sen, &#8220;Weights and Measures,&#8221; 1554.</p></fn>
<fn id="n66"><p>Hammond, &#8220;Equity in Two-Person Situations,&#8221; 1330.</p></fn>
<fn id="n67"><p>See d&#8217;Aspremont and Gevers, &#8220;Equity&#8221;; Claude d&#8217;Aspremont and Louis Gevers, &#8220;Social Welfare Functionals and Interpersonal Comparability,&#8221; in <italic>Handbook of Social Choice and Welfare, Vol. 1</italic>, eds. Kenneth Arrow, Amartya Sen, and Kotaro Suzumura (Elsevier Science, 2002), 459&#8211;541, <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.1016/S1574-0110(02)80014-5">https://doi.org/10.1016/S1574-0110(02)80014-5</ext-link>.</p></fn>
<fn id="n68"><p>Brian Hedden and Jacob Nebel, &#8220;Multidimensional Concepts and Scale Types,&#8221; <italic>Philosophical Review</italic> 133, no. 3 (2024): 265&#8211;308.</p></fn>
</fn-group>
<sec>
<title>Acknowledgements</title>
<p>My thanks to Tom Sinclair, Nir Eyal, Kaushik Basu, Daniel Star, Sam Fullhart, Cara Nine, Felipe Doria, Brian Hedden, Iwao Hirose, Pietro Cibinel, Harvey Lederman, Geir Asheim, Geoff Sayre-McCord, and audiences at the Boston University Ethics Seminar, Montreal Axiology Workshop, UNC PPE Retreat, Duke University Philosophy WIP Seminar, 2025 Conference of the American Association of Mexican Philosophers, MIT-ing of the Minds, 2025 PPE Society Conference, and Oxford Moral Philosophy Seminar. I am also delighted to thank Anna Stilz and the reviewers at <italic>Free &amp; Equal</italic> for pushing me to streamline the paper, clarify key points (especially the role of Anonymity), use a more descriptive title, and ask, &#8220;So what?&#8221; Finally, special thanks to Jake Nebel for his extraordinarily illuminating and detailed comments, which (among other things) helped me better appreciate the history of anonymity principles and how these principles are expressed differently in single-profile and multi-profile frameworks.</p>
</sec>
<sec>
<title>Competing Interests</title>
<p>The author has no competing interests to declare.</p>
</sec>
<sec>
<title>Appendix</title>
<p>First, we restate and prove:</p>
<p><italic>Theorem 1</italic> Given Strong Neutrality and Universal Domain, Anonymity implies Weak Outcome Anonymity for Transpositions.</p>
<p><italic>Proof</italic> Given any profile <bold>U</bold> = (<bold>W</bold><sub>1</sub>, <bold>W</bold><sub>2</sub>, &#8230;, <bold>W</bold><italic><sub>n</sub></italic>) and outcomes <italic>x</italic> and <italic>y</italic>, we show that <italic>x</italic> &#8764; <italic>y</italic> or <italic>x</italic> and <italic>y</italic> are incomparable if, for some transposition &#963; of {1, 2, &#8230;, <italic>n</italic>}, (<bold>W</bold><sub>&#963;</sub><sub>1</sub>(<italic>x</italic>), <bold>W</bold><sub>&#963;</sub><sub>2</sub>(<italic>x</italic>), &#8230;, <bold>W</bold><sub>&#963;</sub><italic><sub>n</sub></italic>(<italic>x</italic>)) = (<bold>W</bold><sub>1</sub>(<italic>y</italic>), <bold>W</bold><sub>2</sub>(<italic>y</italic>), &#8230;, <bold>W</bold><italic><sub>n</sub></italic>(<italic>y</italic>)). Assume for conditional proof that such a &#963; exists. Given Universal Domain, there must exist a profile <bold>U*</bold> = (<bold>W</bold><sub>&#963;</sub><sub>1</sub>, <bold>W</bold><sub>&#963;</sub><sub>2</sub>, &#8230;, <bold>W</bold><sub>&#963;</sub><italic><sub>n</sub></italic>). Since <bold>U*</bold> is a permutation of <bold>U</bold>, Anonymity ensures that (i) <italic>x</italic> &#8831; <italic>y</italic> if and only if <italic>x</italic> &#8831;* <italic>y</italic>. (Where &#8831; = <italic>g</italic>(<bold>U</bold>) and &#8831;* = <italic>g</italic>(<bold>U*</bold>).) By our starting assumption, <bold>W</bold><sub><italic>i</italic></sub>(<italic>y</italic>) = <bold>W</bold><sub>&#963;</sub><sub><italic>i</italic></sub>(<italic>x</italic>) for <italic>i</italic> = 1, 2, &#8230;, <italic>n</italic>. Since &#963; is a transposition, we also have the reverse: <bold>W</bold><sub><italic>i</italic></sub>(<italic>x</italic>) = <bold>W</bold><sub>&#963;</sub><sub><italic>i</italic></sub>(<italic>y</italic>). 
Given Strong Neutrality, it follows that (ii) <italic>y</italic> &#8831; <italic>x</italic> if and only if <italic>x</italic> &#8831;* <italic>y</italic>. From the biconditionals (i) and (ii), we can conclude that <italic>x</italic> &#8831; <italic>y</italic> if and only if <italic>y</italic> &#8831; <italic>x</italic>, which is equivalent to the desired claim that either <italic>x</italic> &#8764; <italic>y</italic> or <italic>x</italic> and <italic>y</italic> are incomparable. &#9633;</p>
<p>We have shown that, given certain background conditions, Anonymity implies Weak Outcome Anonymity for Transpositions. But does the converse implication hold? We prove that it does not.</p>
<p><italic>Theorem 2</italic> Given Strong Neutrality and Universal Domain, Weak Outcome Anonymity for Transpositions does not imply Anonymity.</p>
<p><italic>Proof</italic> We proceed by giving a counterexample&#8212;a variation on classical utilitarianism that allows for some incomparability, but only when two outcomes have equal total welfare and differ in a certain designated person&#8217;s welfare. Let S(<italic>x</italic>) be the sum of individual welfares in <italic>x</italic>. Define <italic>g</italic> so that (i) <italic>x</italic> &#8827; <italic>y</italic> if and only if S(<italic>x</italic>) &gt; S(<italic>y</italic>), and (ii) <italic>x</italic> &#8764; <italic>y</italic> if and only if S(<italic>x</italic>) = S(<italic>y</italic>) and <bold>W</bold><sub>1</sub>(<italic>x</italic>) = <bold>W</bold><sub>1</sub>(<italic>y</italic>). (Otherwise, <italic>x</italic> and <italic>y</italic> are incomparable.) Since permuting who is at which welfare level cannot change the sum of individual welfares, such permutations can only result in equal goodness or incomparability; <italic>g</italic> therefore obeys Weak Outcome Anonymity. Since whether <italic>x</italic> &#8831; <italic>y</italic> depends only on individual welfare levels in <italic>x</italic> and in <italic>y, g</italic> obeys Strong Neutrality. Now assume that <italic>g</italic> has a Universal Domain, and consider the profile <bold>U</bold> given by:</p>
<table-wrap id="T8">
<caption>
<p>Table 8: Profile <bold>U</bold></p>
</caption>
<table>
<tbody>
<tr>
<td align="right" valign="top"></td>
<td align="center" valign="top"><italic>A</italic></td>
<td align="center" valign="top"><italic>B</italic></td>
</tr>
<tr>
<td align="right" valign="top">1</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">1</td>
</tr>
<tr>
<td align="right" valign="top">2</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">0</td>
</tr>
<tr>
<td align="right" valign="top">3</td>
<td align="center" valign="top">0</td>
<td align="center" valign="top">1</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Since the sum of individual welfares is 2 in both outcomes, and <bold>W</bold><sub>1</sub>(<italic>A</italic>) = <bold>W</bold><sub>1</sub>(<italic>B</italic>), it follows that <italic>A</italic> &#8764; <italic>B</italic>. If <italic>g</italic> obeys Anonymity, then <italic>A</italic> and <italic>B</italic> would still be equally good given an otherwise similar profile that permutes the welfare functions of persons 1 and 2, as in <bold>U*</bold>, below:</p>
<table-wrap id="T9">
<caption>
<p>Table 9: Profile <bold>U*</bold>, obtained by permuting the first two rows</p>
</caption>
<table>
<tbody>
<tr>
<td align="right" valign="top"></td>
<td align="center" valign="top"><italic>A</italic></td>
<td align="center" valign="top"><italic>B</italic></td>
</tr>
<tr>
<td align="right" valign="top">1</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">0</td>
</tr>
<tr>
<td align="right" valign="top">2</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">1</td>
</tr>
<tr>
<td align="right" valign="top">3</td>
<td align="center" valign="top">0</td>
<td align="center" valign="top">1</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Since person 1 does not have the same welfare level in <italic>A</italic> as in <italic>B</italic>, it is not the case that <italic>A</italic> &#8764;* <italic>B</italic>. Therefore, <italic>g</italic> violates Anonymity, so Anonymity does not follow from Strong Neutrality, Universal Domain, and Weak Outcome Anonymity for Transpositions.</p>
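The counterexample can also be checked mechanically. The sketch below (in Python, with illustrative names such as <monospace>g_compare</monospace> that are not from the paper) encodes the rule <italic>g</italic> and represents each outcome as a vector of the three persons' welfares, confirming that <italic>A</italic> and <italic>B</italic> are equally good under <bold>U</bold> but incomparable under <bold>U*</bold>.

```python
def g_compare(x, y):
    """Compare welfare vectors x and y under the counterexample rule g:
    x > y  iff sum(x) > sum(y);
    x ~ y  iff sum(x) == sum(y) and x[0] == y[0]  (person 1's welfare matches);
    otherwise x and y are incomparable."""
    if sum(x) > sum(y):
        return ">"
    if sum(y) > sum(x):
        return "<"
    return "~" if x[0] == y[0] else "incomparable"

# Profile U (Table 8): entries are the welfares of persons 1, 2, 3.
A_U, B_U = [1, 1, 0], [1, 0, 1]
# Profile U* (Table 9): the welfare functions of persons 1 and 2 swapped.
A_Us, B_Us = [1, 1, 0], [0, 1, 1]

print(g_compare(A_U, B_U))    # "~": equal sums, and person 1 is at level 1 in both
print(g_compare(A_Us, B_Us))  # "incomparable": person 1's welfare differs (1 vs 0)
```

Since the two comparisons disagree even though <bold>U*</bold> merely permutes <bold>U</bold>, the rule violates Anonymity, as the proof states.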
<p>In fact, this result can easily be strengthened. Our counterexample, the utilitarian variant, obeys Weak Outcome Anonymity in general, not just for transpositions. It also respects the following conditions on overall value relations:</p>
<p><italic>Transitivity (of &#8831;)</italic> If <italic>x</italic> &#8831; <italic>y</italic> &#8831; <italic>z</italic>, then <italic>x</italic> &#8831; <italic>z</italic>.</p>
<p><italic>Congruence of Incomparables (with respect to &#8827;)</italic> If <italic>x</italic> and <italic>y</italic> are incomparable, then <italic>z</italic> &#8827; <italic>x</italic> if and only if <italic>z</italic> &#8827; <italic>y</italic>, and <italic>x</italic> &#8827; <italic>z</italic> if and only if <italic>y</italic> &#8827; <italic>z</italic>.</p>
<p>We prove this in the form of a:</p>
<p><italic>Lemma</italic> Suppose (i) <italic>x</italic> &#8827; <italic>y</italic> if and only if S(<italic>x</italic>) &gt; S(<italic>y</italic>), and (ii) <italic>x</italic> &#8764; <italic>y</italic> if and only if S(<italic>x</italic>) = S(<italic>y</italic>) and <bold>W</bold><sub>1</sub>(<italic>x</italic>) = <bold>W</bold><sub>1</sub>(<italic>y</italic>). Then Transitivity and Congruence of Incomparables both obtain.</p>
<p><italic>Proof</italic> We begin with Transitivity. Assume <italic>x</italic> &#8831; <italic>y</italic> &#8831; <italic>z</italic>. Then S(<italic>x</italic>) &#8805; S(<italic>y</italic>) &#8805; S(<italic>z</italic>). There are two cases to consider. First, suppose either S(<italic>x</italic>) &gt; S(<italic>y</italic>) or S(<italic>y</italic>) &gt; S(<italic>z</italic>). Then S(<italic>x</italic>) &gt; S(<italic>z</italic>), and therefore <italic>x</italic> &#8827; <italic>z</italic>. Second, suppose that neither S(<italic>x</italic>) &gt; S(<italic>y</italic>) nor S(<italic>y</italic>) &gt; S(<italic>z</italic>). Then S(<italic>x</italic>) = S(<italic>y</italic>) = S(<italic>z</italic>), which implies <italic>x</italic> &#8764; <italic>y</italic> &#8764; <italic>z</italic>. So <bold>W</bold><sub>1</sub>(<italic>x</italic>) = <bold>W</bold><sub>1</sub>(<italic>y</italic>) = <bold>W</bold><sub>1</sub>(<italic>z</italic>), and by transitivity <bold>W</bold><sub>1</sub>(<italic>x</italic>) = <bold>W</bold><sub>1</sub>(<italic>z</italic>). So <italic>x</italic> &#8764; <italic>z</italic>. In either case, <italic>x</italic> &#8831; <italic>z</italic>, which concludes our proof of Transitivity. Next, we prove the Congruence of Incomparables. Assume that <italic>x</italic> and <italic>y</italic> are incomparable. Then S(<italic>x</italic>) = S(<italic>y</italic>). Therefore S(<italic>z</italic>) &gt; S(<italic>x</italic>) if and only if S(<italic>z</italic>) &gt; S(<italic>y</italic>), and S(<italic>x</italic>) &gt; S(<italic>z</italic>) if and only if S(<italic>y</italic>) &gt; S(<italic>z</italic>). So, <italic>z</italic> &#8827; <italic>x</italic> if and only if <italic>z</italic> &#8827; <italic>y</italic>, and <italic>x</italic> &#8827; <italic>z</italic> if and only if <italic>y</italic> &#8827; <italic>z</italic>. &#9633;</p>
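Because the rule is defined by simple arithmetic, the Lemma can be spot-checked by brute force over a small finite domain. The following sketch (illustrative code, not from the paper; <monospace>"#"</monospace> marks incomparability) verifies Transitivity and Congruence of Incomparables for all two-person welfare vectors with levels in {0, 1, 2}.

```python
from itertools import product

def rel(x, y):
    """The counterexample rule g on welfare vectors x, y."""
    if sum(x) > sum(y):
        return ">"
    if sum(x) < sum(y):
        return "<"
    return "~" if x[0] == y[0] else "#"  # "#" = incomparable

def geq(x, y):
    """Weak betterness: x is at least as good as y."""
    return rel(x, y) in (">", "~")

vecs = list(product(range(3), repeat=2))  # all 2-person vectors, welfares 0-2

# Transitivity: if x >= y and y >= z, then x >= z.
for x, y, z in product(vecs, repeat=3):
    if geq(x, y) and geq(y, z):
        assert geq(x, z)

# Congruence of Incomparables: incomparable outcomes compare alike to any z.
for x, y in product(vecs, repeat=2):
    if rel(x, y) == "#":
        for z in vecs:
            assert (rel(z, x) == ">") == (rel(z, y) == ">")
            assert (rel(x, z) == ">") == (rel(y, z) == ">")

print("Lemma verified on all 2-person vectors with welfares in {0, 1, 2}")
```

The exhaustive check over this small domain is, of course, no substitute for the general proof above; it merely illustrates the Lemma's content.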
<p>From this, we can immediately conclude:</p>
<p><italic>Theorem 3</italic> Even given Strong Neutrality, Universal Domain, Transitivity, and Congruence of Incomparables, Weak Outcome Anonymity does not imply Anonymity.</p>
<p>Our proofs of Theorems 2 and 3 rely on the possibility of incompleteness in the evaluative ranking&#8212;that is to say, failures of:</p>
<p><italic>Completeness (of &#8831;)</italic> Either <italic>x</italic> &#8831; <italic>y</italic> or <italic>y</italic> &#8831; <italic>x</italic>.</p>
<p>The combination of Transitivity and Completeness does not quite collapse Anonymity and Outcome Anonymity. (Notice that Paretian Dependence satisfies only Anonymity; the full collapse requires Strong Neutrality and Universal Domain as well.) Nevertheless, Transitivity and Completeness have important consequences for several of our principles.</p>
<p>With Transitivity and Completeness, the codomain of <italic>g</italic> becomes a set of orderings (transitive, complete), which makes <italic>g</italic> a &#8220;social welfare functional.&#8221;<xref ref-type="fn" rid="n64">64</xref> This allows us to exploit a number of important results, which reveal various connections between Anonymity, Outcome Anonymity, the Independence of Irrelevant Alternatives, and utilitarianism. See, for example, Theorem 3 of d&#8217;Aspremont and Gevers (which axiomatizes utilitarianism using Anonymity), Theorem 6 of Deschamps and Gevers (which links Anonymity to utilitarianism and &#8220;leximin&#8221; using other axioms), and Theorem 7 of Sen (which shows that, given Universal Domain and Outcome Anonymity, a social welfare functional <italic>g</italic> satisfies the Independence of Irrelevant Alternatives if and only if it satisfies Strong Anonymity;<xref ref-type="fn" rid="n65">65</xref> see also Hammond, who observes that the other assumptions imply not only Strong Anonymity but &#8220;Superstrong Anonymity&#8221;<xref ref-type="fn" rid="n66">66</xref>).</p>
<p>Perhaps most important is the result that, in the context of social choice, Anonymity and Outcome Anonymity are equivalent given <italic>formal welfarism</italic>, which holds of a social welfare functional if and only if that functional can be represented as an ordering over an <italic>n</italic>-dimensional vector space, each point of which encodes the welfares of each of the <italic>n</italic> persons.<xref ref-type="fn" rid="n67">67</xref> Formal welfarism is analogous to what some ethicists call <italic>dimensionalism</italic>, the view that overall value relations depend only on how good things are in each dimension.<xref ref-type="fn" rid="n68">68</xref> The results cited above, as well as the discussion in the main text, may be of interest to ethicists looking for a criterion of &#8220;impartiality&#8221; towards dimensions of value.</p>
</sec>
</back>
</article>