<h1 id="why-learn-about-ethics">A.2 Why Learn About Ethics?</h1>
<p>This chapter will help you understand ethics—both in the context of
this book and in public discourse about AI safety. Here, we cover the
most prominent theories in the history of ethical discourse. After
reading this chapter, you should have a solid foundation for
understanding ethics in AI discussions.</p>
<p>Ethics is relevant to the field of AI for three key reasons. First, AI
systems are increasingly being integrated into various aspects of human
life, such as healthcare, education, finance, and transportation, and
they have the potential to significantly impact our lives and wellbeing.
As AI systems become increasingly intelligent and powerful, it is
crucial to ensure that they are designed, developed, and deployed in
ways that promote widely shared values and do not amplify existing
social biases or cause needless harms. Unfortunately, there are already
numerous examples of AI systems being designed in ways that failed to
adequately consider such risks, such as racially biased facial
recognition systems. In order to wisely manage the growing power of AI
systems, developers and users of AI systems need to understand the
ethical challenges that AI systems introduce or exacerbate.</p>
<p>Second, AI systems raise a range of new ethical questions that are
unique to their technological nature and capabilities. For instance, AI
systems can generate, process, and analyze vast amounts of data—much
more than was previously possible. In what ways does this new technology
challenge traditional notions of privacy, consent, intellectual
property, and transparency? Another important set of questions relates
to the moral status of AI systems. This is likely to become more
pressing if AI systems become increasingly autonomous and able to
interact with human beings in ways that convince their users that they
have their own preferences and feelings. What should we do if AI systems
appear to meet some of the potential criteria for sentience or other
morally relevant features?</p>
<p>Third, as further explored in the Single Agent Safety and Machine Ethics chapters, it is challenging to
specify objectives or goals for highly powerful AI systems in ways that
do not lead in a predictable way to highly undesirable consequences. In
order to grasp why it is so challenging to specify these objectives, it
is helpful to understand the ethical theories that have been proposed.
Questions of what it means to act rightly or to live a good life have
been debated by many thinkers over several millennia, with strong
arguments advanced for a number of competing positions. These debates
can provide us with greater insight into the challenges that AI
developers will need to overcome in order to build increasingly powerful
AI systems in a beneficial way. Rather than attempting to bypass or
ignore such controversies, AI developers should accept that their design
decisions may raise difficult ethical questions that need to be
considered carefully.</p>
<h2 id="is-ethics-relative">A.2.1 Is Ethics “Relative?”</h2>
<p><strong>Even after millennia of deliberation, we do not agree on all
of morality.</strong> Philosophers have been thinking about and debating
moral principles for millennia, yet they have not achieved consensus on
many moral issues. Widespread disagreements remain in both philosophical
and public discourse, including about important topics like abortion,
assisted suicide, capital punishment, animal rights, and the effects of
human activity on natural ecosystems. One troubling idea is that these
disagreements are irresolvable because no moral principles or judgments
are absolutely or universally correct. In the case of AI, this may lead
AI developers to believe that they have no role to play in shaping how
AI systems behave.</p>
<p><strong>Cultural relativism claims there is no objective, culturally
independent standard of morality.</strong> Consider the principle that
consensual relationships between adults are acceptable regardless of
whether they are heterosexual or homosexual. A moral relativist would
suggest this principle is correct for people who belong to some cultures
where homosexuality is accepted, but incorrect for people who belong to
other cultures where homosexuality is criminalized or socially
stigmatized. These differences are systemic: many cultures have moral
standards that seem incompatible with others’ ideals, such as different
views on marriage, divorce, gender roles, freedom of speech, or
religious tolerance. These differences form the basis for arguments for
cultural relativism.</p>
<p><strong>Normative moral relativism vs. descriptive moral relativism
<span class="citation" data-cites="gowans2021moral">[1]</span>.</strong>
Moral relativism has various forms, but here we discuss two: descriptive
moral relativism and normative moral relativism. Descriptive moral
relativism is straightforward: it means that different societies around
the world have different sets of rules about what’s right and wrong,
much like they have unique cuisines, customs, and traditions.
Descriptive moral relativism makes no claims about which, if any, of
these rules is right or wrong. Normative moral relativism suggests that
one cannot say that something is right or wrong in general, but only
relative to a particular culture or set of norms. Normative moral
relativists conclude that morality itself is not something universal or
absolute. Strictly speaking, descriptive moral relativism and normative
moral relativism are independent of each other, although in practice
descriptive moral relativism is often treated as if it provides evidence
for normative moral relativism.</p>
<h3 id="objections-to-moral-relativism">Objections to Moral
Relativism</h3>
<p>A number of arguments can be advanced against descriptive and
normative moral relativism <span class="citation"
data-cites="gowans2021moral">[1]</span>, which we explore in this
subsection. One argument is that cultural differences might
be overstated, which makes descriptive moral relativism harder to
uphold. Another argument is that proponents of normative moral
relativism often face challenges when confronted with instances of
extreme harm. For instance, while many would unequivocally agree that
torturing a child for entertainment is morally wrong, a normative moral
relativist might be required to argue that its morality is contingent
upon the cultural context. Extreme examples such as this suggest few
people are willing to be thoroughgoing moral relativists. The remainder
of this subsection examines these arguments in more detail.</p>
<p><strong>Human moral systems appear to share some common
features.</strong> Some have argued that most or all societies share
some norms. For example, prohibitions against lying, stealing, or
killing human beings are common across cultures. Many cultures have some
form of reciprocity, which is the idea that people have a moral
obligation to repay the kindness or generosity they have received from
others or that people should treat others the way they wish to be
treated <span class="citation"
data-cites="curry2019cooperate">[2]</span>. This can be seen in the
widespread practice of exchanging gifts and in moral codes that
emphasize fairness and justice. Additionally, human cultures typically
have some concept of parenthood, which often involves a moral
obligation to care for one’s children, as well as broader obligations to
one’s family and group. These common features suggest that there are at
least a few universal aspects of morality that transcend cultural
boundaries.</p>
<p><strong>Moral relativism conflicts with common-sense morality <span
class="citation" data-cites="gowans2021moral">[1]</span>.</strong>
Consider controversial practices still prevalent in some cultures, such
as honor killings in parts of the Middle East. In such communities, the
honor of a family is seen as depending on the “purity” of its women. If
a woman is raped or is deemed
to have compromised her chastity in some way, the profound shame brought
upon her family may lead them to kill her in response. According to the
normative moral relativist, if such a practice is in line with the moral
standards of the society where it takes place, there is nothing wrong
with it. Even more disturbingly, on some versions of relativism, men in
these societies may be considered morally in the wrong if they fail to
kill their wives, daughters, or sisters for having worn the wrong
clothing, having premarital sex, or being raped. Similarly, normative
moral relativism would require us to believe that the morality of owning
slaves was entirely dependent on the societal context. Moral
iconoclasts, such as early anti-slavery campaigners, would by definition
always be morally wrong. In practice, many moral relativists recoil from
the conclusion that moral standards endorsing honor killings or slavery
are not wrong in any general sense.</p>
<p><strong>Cultural moral relativism denies the possibility of
meaningful moral debate or moral progress <span class="citation"
data-cites="gowans2021moral">[1]</span>.</strong> Moral relativism seems
to require us to accept contradictory claims. For example, moral
relativists might say that a supporter of gay marriage is correct in
saying that homosexuality is morally acceptable, while someone from a
different culture might be correct in saying that homosexuality is
morally wrong, provided that both claims are in line with the moral
standards of the cultures they respectively belong to. If moral
relativism requires us to simultaneously assert and deny that
homosexuality is morally acceptable, and any theory that generates
contradictions should be rejected, it would appear that we should reject
moral relativism. To resist this conclusion, moral relativists typically
reinterpret ordinary moral language in a way that saves it from
contradiction. The relativist would say that when we say “homosexuality
is wrong”, what we really mean is “homosexuality is not approved by my
society’s norms”. This means that relativists have to deny the
possibility of genuine moral disagreement and claim that anyone who
engages in such debates does not understand the meaning of what they are
saying. For the same reason, relativism leaves no room for moral
progress: if a society’s prevailing norms define what is right for its
members, a change in those norms is merely a change, not an
improvement.</p>
<p><strong>Moral relativism does not necessarily promote tolerance <span
class="citation" data-cites="gowans2021moral">[1]</span>.</strong> Some
have argued that one of the attractions of moral relativism is that it
promotes tolerance. By recognizing cultural differences (descriptive
moral relativism), its proponents may assert that everyone ought to do
what their culture says is right (normative moral relativism). However, in a
society that is deeply intolerant, cultural moral relativism cannot
support tolerance, as it cannot claim that tolerance has any universal or
objective value. Moral relativism only recommends tolerance to cultures
where it is already accepted. Indeed, to be tolerant, one need not be a
normative moral relativist. There are alternative views which can
accommodate tolerance and multiple perspectives, such as
cosmopolitanism, liberal principles, and value pluralism.</p>
<p><strong>In practice, moral relativism can shut down ethics
discussions <span class="citation"
data-cites="gowans2021moral">[1]</span>.</strong> It is important to
note that different cultures have different moral standards. However, AI
developers sometimes invoke this observation and side with normative
moral relativism to avoid considering the ethics of their AI design
choices. Moreover, if AI developers decline to analyze the ethical
implications of their choices and sidestep ethical discussions by noting
the lack of cross-cultural consensus, the default is for AI development
to be driven by amoral forces, such as self-interest or whatever makes
the most sense in a competitive market. Decisions driven by such forces,
including commercial incentives, will not necessarily be aligned with
the broader interests of society. Moral relativism can be
unattractive from a pragmatic point of view, as it limits our ability to
engage in discussions that may sometimes lead to convergence on shared
principles. This quietist stance de-emphasizes moral arguments to the
benefit of economic incentives and self-interest.</p>
<p>Why are these debates about moral relativism relevant to AI? People
commonly observe that different cultures have different beliefs when
discussing how to ensure that AIs promote human values. It is essential
not to conflate this observation with normative moral relativism and
conclude that AI developers have no ethical responsibilities. Instead,
they are responsible for ensuring that the values embodied in their AI
systems are beneficial. Rather than being a barrier, cultural variation means
that making AIs ethical requires a broad, globally representative
approach.</p>
<h2 id="is-ethics-determined-by-religion">A.2.2 Is Ethics Determined by
Religion?</h2>
<p>Moral relativists may believe that studying ethics is futile because
ethical questions are irresolvable. On the other hand, some people
believe that studying ethics is futile because moral questions are
already solved. This position is most common among those who say that
religion is the source of morality.</p>
<h3 id="divine-command-theory">Divine Command Theory</h3>
<p><strong>Many believe morality depends on God’s will and
commands.</strong> The view called <em>divine command theory</em> says
whether an action is moral is determined solely by God’s commands rather
than any qualities of the action or its consequences. (We use the term
“God” inclusively to refer to the god or gods of any religion.) This
theory suggests that God has the power to create moral obligations and
can change them at will.</p>
<p>While this book does not argue for or against any particular religion,
we do suggest that there are severe problems with equating religion and
morality. One problem is that it leads to a troubling understanding of
God.</p>
<p>If you believe there is a god, you likely believe he is more than just
an arbitrary authority figure. Many religious traditions view God as
inherently good. It is precisely because God is good that religion
compels us to follow God’s word. However, if you believe that we should
follow God’s word because God is good, then there must be some moral
qualities (like goodness) that exist independently of God’s rules—thus,
divine command theory is false <span class="citation"
data-cites="plato2004euthyphro">[3]</span>.<p>
To be clear, this is not an argument against believing in God or
religion. It is an argument against equating God or faith with morality.
Both religious people and irreligious people can behave morally or
immorally. That’s why everyone needs to understand the factors that
might make our actions right or wrong.</p>
<br>
<br>
<h3>References</h3>
<div id="refs" class="references csl-bib-body" data-entry-spacing="0"
role="list">
<div id="ref-gowans2021moral" class="csl-entry" role="listitem">
<div class="csl-left-margin">[1] C.
Gowans, <span>“<span>Moral Relativism</span>,”</span> in <em>The
<span>Stanford</span> encyclopedia of philosophy</em>,
Spring 2021 ed., E. N. Zalta, Ed., <a
href="https://plato.stanford.edu/archives/spr2021/entries/moral-relativism/"
class="uri">https://plato.stanford.edu/archives/spr2021/entries/moral-relativism/</a>;
Metaphysics Research Lab, Stanford University, 2021.</div>
</div>
<div id="ref-curry2019cooperate" class="csl-entry" role="listitem">
<div class="csl-left-margin">[2] O.
S. Curry, D. A. Mullins, and H. Whitehouse, <span>“Is it good to
cooperate?: Testing the theory of morality-as-cooperation in 60
societies,”</span> <em>Current Anthropology</em>, vol. 60, no. 1, pp.
47–69, 2019, doi: <a
href="https://doi.org/10.1086/701478">10.1086/701478</a>.</div>
</div>
<div id="ref-plato2004euthyphro" class="csl-entry" role="listitem">
<div class="csl-left-margin">[3] C.
Plato, <em>Euthyphro</em>. Kessinger Publishing, 2014.</div>
</div>
</div>