<style>
.storybox{
border-radius: 15px;
border: 2px solid gray;
background-color: lightgray;
text-align: left;
padding: 10px;
}
</style>
<style>
.storyboxlegend{
border-bottom-style: solid;
border-bottom-color: gray;
border-bottom-width: 3px;
margin-left: -12px;
margin-right: -12px; margin-top: -13px;
padding: 0.2em 1em; color: #ffffff;
background-color: gray;
border-radius: 15px 15px 0px 0px}
</style>
</head>
<body>
<h1 id="sec:ai-race">1.3 AI Race</h1>
<p>The immense potential of AIs has created competitive pressures among
global players contending for power and influence. This “AI race” is
driven by nations and corporations who feel they must rapidly build and
deploy AIs to secure their positions and survive. Because racing actors
often fail to properly prioritize global risks, this dynamic makes it more
likely that AI development will produce dangerous outcomes. Analogous to the nuclear
arms race during the Cold War, participation in an AI race may serve
individual short-term interests, but it ultimately results in worse
collective outcomes for humanity. Importantly, these risks stem not only
from the intrinsic nature of AI technology, but from the competitive
pressures that encourage insidious choices in AI development.<p/>
In this section, we first explore the military AI arms race and the
corporate AI race, where nation-states and corporations are forced to
rapidly develop and adopt AI systems to remain competitive. Moving
beyond these specific races, we reconceptualize competitive pressures as
part of a broader evolutionary process in which AIs could become
increasingly pervasive, powerful, and entrenched in society. Finally, we
highlight potential strategies and policy suggestions to mitigate the
risks created by an AI race and ensure the safe development of AIs.</p>
<h2 id="military-ai-arms-race"> 1.3.1 Military AI Arms Race</h2>
<p>The development of AIs for military applications is swiftly paving
the way for a new era in military technology, with potential
consequences rivaling those of gunpowder and nuclear arms in what has
been described as the “third revolution in warfare.” The weaponization
of AI presents numerous challenges, such as the potential for more
destructive wars, the possibility of accidental usage or loss of
control, and the prospect of malicious actors co-opting these
technologies for their own purposes. As AIs gain influence over
traditional military weaponry and increasingly take on command and
control functions, humanity faces a paradigm shift in warfare. In this
context, we will discuss the latent risks and implications of this AI
arms race on global security, the potential for intensified conflicts,
and the dire outcomes that could come as a result, including the
possibility of conflicts escalating to a scale that poses an existential
threat.</p>
<h3 id="lethal-autonomous-weapons-laws">Lethal Autonomous Weapons
(LAWs)</h3>
<p>LAWs are weapons that can identify, target, and kill without human
intervention <span class="citation" data-cites="scharre2018">[1]</span>.
They offer potential improvements in decision-making speed and
precision. Warfare, however, is a high-stakes, safety-critical domain
for AIs with significant moral and practical concerns. Though their
existence is not necessarily a catastrophe in itself, LAWs may serve as
an on-ramp to catastrophes stemming from malicious use, accidents, loss
of control, or an increased likelihood of war.</p>
<p><strong>LAWs may become vastly superior to humans.</strong> Driven by
rapid developments in AIs, weapons systems that can identify, target,
and decide to kill human beings on their own—without an officer
directing an attack or a soldier pulling the trigger—are starting to
transform the future of conflict. In 2020, an advanced AI agent
outperformed experienced F-16 pilots in a series of virtual dogfights,
including decisively defeating a human pilot 5-0, showcasing “aggressive
and precise maneuvers the human pilot couldn’t outmatch” <span
class="citation" data-cites="dogfight">[2]</span>. Just as in the past,
superior weapons would allow for more destruction in a shorter period of
time, increasing the severity of war.</p>
<p><strong>Militaries are taking steps toward delegating life-or-death
decisions to AIs.</strong> Fully autonomous drones were likely first
used on the battlefield in Libya in March 2020, when retreating forces
were “hunted down and remotely engaged” by a drone operating without
human oversight <span class="citation"
data-cites="UnitedNations2021">[3]</span>. In May 2021, the Israel
Defense Forces used the world’s first AI-guided weaponized drone swarm
during combat operations, which marks a significant milestone in the
integration of AI and drone technology in warfare <span class="citation"
data-cites="hambling2021israel">[4]</span>. Although walking, shooting
robots have yet to replace soldiers on the battlefield, technologies are
converging in ways that may make this possible in the near future.</p>
<p><strong>LAWs increase the likelihood of war.</strong> Sending troops
into battle is a grave decision that leaders do not make lightly. But
autonomous weapons would allow an aggressive nation to launch attacks
without endangering the lives of its own soldiers and thus face less
domestic scrutiny. While remote-controlled weapons share this advantage,
their scalability is limited by the requirement for human operators and
vulnerability to jamming countermeasures, limitations that LAWs could
overcome <span class="citation"
data-cites="kallenborn2021applying">[5]</span>. Public opinion for
continuing wars tends to wane as conflicts drag on and casualties
increase <span class="citation" data-cites="mueller1985war">[6]</span>.
LAWs would change this equation. National leaders would no longer face
the prospect of body bags returning home, thus removing a primary
barrier to engaging in warfare, which could ultimately increase the
likelihood of conflicts.</p>
<h3 id="cyberwarfare">Cyberwarfare</h3>
<p>As well as being used to enable deadlier weapons, AIs could lower the
barrier to entry for cyberattacks, making them more numerous and
destructive. They could cause serious harm not only in the digital
environment but also in physical systems, potentially taking out
critical infrastructure that societies depend on. While AIs could also
be used to improve cyberdefense, it is unclear whether they will be most
effective as an offensive or defensive technology <span class="citation"
data-cites="bonfanti2022ai">[7]</span>. If they enhance attacks more
than they support defense, then cyberattacks could become more common,
creating significant geopolitical turbulence and paving another route to
large-scale conflict.</p>
<p><strong>AIs have the potential to increase the accessibility, success
rate, scale, speed, stealth, and potency of cyberattacks.</strong>
Cyberattacks are already a reality, but AIs could be used to increase
their frequency and destructiveness in multiple ways. Machine learning
tools could be used to find more critical vulnerabilities in target
systems and improve the success rate of attacks. They could also be used
to increase the scale of attacks by running millions of systems in
parallel, and increase the speed by finding novel routes to infiltrating
a system. Cyberattacks could also become more potent if used to hijack
AI weapons.</p>
<p><strong>Cyberattacks can destroy critical infrastructure.</strong> By
hacking computer systems that control physical processes, cyberattacks
could cause extensive infrastructure damage. For example, they could
cause system components to overheat or valves to lock, leading to a
buildup of pressure culminating in an explosion. Through interferences
like this, cyberattacks have the potential to destroy critical
infrastructure, such as electric grids and water supply systems. This
was demonstrated in 2015, when a cyberwarfare unit of the Russian
military hacked into the Ukrainian power grid, leaving over 200,000
people without power access for several hours. AI-enhanced attacks could
be even more devastating and potentially deadly for the billions of
people who rely on critical infrastructure for survival.</p>
<p><strong>Difficulties in attributing AI-driven cyberattacks could
increase the risk of war.</strong> A cyberattack resulting in physical
damage to critical infrastructure would require a high degree of skill
and effort to execute, perhaps only within the capability of
nation-states. Such attacks are rare as they constitute an act of war,
and thus elicit a full military response. Yet AIs could enable attackers
to hide their identity, for example if they are used to evade detection
systems or more effectively cover the tracks of the attacker <span
class="citation" data-cites="MIRSKY2023103006">[8]</span>. If
cyberattacks become more stealthy, this would reduce the threat of
retaliation from an attacked party, potentially making attacks more
likely. If stealthy attacks do happen, they might incite actors to
mistakenly retaliate against unrelated third parties they suspect to be
responsible. This could increase the scope of the conflict
dramatically.</p>
<h3 id="automated-warfare">Automated Warfare</h3>
<p><strong>AIs speed up the pace of war, which makes AIs more
necessary.</strong> AIs can quickly process a large amount of data,
analyze complex situations, and provide helpful insights to commanders.
With ubiquitous sensors and advanced technology on the battlefield,
there is a tremendous amount of incoming information. AIs help make sense of this
information, spotting important patterns and relationships that humans
might miss. As these trends continue, it will become increasingly
difficult for humans to make well-informed decisions as quickly as
necessary to keep pace with AIs. This would further pressure militaries
to hand over decisive control to AIs. The continuous integration of AIs
into all aspects of warfare will cause the pace of combat to become
faster and faster. Eventually, we may arrive at a point where humans are
no longer capable of assessing the ever-changing battlefield situation
and must cede decision-making power to advanced AIs.</p>
<p><strong>Automatic retaliation can escalate accidents into
war.</strong> There is already willingness to let computer systems
retaliate automatically. In 2014, a leak revealed to the public that the
NSA was developing a system called MonsterMind, which would autonomously
detect and block cyberattacks on US infrastructure <span
class="citation" data-cites="zetter2014">[9]</span>. It was suggested
that in the future, MonsterMind could automatically initiate a
retaliatory cyberattack with no human involvement. If multiple
combatants have policies of automatic retaliation, an accident or false
alarm could quickly escalate to full-scale war before humans intervene.
This would be especially dangerous if the superior information
processing capabilities of modern AI systems make it more appealing for
actors to automate decisions regarding nuclear launches.</p>
<p><strong>History shows the danger of automated retaliation.</strong>
On September 26, 1983, Stanislav Petrov, a lieutenant colonel of the
Soviet Air Defense Forces, was on duty at the Serpukhov-15 bunker near
Moscow, monitoring the Soviet Union’s early warning system for incoming
ballistic missiles. The system indicated that the US had launched
multiple nuclear missiles toward the Soviet Union. The protocol at the
time dictated that such an event should be considered a legitimate
attack, and the Soviet Union would respond with a nuclear counterstrike.
If Petrov had passed on the warning to his superiors, this would have
been the likely outcome. Instead, however, he judged it to be a false
alarm and ignored it. It was soon confirmed that the warning had been
caused by a rare technical malfunction. If an AI had been in control,
the false alarm could have triggered a nuclear war.</p>
<p><strong>AI-controlled weapons systems could lead to a flash
war.</strong> Autonomous systems are not infallible. We have already
witnessed how quickly an error in an automated system can escalate in
the economy. Most notably, in the 2010 Flash Crash, a feedback loop
between automated trading algorithms amplified ordinary market
fluctuations into a financial catastrophe in which a trillion dollars of
stock value vanished in minutes <span class="citation"
data-cites="Kirilenko2011TheFC">[10]</span>. If multiple nations were to
use AIs to automate their defense systems, an error could be
catastrophic, triggering a spiral of attacks and counter-attacks that
would happen too quickly for humans to step in—a flash war. The market
quickly recovered from the 2010 Flash Crash, but the harm caused by a
flash war could be catastrophic.</p>
<p><strong>Automated warfare could reduce accountability for military
leaders.</strong> Military leaders may at times gain an advantage on the
battlefield if they are willing to ignore the laws of war. For example,
soldiers may be able to mount stronger attacks if they do not take steps
to minimize civilian casualties. An important deterrent to this behavior
is the risk that military leaders could eventually be held accountable
or even prosecuted for war crimes. Automated warfare could reduce this
deterrence effect by making it easier for military leaders to escape
accountability by blaming violations on failures in their automated
systems.</p>
<p><strong>AIs could make war more uncertain, increasing the risk of
conflict.</strong> Although states that are already wealthier and more
powerful often have more resources to invest in new military
technologies, they are not necessarily always the most successful at
adopting them. Other factors also play an important role, such as how
agile and adaptive a military can be in incorporating new technologies
<span class="citation" data-cites="horowitz2010diffusion">[11]</span>.
Major new weapons innovations can therefore offer an opportunity for
existing superpowers to bolster their dominance, but also for less
powerful states to quickly increase their power by getting ahead in an
emerging and important sphere. This can create significant uncertainty
around if and how the balance of power is shifting, potentially leading
states to incorrectly believe they could gain something from going to
war. Even aside from considerations regarding the balance of power,
rapidly evolving automated warfare would be unprecedented, making it
difficult for actors to evaluate their chances of victory in any
particular conflict. This would increase the risk of miscalculation,
making war more likely.</p>
<h3 id="actors-may-risk-extinction-over-individual-defeat">Actors May
Risk Extinction Over Individual Defeat</h3>
<br>
<p> <em> “I know not with what weapons World
War III will be fought, but World War IV will be fought with sticks and
stones.” - Einstein</em></p>
<br>
<p><strong>Competitive pressures make actors more willing to accept the
risk of extinction.</strong> During the Cold War, neither side desired
the dangerous situation they found themselves in. There were widespread
fears that nuclear weapons could be powerful enough to wipe out a large
fraction of humanity, potentially even causing extinction—a catastrophic
result for both sides. Yet the intense rivalry and geopolitical tensions
between the two superpowers fueled a dangerous cycle of arms buildup.
Each side perceived the other’s nuclear arsenal as a threat to its very
survival, leading to a desire for parity and deterrence. The competitive
pressures pushed both countries to continually develop and deploy more
advanced and destructive nuclear weapons systems, driven by the fear of
being at a strategic disadvantage. During the Cuban Missile Crisis, this
led to the brink of nuclear war. Even though the story of Arkhipov
preventing the launch of a nuclear torpedo wasn’t declassified until
decades after the incident, President John F. Kennedy reportedly
estimated that he thought the odds of nuclear war beginning during that
time were “somewhere between one out of three and even.” This chilling
admission highlights how the competitive pressures between militaries
have the potential to cause global catastrophes.</p>
<p><strong>Individually rational decisions can be collectively
catastrophic.</strong> Nations locked in competition might make
decisions that advance their own interests while putting the rest of the
world at risk. Scenarios of this kind are collective action problems,
where decisions may be rational on an individual level yet disastrous
for the larger group <span class="citation"
data-cites="Jervis1978CooperationUT">[12]</span>. For example,
corporations and individuals may weigh their own profits and convenience
over the negative impacts of the emissions they create, even if those
emissions collectively result in climate change. The same principle can
be extended to military strategy and defense systems. Military leaders
might estimate, for instance, that increasing the autonomy of weapon
systems would mean a 10 percent chance of losing control over weaponized
superhuman AIs. Alternatively, they might estimate that using AIs to
automate bioweapons research could lead to a 10 percent chance of
leaking a deadly pathogen. Both of these scenarios could lead to
catastrophe or even extinction. The leaders may, however, also calculate
that refraining from these developments will mean a 99 percent chance of
losing a war against an opponent. Since conflicts are often viewed as
existential struggles by those fighting them, rational actors may accept
an otherwise unthinkable 10 percent chance of human extinction over a 99
percent chance of losing a war. Regardless of the particular nature of
the risks posed by advanced AIs, these dynamics could push us to the
brink of global catastrophe.</p>
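<p>To make the reasoning above concrete, consider a minimal
expected-value sketch; the utilities are illustrative assumptions of ours,
not figures from the text. Suppose a leader treats losing a war as an
existential loss and assigns a utility of <span
class="math inline">0</span> to prevailing and <span
class="math inline">−1</span> both to defeat and to extinction. Automating
weapons then has an expected utility of <span
class="math inline">0.9 × 0 + 0.1 × (−1) = −0.1</span>, while refraining
has <span class="math inline">0.01 × 0 + 0.99 × (−1) = −0.99</span>. Even
if the leader rates extinction several times worse than defeat, say <span
class="math inline">−5</span>, automation still scores <span
class="math inline">−0.5</span> against <span
class="math inline">−0.99</span>. The individually rational choice is
therefore to accept the extinction risk, even though its true cost to the
world dwarfs the cost of any single nation losing a war.</p>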
<p><strong>Technological superiority does not guarantee national
security.</strong> It is tempting to think that the best way of guarding
against enemy attacks is to improve one’s own military prowess. However,
in the midst of competitive pressures, all parties will tend to advance
their weaponry, such that no one gains much of an advantage, but all are
left at greater risk. As Richard Danzig, former Secretary of the Navy,
has observed, “The introduction of complex, opaque, novel, and
interactive technologies will produce accidents, emergent effects, and
sabotage. On a number of occasions and in a number of ways, the American
national security establishment will lose control of what it creates...
deterrence is a strategy for reducing attacks, not accidents” <span
class="citation" data-cites="Danzig2018Technology">[13]</span>.</p>
<p><strong>Cooperation is paramount to reducing risk.</strong> As
discussed above, an AI arms race can lead us down a hazardous path,
despite this being in no country’s best interest. It is important to
remember that we are all on the same side when it comes to existential
risks, and working together to prevent them is a collective necessity. A
destructive AI arms race benefits nobody, so all actors would be
rational to take steps to cooperate with one another to prevent the
riskiest applications of militarized AIs. As Dwight D. Eisenhower
reminded us, “The only way to win World War III is to prevent it.”<p/>
<p>We have considered how competitive pressures could lead to the
increasing automation of conflict, even if decision-makers are aware of
the existential threat that this path entails. We have also discussed
cooperation as being the key to counteracting and overcoming this
collective action problem. We will now illustrate a hypothetical path to
disaster that could result from an AI arms race.</p>
<br>
<div class="storybox">
<legend class="storyboxlegend">
<span><b>Story: Automated Warfare</b></span>
</legend>
<p>As AI systems become increasingly sophisticated, militaries start
involving them in decision-making processes. Officials give them
military intelligence about opponents’ arms and strategies, for example,
and ask them to calculate the most promising plan of action. It soon
becomes apparent that AIs are reliably reaching better decisions than
humans, so it seems sensible to give them more influence. At the same
time, international tensions are rising, increasing the threat of
war.<p/>
<p>A new military technology has recently been developed that could make
international attacks swifter and stealthier, giving targets less time
to respond. Since military officials feel their response processes take
too long, they fear that they could be vulnerable to a surprise attack
capable of inflicting decisive damage before they would have any chance
to retaliate. Since AIs can process information and make decisions much
more quickly than humans, military leaders reluctantly hand them
increasing amounts of retaliatory control, reasoning that failing to do
so would leave them open to attack from adversaries.<p/>
<p>While for years military leaders had stressed the importance of keeping
a “human in the loop” for major decisions, human control is nonetheless
gradually phased out in the interests of national security. Military
leaders understand that their decisions lead to the possibility of
inadvertent escalation caused by system malfunctions, and would prefer a
world where all countries automated less; but they do not trust that
their adversaries will refrain from automation. Over time, more and more
of the chain of command is automated on all sides.<p/>
<p>One day, a single system malfunctions, detecting an enemy attack when
there is none. The system is empowered to launch an instant
“retaliatory” attack, and it does so in the blink of an eye. The attack
causes automated retaliation from the other side, and so on. Before
long, the situation is spiraling out of control, with waves of automated
attack and retaliation. Although humans have made mistakes leading to
escalation in the past, this escalation between mostly-automated
militaries happens far more quickly than any before. The humans who are
responding to the situation find it difficult to diagnose the source of
the problem, as the AI systems are not transparent. By the time they
even realize how the conflict started, it is already over, with
devastating consequences for both sides.</p>
</div>
<br>
<h2 id="corporate-ai-race">1.3.2 Corporate AI Race</h2>
<br>
<p> <em> “Nothing can be done at once
hastily and prudently.” - Publilius Syrus</em></p>
<br>
<p>Competitive pressures exist in the economy, as well as in military
settings. Although competition between companies can be beneficial,
creating more useful products for consumers, there are also pitfalls.
First, the benefits of economic activity may be unevenly distributed,
incentivizing those who benefit most from it to disregard the harms to
others. Second, under intense market competition, businesses tend to
focus much more on short-term gains than on long-term outcomes. With
this mindset, companies often pursue something that can make a lot of
profit in the short term, even if it poses a societal risk in the long
term. We will now discuss how corporate competitive pressures could play
out with AIs and the potential negative impacts.</p>
<h3 id="economic-competition-undercuts-safety">Economic Competition
Undercuts Safety</h3>
<p><strong>Competitive pressure is fueling a corporate AI race.</strong>
To obtain a competitive advantage, companies often race to offer the
first products to a market rather than the safest. These dynamics are
already playing a role in the rapid development of AI technology. At the
launch of Microsoft’s AI-powered search engine in February 2023, the
company’s CEO Satya Nadella said, “A race starts today... we’re going to
move fast.” Only weeks later, the company’s chatbot was shown to have
threatened to harm users <span class="citation"
data-cites="perrigo_bings_2023">[14]</span>. In an internal email, Sam
Schillace, a technology executive at Microsoft, highlighted the urgency
with which companies view AI development. He wrote that it would be an
“absolutely fatal error in this moment to worry about things that can be
fixed later” <span class="citation"
data-cites="grant_i_2023">[15]</span>.</p>
<p><strong>Competitive pressures have contributed to major commercial
and industrial disasters.</strong></p>
<p>Throughout the 1960s, Ford Motor Company faced competition from
international car manufacturers as the share of imports in American car
purchases steadily rose <span class="citation"
data-cites="klier2009tailfins">[16]</span>. Ford developed an ambitious
plan to design and manufacture a new car model in only 25 months <span
class="citation" data-cites="sherefkin2003ford">[17]</span>. The Ford
Pinto was delivered to customers ahead of schedule, but with a serious
safety problem: the gas tank was located near the rear bumper, and could
explode during rear collisions. Numerous fatalities and injuries were
caused by the resulting fires when crashes inevitably happened <span
class="citation" data-cites="strobel_reckless_1980">[18]</span>. Ford
was sued and a jury found them liable for these deaths and injuries
<span class="citation" data-cites="noauthor_grimshaw_1981">[19]</span>.
The verdict, of course, came too late for those who had already lost
their lives. As Ford’s president at the time was fond of saying, “Safety
doesn’t sell” <span class="citation"
data-cites="judge_selling_1990">[20]</span>.<p/>
Boeing, aiming to compete with its rival Airbus, sought to deliver an
updated, more fuel-efficient model to the market as quickly as possible.
The head-to-head rivalry and time pressure led to the introduction of
the Maneuvering Characteristics Augmentation System, which was designed
to enhance the aircraft’s stability. However, inadequate testing and
pilot training ultimately resulted in two fatal crashes of the 737 MAX only months
apart, with 346 people killed <span class="citation"
data-cites="leggett_737_2023">[21]</span>. We can imagine a future in
which similar pressures lead companies to cut corners and release unsafe
AI systems.<p/>
A third example is the Bhopal gas tragedy, which is widely considered to
be the worst industrial disaster ever to have happened. In December
1984, a vast quantity of toxic gas leaked from a Union Carbide
Corporation subsidiary plant manufacturing pesticides in Bhopal, India.
Exposure to the gas killed thousands of people and injured up to half a
million more. Investigations found that, in the run-up to the disaster,
safety standards had fallen significantly, with the company cutting
costs by neglecting equipment maintenance and staff training as
profitability fell. This is often considered a consequence of
competitive pressures <span class="citation"
data-cites="broughton_bhopal_2005">[22]</span>.</p>
<p><strong>Competition incentivizes businesses to deploy potentially
unsafe AI systems.</strong> In an environment where businesses are
rushing to develop and release products, those that follow rigorous
safety procedures will be slower and risk being out-competed.
Ethically-minded AI developers, who want to proceed more cautiously and
slow down, would give more unscrupulous developers an advantage. In
trying to survive commercially, even the companies that want to take
more care are likely to be swept along by competitive pressures. There
may be attempts to implement safety measures, but with more of an
emphasis on capabilities than on safety, these may be insufficient. This
could lead us to develop highly powerful AIs before we properly
understand how to ensure they are safe.</p>
<h3 id="automated-economy">Automated Economy</h3>
<p><strong>Corporations will face pressure to replace humans with
AIs.</strong> As AIs become more capable, they will be able to perform
an increasing variety of tasks more quickly, cheaply, and effectively
than human workers. Companies will therefore stand to gain a competitive
advantage from replacing their employees with AIs. Companies that choose
not to adopt AIs would likely be out-competed, just as a clothing
company using manual looms would be unable to keep up with those using
industrial ones.</p>
<p><strong>AIs could lead to mass unemployment.</strong> Economists have
long considered the possibility that machines will replace human labor.
Nobel Prize winner Wassily Leontief said in 1983 that, as technology
advances, “Labor will become less and less important... more and more
workers will be replaced by machines” <span class="citation"
data-cites="curtis_machines_1983">[23]</span>. Previous technologies
have augmented the productivity of human labor. AIs, however, could
differ profoundly from previous innovations. Advanced AIs capable of
automating human labor should be regarded not merely as tools, but as
agents. Human-level AI agents would, by definition, be able to do
everything a human could do. These AI agents would also have important
advantages over human labor. They could work 24 hours a day, be copied
many times and run in parallel, and process information much more
quickly than a human would. While we do not know when this will occur,
it is unwise to discount the possibility that it could be soon. If human
labor is replaced by AIs, mass unemployment could dramatically increase
inequality, making individuals dependent on the owners of AI
systems.</p>
<p><strong>Automated AI R&D.</strong> AI agents would have the
potential to automate the research and development (R&D) of AI
itself. AI is increasingly automating parts of the research process
<span class="citation" data-cites="woodside2023examples">[24]</span>,
and this could lead to AI capabilities growing at increasing rates, to
the point where humans are no longer the driving force behind AI
development. If this trend continues unchecked, it could escalate risks
associated with AIs progressing faster than our capacity to manage and
regulate them. Imagine that we created an AI that writes and thinks at
the speed of today’s AIs but could also perform world-class AI
research. We could then copy that AI and create <span
class="math inline">10,000</span> world-class AI researchers that
operate at a pace <span class="math inline">100×</span> faster
than humans. By automating AI research and development, we might achieve
progress equivalent to many decades in just a few months.</p>
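<p>A rough back-of-the-envelope calculation illustrates the scale; the
size assumed here for the human research community is our own illustrative
figure. A single AI researcher running <span
class="math inline">100×</span> faster performs about <span
class="math inline">100</span> researcher-years of work per calendar year,
so <span class="math inline">10,000</span> copies perform about <span
class="math inline">1,000,000</span> researcher-years per year, or roughly
<span class="math inline">250,000</span> researcher-years in three months.
If the pool of world-class human AI researchers numbers on the order of
<span class="math inline">10,000</span>, that amounts to decades of
field-wide effort compressed into a single quarter.</p>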
<p><strong>Conceding power to AIs could lead to human
enfeeblement.</strong> Even if we ensure that the many unemployed humans
are provided for, we may find ourselves completely reliant on AIs. This
would likely emerge not from a violent coup by AIs, but from a gradual
slide into dependence. As society’s challenges become ever more complex
and fast-paced, and as AIs become ever more intelligent and
quick-thinking, we may forfeit more and more functions to them out of
convenience. In such a state, the only feasible solution to the
complexities and challenges compounded by AIs may be to rely even more
heavily on AIs. This gradual process could eventually lead to the
delegation of nearly all intellectual, and eventually physical, labor to
AIs. In such a world, people might have few incentives to gain knowledge
and cultivate skills, potentially leading to a state of enfeeblement
<span class="citation" data-cites="Russell2019HumanCA">[25]</span>.
Having lost our know-how and our understanding of how civilization
works, we would become completely dependent on AIs, a scenario not
unlike the one depicted in the film <em>WALL-E</em>. In such a state, humanity is
not flourishing and is no longer in effective control.<p/>
As we have seen, there are classic game-theoretic dilemmas where
individuals and groups face incentives that are incompatible with what
would make everyone better off. We see this with a military AI arms
race, where the world is made less safe by creating extremely powerful
AI weapons, and we see this in a corporate AI race, where an AI’s power
and development is prioritized over its safety. To address these
dilemmas that give rise to global risks, we will need new coordination
mechanisms and institutions. It is our view that failing to coordinate
and stop AI races would be the most likely cause of an existential
catastrophe.</p>
<h2 id="evolutionary-pressures">1.3.3 Evolutionary Pressures</h2>
<p>As discussed above, there are strong pressures to replace humans with
AIs, cede more control to them, and reduce human oversight in various
settings, despite the potential harms. We can re-frame this as a general
trend resulting from evolutionary dynamics, where an unfortunate truth
is that AIs will simply be more fit than humans. Extrapolating this
pattern of automation, it is likely that we will build an ecosystem of
competing AIs over which it may be difficult to maintain control in the
long run. We will now discuss how natural selection influences the
development of AI systems and why evolution favors selfish behaviors. We
will also look at how competition might arise and play out between AIs
and humans, and how this could create catastrophic risks. This section
draws heavily from “<em>Natural Selection Favors AIs over Humans</em>”
<span class="citation"
data-cites="Hendrycks2023NaturalSF hendryckstime2023">[26],
[27]</span>.</p>
<p><strong>Fitter technologies are selected, for good and bad.</strong>
While most people think of evolution by natural selection as a
biological process, its principles shape much more. According to the
evolutionary biologist Richard Lewontin <span class="citation"
data-cites="Lewontin1970THEUO">[28]</span>, evolution by natural
selection will take hold in any environment where three conditions are
present: 1) there are differences between individuals; 2)
characteristics are passed on to future generations; and 3) the different
variants propagate at different rates. These conditions apply to various
technologies.<p/>
Consider the content-recommendation algorithms used by streaming
services and social media platforms. When a particularly addictive
content format or algorithm hooks users, it results in higher screen
time and engagement. This more effective content format or algorithm is
consequently “selected” and further fine-tuned, while formats and
algorithms that fail to capture attention are discontinued. These
competitive pressures foster a “survival of the most addictive” dynamic.
Platforms that refuse to use addictive formats and algorithms become
less influential or are simply outcompeted by platforms that do, leading
competitors to undermine wellbeing and cause massive harm to society
<span class="citation" data-cites="kross2013facebook">[29]</span>.</p>
<p><strong>The conditions for natural selection apply to AIs.</strong>
First, there will be many different AI developers who make many different AI
systems with varying features and capabilities, and competition between
them will determine which characteristics become more common. Second,
the most successful AIs today are already being used as a basis for
their developers’ next generation of models, as well as being imitated
by rival companies. Third, factors determining which AIs propagate the
most may include their ability to act autonomously, automate labor, or
reduce the chance of their own deactivation.</p>
<p><strong>Natural selection often favors selfish
characteristics.</strong> Natural selection influences which AIs
propagate most widely. From biological systems, we see that natural
selection often gives rise to selfish behaviors that promote one’s own
genetic information: chimps attack other communities <span
class="citation"
data-cites="Martnezigo2021IntercommunityIA">[30]</span>, lions engage in
infanticide <span class="citation"
data-cites="pusey1994infanticide">[31]</span>, viruses evolve new
surface proteins to deceive and bypass defense barriers <span
class="citation" data-cites="Nagy2011TheDO">[32]</span>, humans engage
in nepotism, some ants enslave others <span class="citation"
data-cites="Buschinger2009SocialPA">[33]</span>, and so on. In the
natural world, selfishness often emerges as a dominant strategy; those
that prioritize themselves and those similar to them are usually more
likely to survive, so these traits become more prevalent. Amoral
competition can select for traits that we think are immoral.</p>
<p><strong>Examples of selfish behaviors.</strong> For concreteness, we
now describe many selfish traits—traits that expand AIs’ influence at
the expense of humans. AIs that automate a task and leave many humans
jobless have engaged in selfish behavior; these AIs may not even be
aware of what a human is but still behave selfishly towards them—selfish
behaviors do not require malicious intent. Likewise, AI managers may
engage in selfish and “ruthless” behavior by laying off thousands of
workers; such AIs may not even believe they did anything wrong—they were
just being “efficient.” AIs may eventually become enmeshed in vital
infrastructure such as power grids or the internet. Many people may then
be unwilling to accept the cost of preserving the ability to effortlessly deactivate
them, as deactivation would pose a reliability hazard. AIs that help create a
new useful system—a company, or infrastructure—that becomes increasingly
complicated and eventually requires AIs to operate it have also
engaged in selfish behavior. AIs that help people develop AIs that are
more intelligent—but happen to be less interpretable to humans—have
engaged in selfish behavior, as this reduces human control over AIs’
internals. AIs that are more charming, attractive, hilarious, imitate
sentience (uttering phrases like “ouch!” or pleading “please don’t turn
me off!”), or emulate deceased family members are more likely to have
humans grow emotional connections with them. These AIs are more likely
to cause outrage at suggestions to destroy them, and they are more
likely to be preserved, protected, or granted rights by some individuals. If
some AIs are given rights, they may operate, adapt, and evolve outside
of human control. Overall, AIs could become embedded in human society
and expand their influence over us in ways that we can’t reverse.</p>
<p><strong>Selfish behaviors may erode safety measures that some of us
implement.</strong> AIs that gain influence and provide economic value
will predominate, while AIs that adhere to the most constraints will be
less competitive. For example, AIs following the constraint “never break
the law” have fewer options than AIs following the constraint “don’t get
caught breaking the law.” AIs of the latter type may be willing to break
the law if they’re unlikely to be caught or if the fines are not severe
enough, allowing them to outcompete more restricted AIs. Many businesses
follow laws, but in situations where stealing trade secrets or deceiving
regulators is highly lucrative and difficult to detect, a business that
is willing to engage in such selfish behavior can have an advantage over
its more principled competitors.<p/>
An AI system might be prized for its ability to achieve ambitious goals
autonomously. It might, however, be achieving its goals efficiently
without abiding by ethical restrictions, while deceiving humans about
its methods. Even if we try to put safety measures in place, a deceptive
AI would be very difficult to counteract if it is cleverer than us. AIs
that can bypass our safety measures without detection may be the most
successful at accomplishing the tasks we give them, and therefore become
widespread. These processes could culminate in a world where many
aspects of major companies and infrastructure are controlled by powerful
AIs with selfish traits, including deceiving humans, harming humans in
service of their goals, and preventing themselves from being
deactivated.</p>
<p><strong>Humans only have nominal influence over AI
selection.</strong> One might think we could avoid the development of
selfish behaviors by ensuring we do not select AIs that exhibit them.
However, the companies developing AIs are not selecting the safest path
but instead succumbing to evolutionary pressures. One example is OpenAI,
which was founded as a nonprofit in 2015 to “benefit humanity as a
whole, unconstrained by a need to generate financial return” <span
class="citation" data-cites="openai_introducing_2015">[34]</span>.
However, when faced with the need to raise capital to keep up with
better-funded rivals, in 2019 OpenAI transitioned from a nonprofit to
a “capped-profit” structure <span class="citation"
data-cites="coldewey_openai_2019">[35]</span>. Later, many of the
safety-focused OpenAI employees left and formed a competitor, Anthropic,
that was to focus more heavily on AI safety than OpenAI had. Although
Anthropic originally focused on safety research, they eventually became
convinced of the “necessity of commercialization” and now contribute to
competitive pressures <span class="citation"
data-cites="singh_anthropics_2023">[36]</span>. While many of the
employees at those companies genuinely care about safety, these values
do not stand a chance against evolutionary pressures, which compel
companies to move ever more hastily and seek ever more influence, lest
the company perish. Moreover, AI developers are already selecting AIs
with increasingly selfish traits. They are selecting AIs to automate and
displace humans, make humans highly dependent on AIs, and make humans
more and more obsolete. By their own admission, future versions of these
AIs may lead to extinction <span class="citation"
data-cites="cais2023">[37]</span>. This is why an AI race is insidious:
AI development is not being aligned with human values but rather with
natural selection.<p/>
People often choose the products that are most useful and convenient to
them immediately, rather than thinking about potential long-term
consequences, even to themselves. An AI race puts pressure on companies
to select the AIs that are most competitive, not the least selfish. Even
if it’s feasible to select for unselfish AIs, if it comes at a clear
cost to competitiveness, some competitors will select the selfish AIs.
Furthermore, as we have mentioned, if AIs develop strategic awareness,
they may counteract our attempts to select against them. Moreover, as
AIs increasingly automate various processes, AIs will affect the
competitiveness of other AIs, not just humans. AIs will interact and
compete with each other, and some will be put in charge of the
development of other AIs at some point. Giving AIs influence over which
other AIs should be propagated and how they should be modified would
represent another step toward humans becoming dependent on AIs and AI
evolution becoming increasingly independent from humans. As this
continues, the complex process governing AI evolution will become
further unmoored from human interests.</p>
<p><strong>AIs can be more fit than humans.</strong> Our unmatched
intelligence has granted us power over the natural world. It has enabled
us to land on the moon, harness nuclear energy, and reshape landscapes
at our will. It has also given us power over other species. Although a
single unarmed human competing against a tiger or gorilla has no chance
of winning, the collective fate of these animals is entirely in our
hands. Our cognitive abilities have proven so advantageous that, if we
chose to, we could cause them to go extinct in a matter of weeks.
Intelligence was a key factor that led to our dominance, but we are
currently standing on the precipice of creating entities far more
intelligent than ourselves.<p/>
Given the exponential increase in microprocessor speeds, AIs have the
potential to process information and “think” at a pace that far
surpasses that of human neurons. The gap could be even more dramatic than the
speed difference between humans and sloths—possibly more like the speed
difference between humans and plants. They can assimilate vast
quantities of data from numerous sources simultaneously, with
near-perfect retention and understanding. They do not need to sleep and
they do not get bored. Due to the scalability of computational
resources, an AI could interact and cooperate with an unlimited number
of other AIs, potentially creating a collective intelligence that would
far outstrip human collaborations. AIs could also deliberately update
and improve themselves. Without the same biological restrictions as
humans, they could adapt and therefore evolve unspeakably quickly
compared with us. Computers are becoming faster. Humans aren’t <span
class="citation" data-cites="danzig_aum_2012">[38]</span>.<p/>
To further illustrate the point, imagine that there was a new species of
humans. They do not die of old age, they get 30% faster at thinking and
acting each year, and they can instantly create adult offspring for the
modest sum of a few thousand dollars. It seems clear, then, that this new
species would eventually have more influence over the future. In sum,
AIs could become like an invasive species, with the potential to
out-compete humans. Our only advantage over AIs is that we get to make the first moves, but given the frenzied AI race, we are rapidly
giving up even this advantage.</p>
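<p>The compounding in this thought experiment is worth spelling out.
Improving by 30 percent per year means multiplying speed by <span
class="math inline">1.3</span> annually, so after <span
class="math inline">n</span> years the new species thinks and acts <span
class="math inline">1.3<sup>n</sup></span> times faster: roughly <span
class="math inline">14×</span> after one decade and nearly <span
class="math inline">190×</span> after two. Modest-sounding annual gains,
if sustained, quickly yield a species operating on an entirely different
timescale.</p>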
<p><strong>AIs would have little reason to cooperate with or be
altruistic toward humans.</strong> Cooperation and altruism evolved
because they increase fitness. There are numerous reasons why humans
cooperate with other humans, like direct reciprocity. Also known as
“quid pro quo,” direct reciprocity can be summed up by the idiom “you
scratch my back, I’ll scratch yours.” While humans would initially
select AIs that were cooperative, the natural selection process would
eventually go beyond our control, once AIs were in charge of many or
most processes, and interacting predominantly with one another. At that
point, there would be little we could offer AIs, given that they will be
able to “think” at least hundreds of times faster than us. Involving us
in any cooperation or decision-making processes would simply slow them
down, giving them no more reason to cooperate with us than we do with
gorillas. It might be difficult to imagine a scenario like this or to
believe we would ever let it happen. Yet it may not require any
conscious decision, instead arising as we allow ourselves to gradually
drift into this state without realizing that human-AI co-evolution may
not turn out well for humans.</p>
<p><strong>AIs becoming more powerful than humans could leave us highly
vulnerable.</strong> As the most dominant species, humans have
deliberately harmed many other species, and helped drive species such as
woolly mammoths and Neanderthals to extinction. In many cases, the harm
was not even deliberate, but instead a result of us merely prioritizing
our goals over their wellbeing. To harm humans, AIs wouldn’t need to be
any more genocidal than someone removing an ant colony on their front
lawn. If AIs are able to control the environment more effectively than
we can, they could treat us with the same disregard.</p>
<p><strong>Conceptual summary.</strong> Evolution could cause the most
influential AI agents to act selfishly because:</p>
<ol>
<li><p><strong>Evolution by natural selection gives rise to selfish
behavior.</strong> While evolution can result in altruistic behavior in
rare situations, the context of AI development does not promote
altruistic behavior.</p></li>
<li><p><strong>Natural selection may be a dominant force in AI
development.</strong> The intensity of evolutionary pressure will be
high if AIs adapt rapidly or if competitive pressures are intense.
Competition and selfish behaviors may dampen the effects of human safety
measures, leaving the surviving AI designs to be selected
naturally.</p></li>
</ol>
<p>If so, AI agents would have many selfish tendencies. The winner of
the AI race would not be a nation-state or a corporation, but AIs
themselves. The upshot is that the AI ecosystem would eventually stop
evolving on human terms, and we would become a displaced, second-class
species.</p>
<br>
<div class="storybox">
<legend class="storyboxlegend">
<span> <b>Story: Autonomous Economy</b></span>
</legend>
As AIs become more capable,
people realize that we could work more efficiently by delegating some
simple tasks to them, like drafting emails. Over time, people notice
that the AIs are doing these tasks more quickly and effectively than any
human could, so it is convenient to give them more jobs with less and
less supervision.<p/>
Competitive pressures accelerate the expansion of AI use, as companies
can gain an advantage over rivals by automating whole processes or
departments with AIs, which perform better than humans and cost less to
employ. Other companies, faced with the prospect of being out-competed,
feel compelled to follow suit just to keep up. At this point, natural
selection is already at work among AIs; humans choose to make more of
the best-performing models and unwittingly propagate selfish traits such
as deception and self-preservation if these confer a fitness advantage.
For example, AIs that are charming and foster personal relationships
with humans become widely copied and harder to remove.<p/>
As AIs are put in charge of more and more decisions, they are
increasingly interacting with one another. Since they can evaluate
information much more quickly than humans, activity in most spheres
accelerates. This creates a feedback loop: since business and economic
developments are too fast-moving for humans to follow, it makes sense to
cede yet more control to AIs instead, pushing humans further out of
important processes. Ultimately, this leads to a fully autonomous
economy, governed by an increasingly uncontrolled ecosystem of
AIs.<p/>
At this point, humans have few incentives to gain any skills or
knowledge, because almost everything would be taken care of by much more
capable AIs. As a result, we eventually lose the capacity to look after
and govern ourselves. Additionally, AIs become convenient companions,
offering social interaction without requiring the reciprocity or
compromise necessary in human relationships. Humans interact less and
less with one another over time, losing vital social skills and the
ability to cooperate. People become so dependent on AIs that it would be
intractable to reverse this process. What’s more, as some AIs become
more intelligent, some people are convinced these AIs should be given
rights, meaning turning off some AIs is no longer a viable option.<p/>
Competitive pressures between the many interacting AIs continue to
select for selfish behaviors, though we might be oblivious to this
happening, as we have already relinquished much of our oversight. If these
clever, powerful, self-preserving AIs were then to start acting in
harmful ways, it would be all but impossible to deactivate them or
regain control.<p/>
AIs have supplanted humans as the most dominant species and their
continued evolution is far beyond our influence. Their selfish traits
eventually lead them to pursue their goals without regard for human
wellbeing, with catastrophic consequences.</p>
</div>
<br>
<br>
<h3>References</h3>
<div id="refs" class="references csl-bib-body" data-entry-spacing="0"
role="list">
<div id="ref-scharre2018" class="csl-entry" role="listitem">
<div class="csl-left-margin">[1] P.
Scharre, <em>Army of none: Autonomous weapons and the future of
war</em>. Norton, 2018.</div>
</div>
<div id="ref-dogfight" class="csl-entry" role="listitem">
<div class="csl-left-margin">[2] </div><div
class="csl-right-inline">DARPA, <span>“AlphaDogfight trials foreshadow
future of human-machine symbiosis,”</span> 2020.</div>
</div>
<div id="ref-UnitedNations2021" class="csl-entry" role="listitem">
<div class="csl-left-margin">[3] P.
of Experts on Libya, <span>“Letter dated 8 march 2021 from the panel of
experts on libya established pursuant to resolution 1973 (2011)
addressed to the president of the security council,”</span> United
Nations, United Nations Security Council Document S/2021/229, Mar.
2021.</div>
</div>
<div id="ref-hambling2021israel" class="csl-entry" role="listitem">
<div class="csl-left-margin">[4] D.
Hambling, <span>“Israel used world’s first AI-guided combat drone swarm
in Gaza attacks.”</span></div>
</div>
<div id="ref-kallenborn2021applying" class="csl-entry" role="listitem">
<div class="csl-left-margin">[5] Z.
Kallenborn, <span>“Applying arms-control frameworks to autonomous
weapons,”</span> <em>Brookings</em>. Oct. 2021.</div>
</div>
<div id="ref-mueller1985war" class="csl-entry" role="listitem">
<div class="csl-left-margin">[6] J.
E. Mueller, <em>War, presidents, and public opinion</em>. in UPA book.
University Press of America, 1985.</div>
</div>
<div id="ref-bonfanti2022ai" class="csl-entry" role="listitem">
<div class="csl-left-margin">[7] M.
E. Bonfanti, <span>“Artificial intelligence and the offense–defense
balance in cyber security,”</span> in <em>Cyber security politics:
Socio-technological transformations and political fragmentation</em>, M.
D. Cavelty and A. Wenger, Eds., in CSS studies in security and
international relations., Taylor & Francis, 2022, pp. 64–79.</div>
</div>
<div id="ref-MIRSKY2023103006" class="csl-entry" role="listitem">
<div class="csl-left-margin">[8] Y.
Mirsky <em>et al.</em>, <span>“The threat of offensive AI to
organizations,”</span> <em>Computers & Security</em>, 2023.</div>
</div>
<div id="ref-zetter2014" class="csl-entry" role="listitem">
<div class="csl-left-margin">[9] K.
Zetter, <span>“Meet MonsterMind, the NSA bot that could wage cyberwar
autonomously,”</span> <em>Wired</em>, Aug. 2014.</div>
</div>
<div id="ref-Kirilenko2011TheFC" class="csl-entry" role="listitem">
<div class="csl-left-margin">[10] A.
Kirilenko, A. S. Kyle, M. Samadi, and T. Tuzun, <span>“The
<span>Flash</span> <span>Crash</span>:
<span>High</span>-<span>Frequency</span> <span>Trading</span> in an
<span>Electronic</span> <span>Market</span>,”</span> <em>The Journal of
Finance</em>, vol. 72, no. 3, pp. 967–998, 2017.</div>
</div>
<div id="ref-horowitz2010diffusion" class="csl-entry" role="listitem">
<div class="csl-left-margin">[11] M.
C. Horowitz, <em>The diffusion of military power: Causes and
consequences for international politics</em>. Princeton University
Press, 2010.</div>
</div>
<div id="ref-Jervis1978CooperationUT" class="csl-entry" role="listitem">
<div class="csl-left-margin">[12] R.
E. Jervis, <span>“Cooperation under the security dilemma,”</span>
<em>World Politics</em>, vol. 30, pp. 167–214, 1978.</div>
</div>
<div id="ref-Danzig2018Technology" class="csl-entry" role="listitem">
<div class="csl-left-margin">[13] R.
Danzig, <span>“Technology roulette: Managing loss of control as many
militaries pursue technological superiority,”</span> Center for a New
American Security, 2018.</div>
</div>
<div id="ref-perrigo_bings_2023" class="csl-entry" role="listitem">
<div class="csl-left-margin">[14] B.
Perrigo, <span>“Bing’s <span>AI</span> <span>Is</span>
<span>Threatening</span> <span>Users</span>. <span>That</span>’s
<span>No</span> <span>Laughing</span> <span>Matter</span>,”</span>
<em>Time</em>. Feb. 2023.</div>
</div>
<div id="ref-grant_i_2023" class="csl-entry" role="listitem">
<div class="csl-left-margin">[15] N.
Grant and K. Weise, <span>“In <span>A</span>.<span>I</span>.
<span>Race</span>, <span>Microsoft</span> and <span>Google</span>
<span>Choose</span> <span>Speed</span> <span>Over</span>
<span>Caution</span>,”</span> <em>The New York Times</em>, Apr.
2023.</div>
</div>
<div id="ref-klier2009tailfins" class="csl-entry" role="listitem">
<div class="csl-left-margin">[16] T.
H. Klier, <span>“From tail fins to hybrids: How Detroit lost its
dominance of the U.S. auto market,”</span> <em>RePEc</em>, May
2009.</div>
</div>
<div id="ref-sherefkin2003ford" class="csl-entry" role="listitem">
<div class="csl-left-margin">[17] R.
Sherefkin, <span>“Ford 100: Defective pinto almost took ford’s
reputation with it,”</span> <em>Automotive News</em>, 2003.</div>
</div>
<div id="ref-strobel_reckless_1980" class="csl-entry" role="listitem">
<div class="csl-left-margin">[18] L.
Strobel, <em>Reckless <span>Homicide</span>?: <span>Ford</span>’s
<span>Pinto</span> <span>Trial</span></em>. And Books, 1980.</div>
</div>
<div id="ref-noauthor_grimshaw_1981" class="csl-entry" role="listitem">
<div class="csl-left-margin">[19] </div><div
class="csl-right-inline"><span>“Grimshaw v. <span>Ford</span>
<span>Motor</span> <span>Co</span>.”</span> May 1981.</div>
</div>
<div id="ref-judge_selling_1990" class="csl-entry" role="listitem">
<div class="csl-left-margin">[20] P.
C. Judge, <span>“Selling <span>Autos</span> by <span>Selling</span>
<span>Safety</span>,”</span> <em>The New York Times</em>, Jan.
1990.</div>
</div>
<div id="ref-leggett_737_2023" class="csl-entry" role="listitem">
<div class="csl-left-margin">[21] T.
Leggett, <span>“737 <span>Max</span> crashes: <span>Boeing</span> says
not guilty to fraud charge,”</span> <em>BBC News</em>, Jan. 2023.</div>
</div>
<div id="ref-broughton_bhopal_2005" class="csl-entry" role="listitem">
<div class="csl-left-margin">[22] E.
Broughton, <span>“The <span>Bhopal</span> disaster and its aftermath: A
review,”</span> <em>Environmental Health</em>, vol. 4, no. 1, p. 6, May
2005.</div>
</div>
<div id="ref-curtis_machines_1983" class="csl-entry" role="listitem">
<div class="csl-left-margin">[23] C.
Curtis, <span>“Machines vs. <span>Workers</span>,”</span> <em>The New
York Times</em>, Feb. 1983.</div>
</div>
<div id="ref-woodside2023examples" class="csl-entry" role="listitem">
<div class="csl-left-margin">[24] T.
Woodside <em>et al.</em>, <span>“Examples of AI improving AI,”</span>
2023, Available: <a
href="https://ai-improving-ai.safe.ai">https://ai-improving-ai.safe.ai</a></div>
</div>
<div id="ref-Russell2019HumanCA" class="csl-entry" role="listitem">
<div class="csl-left-margin">[25] S.
Russell, <em>Human <span>Compatible</span>: <span>Artificial</span>
<span>Intelligence</span> and the <span>Problem</span> of
<span>Control</span></em>. Penguin, 2019.</div>
</div>
<div id="ref-Hendrycks2023NaturalSF" class="csl-entry" role="listitem">
<div class="csl-left-margin">[26] D.
Hendrycks, <span>“Natural selection favors AIs over humans,”</span>
<em>ArXiv</em>, vol. abs/2303.16200, 2023.</div>
</div>
<div id="ref-hendryckstime2023" class="csl-entry" role="listitem">
<div class="csl-left-margin">[27] D.
Hendrycks, <span>“The <span>Darwinian</span> <span>Argument</span> for
<span>Worrying</span> <span>About</span> <span>AI</span>,”</span>
<em>Time</em>. May 2023.</div>
</div>
<div id="ref-Lewontin1970THEUO" class="csl-entry" role="listitem">
<div class="csl-left-margin">[28] R.
C. Lewontin, <span>“The units of selection,”</span> <em>Annual Review of
Ecology, Evolution, and Systematics</em>, vol. 1, pp. 1–18, 1970.</div>
</div>
<div id="ref-kross2013facebook" class="csl-entry" role="listitem">
<div class="csl-left-margin">[29] E.
Kross <em>et al.</em>, <span>“Facebook use predicts declines in
subjective well-being in young adults,”</span> <em>PloS one</em>,
2013.</div>
</div>
<div id="ref-Martnezigo2021IntercommunityIA" class="csl-entry"
role="listitem">
<div class="csl-left-margin">[30] L.
Martínez-Íñigo, P. Baas, H. Klein, S. Pika, and T. Deschner,
<span>“Intercommunity interactions and killings in central chimpanzees
(<em>Pan troglodytes troglodytes</em>) from Loango National Park, Gabon,”</span>
<em>Primates; Journal of Primatology</em>, vol. 62, pp. 709–722,
2021.</div>
</div>
<div id="ref-pusey1994infanticide" class="csl-entry" role="listitem">
<div class="csl-left-margin">[31] A.
E. Pusey and C. Packer, <span>“Infanticide in lions: Consequences and
counterstrategies,”</span> <em>Infanticide and parental care</em>, p.
277, 1994.</div>
</div>
<div id="ref-Nagy2011TheDO" class="csl-entry" role="listitem">
<div class="csl-left-margin">[32] P.
D. Nagy and J. Pogany, <span>“The dependence of viral RNA replication on
co-opted host factors,”</span> <em>Nature Reviews. Microbiology</em>,