Rebuilt site

David Evans committed Nov 20, 2023
1 parent 46d3d20 commit fb9b04b
Showing 3 changed files with 16 additions and 4 deletions.
6 changes: 5 additions & 1 deletion index.html
@@ -267,7 +267,11 @@ <h1><a href="/week12/">Week 12: Regulating Dangerous Technologies</a></h1>
Washington Post, 10 Feb 1999. There is still a lot of uncertainty and
skepticism if we should be fearing any kind of out-of-control AI risk,
but it is not so hard to imagine scenarios where our fate will
similarly come down to an individual&rsquo;s decision at a critical juncture.</p>
similarly come down to an individual&rsquo;s decision at a critical
juncture. (On the other hand, this article argues that we shouldn&rsquo;t
oversensationalize Petrov&rsquo;s actions, that there were many other
safeguards between him and nuclear war, and that we shouldn&rsquo;t design
extinction-level systems to be so fragile that they depend on a single individual&rsquo;s decision: <a href="https://russianforces.org/blog/2022/10/did_stanislav_petrov_save_the_.shtml"><em>Did Stanislav Petrov save the world in 1983? It&rsquo;s complicated</em></a>, from a Russian perspective.)</p>

</div>
<hr class="post-separator"></hr>
6 changes: 5 additions & 1 deletion src/content/post/week12.md
@@ -21,7 +21,11 @@ Gut'_](https://www.washingtonpost.com/wp-srv/inatl/longterm/coldwar/shatter02109
Washington Post, 10 Feb 1999. There is still a lot of uncertainty and
skepticism if we should be fearing any kind of out-of-control AI risk,
but it is not so hard to imagine scenarios where our fate will
similarly come down to an individual's decision at a critical juncture.
similarly come down to an individual's decision at a critical
juncture. (On the other hand, this article argues that we shouldn't
oversensationalize Petrov's actions, that there were many other
safeguards between him and nuclear war, and that we shouldn't design
extinction-level systems to be so fragile that they depend on a single individual's decision: [_Did Stanislav Petrov save the world in 1983? It's complicated_](https://russianforces.org/blog/2022/10/did_stanislav_petrov_save_the_.shtml), from a Russian perspective.)



8 changes: 6 additions & 2 deletions week12/index.html
@@ -105,11 +105,15 @@ <h1 itemprop="name">Week 12: Regulating Dangerous Technologies</h1>
Washington Post, 10 Feb 1999. There is still a lot of uncertainty and
skepticism if we should be fearing any kind of out-of-control AI risk,
but it is not so hard to imagine scenarios where our fate will
similarly come down to an individual&rsquo;s decision at a critical juncture.</p>
similarly come down to an individual&rsquo;s decision at a critical
juncture. (On the other hand, this article argues that we shouldn&rsquo;t
oversensationalize Petrov&rsquo;s actions, that there were many other
safeguards between him and nuclear war, and that we shouldn&rsquo;t design
extinction-level systems to be so fragile that they depend on a single individual&rsquo;s decision: <a href="https://russianforces.org/blog/2022/10/did_stanislav_petrov_save_the_.shtml"><em>Did Stanislav Petrov save the world in 1983? It&rsquo;s complicated</em></a>, from a Russian perspective.)</p>

</div>

<meta itemprop="wordCount" content="214">
<meta itemprop="wordCount" content="273">
<meta itemprop="datePublished" content="2023-11-20">
<meta itemprop="url" content="https://llmrisks.github.io/week12/">
</article>
