From fb9b04b89cacacfa13ef8064536b93ac51fc5794 Mon Sep 17 00:00:00 2001
From: David Evans <>
Date: Mon, 20 Nov 2023 17:06:54 -0500
Subject: [PATCH] Rebuilt site

---
 index.html                 | 6 +++++-
 src/content/post/week12.md | 6 +++++-
 week12/index.html          | 8 ++++++--
 3 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/index.html b/index.html
index 754e0ce..75d222c 100644
--- a/index.html
+++ b/index.html
@@ -267,7 +267,11 @@
 Week 12: Regulating Dangerous Technologies
 Washington Post, 10 Feb 1999. There is still a lot of uncertainty and
 skepticism if we should be fearing any kind of out-of-control AI
 risk, but it is not so hard to imagine scenarios where our fate will
-similarly come down to an individual’s decision at a critical juncture.
+similarly come down to an individual’s decision at a critical
+juncture. (On the other hand, this article argues that we shouldn’t
+oversensationalize Petrov’s actions and there were many other
+safeguards between him and nuclear war, and we really shouldn’t design
+extinction-level systems in a way that they are so fragile to depend on an individual decision: Did Stanislav Petrov save the world in 1983? It’s complicated, from a Russian perspective.)
diff --git a/src/content/post/week12.md b/src/content/post/week12.md
index 33d6d0c..9e8f745 100644
--- a/src/content/post/week12.md
+++ b/src/content/post/week12.md
@@ -21,7 +21,11 @@
 Gut'_](https://www.washingtonpost.com/wp-srv/inatl/longterm/coldwar/shatter02109
 Washington Post, 10 Feb 1999. There is still a lot of uncertainty and
 skepticism if we should be fearing any kind of out-of-control AI
 risk, but it is not so hard to imagine scenarios where our fate will
-similarly come down to an individual's decision at a critical juncture.
+similarly come down to an individual's decision at a critical
+juncture. (On the other hand, this article argues that we shouldn't
+oversensationalize Petrov's actions and there were many other
+safeguards between him and nuclear war, and we really shouldn't design
+extinction-level systems in a way that they are so fragile to depend on an individual decision: [_Did Stanislav Petrov save the world in 1983? It's complicated_](https://russianforces.org/blog/2022/10/did_stanislav_petrov_save_the_.shtml), from a Russian perspective.)

diff --git a/week12/index.html b/week12/index.html
index 26812e1..8578b34 100644
--- a/week12/index.html
+++ b/week12/index.html
@@ -105,11 +105,15 @@
 Week 12: Regulating Dangerous Technologies
 Washington Post, 10 Feb 1999. There is still a lot of uncertainty and
 skepticism if we should be fearing any kind of out-of-control AI
 risk, but it is not so hard to imagine scenarios where our fate will
-similarly come down to an individual’s decision at a critical juncture.
+similarly come down to an individual’s decision at a critical
+juncture. (On the other hand, this article argues that we shouldn’t
+oversensationalize Petrov’s actions and there were many other
+safeguards between him and nuclear war, and we really shouldn’t design
+extinction-level systems in a way that they are so fragile to depend on an individual decision: Did Stanislav Petrov save the world in 1983? It’s complicated, from a Russian perspective.)