Commit

updated people site and publications
tobiasgerstenberg committed Nov 24, 2024
1 parent 1b395cf commit e3a9d7f
Showing 36 changed files with 831 additions and 175 deletions.
51 changes: 26 additions & 25 deletions content/home/people.md
@@ -19,24 +19,14 @@ weight = 3
id = "Tobias Gerstenberg"
position = "Principal Investigator"
email = "[email protected]"
twitter = "tobigerstenberg"
# bluesky = "tobigerstenberg.bsky.social"
github = "tobiasgerstenberg"
scholar = "citations?user=d0TfP8EAAAAJ&hl=en&oi=ao"
cv = "tobias_gerstenberg.pdf"
website = "tobias_gerstenberg"
description = "I am interested in how people hold others responsible, how these judgments are grounded in causal representations of the world, and supported by counterfactual simulations. I also like to drink tea."

[[member]]
id = "Jan-Philipp Fränken"
position = "Postdoctoral Researcher"
image = "jan-philipp_franken.jpg"
# because special characters have trouble rendering on OS X
email = "[email protected]"
github = "janphilippfranken"
scholar = "citations?user=s2omqQcAAAAJ&hl=en"
website = "https://janphilippfranken.github.io/"
description = "I'm interested in representation learning, causal inference, and theory of mind reasoning. I currently work on [MARPLE](https://hai.stanford.edu/2022-hoffman-yee-grant-recipients): Explaining what happened through multimodal simulation. I like to drink hot chocolate."

[[member]]
id = "Erik Brockbank"
position = "Postdoctoral Researcher"
@@ -96,12 +86,6 @@ weight = 3
twitter = "verona_teo"
description = "I'm interested in computational and behavioral models of cognition, including social, moral, and causal reasoning. I did my undergraduate studies in data science and philosophy. I like coffee."

[[member]]
id = "Addison Jadwin"
position = "Research Assistant"
email = "[email protected]"
description = "I'm a junior at Stanford majoring in symbolic systems. I'm interested in understanding cognition through computational models. Outside of this I enjoy playing viola and taking care of my fish and corals!"

[[member]]
id = "Ricky Ma"
position = "Research Assistant"
@@ -118,12 +102,6 @@ weight = 3
github = "shruti-sridhar"
description = "I am a sophomore at Stanford looking to major in Computer Science on the AI track. I am interested in using computational models to explore causality in social settings. Outside of that, I enjoy dancing and amateur vegan baking."

[[member]]
id = "Siying Zhang"
position = "Research Assistant"
email = "[email protected]"
description = "I'm a dutiful questioner and an adroit researcher. I have a background in education and second language acquisition. I'm interested in how language affects social category development as well as perceived characteristics of individual social group members. I am also interested in the psychological and sociological disciplines that interact with each other and how the information I've learned from both perspectives are related together. So far at Stanford, I'm working on a couple of projects on causal judgements and shape bias. Ultimately, I'm planning to become a human factors researcher or UX research scientist. I love to do high intensity workouts followed by vanilla sweet cream cold brew coffee, or maybe coffee first!"

[[member]]
id = "Sunny Yu"
position = "Research Assistant"
@@ -524,8 +502,31 @@ weight = 3
# website = "https://haoranzhao419.github.io/"
# description = "I am interested in understanding how effectively language models can perform reasoning and comprehend commonsense and factual knowledge. Subsequently, with a better understanding of LLM's cognitive abilities, I hope to build more cognitive-feasible and efficient language models at small scales. In my free time, I like running and sailing. I like lemonade."

# [[member]]
# id = "Jan-Philipp Fränken"
# position = "Postdoctoral Researcher"
# image = "jan-philipp_franken.jpg"
# # because special characters have trouble rendering on OS X
# email = "[email protected]"
# github = "janphilippfranken"
# scholar = "citations?user=s2omqQcAAAAJ&hl=en"
# website = "https://janphilippfranken.github.io/"
# description = "I'm interested in representation learning, causal inference, and theory of mind reasoning. I currently work on [MARPLE](https://hai.stanford.edu/2022-hoffman-yee-grant-recipients): Explaining what happened through multimodal simulation. I like to drink hot chocolate."

# [[member]]
# id = "Siying Zhang"
# position = "Research Assistant"
# email = "[email protected]"
# description = "I'm a dutiful questioner and an adroit researcher. I have a background in education and second language acquisition. I'm interested in how language affects social category development as well as perceived characteristics of individual social group members. I am also interested in the psychological and sociological disciplines that interact with each other and how the information I've learned from both perspectives are related together. So far at Stanford, I'm working on a couple of projects on causal judgements and shape bias. Ultimately, I'm planning to become a human factors researcher or UX research scientist. I love to do high intensity workouts followed by vanilla sweet cream cold brew coffee, or maybe coffee first!"

# [[member]]
# id = "Addison Jadwin"
# position = "Research Assistant"
# email = "[email protected]"
# description = "I'm a junior at Stanford majoring in symbolic systems. I'm interested in understanding cognition through computational models. Outside of this I enjoy playing viola and taking care of my fish and corals!"

[[member]]
id = "Alumni"
description = "<ul><li> <a href='https://www.mpib-berlin.mpg.de/person/lara-kirfel/367762'>Lara Kirfel</a> (postdoc): Now Postdoctoral Fellow at the Center for Humans and Machines, MPI Berlin.<li> <a href='https://www.cmu.edu/dietrich/philosophy/people/masters/damini-kusum.html'>Damini Kusum</a> (research assistant): Now MSc student at Carnegie Mellon University. <li> <a href='https://josephouta.com/'>Joseph Outa</a> (research assistant): Now PhD student at Johns Hopkins University. </li><li> <a href='https://zach-davis.github.io/'>Zach Davis</a> (postdoc): Now research scientist at Facebook Reality Labs. </li> <li><a href='https://www.linkedin.com/in/erin-bennett-a1a9623a'>Erin Bennett</a> (lab affiliate)</li> <li>Bryce Linford (research assistant): Now PhD student at UCLA.</li> <li><a href='https://scholar.google.com/citations?user=R2Ji5Z8AAAAJ&hl=en'>Antonia Langenhoff</a> (research assistant): Now PhD student at UC Berkeley.</li> </ul>"
description = "<ul><li> <a href='https://janphilippfranken.github.io/'>Jan-Philipp Fränken</a> (postdoc): Next step 👣 Research Scientist at Google DeepMind, London.<li> <a href='https://www.siyingzhg.com/'>Siying Zhang</a> (research assistant): Next step 👣 PhD student at University of Washington.<li> <a href='https://www.mpib-berlin.mpg.de/person/lara-kirfel/367762'>Lara Kirfel</a> (postdoc): Next step 👣 Postdoctoral Fellow at the Center for Humans and Machines, MPI Berlin.<li> <a href='https://www.cmu.edu/dietrich/philosophy/people/masters/damini-kusum.html'>Damini Kusum</a> (research assistant): Next step 👣 MSc student at Carnegie Mellon University. <li> <a href='https://josephouta.com/'>Joseph Outa</a> (research assistant): Next step 👣 PhD student at Johns Hopkins University. </li><li> <a href='https://zach-davis.github.io/'>Zach Davis</a> (postdoc): Next step 👣 Research scientist at Facebook Reality Labs. </li> <li><a href='https://www.linkedin.com/in/erin-bennett-a1a9623a'>Erin Bennett</a> (lab affiliate)</li> <li>Bryce Linford (research assistant): Next step 👣 PhD student at UCLA.</li> <li><a href='https://scholar.google.com/citations?user=R2Ji5Z8AAAAJ&hl=en'>Antonia Langenhoff</a> (research assistant): Next step 👣 PhD student at UC Berkeley.</li> </ul>"

+++
2 changes: 1 addition & 1 deletion content/publication/beller2024causation.md
@@ -16,7 +16,7 @@ abstract = "The words we use to describe what happened shape what comes to a lis
image_preview = ""
selected = false
projects = []
#url_pdf = "papers/beller2023language.pdf"
url_pdf = "papers/beller2024causation.pdf"
url_preprint = "https://psyarxiv.com/xv8hf"
url_code = ""
url_dataset = ""
2 changes: 1 addition & 1 deletion content/publication/du2024robotic.md
@@ -16,7 +16,7 @@ abstract = "When faced with a novel scenario, it can be hard to succeed on the f
image_preview = ""
selected = false
projects = []
#url_pdf = "papers/du2024robotic.pdf"
url_pdf = "papers/du2024robotic.pdf"
url_preprint = "https://arxiv.org/abs/2406.15917"
url_code = ""
url_dataset = ""
2 changes: 1 addition & 1 deletion content/publication/johnson2024wise.md
@@ -16,7 +16,7 @@ abstract = "Recent advances in artificial intelligence (AI) have produced system
image_preview = ""
selected = false
projects = []
#url_pdf = "papers/johnson2024wise.pdf"
url_pdf = "papers/johnson2024wise.pdf"
url_preprint = "https://arxiv.org/abs/2411.02478"
url_code = ""
url_dataset = ""
4 changes: 2 additions & 2 deletions content/publication/prinzing2024purpose.md
@@ -16,10 +16,10 @@ abstract = "People attribute purposes in both mundane and profound ways—such a
image_preview = ""
selected = false
projects = []
#url_pdf = "papers/prinzing2024purpose.pdf"
url_pdf = "papers/prinzing2024purpose.pdf"
url_preprint = "https://osf.io/7enkr"
url_code = ""
url_dataset = ""
url_dataset = "https://osf.io/uj7vf/"
url_slides = ""
url_video = ""
url_poster = ""
33 changes: 33 additions & 0 deletions content/publication/xiang2024handicapping.md
@@ -0,0 +1,33 @@
+++
# 0 -> 'Forthcoming',
# 1 -> 'Preprint',
# 2 -> 'Journal',
# 3 -> 'Conference Proceedings',
# 4 -> 'Book chapter',
# 5 -> 'Thesis'

title = "A signaling theory of self-handicapping"
date = "2024-11-24"
authors = ["Y. Xiang","S. J. Gershman","T. Gerstenberg"]
publication_types = ["1"]
publication_short = "_PsyArXiv_"
publication = "Xiang, Y., Gershman*, S. J., Gerstenberg*, T. (2024). A signaling theory of self-handicapping. _PsyArXiv_."
abstract = "People use various strategies to bolster the perception of their competence. One strategy is self-handicapping, by which people deliberately impede their performance in order to protect or enhance perceived competence. Despite much prior research, it is unclear why, when, and how self-handicapping occurs. We develop a formal theory that chooses the optimal degree of self-handicapping based on its anticipated performance and signaling effects. We test the theory's predictions in two experiments (𝑁 = 400), showing that self-handicapping occurs more often when it is unlikely to affect the outcome and when it increases the perceived competence in the eyes of a naive observer. With sophisticated observers (who consider whether a person chooses to self-handicap), self-handicapping is less effective when followed by failure. We show that the theory also explains the findings of several past studies. By offering a systematic explanation of self-handicapping, the theory lays the groundwork for developing effective interventions."
image_preview = ""
selected = false
projects = []
url_pdf = "papers/xiang2024handicapping.pdf"
url_preprint = "https://osf.io/preprints/psyarxiv/84tvm"
url_code = ""
url_dataset = ""
url_slides = ""
url_video = ""
url_poster = ""
url_source = ""
url_custom = [{name = "Github", url = "https://github.com/yyyxiang/self-handicapping"}]
math = true
highlight = true
[header]
# image = "publications/xiang2024handicapping.png"
caption = ""
+++
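The numeric codes in `publication_types` follow the mapping listed in the comment block at the top of each publication file (0 → 'Forthcoming' through 5 → 'Thesis'). A minimal sketch of that lookup — the helper name and fallback are illustrative, not part of the site's build:

```python
# Mapping from publication_types codes to labels, as listed in the
# comment block of the publication front matter above.
PUBLICATION_TYPES = {
    "0": "Forthcoming",
    "1": "Preprint",
    "2": "Journal",
    "3": "Conference Proceedings",
    "4": "Book chapter",
    "5": "Thesis",
}

def label_for(code: str) -> str:
    """Return the human-readable label for a publication_types code."""
    return PUBLICATION_TYPES.get(code, "Unknown")

# xiang2024handicapping.md sets publication_types = ["1"]
print(label_for("1"))
```

For example, the new `xiang2024handicapping.md` entry uses `["1"]`, which renders as a preprint.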
8 changes: 4 additions & 4 deletions docs/404.html
@@ -237,6 +237,10 @@ <h1>Page not found</h1>

<h2>Publications</h2>

<ul>
<li><a href="https://cicl.stanford.edu/publication/xiang2024handicapping/">A signaling theory of self-handicapping</a></li>
</ul>

<ul>
<li><a href="https://cicl.stanford.edu/publication/johnson2024wise/">Imagining and building wise machines: The centrality of AI metacognition</a></li>
</ul>
@@ -253,10 +257,6 @@ <h2>Publications</h2>
<li><a href="https://cicl.stanford.edu/publication/jin2024marple/">MARPLE: A Benchmark for Long-Horizon Inference</a></li>
</ul>

<ul>
<li><a href="https://cicl.stanford.edu/publication/beller2024causation/">Causation, Meaning, and Communication</a></li>
</ul>




12 changes: 11 additions & 1 deletion docs/bibtex/cic_papers.bib
@@ -1,13 +1,23 @@
%% This BibTeX bibliography file was created using BibDesk.
%% https://bibdesk.sourceforge.io/
%% Created for Tobias Gerstenberg at 2024-11-06 11:16:38 -0600
%% Created for Tobias Gerstenberg at 2024-11-24 10:45:20 -0800
%% Saved with string encoding Unicode (UTF-8)
@article{xiang2024handicapping,
abstract = {People use various strategies to bolster the perception of their competence. One strategy is self-handicapping, by which people deliberately impede their performance in order to protect or enhance perceived competence. Despite much prior research, it is unclear why, when, and how self-handicapping occurs. We develop a formal theory that chooses the optimal degree of self-handicapping based on its anticipated performance and signaling effects. We test the theory's predictions in two experiments (𝑁 = 400), showing that self-handicapping occurs more often when it is unlikely to affect the outcome and when it increases the perceived competence in the eyes of a naive observer. With sophisticated observers (who consider whether a person chooses to self-handicap), self-handicapping is less effective when followed by failure. We show that the theory also explains the findings of several past studies. By offering a systematic explanation of self-handicapping, the theory lays the groundwork for developing effective interventions.},
author = {Xiang, Yang and Gershman, Samuel J and Gerstenberg, Tobias},
date-added = {2024-11-24 10:45:12 -0800},
date-modified = {2024-11-24 10:45:12 -0800},
journal = {PsyArXiv},
note = {https://osf.io/preprints/psyarxiv/84tvm},
title = {A signaling theory of self-handicapping},
year = {2024}}

@article{johnson2024wise,
abstract = {Recent advances in artificial intelligence (AI) have produced systems capable of increasingly sophisticated performance on cognitive tasks. However, AI systems still struggle in critical ways: unpredictable and novel environments (robustness), lack transparency in their reasoning (explainability), face challenges in communication and commitment (cooperation), and pose risks due to potential harmful actions (safety). We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom. Drawing from cognitive and social sciences, we define wisdom as the ability to navigate intractable problems---those that are ambiguous, radically uncertain, novel, chaotic, or computationally explosive---through effective task-level and metacognitive strategies. While AI research has focused on task-level strategies, metacognition---the ability to reflect on and regulate one's thought processes---is underdeveloped in AI systems. In humans, metacognitive strategies such as recognizing the limits of one's knowledge, considering diverse perspectives, and adapting to context are essential for wise decision-making. We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety. By focusing on developing wise AI, we suggest an alternative to aligning AI with specific human values---a task fraught with conceptual and practical difficulties. Instead, wise AI systems can thoughtfully navigate complex situations, account for diverse human values, and avoid harmful actions. We discuss potential approaches to building wise AI, including benchmarking metacognitive abilities and training AI systems to employ wise reasoning. Prioritizing metacognition in AI research will lead to systems that act not only intelligently but also wisely in complex, real-world situations.},
author = {Johnson, Samuel G B and Karimi, Amir-Hossein and Bengio, Yoshua and Chater, Nick and Gerstenberg, Tobias and Larson, Kate and Levine, Sydney and Mitchell, Melanie and Sch{\"o}lkopf, Bernhard and Grossmann, Igor},
