Commit 45a1b8a (0 parents)
Showing 11 changed files with 357 additions and 0 deletions.
@@ -0,0 +1,29 @@
BSD 3-Clause License

Copyright (c) 2021, BioNLP Lab
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
   contributors may be used to endorse or promote products derived from
   this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@@ -0,0 +1,3 @@
## AMIA 2023 Annual Symposium Panel on Large Language Models in Healthcare: Opportunities and Challenges

https://bionlplab.github.io/2023_AMIA_LLM/
@@ -0,0 +1,195 @@
/* CSS Document */

body {
    /*background: #f7f7f7;*/
    background: #e3e5e8;
    color: #f7f7f7;
    font-family: 'Lato', Verdana, Helvetica, sans-serif;
    font-weight: 300;
    font-size: 16px;
}

/* Headings */

h1 {
    font-size: 30pt;
}

h2 {
    font-size: 22pt;
}

h3 {
    font-size: 14pt;
}

/* Hyperlinks */

a:link {
    color: #1772d0;
    text-decoration: none;
}

a:visited {
    color: #1772d0;
    text-decoration: none;
}

a:active {
    color: red;
    text-decoration: none;
}

a:hover {
    color: #f09228;
    text-decoration: none;
}

/* Main page container */

.container {
    width: 1024px;
    min-height: 200px;
    margin: 0 auto; /* top and bottom, right and left */
    border: 1px hidden #000;
    /* border: none; */
    text-align: center;
    padding: 1em 1em 1em 1em; /* top, right, bottom, left */
    color: #4d4b59;
    background: #f7f7f7;
}

.overview {
    text-align: left;
}

.containersmall {
    width: 1024px;
    min-height: 10px;
    margin: 0 auto; /* top and bottom, right and left */
    border: 1px hidden #000;
    /* border: none; */
    text-align: left;
    padding: 1em 1em 1em 1em; /* top, right, bottom, left */
    color: #4d4b59;
    background: #f7f7f7;
}

.schedule {
    width: 900px;
    min-height: 200px;
    margin: 0 auto; /* top and bottom, right and left */
    /*border: 1px solid #000;*/
    border: none;
    text-align: left;
    padding: 1em 1em 1em 1em; /* top, right, bottom, left */
    color: #4d4b59;
    background: #f7f7f7;
}

/* Title and menu */

.title {
    font-size: 22pt;
    margin: 1px;
}

.menubar {
    white-space: nowrap;
    margin-bottom: 0em;
    text-align: center;
    font-size: 16px;
}

/* Announcements */

.announce_date {
    font-size: .875em;
    font-style: italic;
}

.announce {
    font-size: inherit;
}

.schedule_week {
    font-size: small;
    background-color: #CCF;
}

/* Schedule */

table.schedule {
    border-width: 1px;
    border-spacing: 2px;
    border-style: none;
    border-color: #000;
    border-collapse: collapse;
    background-color: white;
}

p.subtitle {
    text-indent: -5em;
    margin-left: 5em;
}

/* Notes */

table.notes {
    border: none;
    border-collapse: collapse;
}

.notes td {
    border-bottom: 1px solid;
    padding-bottom: 5px;
    padding-top: 5px;
}

/* Problem sets */

table.psets {
    /* border: none; */
    border-collapse: collapse;
}

.psets td {
    border-bottom: 1px solid;
    padding-bottom: 5px;
    padding-top: 5px;
}

.acknowledgement {
    font-size: .875em;
}

.code {
    font-family: "Courier New", Courier, monospace;
}

.instructorphoto img {
    width: 120px;
    border-radius: 120px;
    margin-bottom: 10px;
}

.instructorphotosmall img {
    width: 60px;
    border-radius: 60px;
    margin-bottom: 10px;
}

.instructor {
    display: inline-block;
    width: 200px;
    text-align: center;
    margin-right: 20px;
}
Binary file not shown.
@@ -0,0 +1,130 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>AMIA 2024 Annual Symposium Panel on Multimodal Data Analysis in Healthcare: Opportunities and Challenges</title>

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css" />
<link href="http://fonts.googleapis.com/css?family=Lato:400,700" rel="stylesheet" type="text/css" />
<link href="css/style.css" rel="stylesheet" type="text/css" />
</head>

<body>

<div class="container">
  <table border="0" align="center">
    <tr>
      <td width="700" align="center" valign="middle"><h3>AMIA 2024 Annual Symposium Panel on</h3>
        <span class="title">Multimodal Data Analysis in Healthcare: Opportunities and Challenges</span></td>
    </tr>
    <tr>
      <td colspan="3" align="center"><h2><br />
        Location: San Francisco, CA, USA<br />
        Time: <b>November 9 - 13, 2024</b>
      </h2></td>
    </tr>
  </table>
  <!-- <p><img src="figures/teaser.jpg" width="1000" align="middle" /></p> -->
</div>

<br />

<div class="container">
  <h2>Panelists</h2>
  <div class="row">
    <div class="instructor">
      <a href="https://www.ncbi.nlm.nih.gov/research/bionlp/">
        <div class="instructorphoto"><img src="figures/luzh.png" alt="Zhiyong Lu" /></div>
        <div>Zhiyong Lu<br />National Library of Medicine</div>
      </a>
    </div>

    <div class="instructor">
      <a href="https://ldi.upenn.edu/fellows/fellows-directory/kevin-b-johnson-md-ms/">
        <div class="instructorphoto"><img src="figures/kevin.jpg" alt="Kevin B. Johnson" /></div>
        <div>Kevin B. Johnson<br />University of Pennsylvania</div>
      </a>
    </div>

    <div class="instructor">
      <a href="https://www.microsoft.com/en-us/research/people/hoifung/">
        <div class="instructorphoto"><img src="figures/hoifung.jpg" alt="Hoifung Poon" /></div>
        <div>Hoifung Poon<br />Microsoft Inc</div>
      </a>
    </div>
  </div>

  <div class="row">
    <div class="instructor">
      <a href="https://www.mayo.edu/research/faculty/banerjee-imon-ph-d/bio-20555791">
        <div class="instructorphoto"><img src="figures/imon.jpg" alt="Imon Banerjee" /></div>
        <div>Imon Banerjee<br />Mayo Clinic</div>
      </a>
    </div>

    <div class="instructor">
      <a href="https://www.ncbi.nlm.nih.gov/research/bionlp/">
        <div class="instructorphoto"><img src="figures/yifan_peng.jpg" alt="Yifan Peng" /></div>
        <div>Yifan Peng<br />Weill Cornell Medicine</div>
      </a>
    </div>
  </div>
</div>

<br />

<div class="container">
  <h2>Overview</h2>
  <div class="overview">
    <p>Recent advances in multimodal foundation models have brought about a significant shift in research and clinical practice. However, to fully realize the potential of multimodal data analysis, various scientific and social challenges must be addressed, such as how to ensure models' trustworthiness and scalability, and how to maintain data quality and integration. The objective of this panel is to introduce the audience to these opportunities and challenges, as well as the development and responsible deployment of such technology in research and healthcare. It will focus specifically on the development of multimodal foundation models in healthcare; issues of model transparency, accountability, and fairness; and multimodal data de-identification and sharing. After participating in this session, attendees should be able to understand the most important challenges facing multimodal data analysis and some of the possible solutions.</p>
  </div>
</div>

<br />

<div class="container">
  <h2>Tentative Schedule</h2>
  <div class="schedule">
    <p><span class="announce_date">15 min</span>. Panel 1: Hidden flaws behind expert-level accuracy of GPT-4 vision in medicine (Zhiyong Lu)</p>
    <p><span class="announce_date">15 min</span>. Panel 2: Challenges of de-identifying and sharing multimodal data (Kevin B. Johnson)</p>
    <p><span class="announce_date">15 min</span>. Panel 3: Precision health in the age of multimodal generative AI (Hoifung Poon)</p>
    <p><span class="announce_date">15 min</span>. Panel 4: Health disparities in large visual-language models (Imon Banerjee)</p>
    <p><span class="announce_date">30 min</span>. Q&amp;A</p>
  </div>
</div>

<br />

<div class="container">
  <h2>About the speakers</h2>
  <div class="schedule">
    <p><b>Zhiyong Lu</b>, Ph.D., is a tenured Senior Investigator in the NIH Intramural Research Program, leading research in biomedical text and image processing, information retrieval, and AI/machine learning. In his role as Deputy Director for Literature Search at the National Center for Biotechnology Information (NCBI), Dr. Lu oversees the overall R&amp;D efforts to improve literature search and information access in resources such as PubMed and LitCovid, which are used by millions worldwide on a daily basis. Additionally, Dr. Lu holds an Adjunct Professor position in the Department of Computer Science at the University of Illinois Urbana-Champaign (UIUC). In this panel, Dr. Lu will discuss his latest research on multimodal foundation models such as GPT-4 Vision and their applications in medicine, such as automated disease diagnosis and medical report generation.
    </p>

    <p><b>Kevin B. Johnson</b>, MD, MS, is the David L. Cohen University Professor of Pediatrics, Biomedical Informatics, and Science Communication at the University of Pennsylvania. He is an internationally known developer and evaluator of clinical information technology. His main research focuses on the use of multimodal data and machine learning to summarize and quantify patient signs and symptoms in the EHR, assist with generating medical communications, and create decision-support tools using real-time streaming data. In this panel, Dr. Johnson will discuss the challenges of de-identifying multimodal data and developing robust pipelines that promote the sharing of these data.
    </p>

    <p><b>Hoifung Poon</b>, Ph.D., is General Manager at Health Futures in Microsoft Research and an affiliated faculty member at the University of Washington Medical School. He leads biomedical AI research and incubation, with the overarching goal of structuring medical data to optimize delivery and accelerate discovery for precision health. His team and collaborators are among the first to explore large language models (LLMs) in health applications, producing popular open-source foundation models such as PubMedBERT, BioGPT, BiomedCLIP, and LLaVA-Med. He has led successful research partnerships with large health providers and life science companies, creating AI systems in daily use for applications such as molecular tumor boards and clinical trial matching. In this panel, Dr. Poon will discuss the exciting frontier of multimodal generative AI in precision health, where multimodal, longitudinal real-world patient data can be used to pretrain powerful multimodal patient embeddings, enable patient-like-me reasoning at scale, and unlock population-level real-world evidence for advancing precision medicine.
    </p>

    <p><b>Imon Banerjee</b>, Ph.D., conducts research in computer science, particularly artificial intelligence (AI) and data mining. Her studies have shown implicit bias toward race in AI models across multiple imaging modalities. Dr. Banerjee's goal is to decrease AI-driven healthcare disparities. She reduces this tendency by using model unlearning and adversarial debiasing, techniques that decrease the inaccurate, harmful, and outdated information learned by AI models. She collaborates with institutions and centers that serve minority groups, such as Emory University in Atlanta and the Mountain Park Health Center in Arizona, to train and evaluate AI models with diverse datasets. In this panel, Dr. Banerjee will discuss the challenges in reducing bias and vulnerability in the pre-training of large visual-language models. She will also highlight the effect of pre-training bias on downstream targeted tasks.
    </p>

    <p><b>Yifan Peng</b> (Moderator), Ph.D., is an Assistant Professor in the Division of Health Sciences, Department of Population Health Sciences, at Weill Cornell Medicine.
    Dr. Peng's main research interests include BioNLP and medical image analysis. To facilitate research on language representations in the biomedical domain, one of his studies presents the Biomedical Language Understanding Evaluation (BLUE) benchmark, a collection of resources for evaluating and analyzing biomedical natural language representation models. Detailed analysis shows that BLUE can be used to evaluate the capacity of models to understand biomedical text and, moreover, to shed light on future directions for developing biomedical language representations. As the panel moderator, Dr. Peng will describe the current state of LLMs and list their unique opportunities and challenges compared to other language models.
    </p>
  </div>
</div>

<br />

<div class="containersmall">
  <p>Please contact <a href="[email protected]">Yifan Peng</a> if you have questions. The webpage template is courtesy of the awesome <a href="https://gkioxari.github.io/">Georgia</a>.</p>
</div>

<!--<p align="center" class="acknowledgement">Last updated: Jan. 6, 2017</p>-->
</body>
</html>