Commit e15e397 (1 parent: b0e09c2)
Showing 6 changed files with 110 additions and 3 deletions.
@@ -1,4 +1,6 @@
---
Ali:
  firstname: ["Mohammad", "M.", "M. A.", "Mohammad Ali"]
Alibeigi:
  firstname: ["Mina", "M.", "M. A.", "Mina Alibeigi"]
  github: alibeigi
@@ -13,14 +15,26 @@ Astrom:
  org: Lund University
  scholar: YIzs6eoAAAAJ
  title: Professor
Basu:
  firstname: ["Debabrota", "D.", "D. B.", "Debabrota Basu"]
  scholar: e26Maa4AAAAJ
Batkovic:
  firstname: ["Ivo", "I.", "I. B.", "Ivo Batkovic"]
  scholar: 7X_8eVEAAAAJ
Bodin:
  firstname: ["Alexander", "A.", "A. B.", "Alexander Bodin"]
  org: Zenseact
  title: Intern
Dimitrakakis:
  firstname: ["Christos", "C.", "C. D.", "Christos Dimitrakakis"]
  scholar: 9Kw4t_kAAAAJ
EliasSvensson:
  firstname: ["Elias", "E.", "E. S.", "Elias Svensson"]
  org: Zenseact
  title: Intern
Eriksson:
  firstname: ["Hannes", "H.", "H. E.", "Hannes Eriksson"]
  scholar: KyX9dfEAAAAJ
Fatemi:
  firstname: ["Maryam", "M.", "M. F.", "Maryam Fatemi"]
  mail: [email protected]
@@ -42,10 +56,12 @@ Fu:
  scholar: z3lud1UAAAAJ
  title: Researcher
  url: https://junshengfu.github.io
Gronberg:
  firstname: ["Robin", "R.", "R. G.", "Robin Grönberg"]
Hagerman:
  firstname: ["David", "D.", "D. H.", "David Hagerman"]
  org: Zenseact
  title: Intern
  org: Chalmers University of Technology
  scholar: VRDfJPAAAAAJ
Hammarstrand:
  firstname: ["Lars", "L.", "L. H.", "Lars Hammarstrand"]
  org: Chalmers University of Technology
@@ -61,6 +77,11 @@ Hess:
  scholar: ZvUoV2EAAAAJ
  title: PhD Student
  url: https://georghess.github.io/
Hoel:
  firstname: ["Carl-Johan", "C.", "C. H.", "Carl-Johan Hoel"]
  scholar: f7ewwIsAAAAJ
Jansson:
  firstname: ["Anton", "A.", "A. J.", "Anton Jansson"]
Jaxing:
  firstname: ["Johan", "J.", "J. X.", "Johan Jaxing"]
  org: Zenseact
@@ -120,6 +141,9 @@ Rafidashti:
  mail: [email protected]
  org: [Zenseact, Chalmers University of Technology]
  title: PhD Student
Sjoberg:
  firstname: ["Jonas", "J.", "J. S.", "Jonas Sjöberg"]
  scholar: s0Qakg77XTYC
Stenborg:
  firstname: ["Erik", "E.", "E. S.", "Erik Stenborg"]
  mail: [email protected]
@@ -141,6 +165,11 @@ Tonderski:
  scholar: 2R5ZLp0AAAAJ
  title: PhD Student
  url: https://atonderski.github.io/
Tram:
  firstname: ["Tommy", "T.", "T. T.", "Tommy Tram"]
  org: Zenseact
  scholar: m_O_xjIAAAA
  title: null
Widahl:
  firstname: ["Jenny", "J.", "J.W.", "Jenny Widahl"]
  org: Zenseact
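For context, a minimal sketch of how these YAML entries might be consumed: the publication front matter added later in this commit lists authors by the short keys defined above (Tram, Batkovic, Sjoberg, and so on), so a site build step could resolve each key to a full display name, which appears to be the last element of the `firstname` list. The file path `_data/authors.yml`, the helper names, and the use of PyYAML are assumptions for illustration, not part of this commit.

```python
# Hypothetical helper, not part of this repository: resolve the short author
# keys used in publication front matter against the author entries above.
# The path _data/authors.yml and the "last firstname variant is the full
# display name" convention are assumptions made for illustration.
import yaml

def load_authors(path="_data/authors.yml"):
    """Load the author database: key -> {firstname: [...], org, scholar, ...}."""
    with open(path, encoding="utf-8") as f:
        return yaml.safe_load(f)

def display_name(key, authors):
    """Return the full display name for an author key, or the key itself."""
    entry = authors.get(key)
    if not entry or "firstname" not in entry:
        return key  # unknown key (e.g. a typo) falls back to the raw key
    return entry["firstname"][-1]  # e.g. ["Tommy", "T.", "T. T.", "Tommy Tram"]

if __name__ == "__main__":
    authors = load_authors()
    print(", ".join(display_name(k, authors) for k in ["Tram", "Batkovic", "Ali", "Sjoberg"]))
```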
_publications/learning-when-to-drive/learning-when-to-drive.md (17 additions, 0 deletions)
@@ -0,0 +1,17 @@
---
layout: publication
permalink: /publications/learning-when-to-drive/
title: "Learning When to Drive in Intersections by Combining Reinforcement Learning and Model Predictive Control"
venue: ITSC19
authors:
- Tram
- Batkovic
- Ali
- Sjoberg
date: 2019-10-17 00:00:00 +00:00
arxiv: https://ieeexplore.ieee.org/abstract/document/8916922
n_equal_contrib: 1
---

# Abstract
In this paper, we propose a decision-making algorithm intended for automated vehicles that negotiate with other, possibly non-automated, vehicles in intersections. The decision algorithm is separated into two parts: a high-level decision module based on reinforcement learning and a low-level planning module based on model predictive control. Traffic is simulated with numerous predefined driver behaviors and intentions, and the performance of the proposed decision algorithm is evaluated against a baseline controller. The results show that the proposed algorithm yields shorter training episodes and a higher success rate than the baseline controller.
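As a rough illustration of the two-part structure described above, the sketch below separates a high-level decision step from a low-level tracking step. The interfaces, the random placeholder policy, and the simple proportional controller standing in for the MPC module are assumptions for illustration only, not the authors' implementation.

```python
# Illustrative sketch of the two-level structure from the abstract: an RL
# policy picks a high-level decision (here, a target speed), and a low-level
# planner turns it into control inputs. Both components are placeholders.
import random

def high_level_policy(observation):
    """Stand-in for the learned RL policy: choose a target speed in m/s."""
    # A trained agent would map the observation to a decision such as
    # go / yield / stop; here the choice is random for illustration.
    return random.choice([0.0, 5.0, 10.0])

def low_level_planner(current_speed, target_speed, dt=0.1, gain=0.5):
    """Stand-in for the MPC module: track the target with bounded acceleration."""
    accel = max(-4.0, min(2.0, gain * (target_speed - current_speed)))
    return current_speed + accel * dt

speed = 8.0
for _ in range(100):
    target = high_level_policy({"ego_speed": speed})
    speed = low_level_planner(speed, target)
```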
@@ -0,0 +1,18 @@
---
layout: publication
permalink: /publications/mtrl/
title: "Reinforcement Learning in the Wild with Maximum Likelihood-based Model Transfer"
venue: ARXIV
authors:
- Eriksson
- Basu
- Tram
- Alibeigi
- Dimitrakakis
date: 2023-02-18 00:00:00 +00:00
arxiv: https://arxiv.org/pdf/2302.09273.pdf
n_equal_contrib: 1
---

# Abstract
In this paper, we study the problem of transferring available Markov Decision Process (MDP) models to learn and plan efficiently in an unknown but similar MDP. We refer to this as the *Model Transfer Reinforcement Learning (MTRL)* problem. First, we formulate MTRL for discrete MDPs and Linear Quadratic Regulators (LQRs) with continuous states and actions. Then, we propose a generic two-stage algorithm, MLEMTRL, to address the MTRL problem in discrete and continuous settings. In the first stage, MLEMTRL uses a *constrained Maximum Likelihood Estimation (MLE)*-based approach to estimate the target MDP model using a set of known MDP models. In the second stage, using the estimated target MDP model, MLEMTRL deploys a model-based planning algorithm appropriate for the MDP class. Theoretically, we prove worst-case regret bounds for MLEMTRL in both realisable and non-realisable settings. We empirically demonstrate that MLEMTRL allows faster learning in new MDPs than learning from scratch and achieves near-optimal performance depending on the similarity of the available MDPs and the target MDP.
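A minimal sketch of the two-stage idea for the discrete case is given below. It assumes the target transition model is estimated as a convex combination of the known source models (one simple reading of the constrained-MLE stage) and uses plain value iteration for the planning stage; the update rule, array shapes, and hyperparameters are illustrative choices, not the paper's algorithm as published.

```python
# Illustrative two-stage sketch for discrete MDPs: (1) fit convex mixture
# weights over known source transition models by maximizing the likelihood of
# observed transitions, (2) plan in the estimated model with value iteration.
# This is a simplified stand-in for MLEMTRL, not the authors' code.
import numpy as np

def estimate_weights(counts, source_models, iters=200, eta=0.5):
    """Stage 1: exponentiated-gradient ascent on the transition log-likelihood.

    counts: observed transition counts, shape (S, A, S).
    source_models: list of K transition tensors, each of shape (S, A, S).
    """
    P = np.stack(source_models)                  # (K, S, A, S)
    w = np.full(len(source_models), 1.0 / len(source_models))
    for _ in range(iters):
        mix = np.einsum("k,ksat->sat", w, P) + 1e-12
        grad = np.einsum("sat,ksat->k", counts / mix, P)
        w = w * np.exp(eta * grad / (counts.sum() + 1e-12))
        w /= w.sum()                             # stay on the probability simplex
    return w

def value_iteration(P, R, gamma=0.95, iters=500):
    """Stage 2: model-based planning in the estimated MDP (R has shape (S, A))."""
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        Q = R + gamma * np.einsum("sat,t->sa", P, V)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# Tiny example with two random source models and synthetic transition counts.
rng = np.random.default_rng(0)
S, A, K = 4, 2, 2
sources = [rng.dirichlet(np.ones(S), size=(S, A)) for _ in range(K)]
counts = rng.integers(0, 20, size=(S, A, S)).astype(float)
w = estimate_weights(counts, sources)
policy = value_iteration(np.einsum("k,ksat->sat", w, np.stack(sources)),
                         rng.random((S, A)))
```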
.../negotiating-behavior-using-q-learning/negotiating-behavior-using-q-learning.md (18 additions, 0 deletions)
@@ -0,0 +1,18 @@
---
layout: publication
permalink: /publications/negotiating-behavior-using-q-learning/
title: "Learning Negotiating Behavior Between Cars in Intersections using Deep Q-Learning"
venue: ITSC18
authors:
- Tram
- Jansson
- Gronberg
- Ali
- Sjoberg
date: 2018-11-04 00:00:00 +00:00
arxiv: https://ieeexplore.ieee.org/abstract/document/8569316
n_equal_contrib: 1
---

# Abstract
This paper concerns automated vehicles negotiating with other vehicles, typically human-driven, in crossings, with the goal of finding a decision algorithm by learning typical behaviors of other vehicles. The vehicle observes the distance and speed of vehicles on the intersecting road and uses a policy that adapts its speed along its predefined trajectory to pass the crossing efficiently. Deep Q-learning is used on simulated traffic with different predefined driver behaviors and intentions. The results show a policy that is able to cross the intersection while avoiding collisions with other vehicles 98% of the time, while at the same time not being too passive. Moreover, inferring information over time is important to distinguish between different intentions, as shown by comparing the collision rates of a Deep Recurrent Q-Network (0.85%) and a Deep Q-Network (1.75%).
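To make the setup concrete, the sketch below shows the kind of interface the abstract describes: the ego vehicle observes the distance and speed of vehicles on the crossing road and selects a discrete speed adjustment along its predefined path, with a short observation history hinting at why a recurrent Q-network can separate driver intentions that a single frame cannot. Class and parameter names are invented for illustration and do not come from the paper.

```python
# Illustrative agent interface only, not the authors' implementation: discrete
# speed-change actions along a fixed path, epsilon-greedy over a Q-function
# that sees a short history of (distance, speed) observations of crossing cars.
from collections import deque
import random

ACTIONS = [-2.0, 0.0, 2.0]  # brake, keep speed, accelerate (m/s^2)

class IntersectionAgent:
    def __init__(self, history_len=10, epsilon=0.1):
        self.history = deque([(100.0, 0.0)] * history_len, maxlen=history_len)
        self.epsilon = epsilon

    def observe(self, distance_to_crossing_car, crossing_car_speed):
        self.history.append((distance_to_crossing_car, crossing_car_speed))

    def act(self, q_function):
        """Epsilon-greedy over the flattened observation history."""
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        obs = [x for pair in self.history for x in pair]
        return max(ACTIONS, key=lambda a: q_function(obs, a))

# Example with a dummy Q-function that prefers braking when a car is close.
agent = IntersectionAgent()
agent.observe(distance_to_crossing_car=15.0, crossing_car_speed=8.0)
action = agent.act(lambda obs, a: -a if obs[-2] < 20.0 else a)
```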
...ions/tactical-decisions-in-intersections/tactical-decisions-in-intersections.md (16 additions, 0 deletions)
@@ -0,0 +1,16 @@
---
layout: publication
permalink: /publications/tactical-decisions-in-intersections/
title: "Reinforcement Learning with Uncertainty Estimation for Tactical Decision-Making in Intersections"
venue: ITSC20
authors:
- Hoel
- Tram
- Sjoberg
date: 2020-09-20 00:00:00 +00:00
arxiv: https://ieeexplore.ieee.org/abstract/document/9294407
n_equal_contrib: 1
---

# Abstract
This paper investigates how a Bayesian reinforcement learning method can be used to create a tactical decision-making agent for autonomous driving in an intersection scenario, where the agent can estimate the confidence of its decisions. An ensemble of neural networks, with additional randomized prior functions (RPF), is trained using a bootstrapped experience replay memory. The coefficient of variation in the estimated Q-values of the ensemble members is used to approximate the uncertainty, and a criterion that determines whether the agent is sufficiently confident to make a particular decision is introduced. The performance of the ensemble RPF method is evaluated in an intersection scenario and compared to a standard Deep Q-Network method, which does not estimate the uncertainty. It is shown that the trained ensemble RPF agent can detect cases with high uncertainty, both in situations that are far from the training distribution and in situations that seldom occur within it. This work demonstrates one possible application of such a confidence estimate: using it to choose safe actions in unknown situations, which removes all collisions within the training distribution and most collisions outside of it.
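The confidence criterion described above can be sketched as follows: the spread of Q-value estimates across the ensemble members is summarized by the coefficient of variation, and an action is only taken when that spread is below a threshold, otherwise a fallback action is chosen. The threshold value, the fallback mechanism, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of the uncertainty criterion from the abstract: coefficient of
# variation of the ensemble members' Q-estimates for the greedy action,
# compared against a threshold. Threshold and fallback are placeholders.
import numpy as np

def coefficient_of_variation(q_values):
    """Relative spread of the ensemble's Q-estimates for one action."""
    q = np.asarray(q_values, dtype=float)
    return np.std(q) / (np.abs(np.mean(q)) + 1e-8)

def choose_action(ensemble_q, cv_threshold=0.1, fallback_action=0):
    """ensemble_q: Q-estimates of shape (n_members, n_actions)."""
    mean_q = ensemble_q.mean(axis=0)
    greedy = int(mean_q.argmax())
    if coefficient_of_variation(ensemble_q[:, greedy]) > cv_threshold:
        return fallback_action  # uncertain: fall back to a known-safe action
    return greedy

# Example: five ensemble members, three actions.
q = np.array([[1.0, 2.1, 0.5],
              [1.1, 1.9, 0.4],
              [0.9, 2.0, 0.6],
              [1.0, 2.2, 0.5],
              [1.0, 1.8, 0.7]])
print(choose_action(q))
```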