improve docs: Update README.md #9
Open: wants to merge 1 commit into master
README.md: 6 changes (3 additions, 3 deletions)

*Deep Portfolio Theory* is a portfolio selection method published by J. B. Heaton, N. G. Polson, J. H. Witte from GreyMaths Inc.

The authors' code is proprietary, so I (the owner of this GitHub repo) could only implement this notebook myself as an experiment. I am not one of the authors and am not affiliated with them. This code may not achieve the results the paper reports; I may have misunderstood parts of the paper, so I hope someone will continue the research and contribute to the framework. (You are welcome to open issues.)

You may find relevant papers in the lists below:



# Some "tricky" things you may want to know after reading the paper
- The authors use **"auto-encoding, calibration, validation and verification"** as the machine learning steps. In computer science, we would more commonly call them **"auto-encoding, validation, testing and verification"**, but this repo follows the authors' terminology.

- For the figure below, from page 13 of the paper, let's name the upper-left, upper-right, lower-left, and lower-right panels A, B, C, and D for convenience.
![p13](image/p13.png)
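To make the "auto-encoding" step concrete, here is a minimal sketch of a single-hidden-layer linear autoencoder trained by plain gradient descent on standardized toy return data. This is only an illustration of the idea, not the authors' proprietary implementation (which this repo approximates with Keras), and all shapes and hyperparameters below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy return matrix: 100 observations of 10 assets, standardized
# per asset (stand-ins for the real market data used in the notebook).
X = rng.normal(0.0, 0.02, size=(100, 10))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Linear autoencoder: encode 10 assets into 4 hidden factors, decode back.
n_in, n_hidden = X.shape[1], 4
W1 = rng.normal(0, 0.1, (n_in, n_hidden))   # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, n_in))   # decoder weights
lr = 0.1

for _ in range(1000):
    H = X @ W1               # encode
    X_hat = H @ W2           # decode
    err = X_hat - X          # reconstruction error
    # Gradients of the mean squared reconstruction error
    gW2 = H.T @ err / len(X)
    gW1 = X.T @ (err @ W2.T) / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1

mse = float(np.mean((X @ W1 @ W2 - X) ** 2))
print(mse)  # should be well below 1.0, the error of predicting zeros
```

The 4-dimensional hidden layer plays the role of the compressed "market map" the paper builds; the real notebook uses Keras with nonlinear activations rather than this bare linear version.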
Python 3, Keras (TensorFlow backend)


# Data Analysis

- Downloaded from Bloomberg Terminal
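Assuming the Bloomberg export is a matrix of closing prices with dates in rows and assets in columns (the exact export format is not documented in this repo), converting prices to the simple returns the model consumes looks like:

```python
import numpy as np

# Hypothetical price matrix: rows are dates, columns are assets.
# Values are stand-ins, not real Bloomberg data.
prices = np.array([
    [100.0, 50.0],
    [102.0, 49.0],
    [101.0, 51.0],
])

# Simple period-over-period returns: r_t = p_t / p_{t-1} - 1
returns = prices[1:] / prices[:-1] - 1.0
print(returns.shape)  # one fewer row than prices
```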
