Embedding sequence-generation variation in the initial state of an RNN
Paper and authors
Problem addressed
Novelty
Implementation
Experiments and discussion
With the weights of a trained RNN held fixed, supply an initial state and let the RNN interact with an FSM
How much of the variation among sequences can the initial state represent?
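The setup above can be sketched minimally: a tiny RNN whose recurrent and readout weights are frozen is rolled out from a supplied initial state, so the initial state is the only source of variation between generated symbol sequences. All weights, dimensions, and the 4-symbol alphabet below are illustrative assumptions, not values from the paper.

```python
import math
import random

def rnn_generate(h0, W_h, W_o, steps):
    """Roll out a frozen RNN: only the initial state h0 varies between runs."""
    h = list(h0)
    symbols = []
    for _ in range(steps):
        # h <- tanh(W_h h); the recurrent weights W_h are never updated.
        h = [math.tanh(sum(w * x for w, x in zip(row, h))) for row in W_h]
        # Emit the argmax symbol under the frozen readout W_o.
        logits = [sum(w * x for w, x in zip(row, h)) for row in W_o]
        symbols.append(logits.index(max(logits)))
    return symbols

rng = random.Random(0)
W_h = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(8)]  # frozen recurrent weights
W_o = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(4)]  # frozen readout (4 symbols)

# Different initial states -> (potentially) different symbol sequences:
seq_a = rnn_generate([rng.gauss(0, 1) for _ in range(8)], W_h, W_o, steps=10)
seq_b = rnn_generate([rng.gauss(0, 1) for _ in range(8)], W_h, W_o, steps=10)
```

In the FSM-interaction setting, each emitted symbol would additionally be fed to the FSM and its response fed back into the RNN; the sketch only shows the open-loop rollout.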
Open questions and impressions from the read
Related papers
Karras, Tero, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. “Analyzing and Improving the Image Quality of StyleGAN,” 2019.: the StyleGAN2 paper. The path length regularization used there seems to correspond to regularizing the RNN's initial state.
Bojanowski, Piotr, Armand Joulin, David Lopez-Paz, and Arthur Szlam. “Optimizing the Latent Space of Generative Networks.” 35th International Conference on Machine Learning, ICML 2018 2 (2018): 960–72.: an image-generation model (GLO) in which the latent vectors themselves are also optimized by gradient descent.
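The GLO idea in the Bojanowski et al. entry parallels treating the RNN initial state as a learnable parameter: the per-sample latent code z is updated by gradient descent jointly with the model weights. A minimal sketch, assuming a toy linear "generator" G(z) = Wz and a squared-error reconstruction loss (none of this is the paper's actual architecture):

```python
import random

rng = random.Random(1)
DIM_X, DIM_Z = 8, 3
W = [[rng.gauss(0, 0.1) for _ in range(DIM_Z)] for _ in range(DIM_X)]  # toy linear generator
z = [rng.gauss(0, 1) for _ in range(DIM_Z)]  # latent code for ONE sample: itself learnable
x = [rng.gauss(0, 1) for _ in range(DIM_X)]  # the training sample to reconstruct

lr = 0.02
for _ in range(2000):
    # Residual of the reconstruction G(z) = W z against the target x.
    err = [sum(W[i][j] * z[j] for j in range(DIM_Z)) - x[i] for i in range(DIM_X)]
    # Gradient of 0.5*||W z - x||^2 w.r.t. z, computed before W is updated.
    grad_z = [sum(W[i][j] * err[i] for i in range(DIM_X)) for j in range(DIM_Z)]
    # Gradient steps on BOTH the generator weights and the latent code.
    for i in range(DIM_X):
        for j in range(DIM_Z):
            W[i][j] -= lr * err[i] * z[j]
    for j in range(DIM_Z):
        z[j] -= lr * grad_z[j]

residual = [sum(W[i][j] * z[j] for j in range(DIM_Z)) - x[i] for i in range(DIM_X)]
loss = 0.5 * sum(e * e for e in residual)
```

In GLO proper there is one learnable z per training image (optionally projected back to the unit ball after each step) and the generator is a deep network; the sketch only shows the joint weight-and-latent update that makes the connection to learning RNN initial states.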