AVM001 Anytime-Valid Matrix Regression Analysis

Table of Contents

Question

Q. What is the relevant literature for anytime-valid matrix regression?

Literature

Reduced rank regression


🤔 2013: Reduced rank regression via adaptive nuclear norm penalization:

  • (a) A reduced-rank estimation method for general high-dimensional multivariate regression problems.
  • (b) Cast as a low-rank matrix approximation problem.
  • (c) Uses low-rank approximation for denoising, which can be achieved by various methods, e.g., rank penalization or nuclear norm penalization (a minimal sketch follows this list).
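
For the nuclear-norm penalized denoising step in (c), there is a well-known closed form: soft-threshold the singular values. Below is a minimal numpy sketch of that step; the function name, the toy data, and the choice of `lam` are illustrative assumptions of mine, and the adaptive weighting of the 2013 paper is omitted.

```python
import numpy as np

def nuclear_norm_denoise(Y, lam):
    """Closed-form solution of  argmin_B  0.5*||Y - B||_F^2 + lam*||B||_* :
    soft-threshold the singular values of Y by lam (singular value thresholding)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)   # small singular values are shrunk to zero
    return (U * s_shrunk) @ Vt

# Toy example: a rank-2 signal plus Gaussian noise.
rng = np.random.default_rng(0)
signal = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 30))
noisy = signal + 0.5 * rng.standard_normal((50, 30))
denoised = nuclear_norm_denoise(noisy, lam=3.0)
print(np.linalg.matrix_rank(denoised))   # typically far below min(50, 30)
```

The penalty level `lam` decides how many singular values survive, i.e., the recovered rank; roughly speaking, the adaptive version in the paper reweights this shrinkage per singular value.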

😍 2019: Generalized high-dimensional trace regression via nuclear norm regularization (journal version):

  • (a) Covers reduced-rank regression, matrix regression that accommodates a panel of regressors, matrix completion, among others.
  • (b) Our result is an always-valid version of the statistical error rate of $\hat{\Theta}$.
  • (c) Our estimator matches theirs: minimize the negative log-likelihood plus a nuclear norm penalty.
  • (d) One-bit matrix completion: related to our work is the matrix completion problem with binary entries, i.e., 1-bit matrix completion, which is a specific example of their generalized trace regression and has direct application in predicting the aforementioned P2P microfinance outcomes (a minimal sketch follows this list).
  • Citation status.
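
Bullets (c) and (d) suggest the concrete objective "negative log-likelihood plus nuclear norm penalty". Here is a hedged proximal-gradient sketch for the logistic-loss special case (1-bit matrix completion); the function names, fixed step size, and iteration count are illustrative assumptions of mine, not the paper's algorithm.

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: the proximal operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def one_bit_matrix_completion(rows, cols, y, shape, lam=0.05, step=1.0, iters=300):
    """Proximal gradient for
        min_Theta  (1/n) * sum_k log(1 + exp(-y_k * Theta[i_k, j_k])) + lam * ||Theta||_*
    where y_k in {-1, +1} is the observed binary entry at position (i_k, j_k)."""
    Theta = np.zeros(shape)
    n = len(y)
    for _ in range(iters):
        margins = y * Theta[rows, cols]
        # d/dTheta[i_k, j_k] of the logistic loss: -y_k * sigmoid(-margin_k)
        g_vals = -y / (1.0 + np.exp(margins)) / n
        grad = np.zeros(shape)
        np.add.at(grad, (rows, cols), g_vals)          # scatter the sparse gradient
        Theta = svt(Theta - step * grad, step * lam)   # gradient step, then prox step
    return Theta

# Hypothetical usage: 1,000 observed +/-1 entries of a 40 x 40 matrix.
rng = np.random.default_rng(1)
rows, cols = rng.integers(0, 40, 1000), rng.integers(0, 40, 1000)
y = rng.choice([-1.0, 1.0], size=1000)
Theta_hat = one_bit_matrix_completion(rows, cols, y, shape=(40, 40))
```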

Use this paper first to help with the revision.

@article{chen2013reduced,
  title={Reduced rank regression via adaptive nuclear norm penalization},
  author={Chen, Kun and Dong, Hongbo and Chan, Kung-Sik},
  journal={Biometrika},
  volume={100},
  number={4},
  pages={901--920},
  year={2013},
  publisher={Oxford University Press}
}
@article{fan2019generalized,
  title={Generalized high-dimensional trace regression via nuclear norm regularization},
  author={Fan, Jianqing and Gong, Wenyan and Zhu, Ziwei},
  journal={Journal of Econometrics},
  volume={212},
  number={1},
  pages={177--202},
  year={2019},
  publisher={Elsevier}
}

Low-Rank bandit

Matrix Game

🤩 UAI 2021: Matrix games with bandit feedback

  • (a) Payoff between the two players: $r_{t}=A_{i_{t} j_{t}}+\eta_{t}$. Compared with our model $y_{t}=g\left(\Theta_{\pi_{t}}^{*}\right)+\epsilon_{t}$, this is almost the same model! The only difference is that theirs is linear, whereas ours can be a GLM.
  • (b) How to explore? Use UCB for matrix games (a simplified sketch follows this list).
  • (c) Thompson sampling for matrix games is another option.
  • (d) Entropy-based exploration is also possible.
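
A simplified, single-learner sketch of optimistic (UCB-style) exploration over the entries of the payoff matrix under the model $r_{t}=A_{i_{t} j_{t}}+\eta_{t}$. This is not the two-player algorithm of the UAI 2021 paper: the equilibrium computation is dropped, each entry is treated as an independent arm, and the classic UCB1 radius stands in for the anytime-valid radius we would plug in from our own analysis.

```python
import numpy as np

def ucb_over_matrix_entries(A_true, T=5000, noise=0.1, seed=0):
    """Pick an entry (i, j) each round, observe r_t = A[i, j] + eta_t,
    and act optimistically: argmax of empirical mean + confidence radius."""
    rng = np.random.default_rng(seed)
    n, m = A_true.shape
    counts = np.zeros((n, m))
    means = np.zeros((n, m))
    for t in range(1, T + 1):
        if t <= n * m:                       # play every entry once first
            i, j = divmod(t - 1, m)
        else:
            radius = np.sqrt(2.0 * np.log(t) / counts)   # UCB1-style radius (placeholder)
            i, j = np.unravel_index(np.argmax(means + radius), (n, m))
        r = A_true[i, j] + noise * rng.standard_normal()
        counts[i, j] += 1
        means[i, j] += (r - means[i, j]) / counts[i, j]  # running mean update
    return means, counts
```

Swapping the placeholder radius for an anytime-valid one is exactly the combination discussed in the next paragraph.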

Within our framework, does a good statistical error bound actually help the regret? The technical difficulty seems to be ensuring enough samples so that the good event yields a strong result. So the recipe is: combine different "exploration mechanisms" with our "anytime-valid statistical error results". This is quite appealing, since it balances exploration and exploitation; the derivation sketch below makes the connection explicit.
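
A hedged derivation sketch (notation mine, not from the source) of why the anytime-valid error bound feeds directly into the regret: suppose that with probability at least $1-\delta$ the confidence bounds hold for all rounds $t$ and all actions $\pi$ simultaneously, i.e., $\mathrm{LCB}_t(\pi) \le g(\Theta^{*}_{\pi}) \le \mathrm{UCB}_t(\pi)$, and the learner plays optimistically, $\pi_t \in \arg\max_{\pi} \mathrm{UCB}_t(\pi)$. Then on this good event

$$ g\left(\Theta^{*}_{\pi^{\star}}\right) - g\left(\Theta^{*}_{\pi_t}\right) \;\le\; \mathrm{UCB}_t(\pi^{\star}) - \mathrm{LCB}_t(\pi_t) \;\le\; \mathrm{UCB}_t(\pi_t) - \mathrm{LCB}_t(\pi_t), $$

so the cumulative regret is bounded by the sum of the confidence widths at the played actions, and the anytime-valid guarantee makes this hold uniformly over data-dependent stopping times.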

Answer

A. The next step is to first summarize our current proof strategy and results.

Copyright

CC BY-NC-ND 4.0
