# Last week (July 17-21):
- look into sparsity: increase λ, compare GPSINDy and SINDy
- decrease samples and increase noise: does GPSINDy work better?
- use GP to smooth data and use it as input into SINDy (see the sketch below)
  - fixed kernel?
- input should be time, not x or dx as previously specified in the paper?
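A minimal sketch of that smoothing step, with time as the GP input; the SE kernel, hyperparameters, and toy data are placeholders, not values from the repo:

    using LinearAlgebra

    # squared-exponential kernel matrix between input vectors a and b
    se(a, b; ℓ = 0.5, σf = 1.0) = [σf^2 * exp(-(ai - bj)^2 / (2ℓ^2)) for ai in a, bj in b]

    # GP posterior mean at the training inputs: smooth noisy targets y observed at s
    gp_smooth(s, y; σn = 0.1) = se(s, s) * ((se(s, s) + σn^2 * I) \ y)

    t       = collect(0:0.1:10)                    # placeholder time grid
    x_noise = sin.(t) .+ 0.1 .* randn(length(t))   # placeholder noisy state
    x_GP    = gp_smooth(t, x_noise)                # smoothed x, ready for SINDy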
# This week (July 24-28):
Somi: (2)
- try Matern52 kernel within SINDy-GP-ADMM (and maybe Matern32 kernel); see the kernel sketch below
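For reference, minimal sketches of the Matern52 and Matern32 kernels named above (ℓ = length scale, σf = signal std; both values are illustrative defaults):

    # Matern 5/2 kernel between scalar inputs p and q
    function matern52(p, q; ℓ = 1.0, σf = 1.0)
        s = sqrt(5) * abs(p - q) / ℓ
        return σf^2 * (1 + s + s^2 / 3) * exp(-s)
    end

    # Matern 3/2 variant
    function matern32(p, q; ℓ = 1.0, σf = 1.0)
        s = sqrt(3) * abs(p - q) / ℓ
        return σf^2 * (1 + s) * exp(-s)
    end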
David: (4)
- table it for now: figure out the right way to do sparsity; soft thresholding? hard thresholding?
- iterate: GP --> SINDy --> GP --> SINDy --> repeat (sketched after the comment block below)
- GP takes in (t, x) as input, outputs smoothed x
- coefficients * function library to generate dx?
- (tangential) try SINDy with soft thresholding
- create pull request, merge into main
# new dx:
#   dx_new = Θx * Ξ_gpsindy
# combine dx_noise and dx_new:
#   dx_combine = ... ?
#   dx_GP2 = post_dist_M52I( t, t_test, dx_new )
#   Ξ_gpsindy2 = SINDy_test( x_GP, dx_GP2, λ )
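A hedged sketch of the loop those comments describe, assuming the repo's post_dist_M52I and SINDy_test helpers with the signatures used above (n_iters is a placeholder; the open dx_combine step is left out, as above):

    # iterate GP smoothing and SINDy refits: GP --> SINDy --> GP --> SINDy --> ...
    function gpsindy_iterate(t, t_test, x_GP, Θx, Ξ0, λ; n_iters = 5)
        Ξ = Ξ0
        for i in 1:n_iters
            dx_new = Θx * Ξ                             # coefficients * function library --> new dx
            dx_GP  = post_dist_M52I(t, t_test, dx_new)  # smooth the new dx with a Matern 5/2 GP
            Ξ      = SINDy_test(x_GP, dx_GP, λ)         # refit sparse coefficients
        end
        return Ξ
    end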
Adam: (1) --> GPs
- standardize x, add noise --> GP --> derivative of GP *set up GP dx = f(x)
- set up kernel to be a function of x *DONE
Yue: (3) --> l1 norm min, ADMM (thresholding sketch below)
- remove relative tolerances for ADMM *DONE
- keep SINDy the way it is
- don't change the ADMM hyperparameters at every step (i.e., do not change the objective function at every iteration); maybe let ADMM run for 1000 iterations and then update the hyperparameters *DONE
- try taking the log(det(Ky)) term out of the obj fn *DONE
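A minimal sketch of the two thresholding rules mentioned earlier; soft thresholding is the standard ADMM z-update for an l1 term (κ would be λ/ρ in the usual lasso splitting), hard thresholding is what sequential thresholded least squares uses:

    # soft thresholding: prox of κ‖·‖₁, applied elementwise
    soft_threshold(v, κ) = sign.(v) .* max.(abs.(v) .- κ, 0.0)

    # hard thresholding: zero out small coefficients, keep the rest unchanged
    hard_threshold(v, κ) = ifelse.(abs.(v) .>= κ, v, 0.0)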
Junette:
- just do Jake's car data *DONE (set up sandbox)
# July 26
Somi:
1. smooth dx_noise with x_noise as training inputs --> dx_GP (two-stage sketch after this list)
   - smooth x with t first --> x_GP (SE kernel)
   - smooth dx with x --> dx_GP (SE kernel)
   - try a diff kernel? SE *DONE
   - set up kernel to be a function of x *DONE
2. plug in dx_GP and x_noise into SINDy, good plots? *DONE
3. fix ADMM stuff, only update hyperparameters at the start *DONE
4. compare SINDy and GPSINDy *DONE
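A self-contained sketch of the two-stage smoothing in step 1, using the same posterior-mean smoother idea as the earlier sketch (kernel hyperparameters and toy data are placeholders):

    using LinearAlgebra

    se(a, b; ℓ = 0.5, σf = 1.0) = [σf^2 * exp(-(ai - bj)^2 / (2ℓ^2)) for ai in a, bj in b]
    gp_smooth(s, y; σn = 0.1)   = se(s, s) * ((se(s, s) + σn^2 * I) \ y)

    t        = collect(0:0.1:10)                     # placeholder data
    x_noise  = sin.(t) .+ 0.05 .* randn(length(t))
    dx_noise = cos.(t) .+ 0.05 .* randn(length(t))

    x_GP  = gp_smooth(t, x_noise)       # 1a: smooth x with t as the GP input
    dx_GP = gp_smooth(x_GP, dx_noise)   # 1b: smooth dx with x as the GP input
    # 2: Ξ = SINDy_test(x_noise, dx_GP, λ)   # repo helper, signature as used above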
# July 27
David:
- implement the picture you took with your phone (GP-SINDy-GP-SINDy etc)
- create nice plot from Lasse (fig 3) *DONE
- table it: actually implement LASSO
- write the problem up to say that we used coordinate descent (with separate primal variables); see the sketch after this list
  - say that we stacked the unknown variables (the ξ coefficients and the σ hyperparameters) into 1 unknown vector
  - then say we used coordinate descent to split the unknown vector into 2 blocks (split variables)
  - then do the thing that we wrote on the whiteboard
- try on Jake's data, just see how it looks *DONE (it looks terrible with SINDy alone)
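A toy illustration of that coordinate-descent framing (not the paper's actual objective): alternate exact minimization over each block of the stacked unknowns, e.g. for f(a, b) = (a - 1)^2 + (b - 2)^2 + a*b/2:

    # block coordinate descent: hold one block fixed, minimize exactly over the other
    function coord_descent(; iters = 50)
        a, b = 0.0, 0.0
        for k in 1:iters
            a = 1 - b / 4    # solve ∂f/∂a = 2(a - 1) + b/2 = 0 with b fixed
            b = 2 - a / 4    # solve ∂f/∂b = 2(b - 2) + a/2 = 0 with a fixed
        end
        return a, b          # converges to the joint minimizer of f
    end

    # in the write-up, one block plays the role of the ξ coefficients (sparse
    # regression step) and the other the σ hyperparameters (GP step)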
deadline:
- a journal would be nicer
- decide later
email Chante about work ending Aug 4
email her again about lunch reimbursement (she is responsive)
# July 28
- (suggestion) for step 2: implement the mean function from Adam's whiteboard as the GP mean function (see the sketch below)
  - i.e., subtract Theta(x)*Ξ from the training points dx_noise
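A minimal sketch of that suggestion; regressing the GP on the residuals and adding the model back to the posterior mean is equivalent to using Theta(x)*Ξ as the GP mean function (the SE kernel and hyperparameters are placeholder choices):

    using LinearAlgebra

    se(a, b; ℓ = 0.5, σf = 1.0) = [σf^2 * exp(-(ai - bj)^2 / (2ℓ^2)) for ai in a, bj in b]

    # GP with mean m = Θx * Ξ: subtract m from the training targets,
    # smooth the residual, then add m back
    function gp_sindy_mean(t, dx_noise, Θx, Ξ; σn = 0.1)
        m     = Θx * Ξ                       # SINDy model as the GP mean
        resid = dx_noise .- m                # subtract from training points dx_noise
        K     = se(t, t) + σn^2 * I
        return m .+ se(t, t) * (K \ resid)   # posterior mean = mean + smoothed residual
    end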
submit paper:
- learned models have uncertainty
- can put learned models with uncertainties into parametric form
- experiments: learn dynamics
- treat the uncertainty of the learned system as a disturbance on the data
this weekend:
- try on real data
my idea:
- Kalman filter GP-SINDy? condition the learned model on new data?