Sebastian.p/orth dgp #88

Open: wants to merge 56 commits into base: develop
cbe5d05
implementation of DistDGPs
SebastianPopescu Aug 1, 2022
826022e
implementation of orthogonal sparse GPs
SebastianPopescu Aug 5, 2022
5261edf
implementation of Orthogonal Sparse GPs
SebastianPopescu Aug 5, 2022
8a66610
changes in requirements
SebastianPopescu Aug 11, 2022
9334862
minor updated
SebastianPopescu Aug 12, 2022
868f89a
latest_update
SebastianPopescu Nov 24, 2022
f6c0bc2
updated codebase
SebastianPopescu Nov 24, 2022
ae2bbaf
added tests
SebastianPopescu Nov 24, 2022
425cc58
full covariance function coverage
SebastianPopescu Nov 25, 2022
24d065d
removing files from git
hstojic Nov 25, 2022
a01a6a2
format
hstojic Nov 25, 2022
4ff917f
Address linting issues.
avullo Nov 28, 2022
10251ae
Addressing some more lint errors.
avullo Nov 28, 2022
53dae0f
fixed some bugs
SebastianPopescu Nov 28, 2022
c9dd296
Mypy fixes.
avullo Nov 28, 2022
376effe
Merge branch 'sebastian.p/orth_dgp' of github.com:secondmind-labs/GPf…
avullo Nov 28, 2022
6c4dcfe
Black.
avullo Nov 28, 2022
0e50c84
fixed some tests
SebastianPopescu Nov 28, 2022
bd6163f
fixed some tests
SebastianPopescu Nov 28, 2022
0386df6
tests functioning
SebastianPopescu Nov 29, 2022
ba01ca5
moved notebooks
SebastianPopescu Nov 29, 2022
bb59d7b
Moving to jupytext.
avullo Nov 29, 2022
4705825
Adding reference and contributors.
avullo Nov 29, 2022
f7d80fb
Can reuse the GPflow het likelihood.
avullo Nov 29, 2022
2a1b33e
Formatting.
avullo Nov 29, 2022
f3e3c0a
Last minute correction by Sebastian during briefing
sc336 Nov 30, 2022
9e6fb9b
Sebastian thinks this is no longer needed
sc336 Nov 30, 2022
b939ad3
Format
sc336 Nov 30, 2022
95bbabe
format
sc336 Nov 30, 2022
1060ff2
- Use configurations with models and likelihoods
avullo Nov 30, 2022
f5c2fd8
Copyright and formatting.
avullo Nov 30, 2022
2f354b1
Merge remote-tracking branch 'origin/sebastian.p/orth_dgp' into sebas…
avullo Nov 30, 2022
dbaab42
No longer needed.
avullo Nov 30, 2022
38e8644
Validate model config and test.
avullo Nov 30, 2022
277c440
Parametrise tests.
avullo Dec 1, 2022
cf206b2
Removed redundant notebooks, finished main one.
avullo Dec 1, 2022
b905ff5
Mypy fixes.
avullo Dec 1, 2022
9beb7ad
Typo.
avullo Dec 1, 2022
63a1178
New config and architecture factory infrastructure.
avullo Dec 1, 2022
e087c2a
Formatting.
avullo Dec 1, 2022
cf2ba87
API change.
avullo Dec 2, 2022
caf3442
Must be test skipped.
avullo Dec 2, 2022
d9f6a1c
Remove unused components.
avullo Dec 2, 2022
2386fc7
Merge branch 'develop' into sebastian.p/orth_dgp
avullo Dec 2, 2022
fd3193a
More type-safety.
avullo Dec 7, 2022
01690dc
Remove gpflow duplicate.
avullo Dec 8, 2022
500e253
Merge branch 'develop' into sebastian.p/orth_dgp
avullo Dec 8, 2022
ff84e98
Fix assertion considering numpy array.
avullo Dec 8, 2022
1a108b7
Refactoring in an attempt to increase coverage.
avullo Dec 9, 2022
4d99896
Both models are defined in the same file, so same test module.
avullo Dec 9, 2022
382dbf7
Unused module.
avullo Dec 9, 2022
87fe418
Addresing PR's comment.
avullo Dec 12, 2022
638756a
Expand tests to increase coverage.
avullo Dec 12, 2022
fb7f9ae
Expand tests to increase coverage.
avullo Dec 12, 2022
3283956
Address PR's comment.
avullo Dec 12, 2022
dccbb6b
Rolling back changes.
avullo Dec 12, 2022
4 changes: 3 additions & 1 deletion CONTRIBUTORS.md
@@ -12,7 +12,9 @@ Because GitHub's [graph of contributors](http://github.com/secondmind-labs/GPflu
[Felix Leibfried](https://github.com/fleibfried),
[John A. McLeod](https://github.com/johnamcleod),
[Hugh Salimbeni](https://github.com/hughsalimbeni),
[Marcin B. Tomczak](https://github.com/marctom)
[Marcin B. Tomczak](https://github.com/marctom),
[Sebastian Popescu](https://github.com/SebastianPopescu),
[Alessandro Vullo](https://github.com/avullo),


Feel free to add yourself when you first contribute to GPflux's code, tests, or documentation!
502 changes: 171 additions & 331 deletions docs/notebooks/deep_cde.ipynb

Large diffs are not rendered by default.

21 changes: 13 additions & 8 deletions docs/notebooks/gpflux_features.py
@@ -61,15 +61,22 @@ def motorcycle_data():
"""

# %%
import gpflux
from gpflow.kernels import SquaredExponential

from gpflux.architectures import Config, build_constant_input_dim_deep_gp
import gpflux
from gpflux.architectures.config import GaussianLikelihoodConfig, ModelHyperParametersConfig
from gpflux.architectures.factory import build_constant_input_dim_architecture
from gpflux.models import DeepGP

config = Config(
num_inducing=25, inner_layer_qsqrt_factor=1e-5, likelihood_noise_variance=1e-2, whiten=True
config = ModelHyperParametersConfig(
num_layers=2,
kernel=SquaredExponential,
likelihood=GaussianLikelihoodConfig(noise_variance=1e-2),
inner_layer_qsqrt_factor=1e-5,
whiten=True,
num_inducing=25,
)
deep_gp: DeepGP = build_constant_input_dim_deep_gp(X, num_layers=2, config=config)
deep_gp: DeepGP = build_constant_input_dim_architecture(config, X)

# %% [markdown]
"""
@@ -164,9 +171,7 @@ def plot(model, X, Y, ax=None):
prediction_model.save_weights("weights")

# %%
prediction_model_new = build_constant_input_dim_deep_gp(
X, num_layers=2, config=config
).as_prediction_model()
prediction_model_new = build_constant_input_dim_architecture(config, X).as_prediction_model()
prediction_model_new.load_weights("weights")

# %%
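The hunks above replace the old positional `Config`/`build_constant_input_dim_deep_gp` API with a config-object factory (`ModelHyperParametersConfig` plus `build_constant_input_dim_architecture`). A minimal, stdlib-only sketch of that config-driven factory pattern — all names below are hypothetical stand-ins, not the actual GPflux API:

```python
from dataclasses import dataclass


@dataclass
class LikelihoodConfig:
    """Hypothetical stand-in for GaussianLikelihoodConfig."""

    noise_variance: float = 1e-2


@dataclass
class ModelConfig:
    """Hypothetical stand-in for ModelHyperParametersConfig."""

    num_layers: int
    likelihood: LikelihoodConfig
    inner_layer_qsqrt_factor: float = 1e-5
    whiten: bool = True
    num_inducing: int = 25


def build_architecture(config: ModelConfig, X):
    """Toy factory: validate the config and echo the settings a real
    builder would use to construct one GP layer per num_layers."""
    if config.num_layers < 1:
        raise ValueError("need at least one layer")
    return {
        "layers": config.num_layers,
        "noise_variance": config.likelihood.noise_variance,
        "num_inducing": config.num_inducing,
    }


model = build_architecture(ModelConfig(num_layers=2, likelihood=LikelihoodConfig()), X=None)
```

Grouping hyperparameters in one validated object, rather than threading positional arguments through the builder, is the design change this part of the PR makes.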
195 changes: 195 additions & 0 deletions docs/notebooks/plotting_functions.py
@@ -0,0 +1,195 @@
#
# Copyright (c) 2022 The GPflux Contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Experimental plotting utilities used by the notebooks. This code is provided
as-is and is not part of the supported GPflux API.
"""
import io

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf


def get_classification_detailed_plot(
    num_layers, X_training, Y_training, where_to_save, f_mean_overall, f_var_overall, name_file
):
    """Plot per-layer predictive mean and variance for a 2D classification problem."""
    xx, yy = np.mgrid[-5:5:0.1, -5:5:0.1]
    grid = np.c_[xx.ravel(), yy.ravel()]
    grid = grid.astype(np.float32)

    indices_class_1 = np.where(Y_training == 1.0)
    indices_class_0 = np.where(Y_training == 0.0)

    fig, axs = plt.subplots(
        nrows=2, ncols=num_layers, sharex=True, sharey=True, figsize=(20 * num_layers, 40)
    )

    for current_layer in range(num_layers):
        current_mean = f_mean_overall[current_layer]
        current_mean = current_mean.reshape((100, 100))
        current_var = f_var_overall[current_layer]
        current_var = current_var.reshape((100, 100))

        # Predictive mean
        axis = axs[0, current_layer]
        contour = axis.contourf(xx, yy, current_mean, 50, cmap="coolwarm")
        cbar1 = fig.colorbar(contour, ax=axis)
        cbar1.ax.tick_params(labelsize=60)

        axis.set(xlim=(-5.0, 5.0), ylim=(-5.0, 5.0), xlabel="$X_1$", ylabel="$X_2$")
        axis.set_title(label="Predictive Mean", fontdict={"fontsize": 60})
        axis.tick_params(axis="both", which="major", labelsize=80)

        axis.scatter(
            X_training[indices_class_0, 0],
            X_training[indices_class_0, 1],
            s=100,
            marker="X",
            alpha=0.2,
            c="green",
            linewidth=1,
            label="Class 0",
        )
        axis.scatter(
            X_training[indices_class_1, 0],
            X_training[indices_class_1, 1],
            s=100,
            marker="D",
            alpha=0.2,
            c="purple",
            linewidth=1,
            label="Class 1",
        )
        axis.legend(loc="upper right", prop={"size": 60})

        # Predictive variance
        axis = axs[1, current_layer]
        contour = axis.contourf(xx, yy, current_var, 50, cmap="coolwarm")
        cbar1 = fig.colorbar(contour, ax=axis)
        cbar1.ax.tick_params(labelsize=60)

        axis.set(xlim=(-5, 5), ylim=(-5, 5), xlabel="$X_1$", ylabel="$X_2$")
        axis.set_title(label="Predictive Variance", fontdict={"fontsize": 60})
        axis.tick_params(axis="both", which="major", labelsize=80)

        axis.scatter(
            X_training[indices_class_0, 0],
            X_training[indices_class_0, 1],
            s=100,
            marker="X",
            alpha=0.2,
            c="green",
            linewidth=1,
            label="Class 0",
        )
        axis.scatter(
            X_training[indices_class_1, 0],
            X_training[indices_class_1, 1],
            s=100,
            marker="D",
            alpha=0.2,
            c="purple",
            linewidth=1,
            label="Class 1",
        )
        axis.legend(loc="upper right", prop={"size": 60})

    plt.tight_layout()
    plt.savefig(where_to_save + name_file)
    plt.close()


def get_regression_detailed_plot(
    num_layers,
    X_training,
    Y_training,
    where_to_save,
    mean,
    var,
    name_file,
    x_margin,
    y_margin,
    X_test,
):
    """Plot per-layer predictive mean with a two-standard-deviation band for 1D regression."""
    figure, axs = plt.subplots(
        nrows=1, ncols=num_layers, sharex=True, sharey=True, figsize=(10 * num_layers, 10)
    )

    X_test = X_test.squeeze()
    for current_layer in range(num_layers):
        current_mean = mean[current_layer]
        current_var = var[current_layer]

        # Credible band: mean plus/minus two predictive standard deviations
        lower = current_mean - 2 * np.sqrt(current_var)
        upper = current_mean + 2 * np.sqrt(current_var)

        axis = axs[current_layer]
        axis.set_ylim(Y_training.min() - y_margin, Y_training.max() + y_margin)
        axis.plot(X_training, Y_training, "kx", alpha=0.5, label="Training")
        axis.plot(X_test, current_mean, "C1")
        axis.fill_between(X_test, lower, upper, color="C1", alpha=0.3)
        axis.legend(loc="upper right", prop={"size": 60})

        axis.set_title(label=f"Layer {current_layer + 1}", fontdict={"fontsize": 60})
        axis.tick_params(axis="both", which="major", labelsize=80)

    plt.tight_layout()
    return figure
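The band plotted above is the usual mean plus or minus two predictive standard deviations, which covers roughly 95% of a Gaussian predictive density. With concrete numbers (illustrative values only):

```python
import numpy as np

# Predictive mean and variance at three hypothetical test points
mean = np.array([0.0, 1.0, 2.0])
var = np.array([0.04, 0.09, 0.16])

# Lower and upper edges of the ~95% credible band
lower = mean - 2 * np.sqrt(var)
upper = mean + 2 * np.sqrt(var)
# lower -> [-0.4, 0.4, 1.2], upper -> [0.4, 1.6, 2.8]
```

Note the square root: the model returns variances, so forgetting `np.sqrt` would plot a band in the wrong units.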


def plot_to_image(figure):
    """Convert the matplotlib plot given by `figure` to a PNG image tensor.

    The supplied figure is closed and inaccessible after this call.
    """
    # Save the plot to a PNG in memory.
    buf = io.BytesIO()
    plt.savefig(buf, format="png")
    # Closing the figure prevents it from being displayed directly inside the notebook.
    plt.close(figure)
    buf.seek(0)
    # Convert the PNG buffer to a TF image and add a batch dimension.
    image = tf.image.decode_png(buf.getvalue(), channels=4)
    image = tf.expand_dims(image, 0)
    return image
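`plot_to_image` uses the write-to-buffer-then-rewind idiom: render into an in-memory `io.BytesIO`, seek back to the start, and hand the raw bytes to the consumer. A stdlib-only sketch of just that idiom, where the render callback is a fake stand-in for `plt.savefig(buf, format="png")`:

```python
import io


def render_to_bytes(render_fn):
    """Call render_fn to write into an in-memory buffer, rewind, return the bytes."""
    buf = io.BytesIO()
    render_fn(buf)  # stand-in for plt.savefig(buf, format="png")
    buf.seek(0)  # rewind so the consumer reads from the start, not the end
    return buf.getvalue()


payload = render_to_bytes(lambda b: b.write(b"\x89PNG fake payload"))
```

Forgetting the `seek(0)` is the classic bug here: without it, a consumer that reads from the stream (rather than calling `getvalue()`) sees an empty payload.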