2023 05 17 Client Meeting Notes

severinbratus edited this page May 17, 2023 · 2 revisions

Meeting Notes

Context

  • Location: online
  • Date: Tue May 17
  • Time: 12:00
  • Attendees:
    • Team 11D:
      • Mark Ardman
      • Severin Bratus
      • Adelina Cazacu
      • Andrei Ionescu
      • Ivan Makarov
    • Patrick Altmeyer (client)
  • Chair: Mark Ardman
  • Notetaker: Severin Bratus

Agenda

Check-in / Current Progress

  • Each team member reports on their progress since the last meeting, including any completed tasks and any obstacles encountered.
  • Guiding questions:
    • What did you do since the last meeting / last week?
    • What are you planning to do next, until the next meeting / next week?
    • Are there any obstacles in your way?

We updated PA: we are almost done with the Getting Started issues and are now beginning to tackle the core issues.

Discussion Points

  • Difference between Fisher and Empirical Fisher
  • Usage of a (smaller) Hessian in the computation of the GGN
  • Mini-batching and its interaction with the Jacobian function
  • Interfacing to MLJ.jl, structural changes

If minibatching can be done with Flux's built-in support, use it.

On IM's issue (multi-class support): we can add tests by computing the Hessian for multinomial regression analytically (in closed form), since multinomial regression is equivalent to a single-layer NN. However, it is also acceptable (and less complex) to hardcode the results of the PyTorch library computation.
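A minimal NumPy sketch of the analytic check discussed above (helper names and shapes are ours, not from the meeting): for softmax cross-entropy, the Hessian w.r.t. the logits is `diag(p) - p pᵀ`, which we can verify against a finite-difference Hessian. For multinomial regression (a single-layer NN), the per-sample weight-space Hessian then follows as the Kronecker product of this matrix with `x xᵀ`.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def ce_loss(z, y):
    # cross-entropy of a one-hot target y against softmax(z)
    return -np.log(softmax(z)[np.argmax(y)])

def analytic_hessian(z):
    # closed-form Hessian of cross-entropy w.r.t. the logits:
    # H = diag(p) - p p^T (independent of the target y)
    p = softmax(z)
    return np.diag(p) - np.outer(p, p)

def numeric_hessian(z, y, eps=1e-4):
    # central second differences of the loss, as a reference
    k = len(z)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            zpp = z.copy(); zpp[i] += eps; zpp[j] += eps
            zpm = z.copy(); zpm[i] += eps; zpm[j] -= eps
            zmp = z.copy(); zmp[i] -= eps; zmp[j] += eps
            zmm = z.copy(); zmm[i] -= eps; zmm[j] -= eps
            H[i, j] = (ce_loss(zpp, y) - ce_loss(zpm, y)
                       - ce_loss(zmp, y) + ce_loss(zmm, y)) / (4 * eps**2)
    return H

z = np.array([0.5, -1.0, 2.0])
y = np.array([0.0, 0.0, 1.0])
assert np.allclose(analytic_hessian(z), numeric_hessian(z, y), atol=1e-4)
```

The same idea extends to a test against hardcoded PyTorch outputs: only the reference matrix changes, not the closed form.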

MA: MLJ has strict requirements on what the inputs / outputs are. Currently our functions are not up to that standard (they differ considerably).

Patrick has developed an early-stage package for Joint Energy Modeling, a generative task that neural networks alone cannot do; have a look at it. PA has also created an issue in MLJFlux, more related to Counterfactual Explanations. Take a look at MLJFlux, the interface that combines MLJ and Flux and makes it easy to use them together.

PA on the computation of the GGN: the middle term in formula (8) of Daxberger et al. is a smaller Hessian (w.r.t. the linear model output), used to approximate the larger Hessian (w.r.t. the weights).
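As we understood it (our notation; symbols may differ from the paper's), the GGN sandwiches the smaller output-space Hessian between Jacobians of the network output:

```math
G(\theta) = \sum_{n=1}^{N} J_n^\top \, \Lambda_n \, J_n,
\qquad
J_n = \frac{\partial f(x_n; \theta)}{\partial \theta},
\qquad
\Lambda_n = \nabla^2_{f} \, \ell\big(f(x_n; \theta), y_n\big)
```

Here each $\Lambda_n$ is only $C \times C$ ($C$ = number of outputs), much smaller than the $P \times P$ Hessian w.r.t. the weights.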

The Fisher information matrix is approximated by the empirical Fisher. The GGN may not be equal to the empirical Fisher (as computed by PyTorch).
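For reference (standard definitions, not spelled out in the meeting), the distinction is where the labels come from:

```math
F = \sum_{n=1}^{N} \mathbb{E}_{\hat{y} \sim p(y \mid x_n, \theta)}
\!\left[ \nabla_\theta \log p(\hat{y} \mid x_n, \theta) \, \nabla_\theta \log p(\hat{y} \mid x_n, \theta)^\top \right],
\qquad
\tilde{F} = \sum_{n=1}^{N} \nabla_\theta \log p(y_n \mid x_n, \theta) \, \nabla_\theta \log p(y_n \mid x_n, \theta)^\top
```

The true Fisher $F$ takes the expectation under the model's own predictive distribution, while the empirical Fisher $\tilde{F}$ plugs in the observed labels $y_n$. For exponential-family likelihoods the GGN coincides with $F$, but in general not with $\tilde{F}$.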

Patrick suggests we use Tullio.jl for einsums.
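Tullio.jl expresses contractions like the GGN sandwich as a single einsum (something along the lines of `@tullio G[i,j] := J[c,i] * H[c,d] * J[d,j]` — syntax to be checked against the Tullio docs). A language-neutral NumPy sketch of the same contraction:

```python
import numpy as np

rng = np.random.default_rng(0)
C, P = 3, 5                     # output classes, number of parameters
J = rng.normal(size=(C, P))     # Jacobian of outputs w.r.t. parameters
H = rng.normal(size=(C, C))
H = H @ H.T                     # symmetric PSD output-space Hessian

# per-sample GGN contribution G = J^T H J as a single einsum
G = np.einsum('ci,cd,dj->ij', J, H, J)

assert np.allclose(G, J.T @ H @ J)
```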

PA: Maybe it is a good time to come together and see how these relate to each other. The refactoring issue will affect existing code. "Take your time with this."

PA expects the three issues (GGN + minibatching + refactoring) to have a common link, though it is possible to compute the GGN approximation without minibatching.
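One reason minibatching and the GGN interact cleanly: the GGN is a sum of per-sample terms, so the full-data GGN equals the sum of per-batch GGNs. A NumPy sketch of that sanity check (shapes and batch size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, C, P = 8, 3, 4
Js = rng.normal(size=(N, C, P))      # per-sample Jacobians
Hs = rng.normal(size=(N, C, C))
Hs = Hs @ Hs.transpose(0, 2, 1)      # per-sample PSD output-space Hessians

def ggn(J, H):
    # sum_n J_n^T H_n J_n over the leading (sample) axis
    return np.einsum('nci,ncd,ndj->ij', J, H, J)

full = ggn(Js, Hs)
batched = sum(ggn(Js[b:b + 2], Hs[b:b + 2]) for b in range(0, N, 2))
assert np.allclose(full, batched)
```

So minibatching changes how the terms are accumulated, not the result; computing the GGN without minibatching is just the single-batch case.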

Any Other Questions

  • Discuss any additional items not covered by the above agenda.

We might meet next Tue, 23 May, after the ACC resit exam, at a Delft fintech event.

Closure