Support of a method network % evaluate #179

Open
jvdp1 opened this issue Apr 20, 2024 · 5 comments
Comments

@jvdp1
Collaborator

jvdp1 commented Apr 20, 2024

I propose introducing an evaluate method that returns the loss value for the trained model:

function evaluate(input_data, output_data, loss) result(res)
  real, intent(in) :: input_data(:,:)
  real, intent(in) :: output_data(:,:)
  class(loss_type), intent(in), optional :: loss
  real :: res(size(output_data, 1))
end function
@milancurcic
Member

I agree this would be useful. Python frameworks distinguish between losses and metrics, e.g. see https://keras.io/api/metrics/. It could be a good idea to make the same distinction here, if we envision providing a broad range of losses and metrics. Losses can be used as metrics, but not all metric functions are differentiable. How do we best implement this without duplicating too much code from losses? The simplest approach could be to simply have a completely new nf_metrics.f90 with an nf_metrics module that follows the same model as nf_loss, with an abstract derived type and concrete function implementations, but without derivatives.
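A minimal sketch of what such an nf_metrics module could look like, assuming it mirrors the abstract-type pattern of nf_loss; all names (metric_type, metric_interface) are illustrative, not the actual neural-fortran API:

```fortran
! Hypothetical sketch of nf_metrics, modeled on nf_loss but
! without a deferred derivative binding. Names are illustrative.
module nf_metrics

  implicit none

  private
  public :: metric_type

  type, abstract :: metric_type
  contains
    ! Each concrete metric provides only an evaluation function.
    procedure(metric_interface), nopass, deferred :: eval
  end type metric_type

  abstract interface
    pure function metric_interface(true, predicted) result(res)
      real, intent(in) :: true(:)
      real, intent(in) :: predicted(:)
      real :: res
    end function metric_interface
  end interface

end module nf_metrics
```

A concrete metric would then extend metric_type and implement eval, exactly as concrete losses extend the loss type in nf_loss, just without a derivative.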

@milancurcic
Member

In this case, I would make the metric/loss function non-optional, for better readability of the client code.

@jvdp1
Collaborator Author

jvdp1 commented Apr 30, 2024

Would something like this work:

  type, abstract :: metrics_type
  contains
    procedure(loss_interface), nopass, deferred :: eval
  end type metrics_type


  type, extends(metrics_type), abstract :: loss_type
  contains
    procedure(loss_derivative_interface), nopass, deferred :: derivative
  end type loss_type

with the following API:

function evaluate(self, input, output, loss, metrics) result(val)
  class(network), intent(inout) :: self
  real, intent(in) :: input(:)
  real, intent(in) :: output(:)
  class(loss_type), intent(in), optional :: loss
  class(metrics_type), intent(in), optional :: metrics(:) ! this would also accept loss types, and therefore we don't need to duplicate them.
  real, allocatable :: val(:)
end function

@milancurcic
Member

milancurcic commented Apr 30, 2024

Sounds good, in that case, is it not just:

function evaluate(self, input, output, metrics) result(val)
  class(network), intent(inout) :: self
  real, intent(in) :: input(:)
  real, intent(in) :: output(:)
  class(metrics_type), intent(in), optional :: metrics(:) ! this would also accept loss types, and therefore we don't need to duplicate them.
  real, allocatable :: val(:)
end function

@milancurcic
Member

As discussed, the scalar variant would be

pure real function evaluate(self, input, output, metric) result(val)
  class(network), intent(in) :: self
  real, intent(in) :: input(:)
  real, intent(in) :: output(:)
  class(metric_type), intent(in), optional :: metric ! this would also accept loss types, and therefore we don't need to duplicate them.
end function evaluate

If optional, the default metric can be MSE.
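For illustration, client code against this proposed API might look like the sketch below. This assumes the evaluate binding exists on the network type and that an mse metric constructor is available; both are hypothetical here, since neither is in the library yet:

```fortran
! Hypothetical usage of the proposed evaluate method.
! net, input, and output are assumed to be set up elsewhere;
! mse() is an illustrative metric constructor name.
program evaluate_example
  use nf, only: network, mse
  implicit none
  type(network) :: net
  real :: input(3), output(2)
  real :: val

  ! ... construct and train net, fill input and output ...

  val = net % evaluate(input, output)               ! metric omitted: defaults to MSE
  val = net % evaluate(input, output, metric=mse()) ! metric passed explicitly

end program evaluate_example
```

Making the metric optional with an MSE default keeps the common case terse, while still allowing any metric_type (including losses) to be passed explicitly.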
