Support of a method network % evaluate
#179
Comments
I agree this would be useful. Python frameworks distinguish between losses and metrics, e.g. see https://keras.io/api/metrics/. It could be a good idea to differentiate here as well, if we envision providing a broad range of losses and metrics. Losses can be used as metrics, but not all metric functions are differentiable. How do we best implement this without duplicating too much code from losses? The simplest approach could be to simply have a completely new |
In this case, I would make the metric/loss function non-optional, for better readability of the client code. |
Would something like this work:

```fortran
type, abstract :: metric_type
contains
  procedure(loss_interface), nopass, deferred :: eval
end type metric_type

type, extends(metric_type), abstract :: loss_type
contains
  procedure(loss_derivative_interface), nopass, deferred :: derivative
end type loss_type
```

with the following API:

```fortran
function evaluate(self, input, output, loss, metrics) result(vals)
  class(network), intent(inout) :: self
  real, intent(in) :: input(:)
  real, intent(in) :: output(:)
  class(loss_type), intent(in), optional :: loss
  class(metric_type), intent(in), optional :: metrics(:) ! this would also accept loss types, so we don't need to duplicate them
  real, allocatable :: vals(:)
end function evaluate
``` |
Sounds good, in that case, is it not just:

```fortran
function evaluate(self, input, output, metrics) result(vals)
  class(network), intent(inout) :: self
  real, intent(in) :: input(:)
  real, intent(in) :: output(:)
  class(metric_type), intent(in), optional :: metrics(:) ! this would also accept loss types, so we don't need to duplicate them
  real, allocatable :: vals(:)
end function evaluate
``` |
As discussed, the scalar variant would be:

```fortran
pure real function evaluate(self, input, output, metric) result(val)
  class(network), intent(in) :: self
  real, intent(in) :: input(:)
  real, intent(in) :: output(:)
  class(metric_type), intent(in), optional :: metric ! this would also accept loss types, so we don't need to duplicate them
end function evaluate
```

If optional, the default metric can be MSE. |
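To make the proposed hierarchy concrete, here is a minimal, self-contained sketch of how one metric could be implemented against it. All names here (`metric_mod`, `metric_interface`, `mse_type`, `mse_eval`) are hypothetical illustrations, not part of any existing API; the abstract interface stands in for the `loss_interface` referenced above.

```fortran
module metric_mod
  implicit none

  ! Hypothetical interface that every metric's eval procedure must match
  abstract interface
    pure real function metric_interface(true, predicted) result(res)
      real, intent(in) :: true(:)
      real, intent(in) :: predicted(:)
    end function metric_interface
  end interface

  type, abstract :: metric_type
  contains
    procedure(metric_interface), nopass, deferred :: eval
  end type metric_type

  ! Mean squared error; differentiable, so it could double as a loss
  type, extends(metric_type) :: mse_type
  contains
    procedure, nopass :: eval => mse_eval
  end type mse_type

contains

  pure real function mse_eval(true, predicted) result(res)
    real, intent(in) :: true(:)
    real, intent(in) :: predicted(:)
    res = sum((true - predicted)**2) / size(true)
  end function mse_eval

end module metric_mod
```

A caller would then pass `type(mse_type)` (or any other `metric_type` extension) as the `metric` argument, and `evaluate` would dispatch through `metric % eval(output, predicted)` without knowing which concrete metric it received.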
I propose to introduce the method `evaluate` that returns the loss value for the trained model.