Since this records a set of metrics for each HTTP handler
class, it quickly leads to an explosion of cardinality and
makes storing metrics quite difficult. For example, just accessing
the metrics endpoint creates the following 17 time series:
```
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="0.005",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="0.01",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="0.025",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="0.05",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="0.075",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="0.1",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="0.25",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="0.5",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="0.75",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="1.0",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="2.5",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="5.0",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="7.5",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="10.0",method="GET",status_code="200"} 9.0
http_request_duration_seconds_bucket{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",le="+Inf",method="GET",status_code="200"} 9.0
http_request_duration_seconds_count{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",method="GET",status_code="200"} 9.0
http_request_duration_seconds_sum{handler="jupyter_server.base.handlers.PrometheusMetricsHandler",method="GET",status_code="200"} 0.009019851684570312
```
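To see why the cardinality grows so fast: each distinct (handler, method, status_code) combination emits a full histogram, which is one series per bucket boundary plus the `_count` and `_sum` series. A rough sketch of the arithmetic (the deployment sizes below are hypothetical, chosen only for illustration):

```python
# Each (handler, method, status_code) combination produces one full
# histogram: one series per bucket boundary plus _count and _sum.
bucket_boundaries = 15  # default prometheus_client buckets, incl. +Inf
series_per_combination = bucket_boundaries + 2  # + _count and _sum
print(series_per_combination)  # 17, matching the listing above

# Hypothetical deployment sizes (not measured values):
handlers = 50       # distinct handler classes across server + extensions
methods = 4         # GET, POST, PUT, DELETE
status_codes = 5    # 200, 302, 403, 404, 500
total_series = handlers * methods * status_codes * series_per_combination
print(total_series)  # 17000
```

Multiply that again by the number of single-user servers in a multitenant hub and the series count reaches the point where Prometheus storage and query performance suffer.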
This is what has stalled prior attempts at usefully collecting
metrics from jupyter_server in multitenant deployments
(see berkeley-dsep-infra/datahub#1977).
This PR adds a traitlet that allows hub admins to turn these
metrics off.
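Admins would opt out via their server config file. A hypothetical sketch, assuming the trait lands on `ServerApp` with a name along these lines (the actual trait name and default are defined by this PR, not here):

```python
# jupyter_server_config.py
# Hypothetical: "record_http_request_metrics" is an assumed trait name,
# shown only to illustrate how the opt-out would be configured.
c.ServerApp.record_http_request_metrics = False
```

With the trait set to `False`, the per-handler `http_request_duration_seconds` histogram would no longer be recorded, keeping the `/metrics` endpoint's cardinality bounded in multitenant deployments.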