Instrument your FastAPI with Prometheus metrics.
A configurable and modular Prometheus Instrumentator for your FastAPI. Install `prometheus-fastapi-instrumentator` from PyPI, for example with `pip install prometheus-fastapi-instrumentator`.

Here is the fast track to get started with a pre-configured instrumentator. Import the instrumentator class:
```python
from prometheus_fastapi_instrumentator import Instrumentator
```
Instrument your app with default metrics and expose the metrics:
```python
Instrumentator().instrument(app).expose(app)
```
Depending on your code you might have to use the following instead:
```python
instrumentator = Instrumentator().instrument(app)

@app.on_event("startup")
async def _startup():
    instrumentator.expose(app)
```
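Put together, a minimal runnable app could look like the following sketch. The `/ping` route is just an illustration and not part of the package:

```python
from fastapi import FastAPI
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI()

# Illustrative route so there is something to instrument.
@app.get("/ping")
async def ping():
    return {"message": "pong"}

# Adds the default metrics and exposes them on /metrics.
Instrumentator().instrument(app).expose(app)
```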
With this, your FastAPI is instrumented and metrics are ready to be scraped. The defaults give you:

- `http_requests_total` with `handler`, `status` and `method`. Total number of requests.
- `http_request_size_bytes` with `handler`. Added up total of the content lengths of all incoming requests.
- `http_response_size_bytes` with `handler`. Added up total of the content lengths of all outgoing responses.
- `http_request_duration_seconds` with `handler` and `method`. Only a few buckets to keep cardinality low.
- `http_request_duration_highr_seconds` without any labels. Large number of buckets (>20).

In addition, the following behavior is active:
- Status codes are grouped into `2xx`, `3xx` and so on.
- Requests without a matching template are grouped into the handler `none`.
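For example, after a few requests to an illustrative `/ping` handler, a scrape of the metrics endpoint will contain lines along these lines (values and label order are illustrative):

```
http_requests_total{handler="/ping",method="GET",status="2xx"} 5.0
http_request_duration_seconds_count{handler="/ping",method="GET"} 5.0
```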
If one of these presets does not suit your needs you can do one of multiple things:

- Pick one of the existing closures from the `metrics` module and pass it to the instrumentator instance. See here how to do that.
- Create your own instrumentation function and pass it to the instrumentator instance. See the custom metrics example below to learn how.

This package is not made for generic Prometheus instrumentation in Python. Use the Prometheus client library for that. This package uses it as well.
All the generic middleware and instrumentation code comes with a cost in performance that can become noticeable.
Beyond the fast track, this instrumentator is highly configurable and it is very easy to customize and adapt to your specific use case. Some of the options you may opt in to:

- Regex patterns to exclude certain handlers.
- Ignoring untemplated routes.
- Control of instrumentation and exposition with an environment variable.
- An in-progress requests metric.
- Gzip compression of the metrics endpoint.
- Custom namespace and subsystem for metric names.
It also features a modular approach to metrics that should instrument all FastAPI endpoints. You can either choose from a set of already existing metrics or create your own. And every metric function by itself can be configured as well. You can see ready-to-use metrics here.
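For orientation, the closures used later in this document include the following (an illustrative, non-exhaustive selection):

```python
from prometheus_fastapi_instrumentator import metrics

# Each of these is a factory that returns an instrumentation function.
metrics.latency        # http_request_duration_seconds
metrics.request_size   # http_request_size_bytes
metrics.response_size  # http_response_size_bytes
```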
This chapter contains an example of the advanced usage of the Prometheus FastAPI Instrumentator to showcase most of its features. For more concrete info check out the automatically generated documentation.
We start by creating an instance of the Instrumentator. Notice the additional `metrics` import. This will come in handy later.
```python
from prometheus_fastapi_instrumentator import Instrumentator, metrics

instrumentator = Instrumentator(
    should_group_status_codes=False,
    should_ignore_untemplated=True,
    should_respect_env_var=True,
    should_instrument_requests_inprogress=True,
    excluded_handlers=[".*admin.*", "/metrics"],
    env_var_name="ENABLE_METRICS",
    inprogress_name="inprogress",
    inprogress_labels=True,
)
```
Unlike in the fast track example, now the instrumentation and exposition will only take place if the environment variable `ENABLE_METRICS` is `true` at run-time. This can be helpful in larger deployments with multiple services depending on the same base FastAPI.
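A minimal sketch of that gating, assuming the variable is set in the process environment before the instrumentator above runs (in a real deployment it would come from the container or service configuration, not from code):

```python
import os

# Only shown to illustrate the switch. With should_respect_env_var=True,
# instrument() and expose() do nothing unless this variable is "true".
os.environ["ENABLE_METRICS"] = "true"
```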
Let's say we also want to instrument the size of requests and responses. For this we use the `add()` method. This method does nothing more than taking a function and adding it to a list. Then, during run-time, every time FastAPI handles a request, all functions in this list will be called while giving them a single argument that stores useful information like the request and response objects. If no `add()` at all is used, the default metric gets added in the background. This is what happens in the fast track example.
All instrumentation functions are stored as closures in the `metrics` module. For more concrete info check out the automatically generated documentation. Closures come in handy here because they allow us to configure the functions within.
```python
instrumentator.add(metrics.latency(buckets=(1, 2, 3,)))
```
This simply adds the metric you also get in the fast track example, with a modified `buckets` argument. But we would also like to record the size of all requests and responses.
```python
instrumentator.add(
    metrics.request_size(
        should_include_handler=True,
        should_include_method=False,
        should_include_status=True,
        metric_namespace="a",
        metric_subsystem="b",
    )
).add(
    metrics.response_size(
        should_include_handler=True,
        should_include_method=False,
        should_include_status=True,
        metric_namespace="namespace",
        metric_subsystem="subsystem",
    )
)
```
You can add as many metrics as you like to the instrumentator.
As already mentioned, it is possible to create custom functions to pass on to `add()`. This is also how the default metrics are implemented. The documentation and code here are helpful to get an overview.
The basic idea is that the instrumentator creates an `info` object that contains everything necessary for instrumentation based on the configuration of the instrumentator. This includes the raw request and response objects but also the modified handler, grouped status code and duration. Next, all registered instrumentation functions are called. They get `info` as their single argument.
Let's say we want to count the number of times a certain language has been requested.
```python
from typing import Callable
from prometheus_fastapi_instrumentator.metrics import Info
from prometheus_client import Counter


def http_requested_languages_total() -> Callable[[Info], None]:
    # Created once and shared by every call of the inner closure.
    METRIC = Counter(
        "http_requested_languages_total",
        "Number of times a certain language has been requested.",
        labelnames=("langs",),
    )

    def instrumentation(info: Info) -> None:
        # Collect the distinct languages from the Accept-Language header.
        langs = set()
        lang_str = info.request.headers["Accept-Language"]
        for element in lang_str.split(","):
            element = element.split(";")[0].strip().lower()
            langs.add(element)
        for language in langs:
            METRIC.labels(language).inc()

    return instrumentation
```
The function `http_requested_languages_total` is used for persistent elements that are stored between all instrumentation executions (for example the metric instance itself). Next comes the closure. This function must adhere to the shown interface. It will always get an `Info` object that contains the request, response and a few other modified pieces of information, for example the (grouped) status code or the handler. Finally, the closure is returned.
Important: The response object inside `info` can either be the response object or `None`. In addition, errors thrown in the handler are not caught by the instrumentator. I recommend checking the documentation and/or the source code before creating your own metrics.
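As a sketch of what that defensiveness can look like, here is a hypothetical metric (its name and label are made up for illustration) that guards against a missing response:

```python
from typing import Callable
from prometheus_fastapi_instrumentator.metrics import Info
from prometheus_client import Counter


def http_response_content_type_total() -> Callable[[Info], None]:
    # Hypothetical metric, only meant to show the None check.
    METRIC = Counter(
        "http_response_content_type_total",
        "Number of responses by content type.",
        labelnames=("content_type",),
    )

    def instrumentation(info: Info) -> None:
        # info.response can be None, for example if the handler raised an error.
        if info.response is None:
            return
        content_type = info.response.headers.get("content-type", "unknown")
        METRIC.labels(content_type).inc()

    return instrumentation
```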
To use the language metric from above, we hand over the closure to the instrumentator object.
```python
instrumentator.add(http_requested_languages_total())
```
Up to this point, the FastAPI has not been touched at all. Everything has been stored in the `instrumentator` only. To actually register the instrumentation with FastAPI, the `instrument()` method has to be called.
```python
instrumentator.instrument(app)
```
Notice that this will do nothing if `should_respect_env_var` has been set during construction of the instrumentator object and the respective env var is not found.
You can specify the namespace and subsystem of the metrics by passing them to the `instrument()` method.
```python
from prometheus_fastapi_instrumentator import Instrumentator

@app.on_event("startup")
async def startup():
    Instrumentator().instrument(
        app, metric_namespace="myproject", metric_subsystem="myservice"
    ).expose(app)
```
Then your metrics will contain the namespace and subsystem in the metric name.
```
# TYPE myproject_myservice_http_request_duration_highr_seconds histogram
myproject_myservice_http_request_duration_highr_seconds_bucket{le="0.01"} 0.0
```
To expose an endpoint for the metrics, either follow the Prometheus Python Client documentation and add the endpoint manually to the FastAPI, or serve it on a separate server. You can also use the included `expose()` method. It will add an endpoint to the given FastAPI. With `should_gzip` you can instruct the endpoint to compress the data as long as the client accepts gzip encoding. Prometheus, for example, does by default. Beware that network bandwidth is often cheaper than CPU cycles.
```python
instrumentator.expose(app, include_in_schema=False, should_gzip=True)
```
Notice that `expose()` will do nothing if `should_respect_env_var` has been set during construction of the instrumentator object and the respective env var is not found.
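If you prefer the separate server mentioned above over `expose()`, the Prometheus Python client can serve the default registry on its own port; a minimal sketch (the port number is arbitrary):

```python
from prometheus_client import start_http_server

# Serves the default registry on its own HTTP server, independent of FastAPI.
start_http_server(9090)
```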
Please refer to CONTRIBUTING.md. Consult DEVELOPMENT.md for guidance regarding development. Read RELEASE.md for details about the release process.
The default license for this project is the ISC License, a permissive license functionally equivalent to the BSD 2-Clause and MIT licenses, removing some language that is no longer necessary. See LICENSE for the license text.
The BSD 3-Clause License is used as the license for the routing module. This is because it contains code from elastic/apm-agent-python. BSD 3-Clause is a permissive license similar to the BSD 2-Clause License, but with a third clause that prohibits others from using the name of the copyright holder or its contributors to promote derived products without written consent. The license text is included in the module itself.