Granian 1.4

emmett-framework

We recently released version 1.4 of Granian – the Rust HTTP server for Python applications.

This minor release actually includes some major features you might be interested in, so let's dive into them!


The access log

The most requested feature from the community for quite some time has been an access log of the requests handled by Granian. Thanks also to the community members who pledged on the issue, we introduced this in the 1.4 release!

The access log uses the new granian.access Python logger, which you can configure – just like the main logger – using the --log-config option from the CLI or the log_dictconfig parameter of the Granian class.
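As a sketch of what such a configuration could look like, here is a standard Python logging dictConfig that routes the granian.access logger to stdout. The handler and formatter names are purely illustrative; only the granian.access logger name comes from Granian itself:

```python
import logging.config

# Illustrative dictConfig: the "access_console" handler and "access"
# formatter names are arbitrary choices, not mandated by Granian.
LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "access": {"format": "%(message)s"},
    },
    "handlers": {
        "access_console": {
            "class": "logging.StreamHandler",
            "formatter": "access",
            "stream": "ext://sys.stdout",
        },
    },
    "loggers": {
        # The logger name Granian uses for access log records.
        "granian.access": {
            "handlers": ["access_console"],
            "level": "INFO",
            "propagate": False,
        },
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
```

A dictionary like this could be passed to the log_dictconfig parameter of the Granian class, or stored in a file referenced by the --log-config CLI option.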

The format of the access log messages is set by default to look like this:

[%(time)s] %(addr)s - "%(method)s %(path)s %(protocol)s" %(status)d %(dt_ms).3f

and it can be customised using the --access-log-fmt CLI option or the log_access_format parameter of the Granian class. The supported atoms in 1.4 are:

identifier    description
----------    -----------
addr          Client remote address
time          Datetime of the request
dt_ms         Request duration in ms
status        HTTP response status
path          Request path (without query string)
query_string  Request query string
method        Request HTTP method
scheme        Request scheme
protocol      HTTP protocol version
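For instance, a custom format built from these atoms could be passed from the CLI like this – the format string and the main:app target are purely illustrative, and other options you would normally pass are omitted:

```shell
granian --access-log-fmt '[%(time)s] %(status)d "%(method)s %(path)s" %(dt_ms).3f' main:app
```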

Back pressure

There's a really nice blog post by Armin Ronacher on back pressure and why it is important.

Granian is designed around the Tokio runtime, which is conceptually quite similar to the Python asyncio event loop. This runtime handles all the networking in the request/response cycle of the application, moving data in and out of the underlying Python interpreter.
As such, for asynchronous protocols like ASGI and RSGI, there are two different event loops running in Granian: the Rust one and the Python one. These two event loops exchange data and wait for each other using non-blocking structures. On synchronous protocols like WSGI, Granian instead spawns separate threads to interact with the Python code and avoid blocking the Rust event loop.
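The idea of keeping blocking work off an event loop by delegating it to threads can be sketched in plain asyncio terms. This is only an analogy for what Granian does with WSGI – not its actual implementation – with a made-up handler standing in for a synchronous application:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def wsgi_like_handler() -> str:
    # A synchronous handler: blocking work that would stall an event loop.
    time.sleep(0.05)
    return "200 OK"

async def main() -> list:
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=4) as pool:
        # The event loop stays free while the blocking handlers run on
        # separate threads, mirroring how Granian keeps its Rust loop
        # unblocked when serving WSGI applications.
        tasks = [loop.run_in_executor(pool, wsgi_like_handler) for _ in range(8)]
        return await asyncio.gather(*tasks)

results = asyncio.run(main())
```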

Prior to 1.4, the only way to limit the number of requests pushed from Granian to Python was the --blocking-threads option, which also affects other aspects of the Granian runtime, so it was quite far from ideal.

This is also why in 1.4 we re-designed the Python threading paradigm and added a back pressure option to Granian, to properly modulate the number of requests your Python application processes in parallel. While this won't make a huge difference for asynchronous protocols – the Python event loop can handle quite a lot of tasks – for synchronous protocols it also controls the number of active threads working on the Python side of things.

Back pressure might look like a trifling value, but in many contexts it can radically change the average latency of your application's responses. For instance, if your application connects to a database, you probably want to set Granian's back pressure to the maximum number of connections you want to open to the database. Otherwise, if the database connection pool has a lower limit than the back pressure, as soon as the number of parallel requests exceeds the pool limit you will have tasks or threads just waiting for a database connection.
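To see why such a mismatch hurts, here is a toy sketch where a connection "pool" of size 2 serves 6 concurrently admitted requests – the numbers are hypothetical and a semaphore stands in for the pool, just to illustrate the queueing effect described above:

```python
import asyncio

POOL_SIZE = 2       # hypothetical database connection pool limit
BACK_PRESSURE = 6   # hypothetical number of requests admitted in parallel

async def handle_request(pool: asyncio.Semaphore, stats: dict) -> None:
    async with pool:  # acquire a "database connection"
        stats["active"] += 1
        stats["peak"] = max(stats["peak"], stats["active"])
        await asyncio.sleep(0.01)  # simulated query
        stats["active"] -= 1

async def main() -> dict:
    pool = asyncio.Semaphore(POOL_SIZE)
    stats = {"active": 0, "peak": 0}
    # Six requests are admitted, but only two ever make progress at a
    # time: the other four just sit waiting for a connection.
    await asyncio.gather(*(handle_request(pool, stats) for _ in range(BACK_PRESSURE)))
    return stats

stats = asyncio.run(main())
```

Matching the back pressure to the pool size would keep the excess requests queued in front of Granian instead of piling them up behind the pool.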

By default, Granian will set its back pressure to the backlog divided by the number of workers, but you should really tune this value based on your needs.
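As a quick illustration of that default, with hypothetical backlog and worker values:

```python
# Hypothetical values: check your own --backlog and --workers settings.
backlog = 1024
workers = 4

# Granian's default back pressure: backlog divided by number of workers.
back_pressure = backlog // workers
print(back_pressure)  # 256
```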