Table of contents
1. Introduction 👩🏿
2. Asynchronous Applications Overview 🧐
3. Limitations of Synchronous WSGI 😲
   3.1. Greenlets 👌
      3.1.1. Code 👀
      3.1.2. Code Explanation 👀
   3.2. Event Callbacks 🤷‍♂️
      3.2.1. Code 👀
      3.2.2. Code 👀
   3.3. WebSockets 🤨
      3.3.1. Code 👀
      3.3.2. Code 👀
4. Frequently Asked Questions
   4.1. What does WSGI stand for?
   4.2. What is the Bottle framework?
   4.3. What are the three advantages of the Bottle framework?
   4.4. What is the command to install Bottle directly from CMD?
   4.5. What is the port required to run the Bottle web server?
5. Conclusion
Last Updated: Mar 27, 2024

Primer to Asynchronous Applications - Bottle Framework | Python

Author: Ankit Kumar

Introduction 👩🏿

When people hear about Python, the first thing that comes to mind is development, and web applications excite developers and users the most. Several Python frameworks, such as Bottle, Flask, and Django, let you develop websites.

Bottle framework

In this tutorial, you will discover how to build a straightforward Bottle app. Bottle is a fast, simple, and lightweight Python WSGI micro web framework. It is distributed as a single-file module and has no dependencies other than the Python Standard Library. This article describes how to use Bottle with asynchronous WSGI.
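
Before moving on, here is a minimal sketch of a basic Bottle app, assuming Bottle is already installed (pip install bottle); the route path and message are purely illustrative:

# A minimal Bottle "hello" app (illustrative sketch; path and text are arbitrary).
from bottle import route, run

@route('/hello')
def hello():
    # Bottle turns the returned string into the response body.
    return 'Hello, Ninja!'

# Start the built-in development server (defaults to port 8080).
run(host='localhost', port=8080)

Visiting http://localhost:8080/hello in a browser should display the greeting.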

Asynchronous Applications Overview 🧐

The synchronous nature of WSGI (Web Server Gateway Interface) does not work well with asynchronous design patterns. Most asynchronous frameworks (such as Tornado, Twisted, and others) use their own APIs to expose their asynchronous functionality.

The Bottle framework is a WSGI framework and shares WSGI's synchronous nature.

Even so, the Bottle framework still enables the creation of asynchronous applications.

Let's first examine the synchronous WSGI constraints, though.


Limitations of Synchronous WSGI 😲

The WSGI specification (PEP 3333) describes the request/response cycle in the following succinct manner (a minimal sketch of a plain WSGI application follows the list): 

  1. For each request, the application callable is called exactly once and is required to return a body iterable. 
  2. The server then loops over the body and writes each chunk to the socket. 
  3. The client connection is closed as soon as the body iterable is exhausted.
     
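To make the cycle concrete, here is a minimal sketch of a plain WSGI application (not Bottle-specific; the function name and body chunks are illustrative): the callable is invoked once per request and returns a body iterable that the server writes to the socket chunk by chunk.

# Bare-bones WSGI application (illustrative sketch).
def simple_app(environ, start_response):
    # Called once per request with the request environment.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    # The returned iterable is the body; the server writes each chunk to the
    # socket and closes the connection once the iterable is exhausted.
    return [b'chunk one\n', b'chunk two\n']

if __name__ == '__main__':
    # Serve it with the reference server from the standard library.
    from wsgiref.simple_server import make_server
    make_server('localhost', 8080, simple_app).serve_forever()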

Everything happens sequentially, which is fine but presents a problem. If your application has to wait for data (IO, sockets, databases, etc.), it must either return empty strings (busy wait) or block the running thread. Both approaches prevent the handling thread from answering additional requests. As a result, each thread can serve only one request at a time.

Because threads carry a comparatively large overhead, most servers limit the size of their thread pools; common pools have 20 threads or fewer. As soon as all threads are busy, any new connection stalls and the server becomes unresponsive to everyone else. If you wanted to build a chat application that relies on long-polling Ajax requests for real-time updates, you would hit this limit at just 20 concurrent connections. That would be a rather small chat.

Greenlets 👌

Greenlets are provided by the gevent module. They behave much like conventional threads but are far cheaper to create. A gevent-based server can spawn thousands of greenlets (one per connection) with essentially no overhead. Blocking individual greenlets has no effect on the server's ability to accept new requests, so a virtually unlimited number of connections can be active at once.

By contrast, because creating and switching between real threads is relatively expensive, most servers restrict the size of their worker pools to a fairly small number of concurrent threads. Threads are cheaper than processes (forks), but a new thread must still be created for each new connection.

Because gevent-based applications look and behave like synchronous ones, writing asynchronous applications becomes relatively simple. Strictly speaking, a gevent-based server is not asynchronous; it is massively multi-threaded.

Let us understand this better with an example.

Code 👀

from gevent import monkey; monkey.patch_all()  # patch blocking APIs first

from time import sleep
from bottle import route, run

@route('/stream')
def stream():
    # Each yield is sent to the client immediately; sleep() is patched by
    # gevent, so it suspends only this greenlet, not the whole server.
    yield 'We are at the beginning.'
    sleep(3)
    yield 'Somewhere at the Middle.'
    sleep(5)
    yield 'Finally, the End.'

run(host='0.0.0.0', port=8080, server='gevent')


Code Explanation 👀

The opening line is key. gevent monkey-patches most of Python's blocking APIs so that they no longer block the current thread and instead pass control to the next greenlet. In effect, gevent-based pseudo-threads take the place of Python's native threads. This is why time.sleep(), which would normally block the whole thread, can still be used here. If monkey-patching Python's built-ins makes you uncomfortable, you can use the equivalent gevent function, gevent.sleep(), instead.

If you run this script and point your browser to http://localhost:8080/stream, you should see 'We are at the beginning.', 'Somewhere at the Middle.', and 'Finally, the End.' appear one after another (rather than waiting the whole time and seeing them all at once). The server works exactly as it would with regular threads, yet it can now handle thousands of simultaneous requests without any issues.
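
If you prefer not to monkey-patch at all, here is a minimal variant of the same handler (an illustrative sketch) that calls gevent.sleep() directly instead of the patched time.sleep(); note that without monkey.patch_all(), other blocking calls such as sockets or database drivers would still block the whole server.

# Variant of the /stream handler without monkey-patching (illustrative sketch).
import gevent
from bottle import route, run

@route('/stream')
def stream():
    yield 'We are at the beginning.'
    gevent.sleep(3)   # suspends only this greenlet, not the whole server
    yield 'Somewhere at the Middle.'
    gevent.sleep(5)
    yield 'Finally, the End.'

run(host='0.0.0.0', port=8080, server='gevent')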

Event Callbacks 🤷‍♂️

Using non-blocking APIs and binding callbacks to asynchronous events is a common design pattern in asynchronous frameworks (such as Tornado, Twisted, Node.js, and friends). The socket object is kept open until it is explicitly closed, which allows callbacks to write to the socket at a later point.

An illustration using the Tornado library is as follows:

Code 👀

import tornado.web

class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous  # keeps the connection open after get() returns (older Tornado API)
    def get(self):
        # SomeAsyncWorker is a placeholder for any non-blocking worker object.
        var = SomeAsyncWorker()
        var.on_data(lambda chunk: self.write(chunk))   # stream chunks as they arrive
        var.on_finish(lambda: self.finish())           # close the response when done

The key advantage is that the request handler terminates early. The handling thread can move on to incoming requests while callbacks keep writing the results of earlier requests to their sockets. This is how such frameworks manage many concurrent requests with only a small number of OS threads.

With gevent + WSGI, however, things are different:

  • There is no benefit to terminating early, because an unlimited pool of (pseudo) threads is available to accept new connections. 
  • We cannot terminate early anyway, because doing so would close the socket (as required by WSGI). 
  • To comply with WSGI, we must return an iterable.
     

To comply with the WSGI standard, all we need to do is return a body iterable that we can write to asynchronously. With the help of gevent.queue, we can (sort of) mimic a detached socket and rewrite the preceding example as follows:

Code 👀

import gevent
import gevent.queue
from bottle import route

@route('/fetch')
def fetch():
    result = gevent.queue.Queue()
    # SomeAsyncWorker is a placeholder for any non-blocking worker object.
    var = SomeAsyncWorker()
    var.on_data(result.put)                           # push each chunk into the queue
    var.on_finish(lambda: result.put(StopIteration))  # sentinel ends the iteration
    var.start()
    return result                                     # the queue is the WSGI body iterable

From the server's perspective, the queue object is iterable. It blocks while empty and stops as soon as it reaches StopIteration; this satisfies WSGI. From the application's perspective, the queue object behaves like a non-blocking socket: you can pass it around, start a new (pseudo) thread, and write to it asynchronously at any time. This is how long-polling is implemented most of the time.
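
As an illustration of that pattern, here is a hedged sketch of a long-polling endpoint: a background greenlet writes to the queue while the server iterates over it (the route path, delay, and messages are made up for the example).

# Long-polling sketch: a detached greenlet feeds the queue (illustrative).
import gevent
import gevent.queue
from bottle import route

@route('/poll')
def poll():
    body = gevent.queue.Queue()

    def producer():
        for i in range(3):
            gevent.sleep(1)                  # pretend to wait for new data
            body.put('update %d\n' % i)      # push a chunk to the client
        body.put(StopIteration)              # sentinel ends the response body

    gevent.spawn(producer)                   # detach the worker greenlet
    return body                              # WSGI-compliant body iterable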

WebSockets 🤨

For now, let's put the technical details aside and talk about WebSockets. Since you are reading this article, you most likely already know what WebSockets are: a channel for two-way communication between a client (the browser) and a web application (the server).

Thankfully, the gevent-websocket package handles all the tedious work for us. Here is a straightforward WebSocket endpoint that simply echoes back every message it receives from the client:

Code 👀

from bottle import Bottle, request, abort
from gevent.pywsgi import WSGIServer
from geventwebsocket import WebSocketError
from geventwebsocket.handler import WebSocketHandler

app = Bottle()

@app.route('/websocket')
def handle_websocket():
    # The WebSocketHandler places the socket object in the WSGI environ.
    webSocket = request.environ.get('wsgi.websocket')
    if not webSocket:
        abort(400, 'Expected WebSocket request.')

    # Echo every message back until the client disconnects.
    while True:
        try:
            message = webSocket.receive()
            webSocket.send("Your message was: %r" % message)
        except WebSocketError:
            break

server = WSGIServer(("0.0.0.0", 8080), app, handler_class=WebSocketHandler)
server.serve_forever()

 

The while-loop runs until the client closes the connection. 

Now let us go through the client-side JavaScript API:

Code 👀

<!DOCTYPE html>
<html>
<head>
  <script type="text/javascript">
    var ws = new WebSocket("ws://example.com:8080/websocket");
    ws.onopen = function() {
        ws.send("Hey Ninjas");
    };
    ws.onmessage = function (evt) {
        alert(evt.data);
    };
  </script>
</head>
</html>


We hope you have understood everything about the Primer to Asynchronous Applications - Bottle Framework | Python. 🤗

Frequently Asked Questions

What does WSGI stand for?

WSGI stands for Web Server Gateway Interface.

What is the Bottle framework? 

Bottle is a fast, simple, and lightweight Python WSGI micro web framework. It is distributed as a single-file module and has no dependencies other than the Python Standard Library.

What are the three advantages of the Bottle framework?

Bottle works well in three web development scenarios: prototyping ideas, building and maintaining simple personal web applications, and learning how web frameworks are built.

What is the command to install Bottle directly from CMD?

On Windows, it is pip install bottle; on Ubuntu, it is pip3 install bottle. 

What is the port required to run the Bottle web server?

Bottle's built-in development server uses port 8080 by default, which is also the port used throughout this article's examples.

Conclusion

This blog has extensively discussed the primer to asynchronous applications in the Bottle framework. We gave an overview of asynchronous applications and then discussed the limitations of synchronous WSGI. After that, we discussed greenlets to the rescue, event callbacks, and WebSockets. In the end, we answered some frequently asked questions related to this topic. Learn more about Python frameworks if this topic excites you. 

Refer to our guided paths on Coding Ninjas Studio to learn more about DSA, Competitive Programming, JavaScript, System Design, etc. Enrol in our courses and refer to the mock test and problems available. Take a look at the interview experiences and interview bundle for placement preparations.

Do upvote our blog to help other ninjas grow. 

Happy Learning Ninja! 🥷
