Introduction
Asynchronous programming is a technique that lets your software start a potentially long-running task and still remain responsive to other events while that task runs, instead of having to wait for it to finish. When the task completes, your software is presented with the result.
Asynchronous HTTP request processing is a relatively new technique that allows a single HTTP request to be processed using non-blocking I/O and, if desired, across several threads. It is also known as COMET.
Let's dive into the article to know more about asynchronous HTTP programming.
Asynchronous results
Play Framework is asynchronous from the ground up. Play handles every request in an asynchronous, non-blocking way, and its default configuration is tuned for asynchronous controllers. In other words, application code should avoid blocking in controllers, i.e. making the controller code wait for an operation to finish. Typical examples of such blocking operations are JDBC calls, streaming API calls, HTTP requests, and long computations.
Although it is possible to increase the number of threads in the default execution context so that blocking controllers can handle more concurrent requests, following the recommended approach of keeping controllers asynchronous makes it easier to scale and keeps the system responsive under load.
Creating non-blocking actions
Because of the way Play works, action code must be as fast as possible, i.e. non-blocking. So what should we return from our action if we are not yet able to compute the result? We should return a promise of the result.
Java 8 provides a generic promise API called CompletionStage. A CompletionStage<Result> will eventually be redeemed with a value of type Result. By returning a CompletionStage<Result> instead of a normal Result, we can return from our action quickly without blocking anything. Play will then serve the result as soon as the promise is redeemed.
How to create a CompletionStage<Result>
To create a CompletionStage<Result>, we first need the promise that will give us the actual value we need to compute the result:
CompletionStage<Double> promiseOfPIValue = computePIAsynchronously();
// Runs in the same thread
CompletionStage<Result> promiseOfResult =
    promiseOfPIValue.thenApply(pi -> ok("PI value computed: " + pi));
Play's asynchronous API methods return a CompletionStage. This is the case whether you are calling an external web service using the play.libs.WS API, or using Akka to schedule asynchronous tasks or to communicate with actors via play.libs.Akka.
In this case, using CompletionStage.thenApply means the completion stage runs in the same thread as the previous task. That is fine when there is only a small amount of CPU-bound logic and no blocking.
Using HttpExecutionContext
When using a Java CompletionStage inside an Action, you must explicitly supply the HTTP execution context as an executor to keep the classloader in scope. You can obtain a play.libs.concurrent.HttpExecutionContext instance through dependency injection:
import play.libs.concurrent.HttpExecutionContext;
import play.mvc.*;

import javax.inject.Inject;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class MyController extends Controller {

  private HttpExecutionContext httpExecutionContext;

  @Inject
  public MyController(HttpExecutionContext ec) {
    this.httpExecutionContext = ec;
  }

  public CompletionStage<Result> index() {
    // Use a different task with explicit EC
    return calculateResponse()
        .thenApplyAsync(
            answer -> {
              return ok("answer was " + answer).flashing("info", "Response updated!");
            },
            httpExecutionContext.current());
  }

  private static CompletionStage<String> calculateResponse() {
    return CompletableFuture.completedFuture("42");
  }
}
Using CustomExecutionContext and HttpExecution
However, an HttpExecutionContext or a CompletionStage is only part of the solution: at this point you are still running on Play's default execution context. If you are calling out to a blocking API such as JDBC, you still need to run the CompletionStage with a different executor, to move the work off Play's rendering thread pool. To do this, create a subclass of play.libs.concurrent.CustomExecutionContext with a reference to the custom dispatcher:
import akka.actor.ActorSystem;
import play.libs.concurrent.CustomExecutionContext;
import javax.inject.Inject;

public class MyExecutionContext extends CustomExecutionContext {

  @Inject
  public MyExecutionContext(ActorSystem actorSystem) {
    // uses a custom thread pool defined in application.conf under "my.dispatcher"
    super(actorSystem, "my.dispatcher");
  }
}
The custom dispatcher must be defined in application.conf using Akka's dispatcher configuration. Once you have the custom dispatcher, pass it in as an explicit executor, wrapping it with HttpExecution.fromThread.
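A sketch of how a controller might use this, assuming the MyExecutionContext class above is bound for injection; blockingJdbcCall is a hypothetical stand-in for real blocking work:

import play.libs.concurrent.HttpExecution;
import play.mvc.*;

import javax.inject.Inject;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executor;

import static java.util.concurrent.CompletableFuture.supplyAsync;

public class JdbcController extends Controller {

  private final MyExecutionContext myExecutionContext;

  @Inject
  public JdbcController(MyExecutionContext myExecutionContext) {
    this.myExecutionContext = myExecutionContext;
  }

  public CompletionStage<Result> index() {
    // Wrap the custom execution context (cast to Executor to pick the Executor overload)
    Executor myExecutor = HttpExecution.fromThread((Executor) myExecutionContext);
    // Run the blocking call on the custom dispatcher, not on Play's default pool
    return supplyAsync(this::blockingJdbcCall, myExecutor)
        .thenApplyAsync(row -> ok("Got result: " + row), myExecutor);
  }

  // Hypothetical stand-in for a real blocking JDBC query
  private String blockingJdbcCall() {
    return "some row";
  }
}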
Actions are asynchronous by default
Play actions are asynchronous by default. For instance, in the controller code below, the returned Result is internally wrapped in a promise:
public Result index(Http.Request request) {
  return ok("Got request " + request + "!");
}
Handling time-outs
It is often useful to handle time-outs properly so that the web browser does not block and wait indefinitely if something goes wrong. You can wrap a CompletionStage in a non-blocking time-out using play.libs.concurrent.Futures.timeout.
import play.libs.concurrent.Futures;

import javax.inject.Inject;
import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executor;
import java.util.concurrent.ForkJoinPool;

import static java.time.temporal.ChronoUnit.SECONDS;

class MyClass {

  private final Futures futures;
  private final Executor customExecutor = ForkJoinPool.commonPool();

  @Inject
  public MyClass(Futures futures) {
    this.futures = futures;
  }

  // Fails the returned stage if the computation takes longer than one second
  CompletionStage<Double> callWithOneSecondTimeout() {
    return futures.timeout(computePIAsynchronously(), Duration.ofSeconds(1));
  }

  // Delays the computation by three seconds, then runs it on customExecutor
  public CompletionStage<String> delayedResult() {
    long start = System.currentTimeMillis();
    return futures.delayed(
        () ->
            CompletableFuture.supplyAsync(
                () -> {
                  long end = System.currentTimeMillis();
                  long seconds = (end - start) / 1000;
                  return "rendered after " + seconds + " seconds";
                },
                customExecutor),
        Duration.of(3, SECONDS));
  }

  // Stand-in for an asynchronous computation (not shown in the original example)
  private static CompletionStage<Double> computePIAsynchronously() {
    return CompletableFuture.completedFuture(Math.PI);
  }
}
Streaming HTTP responses
Since HTTP 1.1, to keep a single connection open and serve several HTTP requests and responses over it, the server must send the appropriate Content-Length HTTP header along with the response. By default, you do not specify a Content-Length header when you send back a simple result, such as:
public Result index() {
  return ok("Hello World");
}
Sending large amounts of data
Loading the whole content into memory is fine for small responses, but what about huge data sets? Say we want to return a large file to the web client. Let's first see how to create a Source<ByteString, ?> for the file content.
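Using Akka Streams' FileIO helper (the same lines reappear in the full example below):

java.io.File file = new java.io.File("/tmp/fileToServe.pdf");
java.nio.file.Path path = file.toPath();
// FileIO.fromPath streams the file contents as chunks of ByteString
Source<ByteString, ?> source = FileIO.fromPath(path);

Now, that looks easy, doesn't it? Let's use this streamed source as the body of the response via HttpEntity.Streamed: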
public Result index() {
  java.io.File file = new java.io.File("/tmp/fileToServe.pdf");
  java.nio.file.Path path = file.toPath();
  Source<ByteString, ?> source = FileIO.fromPath(path);

  return new Result(
      new ResponseHeader(200, Collections.emptyMap()),
      new HttpEntity.Streamed(source, Optional.empty(), Optional.of("text/plain")));
}
This is where the problem lies: since we did not specify a Content-Length in the streamed entity, Play has to compute it itself, and the only way to do that is to consume the entire source content, load it into memory, and then compute the response size.
That is a problem for large files that we don't want to load fully into memory. To avoid it, we simply need to specify the Content-Length header ourselves. Play will then consume the body source lazily, copying each chunk of data to the HTTP response as soon as it is available.
public Result index() {
  java.io.File file = new java.io.File("/tmp/fileToServe.pdf");
  java.nio.file.Path path = file.toPath();
  Source<ByteString, ?> source = FileIO.fromPath(path);

  Optional<Long> contentLength;
  try {
    contentLength = Optional.of(Files.size(path));
  } catch (IOException ioe) {
    throw new RuntimeException(ioe);
  }

  return new Result(
      new ResponseHeader(200, Collections.emptyMap()),
      new HttpEntity.Streamed(source, contentLength, Optional.of("text/plain")));
}
Serving files
Of course, Play provides easy-to-use helpers for the common task of serving a local file:
public Result index() {
  return ok(new java.io.File("/tmp/fileToServe.pdf"));
}
This helper computes the Content-Type header from the file name and adds the Content-Disposition header to tell the web browser how to handle the response. By default, the file is served inline, by adding the header Content-Disposition: inline; filename=fileToServe.pdf to the HTTP response.
Chunked responses
Chunked transfer encoding is an HTTP 1.1 feature that lets a web server serve content in a series of chunks. It uses the Transfer-Encoding HTTP response header instead of the Content-Length header the protocol would otherwise require. Because the Content-Length header is not used, the server does not need to know the length of the content before it starts streaming the response to the client (usually a web browser). Web servers can therefore begin sending a response with dynamically generated content before knowing its total size.
For a client to know when it has finished receiving data for a chunk, the size of each chunk is sent right before the chunk itself. A final chunk of length zero ends the data transport. The benefit is that we can serve data live, which allows us to distribute data chunks as soon as they become available. The disadvantage is that a suitable download progress bar cannot be displayed because the web browser does not know the content size.
Assume we have access to a service that provides a dynamic InputStream generating some data. We can ask Play to stream this content directly using a chunked response:
public Result index() {
  InputStream is = getDynamicStreamSomewhere();
  return ok(is);
}
Additionally, you can create your own chunked response builder:
public Result index() {
  // Prepare a chunked text stream
  Source<ByteString, ?> source =
      Source.<ByteString>actorRef(256, OverflowStrategy.dropNew())
          .mapMaterializedValue(
              sourceActor -> {
                sourceActor.tell(ByteString.fromString("kiki"), null);
                sourceActor.tell(ByteString.fromString("foo"), null);
                sourceActor.tell(ByteString.fromString("bar"), null);
                sourceActor.tell(new Status.Success(NotUsed.getInstance()), null);
                return NotUsed.getInstance();
              });
  // Serves this stream with 200 OK
  return ok().chunked(source);
}
Source.actorRef creates an Akka Streams Source that materializes to an ActorRef. You can then publish elements to the stream by sending messages to that actor. An alternative approach is to use Source.actorPublisher to create an actor that extends ActorPublisher.
Comet
A common use of chunked responses is to create Comet sockets. A Comet socket is a chunked text/html response containing only <script> elements. For each chunk, we write a <script> tag containing JavaScript that the web browser executes immediately. This lets us push events from the server to the web browser live: for each message, wrap it in a <script> tag that calls a JavaScript callback function, and write it to the chunked response.
Since ok().chunked is built on Akka Streams and ultimately consumes ByteString chunks, we can take a stream of elements and transform it so that each element is escaped and wrapped in the JavaScript function call. The Comet helper automates this: it supports both String and JSON messages and pushes an initial blank buffer of data for browser compatibility.
Comet Imports
To use the Comet helper, you first need to import its class along with the Akka Streams classes you need.
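A minimal sketch, assuming the play.libs.Comet helper and an Akka Streams Source; the parent.cometMessage callback name is an example and must match a JavaScript function defined on the receiving page:

import akka.stream.javadsl.Source;
import akka.util.ByteString;
import play.libs.Comet;
import play.mvc.*;

import java.util.Arrays;

public class CometController extends Controller {

  public Result index() {
    // A simple source of messages to push to the browser
    Source<String, ?> events = Source.from(Arrays.asList("kiki", "foo", "bar"));

    // Comet.string escapes each message and wraps it in a <script> tag that calls
    // the given JavaScript callback; log() surfaces any errors while mapping the stream
    Source<ByteString, ?> comet =
        events.log("comet-events").via(Comet.string("parent.cometMessage"));

    return ok().chunked(comet).as(Http.MimeTypes.HTML);
  }
}

The easiest way to debug a Comet stream that is not working is to use the log() operator, as in the sketch above, to show any errors that occur while mapping data through the stream.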
Legacy Comet Functionality
You can still use the deprecated Comet functionality in play.libs.Comet, but you are encouraged to switch to the Akka Streams-based version described above. Because the Java Comet helper is callback-based, it may be easier to turn the callback-based code into an org.reactivestreams.Publisher directly and create a source by calling Source.fromPublisher on that publisher.
WebSockets
WebSockets are sockets that can be utilized from a web browser and are based on a protocol that permits two-way full-duplex communication. As long as the server and client have a live WebSocket connection, the client can send messages at any time, and the server can receive messages at any time.
WebSockets are natively supported by modern HTML5-compliant web browsers through a JavaScript WebSocket API. However, web browsers are not the only WebSocket clients: many WebSocket client libraries exist, allowing, for example, servers to talk to each other or native mobile apps to use WebSockets. Using WebSockets in these contexts has the advantage of reusing the TCP port the Play server already uses.
Handling WebSockets
Until now, we have been handling standard HTTP requests and returning standard HTTP responses with Action objects. WebSockets are a completely different beast and cannot be handled by a standard Action. Play's WebSocket handling is built on Akka Streams: a WebSocket is modeled as a Flow. Incoming WebSocket messages are fed into the flow, and messages produced by the flow are sent out to the client.
Conceptually, a flow is often seen as something that receives messages, does some processing on them, and produces the processed messages. However, there is no reason the flow's input and output need to be connected at all; Akka Streams provides a constructor, Flow.fromSinkAndSource, for exactly this purpose. When handling WebSockets, the input and output are in fact often not connected at all.
Handling WebSockets with actors
To handle a WebSocket with an actor, we can use the Play utility ActorFlow, which converts an ActorRef into a flow. ActorFlow takes a function that turns the ActorRef to send messages to into an akka.actor.Props object describing the actor that Play should create when it receives the WebSocket connection:
import play.libs.streams.ActorFlow;
import play.mvc.*;
import akka.actor.*;
import akka.stream.*;
import javax.inject.Inject;

public class HomeController extends Controller {

  private final ActorSystem actorSystem;
  private final Materializer materializer;

  @Inject
  public HomeController(ActorSystem actorSystem, Materializer materializer) {
    this.actorSystem = actorSystem;
    this.materializer = materializer;
  }

  public WebSocket socket() {
    return WebSocket.Text.accept(
        request -> ActorFlow.actorRef(MyWebSocketActor::props, actorSystem, materializer));
  }
}
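A minimal MyWebSocketActor, following the usual Play pattern of an actor that receives incoming frames as String messages and writes outgoing frames to the out ActorRef, might look like this:

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;

public class MyWebSocketActor extends AbstractActor {

  // Called by ActorFlow.actorRef: "out" is the actor to send outgoing WebSocket messages to
  public static Props props(ActorRef out) {
    return Props.create(MyWebSocketActor.class, out);
  }

  private final ActorRef out;

  public MyWebSocketActor(ActorRef out) {
    this.out = out;
  }

  @Override
  public Receive createReceive() {
    return receiveBuilder()
        // Each incoming text frame arrives as a String message
        .match(String.class, message -> out.tell("I received your message: " + message, self()))
        .build();
  }
}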
Detecting when a WebSocket has closed
When the WebSocket has closed, Play will automatically stop the actor. You can handle this in the actor's postStop method, for example to clean up any resources the WebSocket might have consumed:
public void postStop() throws Exception {
  someResource.close();
}
Closing a WebSocket
Play automatically closes the WebSocket when the actor handling it terminates. So, to close the WebSocket, send a PoisonPill to your own actor:
self().tell(PoisonPill.getInstance(), self());
Rejecting a WebSocket
Sometimes you may wish to reject a WebSocket request, for example if the user must be authenticated to connect to the WebSocket, or if the WebSocket is associated with a resource whose id does not exist. For this, Play provides the acceptOrResult WebSocket builder, which lets you return either a standard HTTP result (such as Forbidden or Not Found) or the flow that handles the WebSocket.
Note that the builder returns a CompletionStage of the decision, so you can carry out any asynchronous processing you need before you are ready to either reject the WebSocket or create the flow.
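A minimal sketch, reusing the actorSystem, materializer, and MyWebSocketActor from the earlier example and assuming imports of play.libs.F and java.util.concurrent.CompletableFuture; isAuthenticated is a hypothetical check:

public WebSocket socket() {
  return WebSocket.Text.acceptOrResult(
      request -> {
        if (isAuthenticated(request)) { // hypothetical authentication check
          return CompletableFuture.completedFuture(
              F.Either.Right(
                  ActorFlow.actorRef(MyWebSocketActor::props, actorSystem, materializer)));
        }
        // Reject the WebSocket handshake with a standard HTTP result
        return CompletableFuture.completedFuture(F.Either.Left(forbidden("Not authenticated")));
      });
}

// Hypothetical placeholder; a real implementation would inspect the request (e.g. session or headers)
private boolean isAuthenticated(Http.RequestHeader request) {
  return false;
}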
Handling different types of messages
So far we have only used the Text builder to handle String frames. Play also provides built-in handlers for JsonNode messages parsed from String frames (the Json builder) and for ByteString frames (the Binary builder). An example using the Json builder follows.
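A sketch mirroring the Text example above, but assuming a hypothetical MyJsonWebSocketActor that exchanges com.fasterxml.jackson.databind.JsonNode messages instead of Strings:

public WebSocket socket() {
  // The Json builder parses incoming String frames into JsonNode and serializes outgoing JsonNode
  return WebSocket.Json.accept(
      request -> ActorFlow.actorRef(MyJsonWebSocketActor::props, actorSystem, materializer));
}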
Actors are not always the right abstraction for handling WebSockets, particularly if the WebSocket behaves more like a stream. In that case, you can handle the WebSocket directly with Akka Streams. To do so, first import the Akka Streams javadsl:
import akka.stream.javadsl.*;
You can then use it like this:
public WebSocket socket() {
  return WebSocket.Text.accept(
      request -> {
        Sink<String, ?> in = Sink.foreach(System.out::println);
        Source<String, ?> out = Source.single("Hello!").concat(Source.maybe());
        return Flow.fromSinkAndSource(in, out);
      });
}
A WebSocket has access to the headers of the HTTP request that initiates the WebSocket connection, allowing you to retrieve standard headers and session data. However, it does not have access to a request body or to the HTTP response.
In this example, we create a simple sink that prints each incoming message to the console. To send messages, we create a simple source that emits a single Hello! message. We also concatenate a source that never sends anything; otherwise our single-element source would complete the flow and thereby close the connection.
Accessing a WebSocket
To be able to reach your WebSocket and send data over it, you need to add a route for it to your routes file.
Configuring WebSocket Frame Length
You can set the maximum length of WebSocket data frames using the play.server.websocket.frame.maxLength configuration key, or by passing the -Dwebsocket.frame.maxLength system property when running your application. For example:
sbt -Dwebsocket.frame.maxLength=64k run
This setting gives you more control over the maximum WebSocket frame length and can be tuned to your application's requirements. It may also reduce the risk of denial-of-service attacks that rely on very long data frames.
Frequently Asked Questions
Is asynchronous programming supported in Java?
The Java feature that makes asynchronous programming possible is concurrency: the ability to run several tasks or programs in parallel.
What is Java asynchronous processing?
Asynchronous processing means moving blocking operations onto a separate thread and immediately returning the thread associated with the request to the container.
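As a general illustration (a plain-Java sketch not tied to any particular container), CompletableFuture.supplyAsync moves blocking work onto another thread so that the calling thread is free to return immediately:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OffloadExample {

  public static void main(String[] args) {
    ExecutorService blockingPool = Executors.newFixedThreadPool(4); // pool reserved for blocking work

    // The blocking call runs on blockingPool; the main thread is free right away
    CompletableFuture<String> result =
        CompletableFuture.supplyAsync(OffloadExample::slowLookup, blockingPool)
            .thenApply(value -> "Processed: " + value);

    System.out.println("Main thread keeps going...");
    System.out.println(result.join()); // block only here, when the value is actually needed

    blockingPool.shutdown();
  }

  private static String slowLookup() {
    try {
      Thread.sleep(300); // stands in for a blocking call such as JDBC
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return "database row";
  }
}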
What is an HTTP asynchronous request?
Asynchronous HTTP request processing is a relatively recent technique that allows a single HTTP request to be processed using non-blocking I/O and, if desired, in separate threads. It is also known as COMET.
How is asynchronous programming carried out?
Asynchronous programming is a form of parallel programming that lets a task run separately from the main application thread. When the task finishes, it notifies the main thread whether the work completed or failed.
What does Java's HttpClient mean?
An HttpClient can be used to send requests and retrieve their responses. An HttpClient is created through a builder, which can configure per-client settings such as the preferred protocol version (HTTP/1.1 or HTTP/2), whether to follow redirects, a proxy, an authenticator, and more.
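For example, a minimal sketch using the standard java.net.http API from Java 11 (https://example.com is a placeholder URL):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class AsyncHttpClientExample {

  public static void main(String[] args) {
    HttpClient client = HttpClient.newBuilder()
        .version(HttpClient.Version.HTTP_2)          // preferred protocol version
        .followRedirects(HttpClient.Redirect.NORMAL) // follow redirects
        .build();

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://example.com"))      // placeholder URL
        .GET()
        .build();

    // sendAsync returns a CompletableFuture, so the calling thread is not blocked
    CompletableFuture<String> body =
        client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
            .thenApply(HttpResponse::body);

    System.out.println(body.join());
  }
}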
Conclusion
In this article, we have extensively discussed Asynchronous HTTP programming for Java developers. We have also explained asynchronous results, streaming HTTP responses, comet, WebSockets, and more in detail.