With this first blog, I'd like to start a series about application
integration. Quite often, I encounter projects where Liferay must integrate
with some backend services. There are lots of integration patterns.
The idea behind this series is to build some sample projects for illustrative
use cases in order to focus on specific issues you may encounter and to
provide some ideas about how to deal with them.
As soon as you depend on a third-party system in your project, you may find
out that it has performance issues.
Let's take the example of a REST backend service
(https://petstore.swagger.io) and imagine it has performance
issues. Not only will bad backend performance impact response time,
you also have to be aware that, most of the time, backend integrations are
written in a synchronous fashion.
This means that the JVM thread which issued the HTTP request is
going to wait for the response to be received (or for the response timeout to
expire). The application server has a maximum number of threads (Tomcat's
default is usually 200). As a consequence, if at some point several users
trigger a lot of calls to backend services, they may end up clogging
the thread pool.
Connection and response timeouts can usually be configured in most Java HTTP
client libraries and you should take some time defining the proper values for
them. Otherwise, any backend service accepting connections but not answering
in a reasonable amount of time may stall your application server.
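As an illustration of the kind of settings involved, here is a minimal sketch using the JDK's built-in HttpClient (Java 11+); the class name and timeout values are made up for the example, and you would tune them to your backend's actual behavior:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

public class TimeoutSettings {

    // Fail fast if the backend does not accept the connection in time
    public static HttpClient buildClient() {
        return HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();
    }

    // Bound the total time we are willing to wait for a response
    public static HttpRequest buildRequest(String url) {
        return HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofSeconds(5))
                .build();
    }
}
```

The same two knobs (connection timeout, response timeout) exist under different names in most client libraries, including OkHttp which we will use later in this post.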
The default ServletContext is synchronous. This means that each servlet
request the servlet container manages is assigned to one available thread
from the thread pool. This is something you can easily monitor in your
application logs, where the thread ID appears between brackets, just
after the timestamp and the log level.
Note: There are a few places in the Liferay source code where Liferay relies
on this to write and read data inside a ThreadLocal (i.e. a variable local
to the thread) so as to avoid propagating lots of information through the
servlet context.
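For readers unfamiliar with the mechanism, here is a minimal ThreadLocal sketch; the class and method names are made up for illustration, not Liferay's actual code:

```java
public class RequestContext {

    // Each thread sees its own independent copy of this variable
    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    public static void setUser(String userName) {
        CURRENT_USER.set(userName);
    }

    public static String getUser() {
        return CURRENT_USER.get();
    }

    // Important on pooled threads: clear the value once the request is done,
    // otherwise the next request served by the same thread may see stale data
    public static void clear() {
        CURRENT_USER.remove();
    }
}
```

This is also why ThreadLocals and async processing do not mix well: once the work moves to another thread, the values set on the original request thread are no longer visible.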
The Servlet 3.0 API has introduced the concept of the AsyncContext.
The idea behind this concept is to detach a request/response context and
answer to it later. For example, you may receive a servlet request, issue an
HTTP request to a backend service, release the initial worker thread and wait
for a callback to have another thread write the response.
Having an async context managed through the specification has a strong
advantage: the framework helps you make sure you do not end up with loose
threads. The AsyncContext provides constructs to bind your worker
threads to a context so that you can easily terminate them all together if,
for example, the client aborts the servlet request.
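Outside of any servlet container, the underlying pattern can be sketched with plain Java (this is an illustration of the idea, not the actual AsyncContext API): the thread that receives the request hands the slow work off to a worker pool and returns immediately, and a worker thread produces the response later.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncHandoff {

    private static final ExecutorService WORKERS = Executors.newFixedThreadPool(2);

    // Simulates receiving a request: we kick off the slow work on a worker
    // thread and return immediately, releasing the "request" thread
    public static CompletableFuture<String> handleRequest(String id) {
        return CompletableFuture.supplyAsync(() -> {
            // Pretend this is a slow backend call
            return "response for " + id;
        }, WORKERS);
    }

    public static void shutdown() {
        WORKERS.shutdown();
    }
}
```

The AsyncContext adds what this toy sketch lacks: a tie between the detached work and the original request/response pair, with lifecycle events (timeout, completion, error) managed by the container.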
If you have looked at Liferay Portal
properties,
you may have seen this:
layout.parallel.render.enable=false
This is a feature that used to be available until Liferay 6.x: it allowed
parallel rendering of the portlets on a layout. It is still mentioned in
the 7.2 documentation as a reminder that it was removed.
This is something important to keep in mind: if you have several portlets on a
page, they are rendered one after another. I was not around when this change
happened. However, this capability was not part of the portlet
specification and my feeling is that it was somehow replaced by Senna.js'
ability to perform partial page refreshes.
Now, let's imagine you have written a portlet and its Render Phase depends
on a series of backend service calls.
One common pattern is to move those expensive service calls to a Resource
Phase and take advantage of iframes or AJAX calls to build the view
iteratively. Besides, you can also have every backend service call be
encapsulated within its own resource request.
While this is going to help solve the response time issue, this solution may
accelerate the starvation of the Thread Pool.
Whereas the Render Phase is synchronous territory, the Portlet 3.0
specification has embraced the Servlet 3.0 AsyncContext novelty and allows you
to use a PortletAsyncContext from inside a Resource Request.
Let's look at this Resource Controller (see Git
repo):
```java
@ResourceMapping("pets")
public void getPets(ResourceRequest resourceRequest) {

    // Start an async context
    PortletAsyncContext portletAsyncContext = resourceRequest.startPortletAsync();

    // Request completable futures for 3 API calls
    CompletableFuture<List<Pet>> availablePetsFuture = this._petService.getPets("available");
    CompletableFuture<List<Pet>> pendingPetsFuture = this._petService.getPets("pending");
    CompletableFuture<List<Pet>> soldPetsFuture = this._petService.getPets("sold");

    // Create a runnable responsible for the management of the async process
    // It implements Runnable
    PetsResourceRequestRunnable task = new PetsResourceRequestRunnable(
        portletAsyncContext, availablePetsFuture, pendingPetsFuture, soldPetsFuture);

    // Set a timeout on the async context
    portletAsyncContext.setTimeout(Constants.ASYNC_TIMEOUT);

    // This listener handles events related to the async context (timeout, completion...)
    portletAsyncContext.addListener(new PetsResourcePortletAsyncListener(task));

    // The initial servlet request thread will be released and the response
    // will be written by another thread as the context completes
    portletAsyncContext.start(task);
}
```
When dealing with such a resource request, the controller first starts a
PortletAsyncContext, then requests a CompletableFuture for each of the 3 web
service calls and assigns their handling to an async task that is managed
through a callback handler.
Spring 4 has recently reached end of life. However, I often see projects
that still use Spring 4 MVC Portlets.
Spring 5 was quite an impactful upgrade from our point of view because the
project decided to get rid of a lot of niche features. Among them, Spring
MVC Portlets. This is the reason why the PortletMVC4Spring project was
born. See more details in Neil's blog
post.
I decided to build this sample project using PortletMVC4Spring because Spring
MVC Portlet has often been the framework of choice for building Liferay apps
that integrate with large backend systems. As a consequence, this is one
place where you're likely to find that kind of integration with slow backend
services fronting things like old AS/400 apps or mainframes.
We will now showcase how you can turn those synchronous calls into
CompletableFutures.
Java's CompletableFuture is close to JavaScript's Promise. I won't go into
details because that could be a blog post of its own, but it is essentially a
variable that holds a pointer to a future value.
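The general trick used later in this post is to bridge a callback-based API to a CompletableFuture: create the future up front, then complete it from inside the callbacks. A minimal, library-free sketch of that bridge (all names here are made up for illustration):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class CallbackBridge {

    // A toy callback-based API: it reports its result through a Consumer
    static void fetchValue(Consumer<String> onSuccess, Consumer<Throwable> onError) {
        try {
            onSuccess.accept("pet-list");
        } catch (Throwable t) {
            onError.accept(t);
        }
    }

    // The bridge: the future is handed to the callbacks, which complete it
    // normally or exceptionally when the underlying work finishes
    public static CompletableFuture<String> fetchValueAsync() {
        CompletableFuture<String> future = new CompletableFuture<>();
        fetchValue(future::complete, future::completeExceptionally);
        return future;
    }
}
```

This is exactly the shape of the Retrofit `enqueue` adapter shown further down, with Retrofit's `Callback` playing the role of the two consumers.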
In order to benefit from this construct, you need a Java HTTP client library
able to deal with asynchronous communication. Spring's
RestTemplate (which is a quite
popular choice among Java developers) is unable to do it. Among the
other libraries available, my preference goes to
OkHttp (an all-purpose HTTP client) +
Retrofit (a library that turns a
Swagger / OpenAPI definition into a Java interface, using OkHttp under the
hood).
Example of code generation for Retrofit using Swagger
Codegen (See Git
repo):
```xml
<plugin>
    <groupId>io.swagger</groupId>
    <artifactId>swagger-codegen-maven-plugin</artifactId>
    <version>2.4.19</version>
    <executions>
        <execution>
            <goals>
                <goal>generate</goal>
            </goals>
            <configuration>
                <inputSpec>${project.basedir}/src/main/resources/swagger.json</inputSpec>
                <language>java</language>
                <library>retrofit2</library>
                <configOptions>
                    <sourceFolder>src/gen/java/main</sourceFolder>
                </configOptions>
            </configuration>
        </execution>
    </executions>
</plugin>
```
Using that generated code, this is how you issue a Call from
within your service module (see Git repo):
```java
// I create a client so that I can attach a logging interceptor
OkHttpClient client = new OkHttpClient.Builder()
    .addInterceptor(new OkHttpLoggingInterceptor())
    .callTimeout(Constants.OKHTTP_TIMEOUT, TimeUnit.MILLISECONDS)
    .build();

Retrofit.Builder builder = new Retrofit.Builder()
    .baseUrl(BASE_URL)
    .addConverterFactory(GsonConverterFactory.create())
    .client(client);

Retrofit retrofit = builder.build();

PetApi petApi = retrofit.create(PetApi.class);

List<String> statusList = new ArrayList<String>();
statusList.add(status);

// This is my Retrofit call
Call<List<Pet>> asyncCall = petApi.findPetsByStatus(statusList);
```
At this point, you may treat this Call in a sync or async fashion. In the next
instructions, I'm going to treat it asynchronously, turning the callback into
a CompletableFuture (see Git repo):
```java
// I'm making an async call and manage the completable future's outcome in Retrofit's callback
asyncCall.enqueue(new Callback<List<Pet>>() {

    @Override
    public void onFailure(Call<List<Pet>> call, Throwable t) {
        LOG.debug("Request failed");
        future.completeExceptionally(new PetServiceException("Request failed"));
    }

    @Override
    public void onResponse(Call<List<Pet>> call, Response<List<Pet>> response) {
        if (response.isSuccessful()) {
            LOG.debug("Request succeeded");
            future.complete(response.body());
        } else {
            LOG.debug("Request failed with code " + response.code());
            future.completeExceptionally(new PetServiceException("Response got code " + response.code()));
        }
    }

});
```
In order for this to work, you have to configure your portlet to support
asynchronous processing.
First, you have to mark it in the portlet.xml descriptor (see Git
repo):
```xml
<?xml version="1.0"?>
<portlet-app xmlns="http://xmlns.jcp.org/xml/ns/portlet"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/portlet http://xmlns.jcp.org/xml/ns/portlet/portlet-app_3_0.xsd"
    version="3.0">
    <portlet>
        <portlet-name>spring-mvc-petstore-portlet</portlet-name>
        <display-name>spring-mvc-petstore-portlet</display-name>
        <portlet-class>com.liferay.portletmvc4spring.DispatcherPortlet</portlet-class>
        (...)
        <async-supported>true</async-supported>
    </portlet>
    (...)
</portlet-app>
```
And you also need to activate it at the Spring MVC Portlet context level (see
Git repo):
```xml
<?xml version="1.0"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:mvc="http://www.springframework.org/schema/mvc"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
        http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd">
    <context:component-scan base-package="com.liferay.samples.fbo.spring.petstore.**"/>
    <mvc:annotation-driven>
        <mvc:async-support default-timeout="10000"></mvc:async-support>
    </mvc:annotation-driven>
    <bean id="portletMultipartResolver"
        class="com.liferay.portletmvc4spring.multipart.StandardPortletMultipartResolver" />
</beans>
```
And finally at the Servlet Context level (see Git
repo):
```xml
<?xml version="1.0"?>
<web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/spring-context/portlet-application-context.xml</param-value>
    </context-param>
    <servlet>
        <servlet-name>ViewRendererServlet</servlet-name>
        <servlet-class>com.liferay.portletmvc4spring.ViewRendererServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
        <async-supported>true</async-supported>
    </servlet>
    <servlet-mapping>
        <servlet-name>ViewRendererServlet</servlet-name>
        <url-pattern>/WEB-INF/servlet/view</url-pattern>
    </servlet-mapping>
    <filter>
        <filter-name>delegatingFilterProxy</filter-name>
        <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
        <async-supported>true</async-supported>
    </filter>
    <filter-mapping>
        <filter-name>delegatingFilterProxy</filter-name>
        <url-pattern>/WEB-INF/servlet/view</url-pattern>
        <dispatcher>FORWARD</dispatcher>
        <dispatcher>INCLUDE</dispatcher>
        <dispatcher>ASYNC</dispatcher>
    </filter-mapping>
    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>
</web-app>
```
Now that we have all the fragments in place to spawn a handler dealing with all
the response promises, let's have a look at how we could manage it (see Git
repo):
```java
@Override
public void run() {
    // We're going to run all 3 requests in parallel
    getPetsFromApi(this._availablePetsFuture, new MarkAvailablePets(), "available");
    getPetsFromApi(this._pendingPetsFuture, new MarkPendingPets(), "pending");
    getPetsFromApi(this._soldPetsFuture, new MarkSoldPets(), "sold");
}
```
In this program, the responses to my 3 API calls may arrive in any order. The
code uses the ResourceResponse's PrintWriter to write the corresponding
output as each response is received and checks whether all 3 requests have
completed, in which case it asks the PortletAsyncContext to complete:
```java
public void getPetsFromApi(CompletableFuture<List<Pet>> future, MarkPets callback, String status) {

    // Look closely at the logs: the order in which they display depends on execution
    LOG.debug("Running petstore " + status + " request");

    future.exceptionally(exception -> {
        // In case of an error, I make the future return an empty list instead of the response
        LOG.error("Failed to get " + status + " Pet", exception);
        return new ArrayList<>();
    }).thenAcceptAsync(list -> {
        if (_isTerminated) {
            LOG.debug("Don't write anything, the task was terminated");
        } else {
            // The task is still running, I'm printing stuff to the output
            writePets(status, list);
        }
    }).whenComplete((i, t) -> {
        if (_isTerminated) {
            LOG.debug("Don't do anything else, the task was terminated");
        } else {
            // This subtask is complete
            callback.done();
            // If all 3 tasks are complete, we complete the asyncContext,
            // which is going to close the output's writer
            if (allDone()) {
                _portletAsyncContext.complete();
            }
        }
    });
}
```
We have configured some response timeouts both at the ResourceResponse level
and also for each of the backend service calls.
1/ If one individual web service call times out, the CompletableFuture
will complete exceptionally (with an exception) and won't write any output.
In my example, this does not impact the other calls, which will try to
finish on time and write their output.
We could also have decided that one timeout should cancel all the other
requests, leading to the same outcome as 2/.
2/ If the ResourceResponse times out, then there is no need to wait for
the backend service calls to finish: we can cancel them (see Git
repo).
```java
// This is executed on async context timeout
@Override
public void onTimeout(PortletAsyncEvent evt) throws IOException {
    LOG.debug("Async timeout");
    // We ask the Runnable task to terminate what was still running
    _task.terminate();
    // We write a message to the output
    Writer writer = evt.getPortletAsyncContext().getResourceResponse().getWriter();
    writer.append("<h1 style='color: red'>Timeout error</h1>");
    evt.getPortletAsyncContext().complete();
}
```
Asking for all the CompletableFutures to be cancelled (see Git
repo):
```java
public void terminate() {
    LOG.debug("Request to terminate");
    this._isTerminated = true;
    // Completable Futures can be canceled!
    // If a Completable Future has already completed, cancel does nothing
    _availablePetsFuture.cancel(true);
    _pendingPetsFuture.cancel(true);
    _soldPetsFuture.cancel(true);
}
```
This is possible because OkHttp supports the interruption of HTTP calls (see
Git repo):
```java
// This is how I manage cancellation
final CompletableFuture<List<Pet>> future = new CompletableFuture<List<Pet>>() {
    @Override
    public boolean cancel(boolean mayInterruptIfRunning) {
        if (mayInterruptIfRunning) {
            asyncCall.cancel();
        }
        return super.cancel(mayInterruptIfRunning);
    }
};
```
The architecture of this project can be summarized with the following diagram:
See the complete source code of the sample project in this git
repository.
Support for the PortletAsyncContext was introduced in Liferay alongside
Portlet 3.0, but this feature has not been promoted much yet. I hope that this
blog post will give you some ideas and help you find new solution paths to the
issues you encounter in your projects. Because this practice is new, I expect
there will be some iterations before true best practices are established.
Please comment on the blog post or reach out to me at
fabian.bouche@liferay.com if you'd like to
share your thoughts.
The next blog post will deal with a similar use case.
See you soon for the next part!