Early implementations of the Servlet specification used blocking I/O. Over time, to improve scalability, containers switched to non-blocking I/O internally, and the Servlet specification later introduced an asynchronous API that gave applications access to non-blocking I/O as well.
The latest development aimed at further scalability improvements is Project Loom from the OpenJDK project. Loom delivers features that support, amongst other things, easy-to-use, high-throughput, lightweight concurrency.
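As a flavour of what that lightweight concurrency looks like, the following is a minimal sketch (the task, request count, and sleep duration are illustrative assumptions, not taken from the session) showing each simulated request running on its own virtual thread, so blocking calls no longer tie up a scarce platform thread:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadSketch {

    public static void main(String[] args) {
        // One virtual thread per submitted task; the executor is AutoCloseable
        // and close() waits for submitted tasks to complete.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int requestId = i;
                executor.submit(() -> handleRequest(requestId));
            }
        }
    }

    // Stand-in for blocking request handling (e.g. a JDBC or remote HTTP call).
    private static void handleRequest(int requestId) {
        try {
            Thread.sleep(100); // the virtual thread parks; its carrier thread is freed
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}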
Using Apache Tomcat as a basis, this session will start with a brief review of the key scalability improvements made over the life of the Servlet specification, then examine what Loom has to offer for web applications built on that specification.
The possibilities for Loom will be examined both at the container level and at the application level, supported with data from a range of experiments undertaken using Loom and Apache Tomcat. While benchmarks can only ever be a guide to the performance you might expect from a real application, this session will give you the basis you need to determine what Loom might offer your applications, and where to start with your own performance testing so you can quantify those benefits.