Thursday, November 23, 2017

Deploy a Custom WebSphere Liberty Runtime with the MicroProfile 1.2 Feature in IBM Cloud

WebSphere Liberty is a fast, dynamic, and easy-to-use Java application server, built on the open source Open Liberty project. It is ideal for developers but also ready for production, on-premises or in the cloud.

IBM Bluemix (now IBM Cloud) is the latest cloud offering from IBM. It enables organizations and developers to quickly and easily create, deploy, and manage applications on the cloud. Bluemix is an implementation of IBM's Open Cloud Architecture based on Cloud Foundry, an open source Platform as a Service (PaaS). IBM Cloud Foundry includes runtimes for Java, Node.js, PHP, Python, Ruby, Swift, and Go; Cloud Foundry community buildpacks are also available.

Although IBM Cloud already provides a runtime engine for WebSphere Liberty, sometimes this isn't enough, and developers may need their own build of the platform: e.g., a lightweight version based on the Liberty kernel, an old version to ensure backward compatibility, or a version of WebSphere Liberty equipped with a set of features specific to the developed application.

This blog post demonstrates how to deploy your own installation of WebSphere Liberty to IBM Cloud as a regular Java application. The deployed installation is equipped with the latest version of MicroProfile, an open forum to collaborate on Enterprise Java microservices, released on October 3, 2017.

Eclipse MicroProfile 1.2 builds on version 1.1: it updates the Config API and adds the Health Check, Fault Tolerance, Metrics, and JWT Propagation APIs. As stated on the official page of the project, the goal of MicroProfile is to iterate and innovate in short cycles, get community approval, release, and repeat. Eventually, the output of this project could be submitted to the JCP for possible future inclusion in a Java JSR (or some other standards body). The WebSphere Liberty application server implements MicroProfile 1.2; only the corresponding feature - microProfile-1.2 - must be included in the server.xml configuration file.
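A minimal server.xml enabling the feature could look like the following sketch (the description and port numbers are illustrative):

```xml
<server description="Liberty with MicroProfile 1.2">
    <featureManager>
        <!-- Pulls in Config, Health Check, Fault Tolerance, Metrics,
             JWT Propagation, and the underlying Java EE features -->
        <feature>microProfile-1.2</feature>
    </featureManager>
    <httpEndpoint id="defaultHttpEndpoint" host="*"
                  httpPort="9080" httpsPort="9443"/>
</server>
```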

Wednesday, October 25, 2017

Threads in Managed Environments. Why Our Work Managers Need Some Tuning

First of all, I need to say that the standard ('default') Work Manager is perfectly adequate in most cases: a separate Work Manager with the default configuration is created during server startup for every deployed application. An additional Work Manager should be defined only in the following cases:

  • By default, all threads have the same priority; if this behaviour isn't suitable, the Fair Share parameter must be set.

  • There is a response time goal assigned to the server; the Response Time parameter must be set.

  • A deadlock (e.g., during server-to-server communication) might occur; a Minimum Threads Constraint should be created and assigned to the Work Manager.

  • Applications share a common JDBC connection pool; the maximum number of concurrent threads (Maximum Threads Constraint) for the applications must be limited by the pool capacity.
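The last case can be expressed in the domain's config.xml; the following is a sketch (constraint, data source, and server names are illustrative, and in practice Work Managers are usually created via the Administration Console):

```xml
<self-tuning>
  <max-threads-constraint>
    <name>OrdersPoolLimit</name>
    <!-- Bound the thread count by the capacity of this JDBC pool -->
    <pool-name>OrdersDataSource</pool-name>
    <target>managed1</target>
  </max-threads-constraint>
  <work-manager>
    <name>OrdersWorkManager</name>
    <max-threads-constraint>OrdersPoolLimit</max-threads-constraint>
    <target>managed1</target>
  </work-manager>
</self-tuning>
```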

Separately, if Oracle Service Bus is deployed to Oracle WebLogic and the Service Callout action is used, each Proxy- and Business-service invoked using a Service Callout should have its own Work Manager. More information can be found in the Following the Thread in OSB article by Antony Reynolds.

Friday, October 13, 2017

ESB vs EAI: "Universal Service", What is Wrong with This Pattern

Some technical people understand the Enterprise Service Bus (ESB) concept as a universal channel designed merely to transmit XML messages, encoded as plain strings, among enterprise applications. The channel provides no validation, enrichment, or monitoring capabilities; it is considered only a dumb message router that also transforms messages into a format accessible to the enterprise applications. A powerful and expensive integration middleware, like Oracle Service Bus, Oracle SOA Suite, IBM Integration Bus, or SAP PI/XI, is chosen as a platform for the integration solution. Usually, it is required that the IT team be able to configure new or existing routes just by editing a few records in the configuration database.

The developers of such a "universal solution" believe that a new application can be connected to the solution just by designing an appropriate adapter and inserting a few records into the configuration database.

In fact, the developers have to implement a number of integration patterns and, optionally, a canonical data model using a small subset of the capabilities provided by the integration platform.

The focus of the article is to explain why the above approach is not effective and why developers should leverage as many capabilities of their preferred middleware platform as possible.

Tuesday, October 3, 2017

Threads in Managed Environments. Work Managers

We pay for modern application servers because they provide a managed environment for our applications. An application server implements a set of APIs, for example Java EE 7 or Java EE 8, and provides capabilities such as application life-cycle management, transaction management, resource access, and thread management.


Thread pool


An application server uses a thread pool to provide the thread-management capability. While an application deployed on the server is running, a thread isn't created when a new request is accepted; instead, it is taken from the pool. This approach protects the server from creating too many threads and overwhelming the operating system with the duty to process them. If the pool has no free threads, accepted requests are blocked (queued) until a thread becomes available.
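The pooling behaviour can be sketched with the standard java.util.concurrent API (class and method names here are illustrative, not taken from any application server):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {

    // Runs the given number of short "requests" on a fixed pool of
    // poolSize threads; extra requests wait in the queue instead of
    // spawning new threads, which is how an application server
    // protects the operating system from too many threads.
    static int runRequests(int poolSize, int requests) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                poolSize, poolSize, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());
        CountDownLatch done = new CountDownLatch(requests);
        for (int i = 0; i < requests; i++) {
            pool.submit(done::countDown); // simulated request handling
        }
        done.await();
        pool.shutdown();
        // Never more than poolSize worker threads were created
        return pool.getLargestPoolSize();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("largest pool size: " + runRequests(2, 5));
    }
}
```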

The IT team can specify the following parameters of the thread pool:

  • thread priority - ranks the threads created by a number of pools by priority, so that, for example, a user request to a business-critical application takes precedence over (and may starve) other threads in the system.

  • number of threads - limits the number of concurrent threads executing requests. Modern application servers, for example Oracle WebLogic, let us set up the limit not only as a constant value but also as a reference to a data source, so the maximum number of threads equals the capacity of the connection pool related to the data source: one thread gets one connection to the database.

The application server takes the above parameters into account together with some internal optimizations, analyzing the current workload, the number of available processors, and the amount of free memory.

Friday, September 8, 2017

Exposing Servlet- and JAX-RS-based WebSphere Liberty REST APIs with Swagger

An amazing article Developing a Swagger-enabled REST API using WebSphere Developer Tools demonstrates how to expose a usual servlet as a REST API using a new feature of WebSphere Liberty called apiDiscovery-1.0.


I've slightly rewritten the code of the servlet to use the JSR 353 (JSON-P) API and eliminated all WebSphere-related code, so the demonstration project can be built using Apache Maven: just put the 'javax.json:javax.json-api' dependency into your pom.xml.
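The pom.xml fragment might look like this (the version shown is an assumption; the provided scope relies on Liberty supplying the implementation at runtime):

```xml
<dependency>
    <groupId>javax.json</groupId>
    <artifactId>javax.json-api</artifactId>
    <version>1.0</version>
    <scope>provided</scope>
</dependency>
```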



Including a swagger.json or swagger.yaml file inside the corresponding META-INF folder is the easiest way to expose the documentation of web modules, but not the only one. If the web application does not provide a swagger.json or swagger.yaml file and the application contains JAX-RS annotated resources, the Swagger document will be generated automatically. As mentioned in the official documentation, the server configuration must include the apiDiscovery-1.0 feature and the jaxrs-1.1 or jaxrs-2.0 feature; for example:
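```xml
<featureManager>
    <feature>apiDiscovery-1.0</feature>
    <feature>jaxrs-2.0</feature>
</featureManager>
```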



The product scans all classes in the web application for JAX-RS and Swagger annotations, searching for classes with @Path, @Api, and @SwaggerDefinition annotations. The apiDiscovery-1.0 feature automatically generates a corresponding Swagger document and makes it available at the following URIs: http://host:port/context-root/swagger.json and http://host:port/context-root/swagger.yaml.

For example, if the following JAX-RS resource is deployed on the server:
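The original listing did not survive; a minimal illustrative resource (class, path, and method names are hypothetical) could look like this:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical resource: apiDiscovery-1.0 detects the @Path annotation
// and generates the corresponding Swagger document automatically.
@Path("/greeting")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String greet() {
        return "Hello from Liberty!";
    }
}
```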

Thursday, August 31, 2017

Oracle SOA Suite Performance Monitoring

Oracle Enterprise Manager Fusion Middleware Control Console (EM) ensures runtime governance through composite application modelling and monitoring, as well as comprehensive service and infrastructure management functionality, to help organizations maximize the return on investment. Let's consider the performance-management capabilities provided by this tool.

Monitoring performance of the Oracle SOA Suite runtime


The Request Processing tab uses three grid views to present performance information. The tab is available under Monitoring -> Request Processing in the SOA -> soa-infra context menu. The displayed information is grouped by:

  • service engine (BPEL, BPMN, Mediator, Human Workflow, Business Rule, Spring):
    • average request processing time - synchronous
    • average request processing time - asynchronous
    • active request count
    • processed request count
    • fault count
  • the summary about service infrastructure:
    • average request processing time - synchronous
    • average request processing time - asynchronous
    • active request count
    • processed request count
    • fault count
  • binding components:
    • web-service (WS) inbound
    • web-service (WS) outbound
    • Java EE Connector Architecture (J2CA) inbound
    • Java EE Connector Architecture (J2CA) outbound
    The following metrics are available:
    • average request processing time
    • processed request count
    • error count


Wednesday, August 23, 2017

Oracle WebLogic Cluster Causes Network Storm When a Problem Happens

Not every system administrator is aware of the WebLogic Server capability called Message Forwarding to Domain Logs. In addition to writing messages to its server log file, each server instance forwards a subset of its messages to a domain-wide log file. This domain-wide log file certainly helps the system administrator understand the situation in a large domain; for instance, when several dozen servers belong to the domain, it is very helpful to have all server logs in one place. But the convenience comes at a price.

If there are problems caused by applications deployed on a server, the server log tends to get flooded with diagnostic messages and large stack traces. The problem is that these messages and traces are not only written to the server log file but also forwarded to the administration server over the network, causing a network storm. The administration server, as a result, might become inaccessible.

For a medium-sized domain (4-8 WebLogic Server instances), consider simply disabling the message-forwarding capability: the Domain log broadcaster: Severity level property on the Environment -> Servers -> SERVER -> Logging -> General (Advanced) page must be set to Critical or higher for every managed server.
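The same change can be scripted instead of clicked through; a WLST sketch (credentials, URL, and server names are illustrative; the attribute corresponds to the Log MBean's DomainLogBroadcastSeverity):

```
connect('admin', 'password', 't3://adminhost:7001')
edit()
startEdit()
for name in ['managed1', 'managed2']:
    cd('/Servers/%s/Log/%s' % (name, name))
    # Forward only Critical and higher messages to the domain log
    cmo.setDomainLogBroadcastSeverity('Critical')
save()
activate()
```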