
Consolidation of application servers at FINA

03. 12. 2015

Željko Pavić, Member of the Board of FINA, is one of the most experienced IT experts in the state administration. We find out first-hand how and why FINA decided to launch a consolidation project.

FINA boasts a half-century tradition of IT support for critical business processes. Once called SDK and later ZAP, it processed all payment transactions in Croatia. That period saw the rise of a serious approach to computing and the conviction that IT systems have to be highly dependable and available. A system needs to ensure continuous and predictable performance (transaction processing time) while simultaneously meeting the highest security and reliability standards.

The Financial Agency (FINA) of today is the leading Croatian company in the area of financial intermediation. National coverage, an IT system tested on the most difficult tasks of national importance, and a highly professional level of expert teams enable the preparation and implementation of various projects, from simple financial transactions to the most sophisticated processes in electronic business.

Even though it is a state-owned company, FINA does business exclusively based on the market principle. We successfully work with commercial banks, the Croatian National Bank, numerous business systems, and other business entities. We have also partnered with the state in the field of public finances, where we have implemented several comprehensive key projects. FINA has also had a vital role in the operative preparation and implementation of several major projects: the reform of the payment and pension systems, the central processing of state employee salaries, pre-bankruptcy settlements, etc.

A need for consolidation

FINA's active market role resulted in scores of application and business services that successfully support commercial and state organizations every day, often on a 24/7 basis. Over time, this produced a huge number of application servers, databases, and other elements of the IT system. Following business requests whose dynamics were unpredictable and whose implementation deadlines were usually very short, we ended up with a system characterized by increasing entropy. The software infrastructure was heterogeneous: we used different software products, and different versions of the same products. During 2012, we took stock of our IT system and established that the present state was unsustainable in the long run because it would be (too) demanding and (too) expensive to maintain, while the complexity of the system had become a burden that hindered our readiness for ever-faster business demands.

We, therefore, decided to launch a project to consolidate and standardize our IT system. To start, we needed to standardize application servers and databases so that we use one (or two, at the most) software product, using a single version of each selected product. We chose the WebSphere Application Server, DB2, and Oracle Database. We additionally standardized support for business processes by using the IBM Business Process Manager and file management by using the IBM FileNet Content Manager.

This stage of consolidation was quite demanding since we had to make adjustments and modernize numerous application modules, which requires the work of programmers and cannot be automated.

In early 2015, we were ready for the next step – the consolidation of the server platform. Since Linux had proven to be an excellent choice for years and since a growing number of our services had used Linux as an operating system, we didn’t have to think much about that. In selecting the hardware platform, we chose between the existing Intel and mainframe servers.

Mainframe at FINA

For many years, IBM’s mainframe platform had served as the basis of IT support for the most important business services at FINA. The payment system service had been consolidated to the mainframe server already in the late 1990s. Soon after, we began to support the REGOS service, whose database is located on the mainframe server. In the early 2000s, the mainframe server was the only one that met the strict criteria for the IT support of the NCS (National Clearing System) service. We also modernized the platform, reducing the expenses significantly and optimizing and improving the usability of the mainframe platform.

After acquiring the new generation of the IBM z Systems mainframe processor (zEC12) in 2013, we established the basic prerequisites for consolidation on the mainframe platform, using virtualization options based on the z/VM and Linux operating systems. The implementation of GDPS solutions on the z/VM operating system further increased the availability and failure resistance of services based on the Linux on z platform.

During the stocktaking of our IT system, we analysed and reviewed the role of the IBM mainframe platform, which resulted in further modernization of applications in the mainframe environment and in our decision to use the mainframe as the hardware platform for the consolidation of Linux services.

The first service selected for implementation on this platform was the pre-bankruptcy settlement service. The previous architecture had been based on an Intel platform: IBM Business Process Manager for creating and managing processes, and Alfresco One for file storage. Introducing new services (the pre-bankruptcy agreement and the process of attachment of movable and immovable property) on the existing infrastructure would have required expanding hardware capacities, and thus also purchasing additional software licences and upgrading the existing BPM and DMS platforms. After analyzing the hardware/software upgrade needed in the existing environment and comparing it with the available hardware capacities on the mainframe platform, we concluded that migrating these services to Linux on z would save the costs of licences and additional hardware investments.

Description of the Pre-bankruptcy settlement system

The system consists of the following components:

BPM (Business Process Manager)

The initial architecture had used the IBM BPM Standard Edition; a BPM cluster had not been implemented. In the new environment, a migration to v8.5 was executed, covering both components: the Process Center and the Process Server. To achieve high availability, the BPM component was implemented as a cluster; for the purposes of testing patches and new versions, two instances of the Process Center component were implemented: a test instance and a production instance.

WebSphere Application Server

The WebSphere Application Server component had been implemented as a standalone server, that is, without clustering. To raise the level of availability, WebSphere Application Server instances were implemented within a Network Deployment cell on the new platform, and a migration from v8.0.0.4 to v8.5.5 was executed.


DB2

In the existing infrastructure, the DB2 database is used as the user database, as the repository for BPM, and as the DMS database. In the new environment, version 10.5 was implemented on the Linux on z platform, with HADR technology implemented for the purposes of high availability.

Document Management System

In the initial infrastructure, Alfresco open source product had been used as the DMS. Due to our strategic decision, the Alfresco DMS component was migrated to the IBM FileNet DMS on the new platform.

Planning and building the infrastructure on the Linux on z

As the basis for planning the target infrastructure, we took the results of the analysis of the existing environment, as well as the non-functional requirements for the new environment.

The analysis of the existing environment components covered the following:

  • analysis of processor resources used by each component
  • analysis of memory used by each component
  • analysis of disk-space requirements of each component
  • analysis of the network interconnection of components
  • analysis of software package versions and their interdependence.

Based on the analysis, the required resources were defined for each component or virtual machine:

  • the number of virtual processors
  • the quantity of RAM
  • the size of individual file systems
  • the size of swap space
  • network parameters.
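As a purely hypothetical illustration (the component names and all figures below are invented, not FINA's actual measurements), the sizing step that turns measured usage into per-VM resource definitions can be sketched as a simple aggregation:

```python
import math

# Hypothetical sizing sketch: component names and figures are invented,
# not FINA's actual measurements.

# Measured usage per component in the existing environment.
measured = {
    "bpm": {"cpu_util": 1.6, "ram_gb": 12, "disk_gb": 80},
    "was": {"cpu_util": 2.2, "ram_gb": 16, "disk_gb": 60},
    "db2": {"cpu_util": 1.1, "ram_gb": 24, "disk_gb": 400},
}

HEADROOM = 1.3  # assumed 30% growth headroom

def plan_vm(name, usage):
    """Derive virtual-machine resources from measured usage."""
    vcpus = math.ceil(usage["cpu_util"] * HEADROOM)
    ram_gb = math.ceil(usage["ram_gb"] * HEADROOM)
    swap_gb = min(ram_gb // 2, 8)  # simple rule of thumb for swap sizing
    return {"vm": name, "vcpus": vcpus, "ram_gb": ram_gb,
            "disk_gb": usage["disk_gb"], "swap_gb": swap_gb}

plan = [plan_vm(name, usage) for name, usage in measured.items()]
for vm in plan:
    print(vm)
```

The headroom factor and swap rule are assumptions for the sketch; in practice each shop derives them from its own measurements and policies.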

The non-functional requirements set for the new environment were the following:

  • a high level of availability (cluster and DB2 HADR implementation)
  • isolation of network segments
  • quantity of available software product licences
  • isolation of the production environment.

The planning covered the use of several technologies specific to Linux on z.

  1. HiperSockets

HiperSockets enable a high-speed TCP/IP connection between virtual machines. The communication runs through memory and requires no physical device or cabling.

  2. CPU pool

Defining CPU pools makes it possible to limit IFL processor usage to the number of licensed processors while a greater number of IFLs is physically installed. This way, virtual machines that have more than one virtual CPU defined retain the ability to execute instructions in parallel on several physical IFL processors.
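The licensing arithmetic behind this can be illustrated with invented figures (none of these numbers are FINA's):

```python
# Illustrative CPU-pool arithmetic; all figures are invented, not FINA's.

PHYSICAL_IFLS = 6   # IFL processors physically installed
LICENSED_IFLS = 4   # software licensed for only 4 IFLs

# Each VM keeps several virtual CPUs so its threads can still run in
# parallel across physical IFL processors.
vms = {"bpm-prod": 4, "was-prod": 4, "db2-prod": 2}

# Without a pool, concurrent IFL usage (and hence license exposure) is
# bounded only by the physical processors.
uncapped_exposure = min(sum(vms.values()), PHYSICAL_IFLS)

# A CPU pool caps the *aggregate* consumption of its member VMs at the
# licensed capacity, independent of how many vCPUs each VM defines.
capped_exposure = min(uncapped_exposure, LICENSED_IFLS)

print(f"without pool: up to {uncapped_exposure} IFLs used concurrently")
print(f"with pool:    capped at {capped_exposure} IFLs")
```

The point of the sketch is that the cap applies to the pool as a whole, so no individual VM has to be reduced to a single virtual CPU.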

  3. Memory overcommitment

This technology is not specific to Linux on z, as it is used on other virtualization platforms as well, but it is important when planning memory resources. Together, virtual machines can be allocated more memory than is physically available. For instance, if there is 100 GB of physical memory and the total memory allocated to all virtual machines adds up to 120 GB, we have a case of overcommitment. This is desirable: not all virtual machines use all of their allocated memory, so the unused portion can serve the virtual machines that do use their allocation in full. The redistribution of memory is a dynamic process under the control of the z/VM hypervisor.
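Using the figures from the example above (the per-VM split is an invented illustration), the overcommitment ratio works out as follows:

```python
# Overcommitment arithmetic using the 100 GB / 120 GB figures from the
# text; the per-VM split below is an invented illustration.

physical_gb = 100
allocated_per_vm_gb = [40, 40, 40]  # hypothetical split totalling 120 GB

allocated_gb = sum(allocated_per_vm_gb)
overcommit_ratio = allocated_gb / physical_gb

print(f"allocated {allocated_gb} GB on {physical_gb} GB physical memory")
print(f"overcommit ratio: {overcommit_ratio:.2f}")
# This is safe as long as the VMs' combined *working sets* (the memory
# they actively touch) stay below the physical 100 GB; the hypervisor
# pages out the rest dynamically.
```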

Image 2 offers an overview of virtual machines and resource allocations.

The infrastructure was built according to the following plan:

  1. Installation of z/VM 6.3
  2. Configuration of z/VM subsystems (CP, TCPIP, Performance Toolkit)
  3. Preparation of HiperSockets
  4. Configuration of prerequisites for GDPS
  5. Installation of RHEL 7.3
  6. Installation of the TMS client and other software shared by all virtual machines
  7. Adjustment of security parameters
  8. Definition of CPU pools
  9. Cloning of virtual machines
  10. Installation and configuration of software products: BPM, WebSphere Application Server, DB2, FileNet.

Next steps

Our daily work experience has so far shown that the strategy of consolidation and standardization of software and hardware platforms was chosen correctly. Daily operative tasks have been significantly simplified, and our readiness for the realization of new services has been increased.

Naturally, perfection and nirvana are unattainable ideals. We need to continuously follow the developments in IT technology and the business environment and to change, upgrade, and adjust the selected architecture. That way, FINA will be prepared for all future market challenges.
