Connecting the Dots … The growing business of Virtual Health Care services

Consumers of all ages are using technology in all aspects of their lives, and health care is following the same track at a blistering pace. The current rage among millennials ranges from Fitbit watches and other wearables that track every part of your day, including activity, exercise, food, weight and sleep, to help you stay fit, stay motivated and see how small steps make a big impact, to applications that let you receive medication alerts or reminders and measure, record and transmit data about medications or treatments.

Looking at the commercial market, a recent study found that the average estimated cost of a telehealth visit is $40 to $50, compared with an average estimated cost of $136 to $176 for in-person acute care. The initial telehealth visit resolves patient issues an average of 83 percent of the time (Yamamoto, 2014).
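Taken at face value, the study's ranges imply substantial per-visit savings; a quick back-of-the-envelope calculation, assuming the cost figures exactly as quoted above:

```python
# Back-of-the-envelope savings implied by the study's cost ranges.
telehealth = (40, 50)     # estimated cost per telehealth visit, USD
in_person = (136, 176)    # estimated cost per in-person acute-care visit, USD

# Best case: cheapest telehealth vs. most expensive in-person visit;
# worst case: most expensive telehealth vs. cheapest in-person visit.
max_saving = in_person[1] - telehealth[0]   # 176 - 40 = 136
min_saving = in_person[0] - telehealth[1]   # 136 - 50 = 86

print(f"Savings per visit: ${min_saving} to ${max_saving}")
```

In other words, each visit resolved virtually saves roughly $86 to $136 before any follow-up costs are considered.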

Regulatory and insurance industry shifts are also increasingly accommodating virtual visits as part of overall well-being and care programs. It therefore makes increasing business sense to pursue the digital aspect of health care as part of an overall corporate strategy. The future is in connected health care: a system that connects the patient, healthcare provider, general practitioner, laboratory tests, radiology and other imaging results, closing the loop on the naturally occurring latencies in data exchange that lead to delays in diagnosis, prescription and cure, and ultimately affect the quality of health care and even well-being.

The technologies to access and track health care information have also proliferated, from websites to smartphone and tablet applications, digital medical assistants, and personal medical devices and fitness monitors, with more and more people preferring the on-the-go option of monitoring their fitness and health.

With big data available from multiple sources on treatments, symptoms and prescriptive treatment outcomes, combining all of this into an AI (Artificial Intelligence) environment seems an almost inevitable path. But setting aside the limitations and drawbacks such a route to healthcare may have, the question becomes: for far-flung populations facing a shortage of doctors, facilities or medication, or even political unrest, is something better than nothing?

Connecting the data in an intelligent way, providing visibility to health care providers, and building algorithms that can consider every option to give an outcome that improves patient care are among the main challenges. Lack of interoperability between systems, data locked in legacy applications and failure to log and share relevant treatment outcomes are some of the areas that will need to be addressed.

Many health care organizations have already implemented a virtual health strategy. To be effective, however, organizations need to start proactively putting infrastructure and strategies in place and ensure that appropriate technologies and platforms are integrated into the care delivery model; data availability and integration need to be as close to real-time as possible, allowing caregivers to respond to treatment plans in a timely manner and adjust them to positively impact patient health.

All over the world, Fiorano technology has been used by healthcare organizations to easily unlock data and connect with their legacy systems to transform into Digital, Cloud and API-first businesses. The Fiorano Healthcare Solution supports interoperability and allows secure sharing of healthcare data in the high-volume environments of next-generation healthcare markets. It has been implemented at Babylon, which has a bold mission to democratize healthcare and put an accessible and affordable health service into the hands of every person on earth. To learn more about how you can connect your systems together quickly and efficiently, contact us at Fiorano.

Fiorano at Open Banking Expo, London for PSD2 RTS.


Payment Services Directive


Now that we are in November 2018, the PSD2 regulatory calendar leaves banks with just four and a half months before the March 2019 deadline to publish ASPSP sandboxes for testing. It is not as simple as just publishing APIs, though, as there are multiple technology components required to do this properly. It will come as no surprise that the operating model PSD2 will enforce is completely unlike the way retail banks are used to working, and unlike the way consumers are used to interacting with them.

With the European Banking Authority's Regulatory Technical Standards (the PSD2 RTS) understandably not being prescriptive about how banks should enable aspects such as Access to Account (XS2A), Strong Customer Authentication (SCA) and Common and Secure channels of Communication (CSC), the technology requirements themselves can seem overwhelming. The core issue many banks face today is putting in place the technology components to meet the March and September 2019 obligations in time, while still being able to adapt, flex and launch new products and services with minimal change as the TPP market develops.

The absolute must-haves required by banks include separate applications for:

(i) Core Banking Integration

(ii) API Management

(iii) Identity and Access Management

(iv) Security

Introducing just one piece of new technology to a bank can sometimes be overwhelming. In the case of PSD2, depending on what an individual bank may already have, introducing three or four almost becomes a programme in itself, with its own complexity, timelines and costs. With so little time left, Fiorano has been working to provide banks with regulation-specific technology that can be implemented rapidly and still meet the PSD2 timelines.

So how does Fiorano do it?

Built on top of the class-leading Fiorano MQ, Middleware and API Management technology, the Fiorano PSD2 Accelerator brings together all the components banks require to deliver ASPSP interfaces in a single, easy-to-deploy technology product which covers all the functional requirements around XS2A, CSC, SCA and Security.

This uniform, single platform delivers easy integrations and light-weight maintenance, guaranteeing to be the fastest and most efficient route for banks to deliver ASPSP interfaces. To top it all, the Fiorano PSD2 Accelerator also incorporates PSD2 specific limits, thresholds and exemptions as visually-configurable components, which means banks implementing Fiorano will not require super-specialists to manage the environment post implementation.

Interested? To learn more about how you can still meet the regulatory timeline using the Fiorano PSD2 Accelerator, come meet Fiorano at stand 15 at the Open Banking Expo taking place at the America Square Conference Centre in London on 27th November. If it can’t wait till then, contact us or email us now and we will be in touch immediately.


Read more about the Fiorano PSD2 Accelerator



PSD2: Boon or Bane in the world of Open Banking

To many, the term Open Banking itself may be a misnomer as the concept involves banks giving end customers more control over the way their data is collected, used and shared with organisations who provide competing financial products and services.

This, as we know, is not how banks are used to working.

Open Banking as a principle is great for end users and the general public as it massively increases competition and thereby the number of options (and service standards) that are available to them.


PSD2 (the Revised Payment Services Directive), the EU’s directive aimed at creating a more integrated and efficient European payments market, was initially published in January 2016 and is, to a large extent, the driving force behind Open Banking in Europe. Its reach, however, goes way beyond payments, with Access to Account (XS2A) being one of the areas of greatest transformational impact.

In the UK, while the FCA (Financial Conduct Authority) itself remains the competent authority for PSD2, the directive is implemented through the Payment Services Regulations 2017 which took effect on 13th January 2018 along similar timelines as most of Europe.

Banks here have been given a bit of a head start through the Open Banking Working Group (OBWG) who in 2016 published the Open Banking Standard (OBS) framework along with Barclays. This initial report was followed up by the Competition and Markets Authority funding the UK’s Open Banking Implementation Entity (OBIE or Open Banking Limited), essentially laying the foundations for banks in the UK to adopt PSD2.

What this means is that customers already have the legal right to use an authorised Third Party Provider (TPP), giving it access to their payment account information and consent to initiate payments on their behalf. In reality, there is time until September 2019 before other parts of PSD2 become fully applicable and we can expect to see the real impact.

While changes brought about by PSD2 are great from an end-customer perspective, it is not the same for banks. A number of changes are required to the way banks operate and many are understandably viewing PSD2 as a threat to their business and age-old revenue streams.

These concerns are not unfounded as this is precisely what Open Banking is intended to facilitate – more innovation and competition.

The changes banks are required to implement are manifold and far-reaching, covering Technology, Operations, Compliance, Data Privacy, Security and end-user interfaces.

Just to make matters worse, there are many competing standards being developed in different countries (Berlin Group, STET, OBUK and others) to cover critical angles such as XS2A, Strong Customer Authentication and Secure Communications, including the API specifications themselves along with aspects such as TPP registries and Trust services.

However, if you look hard at these seemingly grey clouds there is more than a silver lining: an opportunity for banks to reinvent themselves and become absolutely core to their customers’ digital lives.

The world’s economy has changed significantly over the last few decades with the Fortune leaderboards being overtaken by companies like Facebook, Amazon, Google, Apple and Uber. In a world where customer experience, product and trust are key, traditional banks have a huge advantage which should not be underestimated.

 T R U S T

PSD2 is a bit of a wake-up call, but it does not need to be all doom and gloom. Traditional banks are in a better position than any of the startups, challenger banks and competing TPP service providers to make use of the opportunity, so long as they are willing to recognise the threats, view PSD2 as more than just a regulation to comply with, come face-to-face with the new digital world and start transforming themselves.

Banks have also had other directives to deal with, including the GDPR (General Data Protection Regulation) and potentially MiFID II and ring-fencing. With just 12 months to go for all the infrastructure to be put in place and tested, there is a lot of technology-related groundwork to cover, and many are just getting started.

Middleware and API Management specialist Fiorano has taken a lead, combining years of expertise integrating core banking systems all over the world with a deep understanding of the European Banking Authority’s RTS itself and PSD2 standards from both Open Banking UK and the Berlin Group. Fiorano’s PSD2 Accelerator framework incorporates all the technology a bank needs to meet PSD2 obligations rapidly, irrespective of standard chosen.

We see PSD2 as the beginning of a new world of banking, in many cases being the trigger point for real digital transformation in the banking industry. While the GDPR and data privacy are common underlying themes, it is elements like API Computing, Automation, AI and Cognitive services that are likely to drive competitive differentiation over the coming years.

For more details, get in touch with Fiorano to speak to our specialists.

Why traditional ESBs are a mismatch for Cloud-based Integration


The explosive adoption of cloud-based applications by modern enterprises has created increased demand for cloud-centric integration platforms. The cloud poses daunting architectural challenges for integration technology: decentralization, unlimited horizontal scalability, elasticity and automated recovery from failures. Traditional ESBs were never designed to solve these issues. Here are a few reasons why ESBs are not the best bet for cloud-based integration.

Performance and Scalability
Most ESBs do simplify integration, but they use a hub-and-spoke model that limits scalability since the hub becomes a communication bottleneck. To scale linearly in the cloud, one requires a more federated, distributed, peer-to-peer approach to integration with automated failure recovery. Traditional ESBs lack this approach.

ESBs evolved when XML was the dominant data-exchange format for inter-application communication and SOAP the standard protocol for exposing web services. The world has since moved on to JSON and today, mobile and enterprise APIs are exposed using REST protocols. ESBs that are natively based on XML and SOAP are less relevant in today’s cloud-centric architecture.

Security and Governance
These are key concerns for any enterprise that chooses to move to the cloud. With multiple applications in the cloud, enterprises are not always comfortable with centralized security hubs. Security and governance need to be decentralized to exploit the elasticity of the cloud. Old-guard middleware products were typically deployed within the firewall and were never architected to address the issues of decentralized security and governance.

Latency and Network connectivity
When your ESB lives in the external cloud, latency becomes a critical challenge as end-points are increasingly distributed across multiple public and private clouds. Traversing a single hub in such an environment leads to unpredictable and significant performance problems which can only be addressed with new designs built ground-up with Cloud challenges in mind.

Microservices – The issue of Granularity: Atomic or Composite?

While implementing a Microservices architecture, the “granularity” of a service has always been the subject of more than a few debates in the industry. Analysts, developers and solution architects still ponder the most apt size of a service/component (the terms “Service” and “Component” are used interchangeably in the discussion that follows). Such discussions usually end up with two principal adversaries:

  • Single-level components
  • Two-level components

Single-level, “Atomic” components:  An “Atomic” component consists of a single blob of code, together with a set of defined interfaces (inputs and outputs).  In the typical case, the component has a few (two or three) inputs and outputs.  The Service-code of each Atomic component typically runs in a separate process. Figure 1 shows an Atomic component.


Two-level, “Composite” components: A Composite service consists of a single ‘outer’ service with a set of interfaces. This outer service further contains one or more ‘inner’ components that are used in the implementation of the main, outer component. The Composite service runs in a separate process by default, while each of the inner components runs in a separate thread of the Composite component’s process. Proponents of this approach point to the fact that by componentizing the implementation of the composite component, one has greater flexibility and more opportunities to reuse implementation artifacts within Microservice implementations. Figure 2 illustrates a Composite component.

Atomic Microservices are as simple as they get. It’s just a single blob of code, in a programming language of your choice. Depending on the underlying Microservices infrastructure, you may have to implement a threading model yourself, or you may be able to leverage the threading model of the underlying Microservices framework (for instance, Sessions provide a single-threaded context in the case of a JMS-based Microservices platform). Overall, Atomic Microservices offer a relatively low level of complexity for development, being as they are a single logical module.
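A minimal sketch of such an Atomic component, assuming a queue-based framework; the class and port names here are illustrative, not part of any specific Fiorano or JMS API:

```python
from queue import Queue

class AtomicService:
    """A single-process Atomic component: one blob of code with
    defined input/output ports and no external dependencies."""

    def __init__(self):
        self.in_port = Queue()    # input port
        self.out_port = Queue()   # output port

    def handle(self, document):
        # The component's entire logic lives here; as an illustration,
        # enrich the incoming document and pass it on.
        document["processed"] = True
        return document

    def run_once(self):
        # Pull one document from the input port, process it,
        # and emit the result on the output port.
        doc = self.in_port.get()
        self.out_port.put(self.handle(doc))

svc = AtomicService()
svc.in_port.put({"order_id": 42})
svc.run_once()
out = svc.out_port.get()
print(out)  # {'order_id': 42, 'processed': True}
```

The component's only contact with the outside world is through its two ports, which is what makes it trivially deployable anywhere on the network.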

On the contrary, Composite Microservices have an almost romantic appeal for many developers, who are enchanted with the concept of “reusing” multiple smaller inner components to implement a larger single component. Unfortunately, although this approach is good in theory, it has several drawbacks in practice. For starters, the execution model is complicated, since the underlying framework has to be able to identify the separate threaded contexts for the inner components that comprise the single composite component. This carries significant performance overhead and complicates the platform framework. For reference, in the early 2000s the then-in-vogue BPEL (Business Process Execution Language) followed this approach, which proved to be very heavyweight in practice. Another issue with composite components is that there is no simple model for deployment: composite components are more difficult to auto-deploy as agents across the network than Atomic components.

Provided that the services run as separate processes, in our experience the Atomic components represent a better choice for Microservice-project implementations.

The API economy is here – is your business ready to Digitize and Monetize?

With the current trend of moving data and applications to the cloud, data needs to pass through various systems, where Application Programming Interfaces (APIs) are used to link components with each other, mobile devices and browsers. It can be an organization’s dilemma to willingly risk exposing its data to external systems over the internet, but ambitious business leaders realize that the key to successful business is a willingness to employ the best technology to meet business needs on a scale unmatched by its peers. By exposing internal enterprise data and application functionality via APIs to external applications on mobile devices, consoles and affiliate Web sites, an organization can transform its business into the extensible platform that the digital future requires and unlock new revenue streams: the API Economy.

How an organization can monetize existing applications and data using APIs depends on the product, the business model and innovation. Some companies expose core features, allowing developers and end-users to find innovative ways to incorporate the organization’s features and services into new social and mobile applications.

Twitter and Facebook rely on APIs to drive much of the usage that makes their platforms valuable by expanding engagement beyond their primary user interfaces via third-party Web, mobile and social applications.

A small retailer might integrate with Amazon’s Store API, which would allow it to sell and ship merchandise from its own Web property without developing standalone e-commerce functionality. At mass scale, this opened up an entirely new channel (and economy) for small and medium-sized merchants.

Consider another example: a large financial spread betting company that offers retail investors leveraged access to over ten thousand financial markets through its dealing platform and mobile applications. With a secure API Management solution in place, the spread betting company exposes trading systems, relevant data and indexes via APIs, enabling its clients to integrate easily on a self-serve basis in a secure, managed, metered and monitored way, and allowing third parties to perform and execute trades programmatically without having to use the company’s desktop or mobile clients. By keeping track of how its APIs are consumed by various developers, products and so on, the spread betting company can monetize existing data and applications via its APIs. This is just one of the ways to monetize; a company’s monetizing capability really depends on the business leader’s creativity and innovation.

Choosing the right API Management solution is an important aspect, not just for IT but for the business leader as well, to reap the agility benefits needed to support digital initiatives. Digital enterprises require proven technology with deep integration capabilities to build APIs on top of existing applications. Ease of API design, API request transformation features, SOAP-to-REST conversion, mobile backend-as-a-service (MBaaS), API rate limiting and metering (analytics) are some of the core capabilities one should look for when researching an API Management solution.
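Two of those core capabilities, rate limiting and metering, can be sketched in a few lines. The token-bucket limiter below illustrates the general technique, not any particular gateway's implementation:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter with a simple usage meter."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()
        self.metered_calls = 0      # metering: total calls allowed through

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            self.metered_calls += 1
            return True
        return False   # over the limit: an API gateway would return HTTP 429

bucket = TokenBucket(rate=1, capacity=3)
results = [bucket.allow() for _ in range(5)]   # a burst of 5 rapid calls
print(results)  # first 3 allowed, next 2 throttled
```

The same counter that enforces the limit also yields the per-consumer usage data that makes metering (and therefore monetization) possible.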

Digitization can revolutionize customer experience

We now live in a world where competitors and peers continually raise the bar for customer experience. Businesses are always looking to deepen engagement with their target audience, but that audience’s expectations have changed, thanks to the digital customer experience.

A common characteristic of successful businesses is the ability to adapt. Anything digital can and will be recorded, archived, analyzed, and shared. With the proliferation of digital channels, businesses are challenged to find better ways to authentically engage across channels, be it with customers, partners or even employees. To tap into new revenue growth potential, companies must adopt new customer centric practices, including offering an integrated customer experience across digital and analog channels to meet customer preferences.

Digital transformation on the customer experience level is not just a matter of the front-end and customer-facing functions; these are just part of a transformational challenge on the level of technology and processes. It is a matter for the whole organization and involves back-end transformations as well. It requires an enterprise-wide approach or, better, a roadmap towards such a holistic approach. Digital transformation requires a strategy with a fully integrated operating model and technology that can rapidly and scalably provision connections with proliferating cloud, mobile, Internet of Things (IoT) and business-partner APIs.

As businesses dive deep in providing the best customer experience, there is a greater need to integrate systems to cope with fast-moving phenomena, such as cloud and mobile, involving cloud-to-cloud and cloud-to-on premises integration. These complex connections can easily be established with a hybrid integration platform, a new way to connect cloud-based, mobile and on-premises resources. Hybrid integration platforms such as Fiorano Cloud can deal with the increasing volume, speed and variety of information that new digital channels bring, while supporting the multichannel architecture associated with mobile and other multichannel initiatives.

Here is an example of how digitization can significantly improve customer experience: Delaware North, a global leader in hospitality and food service, recently revolutionized its customer experience by deploying the Fiorano platform, a digital business backplane, to efficiently track individual customer venues and provide relevant Business Intelligence to its customers, which is also a competitive edge for Delaware North. The Business Intelligence and real-time data provided by Delaware North to its clients is critical to their marketing campaigns, allowing them to assess operational efficiency and make adjustments to improve top-line revenues. The new infrastructure of customer-centric interconnected systems enables operational excellence, optimization and efficiency, and opens up new areas of opportunity. Delaware North intends to steadily extend the digital business backplane across its global locations, readying its systems for today’s highly connected and digitized economy.

Processes, data, agility, prioritization, technology, integration, information, business and IT alignment and digitization among others are all conditions for better customer experiences.

Scaling Microservices Architectures in the Cloud

With the velocity of data growing at a rate of 50% per year, the issue of scaling a Microservices architecture is critical in today’s demanding enterprise environments. Just creating the Microservices is not sufficient. Scaling a Microservices architecture requires careful choices with respect to the underlying infrastructure, as well as the strategy for orchestrating the Microservices after deployment.

Choosing the right Infrastructure topology

While designing an application composed of multiple Microservices, the architect has multiple deployment topology options with increasing levels of sophistication as discussed below:

1. Deployment on a single machine within the enterprise or cloud

Most legacy systems, and many existing systems today, are deployed using this simplest of topologies. A single, typically fairly powerful multi-core/multi-processor server is chosen as the hardware platform, and the user relies on symmetric multiprocessing on the hardware to execute as many operations concurrently as possible, while the Microservice client applications themselves may be hosted on different machines, possibly across multiple clouds. While this approach has worked for the first generation of emerging cloud applications, it will clearly not scale to meet increasing enterprise processing demands, since the single server becomes a processing and latency bottleneck.

2. Deployment across a cluster of machines in a single enterprise or cloud environment

A natural extension of the initial approach is to deploy the underlying infrastructure that hosts the Microservices across a cluster of machines within an enterprise or private cloud.  This organization provides greater scalability, since machines can be added to the cluster to pick up additional load as required.  However, it suffers from the drawback that if the Microservice client applications are themselves distributed across multiple cloud systems, then the single cluster becomes a latency bottleneck since all communication must flow through this cluster. Even though network bandwidth is abundant and cheap, the latency of communication can lead to both scaling and performance problems as the velocity of data increases.

3. Deployment across multiple machines across the enterprise, private and public clouds

The communications latency problem of the ‘single cluster in a cloud’ approach described above is overcome by deploying the software infrastructure on multiple machines/clusters distributed across the enterprise and public/private clouds as required. Such an organization is shown in the figure below. This architecture ensures linear scalability because local Microservices in a single cloud/enterprise environment can communicate efficiently via the local infrastructure (typically a messaging engine for efficient asynchronous communication or, if the requirement is simple orchestration, then a request/reply REST processing engine). When a Microservice needs to send data to another Microservice in a different cloud, the transfer is achieved via communication between the “peers” of the underlying infrastructure platform. This leads to the most general-purpose architecture for scaling Microservices in the cloud, since it minimizes latency and exploits all of the available parallelism within the overall computation.


Cloud Diagram
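The routing rule behind this topology (deliver locally when sender and receiver share an environment, otherwise hand off to the peer in the target cloud) can be sketched as a toy model; the class and region names are hypothetical, and real peers would be full messaging engines:

```python
class Peer:
    """One infrastructure peer per cloud/enterprise environment."""

    def __init__(self, region):
        self.region = region
        self.local_services = {}   # service name -> handler in this region
        self.remote_peers = {}     # region -> Peer, the cross-cloud links

    def register(self, name, handler):
        self.local_services[name] = handler

    def send(self, service, region, message):
        if region == self.region:
            # Local delivery: efficient, stays inside this environment.
            return self.local_services[service](message)
        # Cross-cloud delivery: hand off to the peer in the target region.
        return self.remote_peers[region].send(service, region, message)

on_prem = Peer("enterprise")
public = Peer("public-cloud")
on_prem.remote_peers["public-cloud"] = public
public.register("billing", lambda msg: f"billed:{msg}")

# A service on-premise invokes a Microservice hosted in the public cloud.
reply = on_prem.send("billing", "public-cloud", "invoice-7")
print(reply)  # billed:invoice-7
```

Only cross-cloud traffic pays the inter-peer latency cost; everything else stays local, which is what makes the topology scale linearly.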


Orchestration and Choreography: Synchronous vs. Asynchronous

In addition to the infrastructure architecture, the method of Orchestration/Choreography has a significant effect on the overall performance of the Microservices application. If the Microservices are orchestrated using a classic synchronous mechanism (blocking calls, each waiting for downstream calls to return), performance problems can occur as the call-chain increases in size. A more efficient mechanism is to use an asynchronous protocol, such as JMS or any other enterprise-messaging protocol/tool (IBM MQ, MSMQ, etc.), to choreograph the Microservices. This approach ensures that there are no bottlenecks in the final application-system, since most of the communication is via non-blocking asynchronous calls, with blocking, synchronous calls limited to things like user interactions. A simple rule of thumb is to avoid blocking calls wherever possible.
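As a minimal illustration of choreography over non-blocking queues (using Python's in-process queues in place of a JMS broker), each stage below consumes a message as soon as it is available instead of holding a synchronous call chain open:

```python
import threading
from queue import Queue

# Two stages choreographed via queues: no stage blocks waiting for a
# downstream call to return; each simply consumes and emits messages.
orders, enriched, done = Queue(), Queue(), Queue()
STOP = object()   # sentinel to shut the pipeline down

def enrich():
    while (msg := orders.get()) is not STOP:
        enriched.put({**msg, "priority": "high"})
    enriched.put(STOP)   # propagate shutdown downstream

def ship():
    while (msg := enriched.get()) is not STOP:
        done.put({**msg, "shipped": True})

threads = [threading.Thread(target=enrich), threading.Thread(target=ship)]
for t in threads:
    t.start()

orders.put({"id": 1})
orders.put(STOP)
for t in threads:
    t.join()
result = done.get()
print(result)  # {'id': 1, 'priority': 'high', 'shipped': True}
```

With a real broker the queues would be durable destinations, but the shape of the flow, and the absence of blocking call chains, is the same.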


Microservices Architecture: Scalability, DevOps, Agile development

With the emergence of the digital economy, there’s a big buzz around Cloud, Big Data and the API economy. In all the noise, we sometimes forget that to deploy systems across these domains, one still has to create services and compose multiple services into a working distributed application. Microservices have emerged as the latest development trend, driven by these increasingly demanding requirements and by the perceived failure of enterprise-wide SOA (“Service Oriented Architecture”) projects.

SOA has been all the hype since the early 2000s but disappointed for many reasons, in spite of many successful projects. SOA was (and still is in many quarters) perceived as being too complex: developers spent months just deciding on the number and nature of interfaces of a given service. Often, services were too big and had hundreds of interfaces, making them difficult to use; at the other extreme, some developers designed services that had just a few lines of code, making them too small. For the most part, it was difficult for users to choose the right granularity for a service. Microservices solve these and several other problems with classic SOA.

Before getting into details, it is important to define the modern meaning of the term “Application”. Applications are now “Collections of Components/Services, strung together via connections in the form of asynchronous message-flows and/or synchronous request/reply calls”. The participating Services may be distributed across different machines and different clouds (on-premise, hybrid and public).

The Emergence of Microservices
Microservices emerged from the need to ‘make SOA work’, to make SOA productive, fast and efficient and from a need to deploy and modify systems quickly. In short, Microservices support agile development. Key concepts that have emerged over the past ten years are:

(a) Coarse-grained, process-centric Components: Each Microservice typically runs in a separate process as distinct from a thread within a larger process. This ensures the component is neither too small nor too large. There isn’t any hard and fast rule here, except to ensure that Microservices are not ‘thread-level bits of code’.

(b) Data-driven interfaces with a few inputs and outputs. In practice, most productive Microservices typically have only a few inputs and outputs (often less than 4 each). The complexity of the SOA world in specifying tens or even hundreds of interfaces has disappeared. Importantly, inputs and outputs are also ‘coarse grained’ – typically XML or JSON documents, or data in any other format decided by the developer. Communication between Microservices is document-centric – an important feature of Microservices architecture. For those experienced enough to appreciate the point, one can think of a Microservice as a “Unix pipe” with inputs and outputs.

(c) No external dependencies: The implementation of each Microservice contains all the necessary dependencies, such as libraries, database-access facilities, operating-system specific files, etc. This ensures that each Microservice can be deployed anywhere over a network without depending on external resource libraries being linked in. The sole communication of a Microservice with the external world is via its input and output ‘ports’.

(d) Focused functionality: A Microservice is typically organized around a single, focused capability: e.g. access/update a Database, cache input data, update a Bank account, notify patients, etc. The lesson from “complex SOA” is that Services should not become too large, hence the term “Microservices”.

(e) Independent interfaces: Each Microservice typically has a GUI associated with it, for end-user interaction and configuration. There is no relationship between the GUIs of different Microservices – they can all be arbitrarily different. Each Microservice is essentially a ‘product’ in its own right, in that it has defined inputs and outputs and defined, non-trivial functionality.
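The “Unix pipe” analogy in (b) and the self-contained packaging in (c) can be made concrete with a short sketch. The code below is illustrative only, not a Fiorano API: a document-centric service with one input port and one output port, reading one JSON document per line and writing one per line, with no dependencies beyond the standard library. The `handle` function and the sample documents are invented for the example.

```python
import io
import json

def handle(document: dict) -> dict:
    # Focused functionality (illustrative): stamp the document as processed.
    document["status"] = "processed"
    return document

def run(inp, out) -> None:
    # The 'Unix pipe' loop: one JSON document per line in, one per line out.
    # The service's only contact with the outside world is these two ports.
    for line in inp:
        line = line.strip()
        if line:
            out.write(json.dumps(handle(json.loads(line))) + "\n")

# Wiring the ports to a real pipe would simply be: run(sys.stdin, sys.stdout).
# Here we drive the service with in-memory streams instead:
source = io.StringIO('{"id": 1}\n{"id": 2}\n')
sink = io.StringIO()
run(source, sink)
```

Because the service touches nothing but its two ports, it can be tested, replaced, or redeployed without reference to any other component – which is exactly the property the (a)-(e) list is driving at.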

Benefits of Microservices and Microservices integration
In a Microservices architecture, Applications are composed by connecting instances of Microservices via message-queues (choreography) or via request/reply REST calls (orchestration). Compared to classical monolithic application design, this approach offers many benefits for development, extensibility, scalability and integration.
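The choreography style mentioned above can be sketched with two in-process services joined only by queues. This is a minimal illustration, not production wiring: the service names (`validator`, `notifier`), the document fields, and the `None` shutdown sentinel are all assumptions made for the example; a real deployment would use a message broker across machines.

```python
import queue
import threading

def validator(inbox: queue.Queue, outbox: queue.Queue) -> None:
    # First service in the flow: checks each document, forwards it downstream.
    while True:
        doc = inbox.get()
        if doc is None:            # sentinel: propagate shutdown and exit
            outbox.put(None)
            return
        doc["valid"] = "patient_id" in doc
        outbox.put(doc)

def notifier(inbox: queue.Queue, results: list) -> None:
    # Second service: acts on valid documents. It knows nothing about
    # the validator -- only about its own input queue (choreography).
    while True:
        doc = inbox.get()
        if doc is None:
            return
        if doc["valid"]:
            results.append(f"notify {doc['patient_id']}")

q1, q2, results = queue.Queue(), queue.Queue(), []
threads = [
    threading.Thread(target=validator, args=(q1, q2)),
    threading.Thread(target=notifier, args=(q2, results)),
]
for t in threads:
    t.start()
q1.put({"patient_id": "p-42"})
q1.put(None)
for t in threads:
    t.join()
```

Neither service calls the other; the application is the wiring of queues between them, which is what makes individual services replaceable.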

—> Easy application scalability: Since an application is composed of multiple Microservices which share no external dependencies, scaling a particular Microservice instance in the flow is greatly simplified: if a particular Microservice in a flow becomes a bottleneck due to slow execution, that Microservice can be run on more powerful hardware for increased performance, or one can run multiple instances of the Microservice on different machines to process data elements in parallel.

Contrast this easy Microservices scalability with monolithic systems, where scaling is nontrivial: if a module has a slow internal piece of code, there is no way to make that individual piece of code run faster. To scale a monolithic system, one has to run a copy of the complete system on a different machine, and even doing that does not resolve the bottleneck of a slow internal step within the monolith.

It should be noted that integration platforms based on synchronous, process-centric technology such as BPEL, BPMN, and equivalents also suffer from this scalability problem. For instance, if a single step within a BPEL process is slow, there’s no way to scale that independent step in isolation. All steps within a process-flow need to be executed on the same machine by design, resulting in significant hardware and software costs to run replicated systems for limited scalability.
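The “scale just the slow step” idea above reduces to the classic competing-consumers pattern: replicate only the bottleneck service by pointing several instances at the same input queue. The sketch below is illustrative (in-process threads and a simulated slow step stand in for separate machines and a real broker):

```python
import queue
import threading
import time

def slow_service(inbox: queue.Queue, outbox: queue.Queue) -> None:
    # The bottleneck Microservice: each document takes measurable time.
    while True:
        item = inbox.get()
        if item is None:           # sentinel: this instance shuts down
            return
        time.sleep(0.01)           # simulated slow processing step
        outbox.put(item * 2)

inbox, outbox = queue.Queue(), queue.Queue()

# Scale only the slow step: four competing instances of the same service,
# all consuming from one shared input queue. Nothing else in the flow changes.
workers = [threading.Thread(target=slow_service, args=(inbox, outbox))
           for _ in range(4)]
for w in workers:
    w.start()
for n in range(8):
    inbox.put(n)
for _ in workers:
    inbox.put(None)                # one shutdown sentinel per instance
for w in workers:
    w.join()

results = []
while not outbox.empty():
    results.append(outbox.get())
```

The eight documents are processed roughly four at a time; in a monolith or a single BPEL process, by contrast, there is no seam at which to insert the extra instances.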

—> Simplified Application Updates: Since any given Microservice in the Application can be upgraded or replaced (even at runtime, provided the runtime system provides the required support) without affecting the other ‘parts’ (i.e., Microservices) of the application, one can update a Microservices-based application in parts. This greatly aids agile development and continuous delivery.

—> Multi-Language development: Since each Microservice is an independent ‘product’ with no external implementation dependencies, it can be developed in any programming language; a single Microservices Application may thus include Microservices developed in Java, C, C++, C#, Python and other languages supported by the Microservices platform.

—> Easier Governance: Since each Microservice runs in its own process and shares no dependencies with others, it can be monitored, metered and managed individually. One can thus examine the status, inputs, outputs and data flows through any Microservice instance in an application without affecting either the other Microservices or the running system.

—> Decentralized Data Management: In contrast to the classical three-tier development model, which mandates a central data repository and thereby restricts the service developer, each Microservice can store data as it pleases: one Microservice can use SQL, a second MongoDB, a third a Mainframe database, and so on. Just as there are no implementation dependencies between Microservices, there are no “database use” dependencies or even guidelines. The developer is thus free to make choices that fit the design of the particular Microservice at hand.

—> Automated Deployment and Infrastructure Automation: Implementation independence allows assemblies of Microservices to be ‘moved’ from one deployment environment to another (for example, Development —> QA —> Staging —> Production) based solely on profile settings. This ‘one click’ move between environments greatly aids DevOps and agile development.

Each of these topics can and likely will be the subject of a separate blog post. Until then, enjoy your development!

API Management for Everyone

API Management

Today people don’t like talking about ESBs anymore. Instead, the buzz is around cloud, big data, the application programming interface (API) economy, and digital transformation. Application integration is still a core enterprise IT competency, of course, but much of what we’re integrating and how we’re integrating it has shifted from the back office to the omnichannel digital world.

And here’s Fiorano, with one foot still in the traditional ESB space – especially in the developing world, where even basic integration is a challenge – and the other foot squarely in the modern digital world. Now they’re launching an API management tool into a reasonably mature market.

At first glance, this move might seem rather foolish, as this market is already crowded, with each of the aforementioned behemoths participating, as well as CA, Axway, Intel, SOA Software, Apigee, WSO2, MuleSoft, and several others, who have all been hammering out the details for a few years now.

But there’s method to Fiorano’s madness. The critical architectural decision that enabled them to compete a dozen years ago has turned out to be extraordinarily prescient, as it separates their approach to API management from the pack as both more cloud-friendly and more user-friendly than the rest.

Peer-to-Peer with Queues

The secret to Fiorano’s product successes is its unique queue-based, peer-to-peer architecture. Queuing technology, of course, has been with us for decades, but traditionally provided reliability only to point-to-point integrations.

The rise of ESBs in the 2000s saw many vendors building centralized queue-based buses that basically followed a star topology. To scale such architectures and avoid single points of failure required various complex (read: expensive and proprietary) machinations that limited the scalability of the approach.

By building a peer-to-peer architecture, in contrast, Fiorano never relied on a single centralized server to run their bus. Instead, the platform would spawn peers as needed that knew how to interact with each other directly, thus avoiding the central chokepoint inherent to competitors’ architectures. The queues connecting the peers to each other as well as to other endpoints provided the reliability and fault tolerance to the architecture.
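The structural difference described above can be reduced to a toy sketch: in a peer-to-peer layout, each peer owns its inbound queue and peers deliver to one another directly, so no message ever transits a central hub. The `Peer` class and names below are invented for illustration and do not reflect Fiorano's actual implementation.

```python
import queue

class Peer:
    # Illustrative peer: its own inbound queue (the reliability layer)
    # plus direct links to other peers -- no central server in the path.
    def __init__(self, name: str):
        self.name = name
        self.inbox = queue.Queue()
        self.links = {}

    def connect(self, other: "Peer") -> None:
        # Peers learn how to reach each other directly.
        self.links[other.name] = other

    def send(self, to: str, message: str) -> None:
        # Direct delivery into the target peer's queue: adding peers adds
        # capacity instead of loading a shared hub (no star topology).
        self.links[to].inbox.put((self.name, message))

a, b = Peer("gateway-1"), Peer("gateway-2")
a.connect(b)
a.send("gateway-2", "policy-update")
received = b.inbox.get()
```

In a star topology, every `send` would instead enqueue onto one hub server, which is exactly the chokepoint and single point of failure the peer-to-peer design avoids.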

The result is an approach that is inherently cloud-friendly – even though the minds at Fiorano built it before the cloud hit the marketplace. Each peer can go on premise or in a cloud instance, and thus scale elastically with the cloud.

Today, as the cloud becomes a supporting player in the digital world and user preferences drive an explosion of technology touchpoints, Fiorano has managed to put in place the underlying technology that now supports the API management needs of modern digital environments.

The API Management Story

I also covered the API Management market starting in 2002, when vendors called it the Web Services Management market. Then it transformed into SOA Management, then Runtime SOA Governance, and now API Management (although Gartner awkwardly uses the term Application Services Governance).

After all, Web Services are a type of API, and managing them is an aspect of governance. Today, we’d rather refer to services as APIs in any case, as our endpoints are more likely to be RESTful, HTTP-based interfaces than SOAP-based Web Services.

This rather convoluted evolutionary path for the API Management market explains why there are so many players – and why many of them are the old guard incumbents. But it also indicates that many of the products in the market are likely to have older technology under the covers, perhaps better suited for first-generation SOA technologies than the modern cloud/digital world.

Fiorano, however, has avoided this trap because of their cloud/digital friendly architecture, as the diagram below illustrates. At the heart of the Fiorano API Management Architecture are both the Gateway Servers, which handle the run time management tasks, as well as the Management Servers, tasked with supporting policy creation, publication, and deployment.

Both types of servers take advantage of Fiorano’s peer-to-peer architecture, allowing cloud-based elasticity and fault tolerance, the flexibility to deploy on-premise or in the cloud, as well as unlimited linear scalability.