Connecting the Dots … The growing business of Virtual Health Care services

Consumers of all ages are using technology in all aspects of their lives, and health care is following the same track at a blistering pace. The current rage among millennials ranges from Fitbit watches and other wearables that track every part of your day (activity, exercise, food, weight and sleep) to help you stay fit, stay motivated, and see how small steps make a big impact, to applications that let you receive medication alerts or reminders and measure, record, and transmit data about medications or treatments.

Looking at the commercial market, a recent study found that the average estimated cost of a telehealth visit is $40 to $50, compared with an average estimated cost of $136 to $176 for in-person acute care. The initial telehealth visit resolves patient issues an average of 83 percent of the time (Yamamoto, 2014).

Regulatory and insurance industry shifts are also increasingly accommodating virtual visits as part of overall well-being and care programs, so it makes increasing business sense to pursue the digital side of health care as part of an overall corporate strategy. The future is in connected health care: a system that connects the patient, healthcare provider, general practitioner, laboratory tests, radiology and other imaging results, closing the loop on the latencies that naturally occur in data exchange and that delay diagnosis, prescription and cure, ultimately degrading the quality of health care and even well-being.

The technologies for accessing and tracking health care information have also proliferated, from websites to smartphone and tablet applications, digital medical assistants, and personal medical devices and fitness monitors, with more and more people preferring the on-the-go option of monitoring their fitness and health.

With big data available from multiple sources on treatments, symptoms and treatment outcomes, combining it all in an AI (Artificial Intelligence) environment seems an almost inevitable path. But looking beyond the limitations and drawbacks of such a route, the question becomes whether AI-driven healthcare alternatives and treatment for far-flung populations, where doctors, facilities or medication are unavailable or political unrest intervenes, are a case of something being better than nothing.

Connecting the data in an intelligent way, giving health care providers visibility, and building algorithms that can consider every option to produce an outcome that improves patient care are some of the main challenges. Lack of interoperability between systems, data locked in legacy applications, and failure to log and share relevant treatment outcomes are some of the areas that will need to be addressed.

Many health care organizations have already implemented a virtual health strategy. To be effective, however, organizations need to proactively put infrastructure and strategies in place and ensure that appropriate technologies and platforms are integrated into the care delivery model. Data availability and integration need to be as close to real time as possible, allowing caregivers to respond promptly and adjust treatment plans in ways that positively impact patient health.

All over the world, healthcare organizations have used Fiorano technology to unlock data and connect their legacy systems as they transform into Digital, Cloud and API-first businesses. The Fiorano Healthcare Solution supports interoperability and allows secure sharing of healthcare data in the high-volume environments of next-generation healthcare markets. It has been implemented at Babylon, whose bold mission is to democratize healthcare and put an accessible, affordable health service in the hands of every person on earth. To learn more about how you can connect your systems quickly and efficiently, contact us at Fiorano.

Why traditional ESBs are a mismatch for Cloud-based Integration

The explosive adoption of cloud-based applications by modern enterprises has created increased demand for cloud-centric integration platforms. The cloud poses daunting architectural challenges for integration technology: decentralization, unlimited horizontal scalability, elasticity and automated recovery from failures. Traditional ESBs were never designed to solve these issues. Here are a few reasons why ESBs are not the best bet for cloud-based integration:

Performance and Scalability
Most ESBs do simplify integration, but they use a hub-and-spoke model that limits scalability: the hub becomes a communication bottleneck. To scale linearly in the cloud, one requires a more federated, distributed, peer-to-peer approach to integration with automated failure recovery. Traditional ESBs lack this approach.

JSON and REST
ESBs evolved when XML was the dominant data-exchange format for inter-application communication and SOAP the standard protocol for exposing web services. The world has since moved on to JSON, and today mobile and enterprise APIs are exposed via REST. ESBs that are natively based on XML and SOAP are less relevant in today’s cloud-centric architecture.
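To make the contrast concrete, here is a minimal sketch of a JSON/REST call using only the JDK’s built-in HTTP client; the endpoint URI and payload are hypothetical, and the comment at the end shows the SOAP/XML envelope the older style would require for the same request.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: one JSON-over-REST call using only the JDK's built-in HTTP client.
public class RestJsonSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // A self-describing REST request: the verb, the URI and an Accept
        // header carry all the protocol-level intent.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/orders/42")) // hypothetical endpoint
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
        // e.g. {"id":42,"status":"shipped"}

        // The SOAP/XML equivalent wraps the same query in an envelope, a body
        // and namespace-qualified elements, all of which the middleware must parse:
        //   <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
        //     <soap:Body><GetOrder><id>42</id></GetOrder></soap:Body>
        //   </soap:Envelope>
    }
}
```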

Security and Governance
These are key concerns for any enterprise that chooses to move to the cloud. With multiple applications in the cloud, enterprises are not always comfortable with centralized security hubs; security and governance need to be decentralized to exploit the elasticity of the cloud. Old-guard middleware products were typically deployed within the firewall and were never architected to address decentralized security and governance.

Latency and Network connectivity
When your ESB lives in the external cloud, latency becomes a critical challenge, as end-points are increasingly distributed across multiple public and private clouds. Traversing a single hub in such an environment leads to unpredictable and significant performance problems, which can only be addressed with new designs built from the ground up with cloud challenges in mind.

Microservices – The issue of Granularity: Atomic or Composite?

While implementing a Microservices architecture, the “granularity” of a service has always been the subject of more than a few debates in the industry. Analysts, developers and solution architects still ponder the most apt size of a service/component (the terms “Service” and “Component” are used interchangeably in the discussion that follows). Such discussions usually end up with two principal adversaries:

  • Single-level components
  • Two-level components

Single-level, “Atomic” components:  An “Atomic” component consists of a single blob of code, together with a set of defined interfaces (inputs and outputs).  In the typical case, the component has a few (two or three) inputs and outputs.  The Service-code of each Atomic component typically runs in a separate process. Figure 1 shows an Atomic component.

Figure 1: An Atomic component
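
As a rough sketch of this shape, the Java interface below models an Atomic component as a single unit of code behind fixed input and output ports. The interface and class names are illustrative, not drawn from any particular Microservices framework.

```java
import java.util.function.Consumer;

// Sketch: an Atomic component is one unit of code behind fixed ports.
interface AtomicComponent {
    void onInput(String port, String message);      // named input ports, e.g. "IN"
    void setOutputHandler(Consumer<String> output); // single output port "OUT"
}

// The entire service logic lives in this single class; the hosting
// framework would run it in a process of its own.
class UppercaseService implements AtomicComponent {
    private Consumer<String> output = s -> { };

    @Override
    public void onInput(String port, String message) {
        output.accept(message.toUpperCase()); // the whole "blob of code"
    }

    @Override
    public void setOutputHandler(Consumer<String> output) {
        this.output = output;
    }
}
```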

Two-level, “Composite” components: A Composite service consists of a single ‘outer’ service with a set of interfaces. This outer service contains one or more ‘inner’ components that are used in the implementation of the main, outer component. The Composite service runs in a separate process by default, while each of the inner components runs in a separate thread of the Composite component’s process. Proponents of this approach point out that by componentizing the implementation of the composite component, one gains greater flexibility and more opportunities to reuse implementation artifacts within Microservice implementations. Figure 2 illustrates a Composite component.

Figure 2: A Composite component
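
A minimal sketch of the same idea in code: one outer service in its own process whose implementation delegates to two inner components, each confined to its own single-threaded executor. The class name and the trim/uppercase stages are purely illustrative.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: a Composite service in one process, with each inner component
// confined to its own thread.
class CompositeService {
    // One single-threaded executor per inner component.
    private final ExecutorService innerA = Executors.newSingleThreadExecutor();
    private final ExecutorService innerB = Executors.newSingleThreadExecutor();

    // The outer interface: callers see a single service with one input.
    public Future<String> process(String input) {
        return CompletableFuture
                .supplyAsync(() -> input.trim(), innerA)      // inner component A
                .thenApplyAsync(String::toUpperCase, innerB); // inner component B
    }

    public void shutdown() {
        innerA.shutdown();
        innerB.shutdown();
    }
}
```

Even in this toy form, the framework must track one threaded context per inner component, which is exactly the overhead discussed below.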

Atomic Microservices are as simple as they get: a single blob of code, in a programming language of your choice. Depending on the underlying Microservices infrastructure, you may have to implement a threading model yourself, or you may be able to leverage the threading model of the underlying Microservices framework (for instance, Sessions provide a single-threaded context in the case of a JMS-based Microservices platform). Overall, Atomic Microservices offer a relatively low level of complexity for development, being in effect a single logical module.
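
To illustrate the JMS case mentioned above, the sketch below shows an Atomic service leaning on the single-threaded delivery context that a JMS Session provides; the queue name and the ConnectionFactory supplied by the hosting platform are assumptions.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;

public class AtomicJmsService {
    // The ConnectionFactory is assumed to be supplied by the hosting platform.
    public static void start(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();

        // Per the JMS spec, a Session is a single-threaded context: all
        // messages for this session are delivered serially on one thread,
        // so the service code needs no synchronization of its own.
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("SERVICE.IN")); // hypothetical queue

        consumer.setMessageListener(message -> {
            try {
                if (message instanceof TextMessage) {
                    System.out.println(((TextMessage) message).getText());
                }
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });

        connection.start(); // begin asynchronous, serialized delivery
    }
}
```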

In contrast, Composite Microservices have an almost romantic appeal for many developers, who are enchanted with the concept of “reusing” multiple smaller inner components to implement a larger single component. Unfortunately, although this approach is good in theory, it has several drawbacks in practice. For starters, the execution model is complicated, since the underlying framework has to identify the separate threaded contexts for the inner components that comprise the single composite component. This carries significant performance overhead and complicates the platform framework. For reference, in the early 2000s, BPEL (Business Process Execution Language), then in vogue, followed this approach, which proved very heavyweight in practice. Another issue with composite components is that there is no simple model for deployment: unlike Atomic components, composite components are difficult to auto-deploy as agents across the network.

Provided that services run as separate processes, Atomic components are, in our experience, the better choice for Microservice project implementations.