The Interoperability Panacea
It is fascinating to hear terms such as "interoperability advocate," or to hear people profess such passion for something widely believed to be the solution. As with many things in the health and social care space, the problem is not resolved by technology itself, but by the factors around data sharing, coordination, and accessibility. And while demand for social data has increased, predominant use cases remain elusive.
Perhaps the chicken did come before the egg?
A chasm exists between community organizations and health and/or community-based exchanges. However, it is mistakenly framed as an issue of interoperability or policy, when in fact many stakeholders are motivated to share data and improve care coordination; they just don't know where to start.
Organizations in the social care space are constantly adapting to new service requirements, contract obligations, and the constraints of vendor systems. Enterprise resource planning (ERP) involves aligning systems to both operational needs and organizational structure, but in a continuously changing environment, such as an organization juggling multiple grants and fee-for-service models, alignment can seem unattainable. Furthermore, many of these organizations may need to communicate with an external network, which can further influence how data is collected and stored to accommodate the exchange, while at the same time requiring more effort from the organization to document and structure that data.
By comparison, network resource planning (NRP) efforts are undertaken by data broker organizations that seek to be efficient stewards of data: they avoid paying costly storage fees for large, low-demand data sets and develop queries that are not computationally expensive.
It is therefore inefficient and expensive for large networks to accommodate social data whose smallest atomic unit is a text narrative, or local assessments and forms with no binding strength to broader standards. The types of data many organizations need are not scalable, and designing and testing solutions for them is costly; the economics of running large networks simply do not support incremental social use case development.
So should a set of community organization use cases drive network adoption or should a network direct a presentation model that community organizations can coalesce around?
So far, neither of these models has worked well. In fact, social referral vendors are likely the strongest bellwethers of current social data needs, and yet organizations face a perpetual dilemma: they cannot refer to an organization with enough detail, forcing follow-up through other methods such as phone or email, and/or the receiving organization has no representatives in the referral system to receive the referral. Of note, there is a difference between organic user adoption and mandated use of such technology, which could also explain high-prevalence, low-utilization systems.
In mathematics, a system of linear equations in several variables is typically solved by substitution, elimination, or a combination of the two. In this construct, you cannot solve the larger problem without isolating subsets and solving for those first. In many ways, most networks are attempting just one of these approaches: either substitution, giving community-based organizations (CBOs) a common use case (admission/discharge/transfer, or ADT, alerts), or elimination, ruling use cases out completely (substance use disorder, or SUD, data). Those familiar with computer science concepts can attest that, while you might eventually find the right answer through trial and error, it is certainly not timely or efficient, and, more disheartening, it will lead to ever-growing frustration in care coordination roles, driving more turnover and technology fatigue.
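For an elementary illustration of the two approaches (my own worked example, not drawn from the networks discussed here), consider the system

\begin{aligned}
x + y &= 10 \\
2x - y &= 2
\end{aligned}

By substitution, the first equation gives $y = 10 - x$; substituting into the second yields $2x - (10 - x) = 2$, so $3x = 12$, $x = 4$, and $y = 6$. By elimination, adding the two equations cancels $y$ outright, arriving at the same $3x = 12$. Either way, the larger problem only falls once a subset (a single variable) has been isolated and solved.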
Thus, the fundamental problem is that networks supporting large clinical data sets will always attempt to fit new use cases into clinical frameworks, while community-based data networks will try to scale successful models to other areas where the social service environments and populations are vastly different. Meanwhile, federal collaboratives study social needs, attempt to "categorize" them into codes, and define what should be collected. But this really benefits payors and others with a downstream need to summarize volumes and types of services; it does nothing for a case manager who is acutely aware of the services they provide. The real problem to study is what is being collected by an organization within a community, but that issue runs back into the enterprise resource problem, as many multi-service nonprofit organizations are perpetually trying to align their infrastructure and standardize their data.
So, here we are, discussing a quagmire that has been carelessly touted as an interoperability problem when the real issue is that the way networks operate prohibits any incubation of use cases between subsets of organizations within a network. There is always a centralized hub that must enable some ability to direct queries and reference registries.
The adoption velocity of a new process depends on its current utility and future potential. As a new process is discovered to be more efficient than the current state, it takes on momentum. However, for it to be truly adopted, and the old process to be abandoned, it must solve some enduring problem and not just be a workaround gimmick.
Let's consider the business model of Southwest Airlines. If you are not familiar with the strategy, you might assume they simply competed with other major airlines; you would be wrong. Southwest employed a different strategy: they built a point-to-point, low-fare, high-frequency network connecting smaller cities directly. So really, they competed with both airlines and ground transit for customers traveling between smaller US cities who would otherwise face a long bus ride or a multi-stop flight through a major hub.
In many ways, the point-to-point versus hub-and-spoke model applies to the unique relationships between community organizations in different areas and the constraints of their information systems. Should they send through a central system? If so, does it take more time to navigate the system, and what information is lost? When vendors attempt to categorize and structure data entry, they constrain user data too much. This causes disintermediation, connecting outside the system, through other convenient methods like phone, email, or shared document folders. Therefore, by attempting to make community organizations conform to common structures and discrete data elements, you are inducing a model that will never be as efficient as an email or a phone call.
So instead of attempting to rapidly shoehorn social data into clinical structures or referral systems, a better question would be: how can you develop a process that is more efficient than email or Google Drive? One may contend that this is hard because those tools are so convenient and easy to use, but that is exactly the point. If a provided solution is less efficient, it will never be adopted, organically at least, without some policy intervention.
Perhaps the biggest issues that a solution would need to address are (a rough data-model sketch follows the list):
- No cost barrier to entry
- Secure access and encryption
- Local organizational registry (generally organizations within the same county boundaries)
- Ability to tag documents by type and organization
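To make these requirements concrete, here is a minimal sketch of the registry and tagging pieces in Python. Every name here is a hypothetical illustration of mine, not any vendor's actual schema; secure access and encryption would sit beneath this layer (standard transport and at-rest encryption), which is why they do not appear.

from dataclasses import dataclass

# Hypothetical data model; illustrative names only, not a vendor schema.

@dataclass(frozen=True)
class Organization:
    org_id: str
    name: str
    county: str  # registries are scoped to local (county) boundaries

@dataclass
class Document:
    doc_id: str
    filename: str
    doc_type: str   # tag by type, e.g. "referral" or "intake form"
    owner_org: str  # tag by the organization that added the file

class LocalRegistry:
    """Directory of organizations within one county."""

    def __init__(self, county: str):
        self.county = county
        self._orgs: dict[str, Organization] = {}

    def register(self, org: Organization) -> None:
        # No cost barrier to entry: registration is open to any local organization.
        if org.county != self.county:
            raise ValueError("registry is limited to a single county")
        self._orgs[org.org_id] = org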
There are elements of this that mirror other technologies on the market, with the likely exception that those are not free. From a business standpoint, many will argue that this is an irrational stance, but email and data storage through many cloud-based providers are free, and the business model allows such services to initially engage clients. The more time a user spends in a system, the more they discover features and enhancements they are willing to pay for. This is not a new model; it is a very classic description of how start-ups gain a foothold in a market. Still, many vendors are reluctant to restart this process and believe that scale alone can influence adoption. Zoom and Slack, competing against Google and Microsoft, would beg to differ.
So—what is the solution?
A fair question; perhaps I should get to the point. Vendors should have a space in their environment that is essentially similar to a shared file drive for referring organizations. Its goal should simply be to allow unimpeded document exchange between organizations in the community. The platform is not responsible for identity management, consent, or structuring data.

I'll pause for a second for those with nothing to add except shouting "privacy." This process attempts to mirror what is already happening in communities: organizations are responsible for gathering consent and submitting referrals and documents. Adding a technology system does not change this process; it never has. It just documents the action that has occurred.

Back to the system design: it would be the equivalent of assigning root folders to organizations, with a few permission options for each: add files, view files, and view/modify. Yes, very low-tech indeed, but keep in mind that it has to be simple, free, and controlled entirely by the organizations. This point-to-point data exchange capability is really the critical element. It enables a vendor to observe the transaction frequency between different organizations and does not impose a barrier whereby two organizations must request permission through a third party before they can coordinate services. Furthermore, as these transactions are evaluated, high-frequency, common data types can be pulled up into a trial-use space that can be better structured and developed for broader exchange use, as depicted in Figure 1.
Figure 1. Evolution of document sharing to a structured document exchange
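As a companion to Figure 1, here is a minimal sketch of the root-folder permission model and the vendor's observational role, again with hypothetical names of my own; it illustrates the idea, not a vendor implementation.

from collections import Counter
from enum import Enum

class Permission(Enum):
    ADD = "add files"
    VIEW = "view files"
    VIEW_MODIFY = "view/modify"

class RootFolder:
    """Each organization controls its own root folder and grants
    permissions directly to peer organizations: point-to-point, with
    no third-party approval step."""

    def __init__(self, owner_org: str):
        self.owner_org = owner_org
        self.grants: dict[str, set[Permission]] = {}

    def grant(self, peer_org: str, *perms: Permission) -> None:
        self.grants.setdefault(peer_org, set()).update(perms)

# The vendor's only observational role: counting exchanges between
# organization pairs so that high-frequency document types can later be
# promoted into the structured, trial-use space.
exchange_counts: Counter = Counter()

def record_exchange(sender_org: str, receiver_org: str, doc_type: str) -> None:
    exchange_counts[(sender_org, receiver_org, doc_type)] += 1

For example, a food bank could grant a housing agency ADD and VIEW on its folder; if months of record_exchange calls show that referral documents dominate that pair, the flow becomes a candidate for the trial-use space.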
The value of this open environment may seem counterintuitive: a fancy SFTP drive hosted by a network vendor. But I consider the current state vastly more preposterous: convening workgroups of different community stakeholders at different times, with varying infrastructures, and expecting a precise consensus to form around a broad use case. Clinical standards such as ADT or the Continuity of Care Document (CCD) were not created overnight by the consensus of hospitals and clinics; they were the product of market maturity, emerging once it became apparent that certain data elements or events would be broadly useful. Without a way to study transactions across organizations in a non-contrived way, vendors will continue to guess, dictate, and impose solutions on unique communities, solutions that will not be adopted and will be further displaced by the phone and email process. In many ways, it is ironic: for an industry heavily influenced by documenting observations, the meaningful relationships between those helping the most in need remain undocumented.
