In some of my recent posts, I promised our readers (twice) that I would write an article on defensive patent mechanisms in cryptocurrency-related markets… well, I am sorry to inform you that you will have to wait a bit longer…
How many times have you read about data silos and vendor lock-in? How many times have you read about data silos being a stumbling block for IoT-related markets? Well, it appears that companies are saying “goodbye” to data silos and “hello” to data sharing. Sharing (is caring?) is the new hype in data-related markets!
But wait… what does this mean for competition (and innovation) dynamics?
The competitive trend in data-related markets points towards ‘open’ (i.e. controlled) data interoperability instruments. These may take the form of individually open-sourced APIs; API-focused closed partnerships and ‘open’ consortia; or, more broadly, data marketplace/infrastructure as a service. Interoperability instruments act as a core factor of attraction enabling data trading. One could tag interoperability as a market enabler (sometimes), but I think it is even sharper to tag it as a cross-market booster.
Some companies have understood that, despite being considered highly valuable, data means (almost) nothing without a software infrastructure which, at the end of the day, is the technical and monetization enabler for the companies holding that data.
Therefore, companies are competing to set market-defined data-related standards (from protocols to entire platforms) in order to benefit from first-mover advantages (nothing new). But for that to happen, they have to sell ‘openness’ to the market. However, openness sometimes means conditioned access to the technology (whether paid or not). And depending on the project, it does not always mean access to the innovation/standardization process. Vicente Zafrilla and I published a paper some months ago on the interpretation of ‘openness’ in standardization settings (working version on SSRN).
Some companies target niche software layers, such as offering an API enabling real-time financial data analysis. Others, with their own cloud capabilities, target vertical integration by offering a multi-sided platform where companies can store, offer/monetize, and purchase data from others. Google is pursuing both moves in the fintech sector, for instance.
From the angle of data aggregation/analysis services, Google integrated Band Protocol’s ‘oracle’ services (i.e. APIs monitoring real-time data), which focus on providing crypto-related data (e.g. price fluctuations). Google leverages this external data in its cloud so that its machine learning systems can provide real-time data analysis.
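To make the ‘oracle’ idea concrete: conceptually, an oracle pulls quotes from several external feeds and publishes one aggregated value that downstream systems (such as a cloud ML pipeline) can consume. The sketch below is purely illustrative — the feed names, prices, and aggregation rule are my own assumptions, not Band Protocol’s actual implementation.

```python
import statistics

def fetch_quotes() -> dict:
    """Stand-in for real HTTP calls to external exchange APIs.
    Values are invented for illustration."""
    return {"exchange_a": 64_980.0, "exchange_b": 65_020.0, "exchange_c": 65_000.0}

def oracle_price(quotes: dict) -> float:
    """Aggregate quotes by median, so a single outlier feed
    cannot skew the published value."""
    return statistics.median(quotes.values())

# Downstream consumers read one trusted, aggregated number:
price = oracle_price(fetch_quotes())
```

The median (rather than the mean) is a common robustness choice in this setting: one misbehaving or manipulated feed cannot move the output on its own.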
From the angle of data-market services, Google is trying to consolidate itself as a data infrastructure/marketplace provider in the financial services industry with last week’s release of its new tool, Datashare (among other tools). The company identified, on the one hand, fragmented data quality and considerable data infrastructure costs; and on the other, a strong increase (90%) in cloud use for data-related activities by financial firms over the next four years.
The concept of “data marketplace” can take different forms. For instance, blockchain-based data marketplaces such as Nokia Data Marketplace (based on Hyperledger), or Ocean Protocol (based on data tokenization), are progressively gaining ground. Blockchain consortia also target data marketplaces: the Mobility Open Blockchain Initiative (MOBI) has a specific working group developing standards for a blockchain-based data marketplace in the mobility space.
Another recent example of the “data sharing (is caring?)” trend is Databricks’. Last week, Databricks open sourced its Delta Sharing protocol under an Apache 2.0 license. With this permissive open source license, the company uses open source as a strategic competitive lever, aiming to quickly secure massive adoption of its protocol.
Why? The more companies build with or upon the protocol, the more likely Databricks will be to interoperate with potential customers and to offer its variety of (data-related, of course) services. At the same time, the more companies use Databricks’ protocol, the more competitive pressure its rivals will face, as they will be forced to adopt the protocol, fork it, or compete with a substitute version.
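The mechanics of an open sharing protocol of this kind can be sketched in a few lines. The toy model below is purely illustrative (the class, share names, and URL format are all invented): like Delta Sharing, the server lets any client discover the tables it is allowed to read and then hands out short-lived signed links to the underlying data files, so consuming shared data does not require buying into the vendor’s own platform.

```python
import hashlib
import time

class ToySharingServer:
    """Toy model of an open data-sharing server: it holds shared tables
    and grants time-limited access links to the files behind them."""

    def __init__(self, secret: str):
        self.secret = secret
        # share -> schema -> table -> underlying data files (all invented)
        self.shares = {
            "acme_share": {
                "sales": {"trades": ["part-000.parquet", "part-001.parquet"]}
            }
        }

    def list_tables(self, share: str):
        """Step 1 of the flow: a client discovers what it may read."""
        return [
            (schema, table)
            for schema, tables in self.shares[share].items()
            for table in tables
        ]

    def get_files(self, share: str, schema: str, table: str, ttl: int = 300):
        """Step 2: return 'pre-signed' URLs valid for `ttl` seconds, so the
        client reads the files directly without going through the vendor."""
        expiry = int(time.time()) + ttl
        urls = []
        for f in self.shares[share][schema][table]:
            sig = hashlib.sha256(f"{self.secret}:{f}:{expiry}".encode()).hexdigest()[:16]
            urls.append(f"https://storage.example/{f}?expires={expiry}&sig={sig}")
        return urls

# Any client that speaks the protocol can consume the shared data:
server = ToySharingServer(secret="server-side-key")
tables = server.list_tables("acme_share")
urls = server.get_files("acme_share", "sales", "trades")
```

The design choice worth noticing is that the protocol, not the platform, is the interface: once competitors’ and customers’ tooling targets this open surface, the switching-cost dynamics described above kick in.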
The trick to securing fast-paced network effects is that competitors will consequently not be able to compete just on the parameters of price (it is open source…), quality, and innovation. They will be competing against an already consolidated community/ecosystem of users implementing Databricks’ protocol. One could think of the Supreme Court’s ‘winning’ statement in the Google v Oracle decision regarding the competitive relevance of switching costs when there is a market-defined standard – i.e. Java SE.
All in all, my friend, the interesting player in data markets is not the one who shares, nor the one who refuses to share, but the one who enables the sharing.
Wish you a nice week.