As enterprise supply chains and consumer demand chains have become globalized, they continue to share information inefficiently, "one-up/one-down". Profound "bullwhip effects" in these chains leave managers scrambling with inventory shortages and consumers struggling to make sense of product recalls, especially food safety recalls. Add to this the increasing use of personal mobile devices by managers and consumers seeking real-time information about products, materials and ingredient sources. The popularity of mobile devices with consumers is inexorably tugging at enterprise IT departments to shift to apps and services. But consumer and enterprise data alike is a proprietary asset that must be selectively shared to be efficiently shared.
November 6, 2012 — Pardalis, Inc. announced the issuance today of the following patent by the United States Patent & Trademark Office:
Common point authoring system for the complex sharing of hierarchically authored data objects in a distribution chain, U.S. Patent No. 8,307,000.
The issuance of this patent represents another milestone in the continued, global expansion of Pardalis' parent patent, U.S. Patent No. 6,671,696, and its continuation patents and related applications.
The Pardalis '696 patent was issued by the United States in 2003 and is entitled Informational object authoring and distribution system. The '696 patent is the parent patent for the Common Point Authoring™ system. The prior art from which Pardalis' patents have been distinguished stretches back to the 1987 filing of Xerox's Updating local copy of shared data in a collaborative system (U.S. Patent 5,220,657), the 1995 publication of CrystalWeb--A distributed authoring environment for the World-Wide Web (Computer Networks and ISDN Systems), and the 1999 publication of DAPHNE--A tool for Distributed Web Authoring and Publishing (the American Society for Information Science).
"The underlying philosophy of the Common Point Authoring system is to provide people with as much granular control over their information and data experience as is possible," said Steve Holcombe, CEO, Pardalis Inc. "The irony is that in order to increase the flow of proprietary information in supply chains, more granular control over that information must be provided in information sharing systems of any kind. Pardalis' patents apply to authoring by either human participants, or the machines that they automatically program, of immutable informational objects describing the pedigree of uniquely identified products in supply chains."
The critical means and functions of the Common Point Authoring™ system are directed to a system in which an author creates data which is then fixed (immutable); other users can access that immutable data but cannot change it without the creator's permission. The system provides for user-centric authoring and registration of uniquely identified, immutable objects for further granular publication, at the choice of each author, among networked systems. The benefits of CPA include minimal, precise disclosures of personal and product identity data to networks fragmented by information silos and by concerns over 'data ownership' of products and their ingredients or components.
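As a thought experiment only, here is a minimal sketch in Python of the pattern described above. The names (CommonPointRegistry, author_object, grant, read) are ours and hypothetical, not Pardalis' implementation; the sketch simply shows an author registering an immutable informational object once and then selectively granting other parties read access to particular attributes.

```python
# Hypothetical sketch only -- illustrative names, not Pardalis' implementation.
from types import MappingProxyType
import uuid


class CommonPointRegistry:
    """Registers immutable informational objects with author-controlled grants."""

    def __init__(self):
        self._objects = {}   # object_id -> (author, read-only attributes)
        self._grants = {}    # object_id -> {reader: set of visible attribute names}

    def author_object(self, author, attributes):
        """Fix an object at registration time; it cannot be edited afterward."""
        object_id = str(uuid.uuid4())
        self._objects[object_id] = (author, MappingProxyType(dict(attributes)))
        self._grants[object_id] = {}
        return object_id

    def grant(self, author, object_id, reader, attribute_names):
        """Only the original author may publish attributes, granularly, to a reader."""
        owner, _ = self._objects[object_id]
        if owner != author:
            raise PermissionError("only the authoring party may grant access")
        self._grants[object_id].setdefault(reader, set()).update(attribute_names)

    def read(self, reader, object_id):
        """A reader sees only the attributes the author chose to share."""
        visible = self._grants[object_id].get(reader, set())
        _, attributes = self._objects[object_id]
        return {name: attributes[name] for name in visible if name in attributes}
```

The point of the sketch is simply that immutability plus author-controlled, attribute-level grants is what makes "minimal, precise disclosures" possible.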
"There is increasing interest in the application of social networks to the enterprise," Holcombe said. "For instance the selective sharing of Google Plus is a strong step in the direction of providing more granular controls in information sharing. Salesforce.com has linked up with Facebook for targeted advertising delivery that will merge social and business-contact data. The Wikidata Project is creating a free knowledge base by first fixing data elements at a single location with authorizations that may be read and edited by humans and machines alike. All of these activities are pushing in the direction of providing more efficient market mechanisms for the sharing of proprietary information in the Cloud. The more granular the control over information, the greater the chances that information about products in global supply chains will be efficiently shared. The ramifications for global sustainability are tremendous."
Filings relevant to Pardalis' USPTO-issued patents are being successfully pursued under the Patent Cooperation Treaty (PCT) in the following countries and regions: Australia, Brazil, Canada, China (PRC), Europe, Hong Kong, India, Japan, Mexico and New Zealand.
About Pardalis, Inc.
Pardalis' Common Point Authoring™ system provides an object-oriented solution for introducing trust and provenance in web communications. For more information, see Pardalis' Global IP.
A Glimmer of Market Validation for Selective Sharing
In late 2005 Pardalis deployed a multi-tenant, enterprise-class SaaS to a Texas livestock market. The web-connected service provided for the selective sharing of data assets in the U.S. beef livestock supply chain. Promising revenues were generated against a backdrop of industry incentives for source-verified livestock. Those incentives were themselves driven by the specter of mandatory livestock identification promised by the USDA in the wake of the 2003 "mad cow" case.
At the livestock market thousands of calves were processed over several sessions. Small livestock producers brought their calves into the auction for weekly sales, where the calves were RFID tagged. Producers were charged an affordable per-calf fee that included the cost of an RFID tag. The tag identifiers were automatically captured, a seller code was entered, and affidavit information was entered as to the country of origin (USA) of each calf. Buyers paid premium prices for the tagged calves over and above untagged calves, and the buyers made money over and above the affordable per-calf fee. After each sale, and at the speed of commerce, all seller, buyer and sales information was uploaded into an information tenancy in the SaaS that was controlled by the livestock market. For the first time ever in the industry, the livestock auction selectively authorized access to this information to the buyers via their own individual tenancies in the SaaS.
Processing any calves at all would not have been possible without directly addressing the fear of information sharing held by both the calf sellers and the livestock market. The calf sellers liked that their respective identities were selectively withheld from the calf buyers. And they liked that a commercial entity they trusted – the livestock market – could stand as a kind of trustee between them and governmental regulators in case an auctioned calf later turned out to be the next 'mad cow'. In turn, the livestock market liked the selectiveness in information sharing because it did not have to share its confidential client list in an "all or nothing" manner with potential competitors further down the supply chain. At that moment in time, the immediate future of selective sharing with the SaaS looked very bright. The selective sharing design deployed by Pardalis in its SaaS fixed data elements at a single location, with authorizations controlled by the tenants. Unfortunately, the model could not be continued and scaled at that time to other livestock markets. In 2006 the USDA bowed to political realities and terminated its efforts to introduce national mandatory livestock identification.
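To make the livestock market scenario concrete, here is a small, hypothetical sketch (the field names and the share_with_buyer helper are ours for illustration, not the actual Pardalis SaaS) of how a market tenant might authorize a buyer's tenancy to see a calf's sale record while withholding the seller's identity.

```python
# Hypothetical sketch only -- not the actual SaaS. A market tenant discloses a
# sale record to a buyer's tenancy minus the seller's identity.
sale_record = {
    "rfid_tag_id": "840003001234567",        # illustrative tag number
    "seller_code": "S-1047",                 # known to the livestock market only
    "country_of_origin_affidavit": "USA",
    "sale_date": "2005-11-12",
    "sale_weight_lbs": 540,
}

# Attributes the market tenant authorizes for the buyer's tenancy.
BUYER_VISIBLE_FIELDS = {
    "rfid_tag_id",
    "country_of_origin_affidavit",
    "sale_date",
    "sale_weight_lbs",
}


def share_with_buyer(record, visible_fields=BUYER_VISIBLE_FIELDS):
    """Return only the fields the market has authorized; seller identity stays put."""
    return {name: value for name, value in record.items() if name in visible_fields}


print(share_with_buyer(sale_record))
# {'rfid_tag_id': '840003001234567', 'country_of_origin_affidavit': 'USA',
#  'sale_date': '2005-11-12', 'sale_weight_lbs': 540}
```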
And so, too, went the regulatory-driven industry incentives. But … hold that thought.
Talking in Circles: Selective Sharing in Google+
Google+ is now 1 year old. In conjunction with Google, researchers Sanjay Kairam, Michael J. Brzozowski, David Huffaker, and Ed H. Chi have published Talking in Circles: Selective Sharing in Google+, the first empirical study of behavior in a network designed to facilitate selective sharing:
"Online social networks have become indispensable tools for information sharing, but existing ‘all-or-nothing’ models for sharing have made it difficult for users to target information to specific parts of their networks. In this paper, we study Google+, which enables users to selectively share content with specific ‘Circles’ of people. Through a combination of log analysis with surveys and interviews, we investigate how active users organize and select audiences for shared content. We find that these users frequently engaged in selective sharing, creating circles to manage content across particular life facets, ties of varying strength, and interest-based groups. Motivations to share spanned personal and informational reasons, and users frequently weighed ‘limiting’ factors (e.g. privacy, relevance, and social norms) against the desire to reach a large audience. Our work identifies implications for the design of selective sharing mechanisms in social networks."
While selective sharing may be characterized as being available on other networks (e.g. ‘Lists’ on Facebook), Google is sending signals that making the design of selective sharing controls central to the sharing model offers a great opportunity to help users manage their self-presentations to multiple audiences in the multi-tenancies we call online social networks. Or, put more simply, selective sharing multiplies opportunities for online engagement.
For the purposes of this blog post, we adopt Google's definition of "selective sharing" to mean providing information producers with controls for overcoming both over-sharing and the fear of sharing. Furthermore, we agree with Google that the design of such selective sharing controls must allow users to balance sender and receiver needs, and to adapt these controls to different types of content. So defined, we believe that almost seven years after the Texas livestock market project, a tipping point has been reached that militates in favor of selective sharing from within supply chains and on to consumers. A lot has happened over the last seven years to bring us to this point (e.g., the rise of social media, CRM in the Cloud, the explosion of mobile technologies, etc.). But the tipping point we are referencing "follows the money", as they say. We believe that the tipping point toward selective sharing is to be found in the incentives provided by affiliate networks like the Google Affiliate Network.
Google Affiliate Networks
The Google Affiliate Network provides a means for affiliates to monetize websites. Here's a recent video presentation by Google, Automating the Use of Google Affiliate Links to Monetize Your Web Site:
Presented by Ali Pasha & Shaun Cox | Published 2 July 2012 | 47m 11s
The Google Affiliate Network provides incentives for affiliates to monetize their websites based upon actual sales conversions instead of indirectly upon the number of ad clicks. These are web sites (e.g., http://www.savings.com/) where ads are the site's raison d'être. High-value consumers are increasingly scouring promotional, comparison, and customer loyalty sites like savings.com for deals and, more generally, for information about products. Compare that with websites where ads are peripheral to other content (e.g., http://www.nytimes.com/) and where ad clicks are measured using Web 2.0 identity and privacy sharing models.
In our opinion the incentives of affiliate networks have huge potential for matching up with an unmet need in the Cloud for all participants - large and small - of enterprise supply chains to selectively monetize their data assets. For example, data assets pertaining to product traceability, source, sustainability, identity, authenticity, process verification and even compliance with human rights laws, among others, are there to be monetized.
Want to avoid buying blood diamonds? Go to a website that promotes human rights and click on a diamond product link that has been approved by that site. Want to purchase only "Made in USA" products? There's not a chamber of commerce in the U.S. that won't want to provide links to the websites of its members that are also affiliates of an incentive network. Etc.
Unfortunately, these data assets are commonly not shared because of the complete lack of tools for selective sharing, and the fear of sharing (or understandable apathy) engendered under “all or nothing” sharing models. As published back in 1993 by the MIT Sloan School in Why Not One Big Database? Ownership Principles for Database Design: "When it is impossible to provide an explicit contract that rewards those who create and maintain data, ‘ownership’ will be the best way to provide incentives." Data ownership matters. And selective sharing – appropriately designed for enterprises – will match data ownership up with available incentives.
Remember that thought we asked you to hold?
In our opinion the Google Affiliate Network is already providing incentives that are a sustainable, market-driven substitute for what turned out to be unsustainable, USDA-driven incentives. We presume that Google is well aware of potential synergies between Google+ and the Google Affiliate Network. We also presume that Google is well aware that "[w]hile business-critical information is often already gathered in integrated information systems, such as ERP, CRM and SCM systems, the integration of these systems itself (as well as the integration with the abundance of other information sources) is still a major challenge."
We know this is a "big idea" but, in our opinion, the dynamic blending of Google+ and the Google Affiliate Network could over time bring within reach a holy grail in web communications – the cracking of the data silos of enterprise-class supply chains for increased sharing with consumers of what has to date been "off limits" proprietary product information.
A glimpse of the future may be found, for example, in the adoption of Google+ by Cadbury UK, but the design for selective sharing in Google+ is currently far from what it needs to be to attract broad enterprise usage. Sharing in Circles brings to mind Eve Maler's blog post, Venn and the Art of Data Sharing. That's really cool for personal sharing (or for empowering consumers, as is the intent of VRM), but for enterprises Google+ will need to evolve its selective sharing functionalities. Sure, the data silos of commercial supply chains hold personal identities close to the chest (e.g., CRM customer lists), but they also wall off product identities with every bit as much zeal, if not more. That creates a different dynamic that, again, typical Web 2.0 "all or nothing" sharing (designed, by the way, around personal identities) does not address.
It should be especially noted, however, that Eve Maler and the User-Managed Access (UMA) group at the Kantara Initiative are providing selective sharing web protocols that place "the emphasis on user visibility into and control over access by others". And Eve, in her capacity at Forrester, has more recently provided a wonderful update of her earlier blog post, this one entitled A New Venn of Access Control for the API Economy.
But, in our opinion, before Google+, UMA or any other companies or groups working on selective sharing can have any reasonable chance of addressing "data ownership" in enterprises and their supply chains, they will need to take a careful look at incorporating fixed data elements at a single location with authorizations. It is in regard to this point that we seek to augment the current status of selective sharing. More about that line of thinking (and about activities within the Wikidata Project) may be found in our earlier "tipping point" blog post, The Tipping Point Has Arrived: Trust and Provenance in Web Communications.
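For what we mean by "fixed data elements at a single location with authorizations", here is a rough sketch (again with hypothetical names such as ElementRegistry; it is only our illustration): each data element is registered once, assigned a permanent identifier, and thereafter shared by reference under its author's authorizations rather than copied from silo to silo.

```python
# Hypothetical sketch only. Each element is fixed once at a single location and
# shared by reference; the registering author controls who may resolve it.
import uuid


class ElementRegistry:
    def __init__(self):
        self._elements = {}   # element_id -> (author, value), never overwritten
        self._grants = {}     # element_id -> set of parties allowed to resolve it

    def register(self, author, value):
        element_id = str(uuid.uuid4())
        self._elements[element_id] = (author, value)   # fixed, single location
        self._grants[element_id] = {author}
        return element_id          # downstream systems keep the id, not a copy

    def authorize(self, author, element_id, party):
        owner, _ = self._elements[element_id]
        if owner != author:
            raise PermissionError("only the registering author may authorize access")
        self._grants[element_id].add(party)

    def resolve(self, party, element_id):
        if party not in self._grants[element_id]:
            raise PermissionError("not authorized for this element")
        return self._elements[element_id][1]
```

Because downstream systems in this sketch hold references rather than copies, a single change of authorization by the author takes effect everywhere the element is referenced.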
What do you think? Share your conclusions and opinions by joining us at @WholeChainCom on LinkedIn at http://tinyurl.com/WholeChainCom.
I ended Part I stating that industry had been essentially leaving the customer out of the equation, too. What I meant was that enterprise class systems like the Customer Relationship Management (CRM) Systems offered by so many companies ...
... are a significant part of the problem.
Customer Relationship Management is about companies trying to manage their prospect and customer relationships. CRM systems contribute to (or, maybe I should say, reinforce) one-up/one-down information sharing in supply chains and, ipso facto, the Bullwhip Effect. And Michael Hinshaw makes the point that even though billions have been spent on CRM over the last 15 years ($9+ billion in 2008 alone), overall customer satisfaction has remained flat. To the right is a simpler depiction of CRM.
The flip-side to CRM is envisioned to be Vendor Relationship Management (VRM). VRM would provide to people – individuals who recognize their value as customers, and wish to better define the terms of their relationships – the software, tools and ability to manage their vendor relationships, as well as their interactions and experiences.
To the left is a simple picture of VRM in which a consumer is able to conveniently manage multiple vendor relationships. The critical thought leadership for VRM is found with Doc Searls and the VRM Project at Harvard's Berkman Center, but VRM in the marketplace still largely remains a vision.
Picking back up from Part I on the concept of viewing food safety regulators as a kind of consumer, and mashing together VRM (from the perspective of customers) with a whole chain traceability system for supply chains (from the perspective of food safety regulators), the result would more or less have to look like this:
"OK," you say, "that's a nice, neat, REALLY simple picture but isn't this already happening on Facebook? Can't the Customer, Producer, Wholesaler, Retailer, and even the Government Regulators all become Facebook friends and experience right now this mashed-together vision of VRM and whole chain traceability? And isn't this what Social CRM is all about?"
No, no and ... no.
The challenge is not one of fixing the latest privacy control issue that Facebook presents to us. Nor is the challenge fixed with an application programming interface for integrating Salesforce.com with Facebook. The challenge is in providing the software, tools and functionalities for the discovery in real-time of proprietary supply chain data that can save people's lives and, concurrently, in attracting the input of exponentially more valuable information by consumers about their personal experiences with food products (or products in general, for that matter). Supply chain VRM (SCVRM)? Whole chain VRM (WCVRM)? Traceability VRM (TVRM)? Whatever we end up calling it, we know we will be on the right track when we see a flattening out of the Bullwhip Effect, won't we?
On the one hand, Facebook is highly relevant to this discussion because (a) it has over 500 million users, many of them businesses and government agencies, and (b) it has helped to raise the expectations of its users regarding the availability of - and their hunger (no pun intended) for - real-time information. On the other hand, we are a long way from seeing headlines that read "Facebook immediately identifies and confirms source of salmonella contaminated peppers" or "Facebook tracks food ingredients in dioxin scare" or "Facebookers receive real-time e. coli food recall notices based on their actual hamburger purchases".* For that to happen, we need a few more ingredients added to the mix and one of them is the metadata ...
... by which each of the participants may be empowered to keep the degree of control over their data that will free it up for real-time access (and analysis) by others. Yes, it's ironic. Give more control to consumers so as to get more, better quality data from them about their experiences with food products? Makes perfect sense to Doc Searls and the VRM folks. They get that VRM is the ironic reflection of CRM.
The other ingredients? I'll finish up with those in the next - and final - journal entry. But I will say that I'll be returning to those interesting comments made by Walmart's Frank Yiannas .....
* Actually, for an example of an implementation that is technically achievable right now, see my earlier blog Consortium seeks to holistically address food recalls. Substitute in "Facebook" for "Food Recall Bank".