
About this Blog

As enterprise supply chains and consumer demand chains have become globalized, they continue to share information inefficiently, “one-up/one-down”. Profound "bullwhip effects" in these chains leave managers scrambling with inventory shortages and consumers struggling to understand product recalls, especially food safety recalls. Add to this the increasing usage of personal mobile devices by managers and consumers seeking real-time information about products, materials and ingredient sources. The popularity of mobile devices with consumers is inexorably tugging at enterprise IT departments to shift to apps and services. But both consumer and enterprise data are proprietary assets that must be selectively shared to be efficiently shared.

About Steve Holcombe

Unless otherwise noted, all content on this company blog site is authored by Steve Holcombe as President & CEO of Pardalis, Inc. More profile information: View Steve Holcombe's profile on LinkedIn


Entries in Business Models (25)

Monday
January 7, 2013

The Roots of Common Point Authoring (CPA)

Common Point Authoring (CPA) is timely and relevant for ameliorating the fear factors revolving around data ownership. Those fears are multiplying from the ever-increasing usage of unique identification on the Internet as applied to both people (e.g., social security numbers) and products (e.g., unique electronic product numbers and RFID tags).

Q&A: What is an informational object?

Consider the electronic form of this document (the one you are reading right now) as an example of an informational object. Imagine that you are the author and owner of this informational object. Imagine that each paragraph of this object has a granular on/off switch that you control. Imagine being able to granularly control who sees which paragraph even as your informational object is electronically shared one step, two steps, three steps, etc., down a supply chain with people or businesses you have never even heard of. Now further imagine being able to control access to individual data elements within each of those paragraphs.
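For readers who think in code, here is a minimal sketch of that idea in Python. The names (InformationalObject, grant, view, and so on) are ours for illustration only; the actual CPA system is defined by Pardalis' patents, not by this sketch.

```python
# A minimal sketch (hypothetical names) of an informational object whose
# author grants or revokes visibility of individual data elements per reader.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class DataElement:
    element_id: str   # unique, immutable identifier
    content: str      # fixed at authoring time

@dataclass
class InformationalObject:
    object_id: str
    author: str
    elements: dict = field(default_factory=dict)   # element_id -> DataElement
    grants: dict = field(default_factory=dict)     # element_id -> set of reader ids

    def add_element(self, element_id: str, content: str):
        self.elements[element_id] = DataElement(element_id, content)
        self.grants.setdefault(element_id, set())

    def grant(self, element_id: str, reader: str):
        self.grants[element_id].add(reader)       # author flips the switch "on"

    def revoke(self, element_id: str, reader: str):
        self.grants[element_id].discard(reader)   # switch "off"

    def view(self, reader: str) -> dict:
        # A reader sees only the elements the author has authorized.
        return {eid: el.content
                for eid, el in self.elements.items()
                if reader in self.grants.get(eid, set()) or reader == self.author}

doc = InformationalObject("obj-1", author="alice")
doc.add_element("p1", "Public product description")
doc.add_element("p2", "Confidential sourcing details")
doc.grant("p1", "distributor")   # distributor sees p1 but not p2
print(doc.view("distributor"))   # {'p1': 'Public product description'}
```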

The methods for CPA were first envisioned with regard to transforming the authoring of paper-based material safety data sheets (MSDSs) in the chemical industry into a market-driven, electronic service provided by chemical manufacturers for their supply chain customers. You may think of an MSDS as a type of chemical pedigree document authored by a chemical manufacturer and then handed down a multi-party supply chain as it follows the trading of the chemical.

At the time, we crunched some numbers and found that MSDSs offered as a globally accessible software service could be provided to downstream users for significantly less than what it cost them to handle paper MSDSs. But we further recognized that our business model for global software services wouldn’t work very well unless the fear factors revolving around MSDSs offered as a service were technologically addressed.

That is, we asked the question, “How can electronic information be granularly controlled by the original author (i.e., creator) as it is shared down a supply chain?”

When it comes to information sharing in multi-tenancies, the prior art (i.e., the prior patents and other published materials) to CPA at best refers to collaborative document editing systems where multiple parties share in the authoring of a single document. A good example of the prior art is found in a 1993 Xerox patent entitled 'Updating local copy of shared data in a collaborative system' (US Patent 5,220,657 - Xerox) covering:

“A multi-user collaborative system in which the contents as well as the current status of other user activity of a shared structured data object representing one or more related structured data objects in the form of data entries can be concurrently accessed by different users respectively at different workstations connected to a common link.”

By contrast, CPA's methods provide for the selective sharing of informational objects (and their respective data elements) without the necessity of any collaboration. More specifically, CPA provides the foundational methods for the creation and versioning of immutable data elements at a single location by an end-user (or a machine). Those data elements are accessible, linkable and otherwise usable with meta-data authorizations. This is especially important when it comes to overcoming the fear factors around sharing enterprise data, or allowing for the semantic search of enterprise data. Pardalis' parent patent, "Informational object authoring and distribution system" (US Patent 6,671,696), depicts the granular, author-controlled, structured informational object around which CPA's methods revolve.
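To make "creation and versioning of immutable data elements at a single location" concrete, here is a rough sketch, again with hypothetical names rather than Pardalis' actual implementation. The key property is that registered elements are never altered; a revision registers a successor element that points back to its predecessor.

```python
# A minimal sketch, assuming an append-only registry: data elements are
# immutable once registered, and "versioning" creates a new element that
# points back to its predecessor rather than overwriting it.

import itertools

class CommonPointRegistry:
    def __init__(self):
        self._store = {}                  # element_id -> (content, prior_id)
        self._ids = itertools.count(1)

    def register(self, content: str, prior_id: str | None = None) -> str:
        element_id = f"elem-{next(self._ids)}"
        self._store[element_id] = (content, prior_id)   # write-once
        return element_id

    def revise(self, prior_id: str, new_content: str) -> str:
        # The prior element is never altered; a successor is registered instead.
        if prior_id not in self._store:
            raise KeyError(prior_id)
        return self.register(new_content, prior_id=prior_id)

    def read(self, element_id: str) -> str:
        return self._store[element_id][0]

registry = CommonPointRegistry()
v1 = registry.register("Lot 42: country of origin USA")
v2 = registry.revise(v1, "Lot 42: country of origin USA; packed 2013-01-07")
assert registry.read(v1).startswith("Lot 42")   # v1 is still intact and citable
```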

That is, the critical means and functions of the Common Point Authoring™ system provide for user-centric authoring and registration of radically identified, immutable objects for further granular publication, by the choice of each author, among networked systems. The benefits of CPA include minimal, precise disclosures of personal and product identity data to networks fragmented by information silos and concerns over 'data ownership'.

When it comes to "electronic rights and transaction management", CPA's methods have further been distinguished from a significant patent held by Intertrust Technologies. See Methods for matching, selecting, narrowcasting, and/or classifying based on rights management and/or other information (US Patent 7,092,914 - Intertrust Technologies). By the way, in a 2004 announcement Microsoft Corp. agreed to take a comprehensive license to InterTrust's patent portfolio for a one-time payment of $440 million.

CPA's methods have been further distinguished worldwide from object-oriented, runtime efficiency IP held by these leaders in back-end, enterprise application integration: Method and system for network marshalling of interface pointers for remote procedure calls (US Patent 5,511,197 - Microsoft), Reuse of immutable objects during object creation (US Patent 6,438,560 - IBM), Method and software for processing data objects in business applications (US Patent 7,225,302 - SAP), and Method and system to protect electronic data objects from unauthorized access (US Patent 7,761,382 - Siemens).

For more information, see Pardalis' Global IP.

Friday
January 4, 2013

Why Google Must - And Will - Drive NextGen Social for Enterprises

Preface

This is our third "tipping point" publication.

The first was The Tipping Point Has Arrived: Trust and Provenance in Web Communications. We highlighted there the significance of the roadmap laid out by the Wikidata Project. It was our opinion that:

"[a]s the Wikidata Project begins to provide trust and provenance in its form of web communications, they will not just be granularizing single facts but also immutabilizing the data elements to which those facts are linked so that even the content providers of those data elements cannot change them. This is critical for trust and provenance in whole chain communications between supply chain participants who have never directly interacted."

The second post was The Tipping Point Has Arrived: Market Incentives for Selective Sharing in Web Communications. We there emphasized the emerging market-based opportunities for information sharing between enterprises and consumers:

"We know this is a big idea but in our opinion the dynamic blending of Google+ and the Google Affiliate Network could over time bring within reach a holy grail in web communications – the cracking of the data silos of enterprise class supply chains for increased sharing with consumers of what to-date has been "off limits" proprietary product information."

Introducing Common Point Social Networking

For the purposes of this post we introduce and define Common Point Social Networking:

Common point social networking provides the means and functions for the creation and versioning of immutable data elements at a single location by an end-user or a machine, which data elements are accessible, linkable and otherwise usable with meta-data authorizations.

The software developers reading this post may recognize similarities with Github. Github is perhaps the canonical proxy for fixed, common point sharing adoption. Software developers publish open source software development projects, providing source code distribution and means for others to contribute changes to the source code back to a common repository. Version control provides a code level audit trail.
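The property that makes Github a useful proxy can itself be sketched in a few lines: content-addressed storage. Identifying each stored element by a hash of its content makes the element effectively immutable, because any edit yields a different identifier. This is an illustration of the general technique, not Github's internals:

```python
# A sketch of content-addressed, fixed-point storage (illustrative names):
# same content always maps to the same id, and any change produces a new id,
# so previously shared elements can never be silently altered.

import hashlib

store: dict[str, bytes] = {}

def put(content: bytes) -> str:
    element_id = hashlib.sha256(content).hexdigest()
    store[element_id] = content   # same content -> same id; edits -> new id
    return element_id

a = put(b"critical tracking event #2")
b = put(b"critical tracking event #2 (corrected)")
assert a != b and store[a] == b"critical tracking event #2"
```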

In July 2012 Github took a $100M venture capital investment from Andreessen Horowitz. There’s no doubt that some of this funding will be used by Github to compete in the enterprise space. But we further offer here that Google is better positioned to lead the current providers of enterprise software and cloud services in introducing a new generation of online social networks in the fertile ground between enterprises and consumers. We propose that Google lead by introducing and/or further encouraging a roadmap of means and functions it is already backing in the Wikidata Project. We have identified an inviting space for common point social networking to serve as a bridge between Google's Knowledge Graph and the emerging GS1 standards for Key Data Elements (KDEs).

A Sea Change in Understanding

In 2012 there was a sea change in understanding that greater access to proprietary enterprise data is necessary for creating new business models between enterprises and consumers. Yet there remains confusion on how to do so. There is much rhetorical cross-over these days between the social networking of "personal data" and "enterprise data" but enterprise data is - and will long remain - different from personal data. Again, in our opinion, enterprise data is overwhelmingly a proprietary asset that must be selectively accessed at a granular level from a fixed, common point to have any chance of being efficiently shared.

GS1 and Whole Chain Traceability

From 2010 through 2011, Pardalis Inc. catalyzed a successful research funding strategy in a series of “whole chain traceability” funding submissions seeking to employ the use of granular, immutable data elements in networked communications.[1] The computer networking aspects of this food supply chain research were based upon a granularization of critical tracking events (CTEs) with a high-level derivation of Pardalis’ patented processes for registering immutable data elements and their informational objects at a fixed location with meta-data authorizations. See Whole Chain Traceability: A Successful Research Funding Strategy. At the solicitation of co-author Holcombe, GS1 gave an early letter of support to this process and was subsequently kept “in the loop”. This successful research funding strategy has from all appearances been given a favorable nod by GS1 in one of its recent publications, Achieving Whole Chain Traceability in the U.S. Food Supply Chain - How GS1 Standards make it possible. Here’s an excerpt -

"To achieve whole chain traceability, trading partners must be able to link products with locations and times through the supply chain. For this purpose, the work led by the Institute of Food Technologists described two foundational concepts: Critical Tracking Events (CTEs) and Key Data Elements (KDEs). With GS1 Standards as a foundation, communicating CTEs and KDEs is achievable."

So who is GS1, you ask? GS1 is "the international not-for-profit association dedicated to the development and implementation of global standards and solutions to improve the efficiency and visibility of supply and demand chains globally and across multiple sectors." You know that unique barcode symbology you see on the products you purchase? That barcode is standardized by GS1 and may include KDEs.

We applaud the introduction of KDEs by GS1. The inclusion of KDEs is a necessary step for moving beyond the lugubrious one-up/one-down information sharing that is overwhelmingly prevalent in today’s enterprise supply chains. Enterprises have long been comfortable with one-up/one-down sharing, pushing generic products down the chain. But it is a mode of information sharing that fits poorly into today’s consumer demand chains, which want to pull real-time, trustworthy information. Furthermore, one-up/one-down information sharing significantly contributes to the "bullwhip effect" within supply chains, which costs enterprises in a number of ways, as explained in more detail in The Bullwhip Effect:

"The challenge is not one of fixing the latest privacy control issue that Facebook presents to us. Nor is the challenge fixed with an application programming interface for integrating Salesforce.com with Facebook. The challenge is in providing the software, tools and functionalities for the discovery in real-time of proprietary supply chain data that can save people's lives and, concurrently, in attracting the input of exponentially more valuable information by consumers about their personal experiences with food products (or products in general, for that matter) …."

But KDEs by themselves will not necessarily rid supply chains of the bullwhip effect. Without implementing a more social, fluid nature to the sharing of information in supply chains, KDEs may even increase the brittleness of one-up/one-down information sharing between database administrators, just more granularly so with "digital sand". For instance, industry standards for granular XML objects may be a bane … or a boon. It largely depends on the effectiveness of the hierarchical administrative decision-making processes overseeing each data silo. Common point social networking holds forth a promise for implementing KDEs in a manner that overcomes the bullwhip effect.
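As a sketch of what "granular" means here, consider a KDE carried in an XML object in which each datum has its own identifier and its own access attribute, so sharing can be decided element by element rather than document by document. The element and attribute names below are purely illustrative, not a GS1 schema:

```python
# An illustrative sketch of a "granular XML object" for KDEs. Each datum
# carries its own id and access attribute; audiences see only their slice.
# These names are hypothetical, not drawn from any GS1 specification.

import xml.etree.ElementTree as ET

cte = ET.Element("CriticalTrackingEvent", id="CTE-2")
ET.SubElement(cte, "KDE", id="CTE-2A", access="supply-chain").text = "GTIN 00012345678905"
ET.SubElement(cte, "KDE", id="CTE-2B", access="regulators").text = "Lot 42, Line 3"
ET.SubElement(cte, "KDE", id="CTE-2C", access="public").text = "Packed 2012-07-11"

def visible_to(root: ET.Element, audience: str) -> list[str]:
    # Filter the object down to the data elements this audience may see.
    return [k.text for k in root.findall("KDE") if k.get("access") == audience]

print(visible_to(cte, "public"))   # ['Packed 2012-07-11']
```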

But even with the most efficient and effective management processes, it is almost unimaginable to us that the first movement toward enterprise-consumer social networking will come from incumbent enterprise software systems. Sure, the first movement could potentially come from that direction, but we’ve just had too many experiences with enterprises and software vendors to put much faith in that actually happening. Conversely, we can much more easily imagine a first movement toward nextgen social from the "navigational search" demands of consumers. In our second tipping point blog we illustrated this point in some respect with Google Affiliate Networks. This time we are making our point with Google’s Knowledge Graph.

Navigational Search As A Business Model

Google's Knowledge Graph was announced this year as an addition to Google's search engine. Knowledge Graph is a semantic search system. Of course it’s not the only semantic search system. Bing incorporates semantic search. So do Ask.com and Wolfram Alpha. Siri provides a natural language user interface. But no matter which semantic search engine you use, the search results are revealed as a list of ranked, relevant “answers” (or perhaps no answer at all because there isn’t one to give). Searching for real answers in real-time is still something of a navigational mess, whether by commission or omission.

"For the semantic web to reach its full potential in the cloud, it must have access to more than just publicly available data sources. It must find a gateway into the closely-held, confidential and classified information that people consider to be their identity, that participants to complex supply chains consider to be confidential, and that governments classify as secret. Only with the empowerment of technological ‘data ownership’ in the hands of people, businesses, and governments will the Semantic Cloud make contact with a horizon of new, ‘blue ocean’ data." Cloud Computing: Billowing Toward Data Ownership - Part II.

Knowledge Graph is a "baby step" toward navigational search that provides a kind of Wikipedia "look and feel" experience designed to help users navigate more easily toward specific answers. Ever used the "I’m Feeling Lucky" button provided by Google? That button taps into Google's semantic search system to provide a navigational search with a single result. This is an attempt to provide a purposeful effect instead of an exploratory effect for your search request. Yes, it's still a "hit or miss" artifice but - make no mistake - it has been introduced to push navigational search forward as a business model. Google's business intent for navigational search is to discourage you from going to other search engines for your search needs. Knowledge Graph is designed to cut short a process of discovery which may take you away from Google to a competitive search engine. This move toward navigational search is exactly why we are proposing that now is the time for common point social networking. Without common point social networking, navigational search will largely remain a clever, albeit unsatisfactory, solution for what consumers really want. What consumers want is real-time, meaningful, trustworthy information about the products they buy or are interested in buying. As Amit Singhal, Senior Vice-President of Engineering at Google, says:

"We’re proud of our first baby step - the Knowledge Graph - which will enable us to make search more intelligent, moving us closer to the "Star Trek computer" that I've always dreamt of building. Enjoy your lifelong journey of discovery, made easier by Google Search, so you can spend less time searching and more time doing what you love."

Conclusion: Whole Chain Communications from Navigational Search

So much of the information that consumers desire about the products they buy - or may buy - is currently locked up in enterprise data silos. But the realistic prospects for common point social networking mean that navigationally searching for enterprise data - as a business model - is no longer an impossible challenge akin to Starfleet Academy's Kobayashi Maru. The ultimate goal for Google's navigational search is essentially that of providing not just whole chain traceability but real-time, whole chain communications for consumers via their mobile devices. The ultimate goal for GS1's standards for granular whole chain traceability is to similarly provide opportunities for real-time, navigational search.

Google’s Knowledge Graph indeed represents the first step of a toddler. To fully develop a “Star Trek Enterprise computer” Google must drive nextgen social for enterprises by fostering the placement of common point social networking between the bookends of navigational search and whole chain traceability. There is no other technology company better positioned or more highly motivated to do so. And we believe that it will. In backing the Wikidata Project, Google is already on a pathway to promoting common point social.

_______________________________

Authors:

Steve Holcombe
Pardalis Inc.

Clive Boulton
Independent Product Designer for the Enterprise Cloud
LinkedIn Profile

_______________

Endnotes
1. In these funding submissions co-author Holcombe introduced and defined the phrase "whole chain traceability" in reference to his company's patents.
Wednesday
July 11, 2012

The Tipping Point Has Arrived: Market Incentives for Selective Sharing in Web Communications

By Steve Holcombe (@steve_holcombe) and Clive Boulton (@iC)

A Glimmer of Market Validation for Selective Sharing

In late 2005 Pardalis deployed a multi-tenant, enterprise-class SaaS to a Texas livestock market. The web-connected service provided for the selective sharing of data assets in the U.S. beef livestock supply chain.  Promising revenues were generated from a backdrop of industry incentives being provided for sourced livestock. The industry incentives themselves were driven by the specter of mandatory livestock identification promised by the USDA in the wake of the 2003 "mad cow" case.

At the livestock market thousands of calves were processed over several sessions. Small livestock producers brought their calves into the auction for weekly sales where they were RFID tagged. An affordable fee per calf, which included the cost of an RFID tag, was charged to the producers. The tag identifiers were automatically captured, a seller code was entered, and affidavit information was also entered as to the country of origin (USA) of each calf. Buyers paid premium prices for the tagged calves over and above untagged calves. The buyers made money over and above the affordable fee per calf. After each sale, and at the speed of commerce, all seller, buyer and sales information was uploaded into an information tenancy in the SaaS that was controlled by the livestock market. For the first time ever in the industry, the livestock auction selectively authorized access to this information to the buyers via their own individual tenancies in the SaaS.

That any calves were processed at all would not have been possible without directly addressing the fear of information sharing that was held by both the calf sellers and the livestock market. The calf sellers liked that their respective identities were selectively withheld from the calf buyers. And they liked that a commercial entity they trusted – the livestock market – could stand as a kind of trustee between them and governmental regulators in case an auctioned calf later turned out to be the next ‘mad cow’. In turn the livestock market liked the selectiveness in information sharing because it did not have to share its confidential client list in an “all or nothing” manner with potential competitors on down the supply chain. At that moment in time, the immediate future of selective sharing with the SaaS looked very bright. The selective sharing design deployed by Pardalis in its SaaS fixed data elements at a single location with authorizations controlled by the tenants. Unfortunately, the model could not be continued and scaled at that time to other livestock markets. In 2006 the USDA bowed to political realities and terminated its efforts to introduce national mandatory livestock identification.

And so, too, went the regulatory-driven industry incentives. But … hold that thought.
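Before moving on, the selective sharing pattern deployed at the livestock market can be sketched in a few lines of Python. The data and function names are entirely hypothetical; the point is that sale records stay fixed in one tenancy while different audiences are authorized to see different slices of them:

```python
# A minimal sketch, assuming hypothetical record fields: sale records are
# fixed in the market's tenancy; buyers see their own purchases without the
# seller identities, while a trusted trace can resolve the full record.

SALES = [
    {"tag": "RFID-001", "seller": "ranch-17", "buyer": "feedlot-A", "origin": "USA"},
    {"tag": "RFID-002", "seller": "ranch-23", "buyer": "feedlot-B", "origin": "USA"},
]

def buyer_view(buyer_id: str) -> list[dict]:
    # Buyers see tag and origin for their own purchases only; the seller
    # identity stays withheld unless a recall trace requires it.
    return [{"tag": s["tag"], "origin": s["origin"]}
            for s in SALES if s["buyer"] == buyer_id]

def regulator_trace(tag: str) -> dict:
    # In a recall, the trusted intermediary can resolve the full record.
    return next(s for s in SALES if s["tag"] == tag)

print(buyer_view("feedlot-A"))       # [{'tag': 'RFID-001', 'origin': 'USA'}]
print(regulator_trace("RFID-002"))   # full record, including seller
```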

Talking in Circles: Selective Sharing in Google+

Google+ is now 1 year old. In conjunction with Google, researchers Sanjay Kairam, Michael J. Brzozowski, David Huffaker, and Ed H. Chi have published Talking in Circles: Selective Sharing in Google+, the first empirical study of behavior in a network designed to facilitate selective sharing:

"Online social networks have become indispensable tools for information sharing, but existing ‘all-or-nothing’ models for sharing have made it difficult for users to target information to specific parts of their networks. In this paper, we study Google+, which enables users to selectively share content with specific ‘Circles’ of people. Through a combination of log analysis with surveys and interviews, we investigate how active users organize and select audiences for shared content. We find that these users frequently engaged in selective sharing, creating circles to manage content across particular life facets, ties of varying strength, and interest-based groups. Motivations to share spanned personal and informational reasons, and users frequently weighed ‘limiting’ factors (e.g. privacy, relevance, and social norms) against the desire to reach a large audience. Our work identifies implications for the design of selective sharing mechanisms in social networks."

While selective sharing may be available on other networks (e.g. ‘Lists’ on Facebook), Google is sending signals that making selective sharing controls central to the sharing model offers a great opportunity to help users manage their self-presentations to multiple audiences in the multi-tenancies we call online social networks. Or, put more simply, selective sharing multiplies opportunities for online engagement.

For the purposes of this blog post, we adopt Google’s definition of "selective sharing" to mean providing information producers with controls for overcoming both over-sharing and fear of sharing. Furthermore, we agree with Google that the design of tools for such selective sharing controls must allow users to balance sender and receiver needs, and to adapt these controls to different types of content. So defined, we believe that almost seven years after the Texas livestock market project, a tipping point has been reached that militates in favor of selective sharing from within supply chains and on to consumers. Now, a lot has happened over the last seven years to bring us to this point (e.g., the rise of social media, CRM in the Cloud, the explosion of mobile technologies, etc.). But the tipping point we are referencing "follows the money", as they say. We believe that the tipping point toward selective sharing is to be found in the incentives provided by affiliate networks like the Google Affiliate Network.

Google Affiliate Networks

The Google Affiliate Network provides a means for affiliates to monetize websites. Here’s a recent video presentation by Google, Automating the Use of Google Affiliate Links to Monetize Your Web Site:


Presented by Ali Pasha & Shaun Cox | Published 2 July 2012 | 47m 11s

The Google Affiliate Network provides incentives for affiliates to monetize their websites based upon actual sales conversions instead of indirectly based upon the number of ad clicks. These are web sites (e.g., http://www.savings.com/) where ads are the raison d'être of the web site. High value consumers are increasingly scouring promotional, comparison, and customer loyalty sites like savings.com for deals and generally more information about products. Compare that with websites where ads are peripheral to other content (e.g., http://www.nytimes.com/) and where ad clicks are measured using Web 2.0 identity and privacy sharing models.

In our opinion the incentives of affiliate networks have huge potential for matching up with an unmet need in the Cloud for all participants - large and small - of enterprise supply chains to selectively monetize their data assets. For example, data assets pertaining to product traceability, source, sustainability, identity, authenticity, process verification and even compliance with human rights laws, among others, are there to be monetized.

Want to avoid buying blood diamonds? Go to a website that promotes human rights and click on a diamond product link that has been approved by that site. Want to purchase only “Made in USA” products? There’s not a chamber of commerce in the U.S. that won’t want to provide a link to their members’ websites who are also affiliates of an incentive network. Etc.

Unfortunately, these data assets are commonly not shared because of the complete lack of tools for selective sharing, and the fear of sharing (or understandable apathy) engendered under “all or nothing” sharing models. As published back in 1993 by the MIT Sloan School in Why Not One Big Database? Ownership Principles for Database Design: "When it is impossible to provide an explicit contract that rewards those who create and maintain data, ‘ownership’ will be the best way to provide incentives." Data ownership matters. And selective sharing – appropriately designed for enterprises – will match data ownership up with available incentives.

Remember that thought we asked you to hold?

In our opinion the Google Affiliate Network is already providing incentives that are a sustainable, market-driven substitute for what turned out to be unsustainable, USDA-driven incentives. We presume that Google is well aware of potential synergies between Google+ and the Google Affiliate Network. We also presume that Google is well aware that "[w]hile business-critical information is often already gathered in integrated information systems, such as ERP, CRM and SCM systems, the integration of these systems itself (as well as the integration with the abundance of other information sources) is still a major challenge."

We know this is a "big idea" but in our opinion the dynamic blending of Google+ and the Google Affiliate Network could over time bring within reach a holy grail in web communications – the cracking of the data silos of enterprise class supply chains for increased sharing with consumers of what to-date has been "off limits" proprietary product information.

A glimpse of the future may be found, for example, in the adoption of Google+ by Cadbury UK, but the design for selective sharing of Google+ is currently far from what it needs to attract broad enterprise usage. Sharing in Circles brings to mind Eve Maler’s blog post, Venn and the Art of Data Sharing. That’s really cool for personal sharing (or empowering consumers, as is the intent of VRM) but for enterprises Google+ will need to evolve its selective sharing functionalities. Sure, the data silos of commercial supply chains are holding personal identities close to their chests (e.g., CRM customer lists) but they’re also walling off product identities with every bit as much zeal, if not more. That creates a different dynamic that, again, typical Web 2.0 "all or nothing" sharing (designed, by the way, around personal identities) does not address.

It should be especially noted, however, that Eve Maler and the User-Managed Access (UMA) group at the Kantara Initiative are providing selective sharing web protocols that place "the emphasis on user visibility into and control over access by others". And Eve in her capacity at Forrester has more recently provided a wonderful update of her earlier blog post, this one entitled A New Venn of Access Control for the API Economy.

But in our opinion, before Google+, UMA or any other companies or groups working on selective sharing can have any reasonable chance of addressing "data ownership" in enterprises and their supply chains, they will need to take a careful look at incorporating fixed data elements at a single location with authorizations. It is in regard to this point that we seek to augment the current status of selective sharing. More about that line of thinking (and activities within the Wikidata Project) in our earlier “tipping point” blog post, The Tipping Point Has Arrived: Trust and Provenance in Web Communications.

What do you think? Share your conclusions and opinions by joining us at @WholeChainCom on LinkedIn at http://tinyurl.com/WholeChainCom.

Thursday
January 26, 2012

Whole Chain Traceability: A Successful Research Funding Strategy

The following work product represents a critical part of the first successful strategy for obtaining funding from the USDA relative to "whole chain" traceability. It is the work of this author as woven into a USDA National Integrated Food Safety Initiative (NIFSI) funding submission of the Whole Chain Traceability Consortium™ led by Oklahoma State University and filed in June 2011. This work highlights the usefulness of Pardalis' U.S. patents and patents pending to "whole chain" traceability. It highlights the efficacy of employing granular information objects in the Cloud for providing consumer accessibility to any agricultural supply chain. In August 2011 notification was received of an award ($543,000 for 3 years) under the USDA NIFSI for a project entitled Advancement of a whole-chain, stakeholder driven traceability system for agricultural commodities: beef cattle pilot demonstration (Funding Opportunity Number: USDA-NIFSI RFA (FY 2011), Award Number: 2011-51110-31044).

With the funding of the NIFSI project, the USDA has funded a food safety project that is distinguishable from the Food Safety Modernization Act projects being funded by the FDA and conducted by the Institute of Food Technologists (IFT). Unlike the IFT/FDA projects, the scope of the funded NIFSI project uniquely encompasses consumer accessibility to supply chain information.

A useful explanation of the benefits of a “whole chain” traceability system may be made with critical tracking identifiers (CTIDs), critical tracking events (CTEs) and Nodes as described in the IFT/FDA Traceability in Food Systems Report. CTEs are those events that must be recorded in order to allow for effective traceability of products in the supply chain. A Node refers to a point in the supply chain where an item is produced, processed, shipped or sold. A CTE may be loosely defined as a transaction. Every transaction involves a process that may be separated into a beginning, middle and end.

While important and relevant data exists in any of the phases of a CTE transaction, the entire transaction may be uniquely identified and referenced by a code referred to as a critical tracking identifier (CTID). For example, with the emergence of biosensor development for the real-time detection of foodborne contamination, one may also envision adding associated real-time environmental sampling data from each node.

What is not described or envisioned in the IFT/FDA Traceability in Food Systems Report is the inherent limitation of even top-of-the-line “one up/one down” product traceability systems: notwithstanding the use of a single CTID, they limit the data sharing options provided to both stakeholders and government regulators. Compare a single transaction-level identifier, CTID2, with datum-level identifiers CTID2A, CTID2B, etc. The IFT/FDA food safety projects described above are at best implementing top-of-the-line "one up/one down" product traceability systems with the use of a single CTID. But with “whole chain” product traceability, in which CTID2 is essentially assigned down to the datum level, transactional and environmental sampling data may in real time be granularly placed into the hands of supply chain partners, food safety regulators, or even retail customers.
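The difference in granularity can be sketched as follows; the identifiers and field names are illustrative, not drawn from the IFT/FDA report:

```python
# A sketch contrasting the two granularities: "one up/one down" assigns one
# CTID to a whole transaction, while "whole chain" traceability assigns
# identifiers down to the datum level so each element can be shared on its
# own terms. All identifiers and fields here are hypothetical.

# One CTID for the entire transaction: share all of it or none of it.
one_up_one_down = {"CTID2": {"ship_date": "2012-01-26",
                             "lot": "42",
                             "temp_log": "OK"}}

# Datum-level identifiers: each element is independently addressable.
whole_chain = {"CTID2A": ("ship_date", "2012-01-26"),
               "CTID2B": ("lot", "42"),
               "CTID2C": ("temp_log", "OK")}

def share(ctids: list[str]) -> dict:
    # A regulator or retail customer can be granted exactly these data.
    return {c: whole_chain[c] for c in ctids}

print(share(["CTID2C"]))   # only the environmental sampling datum
```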

The scope of “whole chain” information sharing within the funded USDA NIFSI project goes well beyond the “one up/one down” information sharing of the IFT/FDA projects. The NIFSI project addresses a new way of looking at information sharing for connecting supply chains with consumers. This is essentially accomplished with a system in which a content provider creates data which is then fixed (i.e., made immutable), so that users can access that immutable data but cannot change it.

The granularity of Pardalis' Common Point Authoring (CPA) system (as is necessary for a “whole chain” product traceability system) is characterized by a patent drawing of an informational object (e.g., a document, report or XML object) whose immutable data elements are radically and uniquely identified. The parallel between datum-level identifiers such as CTID2A and CTID2B and those immutable data element identifiers should be self-evident.

For the purposes of the NIFSI funding opportunity, the Pardalis CPA system invention was appropriately characterized as a “whole chain” product traceability system. A further, high-altitude drawing characterized the application of the invention to a major U.S. agricultural supply chain.

Several questions in the USDA's NIFSI "Review Package" were required to be addressed before actual funding. The responses to two of those questions were crafted by this author. They are worth inserting here ....

Question 1: A reviewer was skeptical that the system would be capable of handling different levels of data (consumer, producer, RFID, bar code) seamlessly.

There is an assumption in the reviewer’s opinion that data is different because it is consumer, producer, RFID, bar code, etc. The proposed pilot project is based on a premise that data is data. The difference in data that is perceived by the reviewer is not in its categorization per se but in its proprietary nature. That is, it is perceived to be different because it is locked up (often in categories of consumer, producer, RFID, bar code, etc.) in proprietary data silos along the supply and demand chains. It is reasonable to have this viewpoint given the prevalence of "one-up/one-down" data sharing in supply chains. As stated in the Positive Aspects of the Proposal, “[t]he use of open source software and the ability to add consumer access to the tracability (sic) system set this proposal apart from other similar proposals.” The proposed pilot project will demonstrate how an open source approach to increasing interoperability between enterprise data silos (buttressed by metadata permissions and security controls in the hands of the actual data producers) will provide new "whole chain" ways of looking at information sharing in enterprise supply and consumer demand chains. For instance, consumers could opt for retailers to automatically populate their accounts from their actual point-of-sale retail purchases. Consumers could additionally populate accounts in a multi-tenancy social network (like Facebook) using smartphone bar code image capturing applications. Supplemented by cross-reference to an industry GTIN/GLN database, the product identifiers would be associated with company names, time stamps, location and similar metadata. This could empower consumers with a one-stop shop for confidentially reporting suspicious food to FoodSafety.gov. Likewise, consumers could be provided with real-time, relevant food recall information in their multi-tenancy, social networking accounts, and their connected smartphone applications.
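A rough sketch of the consumer-facing flow described in this response might look like the following, with entirely hypothetical data and names: a scanned barcode (GTIN) is cross-referenced to company metadata and matched against active recalls, so that relevant alerts can surface in the consumer's account in real time.

```python
# A sketch of the consumer-side flow under stated assumptions: the GTIN
# index and recall feed are hypothetical stand-ins for an industry GTIN/GLN
# database and a FoodSafety.gov recall list.

GTIN_INDEX = {"00012345678905": {"company": "Acme Foods", "gln": "0001234000018"}}
ACTIVE_RECALLS = {"00012345678905": "Lot 42: possible contamination"}

def scan(gtin: str, timestamp: str) -> dict:
    # Cross-reference the scanned product identifier to company metadata.
    record = {"gtin": gtin, "scanned_at": timestamp, **GTIN_INDEX.get(gtin, {})}
    if gtin in ACTIVE_RECALLS:
        record["recall_alert"] = ACTIVE_RECALLS[gtin]   # push to the consumer
    return record

print(scan("00012345678905", "2011-08-05T10:30:00Z"))
```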

Question 2: A member of the panel was skeptical that the consumer accessibility would be largely attractive as this capability currently has limited appeal among consumers.

We recognize this viewpoint to be a highly prevalent opinion within an ag and food industry predominantly sharing data in a “one-up/one-down” manner. When one uses a smartphone today to scan an item in a grocery store, the probability of being able to retrieve any data from the typical ag and food supply chain is very low. However, we have been highly influenced in our thinking by the existing data showing that many consumers do not take appropriate protective actions during a foodborne illness outbreak or food recall. Furthermore, 41 percent of U.S. consumers say they have never looked for any recalled product in their home. Conversely, some consumers overreact to the announcement of a foodborne illness outbreak by not purchasing safe foods. We have been further influenced by how producers of organic and natural products are adopting rapidly evolving smartphone and mobile technologies as a way of communicating directly with consumers, and increasing their market share. We contend that by increasing supply chain transparency with real-time, whole chain technologies, “consumer accessibility” will become more and more appealing.  We contend this to be especially true when there is a product recall and the products are already in the home. And so, again, our high interest in working with FoodSafety.gov.

The foregoing strategy and comments may be freely cited with attribution to this author as CEO of Pardalis, Inc. It is offered in the spirit of the "sharing is winning" principles of the Whole Chain Traceability Consortium™ (now being rebranded as @WholeChainTrace™). However, no right to use Pardalis' patent or patents pending is conveyed thereby. If you wish to be a research collaborator with Pardalis, or to license or use Pardalis' patented innovations, please contact the author.

Go to Part II

Friday
August 5, 2011

A New Way of Looking at Information Sharing in Supply & Demand Chains

The Internet is achieved via layered protocols. Transmitted data, flowing through these layers, are enriched with metadata necessary for the correct interpretation of the data presented to users of the Web. Tim Berners-Lee, inventor of the Web, says, “The Web was originally conceived as a tool for researchers who trusted one another implicitly …. We have been living with the consequences ever since ….” “[We need] to provide Web users with better ways of determining whether material on a site can be trusted ….”

Our lives have nonetheless become better as a result of Web service providers like Google and Facebook. Consumers are now conditioned to believe that they can – or should be able to - search and find information about anything, anytime. But the service providers dictate their quality of service in a one-way conversation that exploits the advantages of the Web as it exists. What may be considered trustworthy content is limited to that which is dictated by the service providers. The result is that consumers cannot find real-time, trustworthy information about much of anything.

Despite all the work in academic research, there is still no industry solution that fully supports the sharing of proprietary supply chain product information between “data silos”. Industry remains in the throes of one-up/one-down information sharing when what is needed is real-time “whole chain” interoperability. The Web needs to provide two-way, real-time interoperability in the content provided by information producers. Immutable objects have traditionally been used to provide more efficient data communications between networked machines, but not between information producers. Now researchers are innovatively coming up with new ways of using immutable objects in interoperable, two-way communications between information content providers.


Pardalis’ protocols for immutable informational objects make possible a value chain of two-way, interoperable sharing that makes information more available, trustworthy, and traceable. This, in turn, incentivizes increases in the quality and availability of new information leading to new business models.