Can we trust AI to improve the quality of our product data?


Thursday 28 February 2019 
Etienne Sola

In the field of product information quality management, introducing AI requires paying particular attention to the interaction between AI and man, to maintain a high level of trust.

Early artificial intelligence (AI) projects, in trying to reproduce human intelligence, taught us that machines, just like people, make errors. After initially falling out of favor, AI made a comeback a few years ago thanks to the trust that man now places in machines. In the domain of product information quality management, introducing AI requires paying particular attention to the interaction between AI and man, to maintain that confidence.

The return of AI

Artificial intelligence is nothing new. This scientific discipline originated in the 1950s with the work of Alan Turing and became particularly popular in the 60s and 70s. It then saw a long period of disuse, almost becoming “has-been”: AI was no longer of interest. I personally witnessed the end of the artificial intelligence courses at my engineering school in the early 2000s. Our future employers weren't interested in the topic, and AI was removed from the curriculum when I left in 2003.

Why has artificial intelligence made a comeback in recent years as the essential technology of our future?

Many people say that computing power, now available and accessible to all, is what (finally) enables us to drive artificial intelligence projects. That is one essential element, but other factors also explain the return of AI to center stage.

Trust is back

An essential factor in the realm of AI, the confidence that man has in “the machine” is the basis on which any AI project should be founded.

The “MYCIN” project

Developed in the 1970s at Stanford University, this expert system could produce medical diagnoses for patients by analyzing their symptoms and the results of their medical exams. Analyses were carried out to assess the capacities of the MYCIN system. MYCIN achieved a diagnostic reliability of around 65%, far higher than that of the medical experts, who scored somewhere between 42.5% and 62.5%. Despite this high level of diagnostic reliability, MYCIN was never used. It was seen as imperfect and as a potential source of legal liability in case of an incorrect diagnosis. In short, humans were not ready to trust a machine which, despite being imperfect, still performed better than they did.

The man/machine partnership

The aim of artificial intelligence is to reproduce human intelligence as accurately as possible. The proverb “to err is human” shows that we are conscious of the limits of our own capacities. As such, AI should also be allowed some leeway for error, which would only reflect human error.

Today, technology is all around us. It has even become an essential part of our daily lives, or at least that's how it seems. The level of trust that human beings put in machines has progressed enormously. Our use of, and reliance on, price comparison websites for finding the cheapest flight, hotel or holiday package is a great example of this. We avoid hours of tedious work, even if the task itself is quite simple. Here, a person's reliability rests on their “capacity” to compare all available offers at the same time, a capacity that is very low compared to that of the machine they rely on.

The considerable development of the internet is what naturally brought mankind closer to machines, machines with which we now interact every day. It is in this context that AI projects made their comeback.

Limited trust

Let's use the example of autonomous vehicles, one of the most high-profile topics in the world of AI.

The recent accidents of Tesla vehicles in “autopilot” mode have increased driver fears. In 2018, 73% of Americans were afraid to travel in an autonomous vehicle. However, the statistics presented by Tesla show that with one fatal accident per 209 million kilometers driven, their autonomous vehicle is more reliable than the average vehicle in the United States, which suffers one fatal accident per 152 million kilometers driven. Worldwide, the figure is one fatal accident for every 97 million kilometers!

Despite these fears, AI is making headway in the industry. The autonomous vehicle is a reality of tomorrow. Mankind will soon be driven by artificial intelligence.

The AI paradox

Trust in AI has come a long way, so where's the limit? Man doesn't want machines to make mistakes in his place. The paradox is that humans' awareness of their own limits encourages them to keep inventing machines that will reduce their rate of errors.

AI and the quality of product information

In the field of product information quality management, software solutions have emerged over the past few years (PIM, MDM, DAM…). They make it possible to manage all product characteristics in fine detail (marketing, technical, logistics, sales...) through a single centralized reference source.

The main objective of these systems is to optimize the quality of product information.
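As an illustration, the single centralized record described above can be pictured as a simple data structure grouping each family of characteristics. This is only a sketch: the field names and the example SKU are hypothetical and do not correspond to any particular vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class ProductRecord:
    """Illustrative single-source-of-truth record a PIM/MDM might hold."""
    sku: str
    marketing: dict = field(default_factory=dict)   # names, descriptions
    technical: dict = field(default_factory=dict)   # dimensions, materials
    logistics: dict = field(default_factory=dict)   # weight, packaging
    sales: dict = field(default_factory=dict)       # prices, channels

# Every channel (web store, catalogue, marketplace) reads from this one record,
# so a correction made here propagates everywhere.
record = ProductRecord(
    sku="TS-042-RED",
    marketing={"name": "T-shirt, red"},
    sales={"price_eur": 19.90},
)
```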

An incorrect image (a photo of the blue model under the reference for the red model) on an online store has real consequences: negative comments from buyers, product returns, and high processing costs. Incorrect price indications are another classic error with potentially drastic consequences.

To improve the quality, and therefore the reliability of product information and avoid these problems, adding AI to the PIM / MDM seemed like a good idea. AI could analyze images of products and check their compliance with the rest of the product fact sheet.
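As a minimal sketch of this idea, assuming we already have an image classifier whose predictions are handed to us (the classifier itself is out of scope here), the compliance check boils down to comparing those predictions with the fact sheet and flagging mismatches for a human to review. Function and field names are illustrative.

```python
def check_image_compliance(predicted, fact_sheet, attributes=("color",)):
    """Compare a (hypothetical) image classifier's predictions with the
    product fact sheet; return the attributes that disagree."""
    mismatches = {}
    for attr in attributes:
        expected = fact_sheet.get(attr)
        detected = predicted.get(attr)
        if expected and detected and expected.lower() != detected.lower():
            mismatches[attr] = (expected, detected)
    return mismatches

# The blue-photo-on-the-red-model error from above:
fact_sheet = {"sku": "TS-042-RED", "color": "red"}
predicted = {"color": "blue"}  # what the classifier saw in the photo
print(check_image_compliance(predicted, fact_sheet))
# → {'color': ('red', 'blue')}
```

Note that the check only reports the discrepancy; deciding which side is wrong, the photo or the fact sheet, is left to a person.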

But if AI makes errors and injects them into the product repository, it will destroy the efforts of the teams that built this centralized database. In the field of PIM / MDM, AI cannot be implemented as an “autopilot” that analyzes data and detects and corrects errors on its own.

For a PIM / MDM solution, implementing an interaction between the AI and the human being is mandatory. Control and decision-making must remain the task of humans. Following the example of recent medical diagnostic tools, AI must be considered as an assistant that enables humans to increase their performance, not as an autonomous machine that replaces them. AI should be combined with traditional algorithms of proven reliability that have already gained the trust of users. Finally, AI must systematically be capable of estimating its own reliability, and as such, also know when to “shut up” rather than talk rubbish.
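The last two requirements, human control and self-estimated reliability, can be sketched as a simple triage rule: the AI never applies a correction itself, it either queues a suggestion for human validation or abstains when its confidence is too low. The threshold value below is a hypothetical placeholder to be tuned per catalogue.

```python
REVIEW_THRESHOLD = 0.90  # hypothetical cut-off, tuned per catalogue

def triage(suggestion, confidence, threshold=REVIEW_THRESHOLD):
    """Route an AI suggestion: never auto-apply. Either queue it for
    human validation, or have the AI 'shut up' (abstain)."""
    if confidence >= threshold:
        return ("review", suggestion)  # a human decides; the AI only assists
    return ("abstain", None)           # below threshold: say nothing

print(triage("set color to red", 0.95))  # → ('review', 'set color to red')
print(triage("set color to red", 0.50))  # → ('abstain', None)
```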

Introducing AI into PIM and MDM solutions requires setting up interfaces for data exchange, inspection and validation if we don't want it to be discredited over poorly understood issues of trust.
