17 October 2018

Artificial intelligence threatened by the fake

“Fake” has been one of the most frequently used words in the English language in recent weeks. Often paired with “news” to refer to biased information, the term could just as well apply to Artificial Intelligence (AI), or rather to companies allegedly specialized in AI.

In the middle of summer, the British daily The Guardian focused on six of these companies, including Spinvox, a startup launched in 2008 and positioned as an expert in the technology. Christina Domecq’s company claimed to use AI to convert voice messages into SMS. In fact, there was no AI, just thousands of operators working in call centers located abroad.

To clear their names, some companies caught by the authorities invoked the “Amazon precedent”. Indeed, behind Amazon’s services for transcribing meetings or identifying people in photos, there are human workers known as “click workers” or “Amazon Mechanical Turks”. While the poor working conditions and low wages of these workers have been a topic of debate, Amazon has been totally transparent. The firm has even put its cards on the table by creating software, accessible to everyone, for tapping into this “human intelligence”.

“Fake it until you make it”

Obviously, AI does not have a monopoly on fakery. In recent years, the environmental cause has also become a stage for impostors. For example, many economic actors pretend to be eco-responsible by printing the green and blue colours associated with sustainable development on their packaging without changing their industrial practices.

AI makes people dream and is seen as the missing link between reality and science fiction. The media swears by it and investors have made it a golden calf.

The proof: last June, the investment fund Serena Capital published the third edition of its annual study on investment in AI startups in Europe. The figures speak for themselves: the number of startups raising funds on the strength of AI has gone from 71 to 211, with an average round of nearly €4 million. Unable to develop true AI, some have chosen fakery to get their share of the cake while scrambling behind the scenes to acquire real AI as quickly as possible. This is the famous “fake it until you make it”.

This context favours abuse, but it would be in the ecosystem’s best interest to draw inspiration from the way the world of science combats fraud in its specialized journals.

A labelled artificial intelligence

According to the Finnish researchers Cenyu Shen and Bo-Christer Björk, the number of dubious articles published in the scientific press increased eightfold between 2010 and 2014, from 50,000 to 400,000. A leap partly linked to the pressure on scientists to publish their research results as regularly as possible in scientific media, at the risk of otherwise disappearing from the radar.

To counter this phenomenon, evaluation agencies have been created. In France, they operate under the authority of the High Council for the Evaluation of Research and Higher Education (HCERES). As its president, Michel Cosnard, recently explained, “quality certificates issued by agencies can be purchased, as can source citations.” To truly protect the world of science, it is important to return to basics by selecting high-quality experts to assess the reliability of peer publications.

A similar approach could emerge in the world of AI. Why not consider creating an authentication label under the impetus of the State Secretariat for Digital? Any technology that uses data and algorithms to autonomously perform complex intellectual tasks previously carried out by or with humans would be considered AI. Made public, such a label would make it easier to identify the real AI players and weed out the fraudulent ones, while also keeping investment funds from betting on deceitful startups. In many areas, particularly those where consumer safety is at stake, labels have proved their worth. Why not use them for AI?

While the practical details of such a mechanism are open to debate, its purpose is not. If we want to promote the acceptance of AI, we must start by putting our own house in order.