France’s President Emmanuel Macron last month laid out a vision for artificial intelligence (AI) dominance in a major speech, one he hopes will concentrate top global AI resources in France rather than in China or the U.S. But Macron's vision, especially when it comes to realistic privacy goals, seems to ignore key parts of what AI truly is.
Macron gave an interview to Wired where he elaborated on his AI strategy. Let's consider his key points:
"But the day you start dealing with privacy issues, the day you open this data and unveil personal information, you open a Pandora’s Box, with potential use cases that will not be increasing the common good and improving the way to treat you. […] This can be a very profitable business model: this data can be used to better treat people, it can be used to monitor patients, but it can also be sold to an insurer that will have intelligence on you and your medical risks, and could get a lot of money out of this information. The day we start to make such business out of this data is when a huge opportunity becomes a huge risk.
“When you look at artificial intelligence today, the two leaders are the U.S. and China. In the U.S., it is entirely driven by the private sector, large corporations, and some startups dealing with them. All the choices they will make are private choices that deal with collective values. That’s exactly the problem you have with Facebook and Cambridge Analytica or autonomous driving.
“On the other side, Chinese players collect a lot of data driven by a government whose principles and values are not ours. And Europe has not exactly the same collective preferences as U.S. or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI.”
At this point, Macron has identified the risks. But his response to them? Open algorithms, which is where his argument veers away from the real issue.
"We will increase the collective pressure to make these algorithms transparent. We will open data from government, publicly funded projects, and we will open access from this project and we will favor, incentivize the private players to make it totally public and transparent.
"Obviously, some of them will say, 'There is a commercial value in my algorithm. I don't want to make it transparent.' But I think we need a fair discussion between service providers and consumers, who are also citizens and will say: 'I have to better understand your algorithm and be sure that this is trustworthy.'"
The problem with AI privacy is not primarily about the algorithms. Yes, some dishonorable marketers may try to sneak in code that covertly does something other than what they claim. But the real problem is the data and how companies use it, and forcing algorithms to be transparent won't help with that.
Consider an example from the fintech and financial markets. Today, many of these companies are pushing AI bots on their sites that attempt to gather extensive personal financial information about customers, with the stated goal of delivering better recommendations for budgeting, saving, retirement, and various other financial decisions. They don't merely want to know salary and mortgage costs. They want to know whether the customer has loaned a relative money and, if so, how much, when, and how likely the relative is to pay it back. They want to know whether the customer has recently let a down-on-his-luck friend move into their home, draining the customer's take-home pay.
A thorough examination of that algorithm will confirm what the bank is telling customers. But once that data has been collected, analyzed, and reported to the bank, what's to stop the bank from using it for other purposes, such as denying that same customer a loan six months later?
These are the same concerns that apply to the most frequently cited suspects in the AI abuse realm, which the French president referred to as GAFA, an acronym for Google, Apple, Facebook, and Amazon. Knowing how those players collect information doesn't answer the real question of what they will eventually do with it.
Another privacy concern that isn't addressed by open algorithms is how AI can convert innocuous, non-private, non-sensitive information into highly sensitive data.
When you sign up for a loyalty card at a grocery store, the fine print outlines how your shopper data can be used. If you don't consider your grocery purchases to be private, you may gladly agree to this trade. But what if a company took that data and looked for hints of political opinion in those items? And then sold that information to companies that only want to hire people who think like their current management? At that point, you're being hired or rejected not because of your actual political opinion, but because you bought a food item overwhelmingly purchased by people who hold one.
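To make the mechanics concrete, here is a minimal sketch of how such an inference could work. Everything in it is hypothetical: the item names, the "lift" weights (how much more often buyers of an item hold a given opinion than the general population), and the scoring rule are invented for illustration, not drawn from any real system.

```python
# Hypothetical lift scores: how much more often buyers of this item
# hold a given political opinion, relative to the general population.
# (Invented values for illustration only.)
ITEM_LIFT = {
    "brand_a_coffee": 1.4,
    "organic_kale": 2.1,
    "brand_b_soda": 0.6,
    "hunting_magazine": 0.3,
}

def opinion_score(basket):
    """Average the per-item lifts into one propensity score.

    Above 1.0 suggests the shopper is more likely than average to
    hold the opinion; below 1.0, less likely.
    """
    lifts = [ITEM_LIFT[item] for item in basket if item in ITEM_LIFT]
    if not lifts:
        return 1.0  # no signal: assume the base rate
    return sum(lifts) / len(lifts)

print(opinion_score(["organic_kale", "brand_a_coffee"]))  # 1.75
```

The point of the sketch is how little machinery is needed: a lookup table plus an average turns a shopping basket, which the shopper never considered sensitive, into a score an employer could screen on.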
Speaking of grocery item analytics, let's say that you do care and want to be left alone. This is an extreme example, but imagine that you take out a lot of cash from the bank (never enough at any one time to trigger U.S. Treasury reporting requirements) and take a series of public transportation trips to a destination a few thousand miles away (to avoid having your license plates spotted and logged). You use an assumed name and pay only with cash.
Law enforcement professionals have used grocery records to detect exactly these patterns. Your old grocery store hands over your history, which reveals certain brands of shampoo, flavors of ice cream, and lots of other purchase patterns. Software then scans grocery purchases across the country, looking for baskets that match yours. Once a match is found, investigators note the typical times of those purchases, and then, bingo, you are as good as found.
Macron, though, is absolutely right with his Pandora's Box comparison. The flaw in his thinking is the assumption that the awesome power of AI is controllable. It's not, any more than it is stoppable. To speak of privacy limitations with AI is to fundamentally misunderstand AI and the nature of technology and business.
The French president would likely counter that this proves his point, that it's an explicitly American view that data is owned by businesses. He would likely argue that such American thinking is precisely why he feels the need to create a European environment for properly handling AI privacy.
The flaw in that thinking is the assumption that companies like GAFA limit their efforts to the United States, which they clearly don't. French citizens will keep making Amazon purchases, running Google searches, spelling out their private lives to Facebook, and sharing full geolocation data with Apple. France can't address this global issue with a one-nation—or even a one-continent—approach.
AI—and everything it will do to privacy—is a fait accompli. If AI privacy improvements are needed, the focus must be on restricting the use of data beyond its original intent, with penalties attached. Anything short of that, and open algorithms will do little more than show us the code that is dismantling our privacy.
Evan Schuman has been a technology writer for a lot longer than he'll admit. Beyond writing a weekly column for Computerworld and security pieces for SCMagazine and PCMagazine, he moderates podcasts, webcasts and live events on B2B tech topics.