As of May 2020, it is the most significant data protection fine ever issued.
GDPR experience so far (May 2018 to May 2019)
GDPR assessment after one year
As reported by the EU commission in May 2019:
- 3 countries (Greece, Slovenia, and Portugal) were still adapting their national legislation to the GDPR
- 144,376 queries and complaints to data protection authorities have been filed
- 89,271 data breaches have been reported
- 5 fines have been issued (for a total of €52m)
What can we see here?
- There is no avalanche of multi-million fines as predicted by scaremongers. The only big one is related to Google for its lack of transparency about the way it collects personal data for advertising.
- The GDPR neither changed nor blocked everything, as many feared. It's an evolutionary process. Over 1,000 US sites blocked EU citizens in 2018; that is no longer the case.
- The GDPR is having an influence worldwide and, more specifically, in the United States. The debate is heating up in the country for two reasons: the introduction of the GDPR and California's CCPA.
- The GDPR is not only about consent. That's probably one aspect that has been misunderstood.
- Meanwhile, the UK and France have seen a flood of businesses reporting themselves for violations.
- It's interesting to note that in January 2019, Japan put in place a set of rules (an adequacy decision) to bridge the differences between its data protection system and the GDPR.
Biometric data protection in the United States
In the United States, there is no single, comprehensive federal law regulating the collection and use of personal data in general, or biometric data in particular. Instead, the country has a patchwork system of federal and state laws and regulations that can sometimes overlap or contradict one another.
But that's not all.
Government agencies and industry groups have developed self-regulatory guidelines drawn from best practices, which regulators now take into account.
Apple, Facebook, Google, and Microsoft have been self-regulating for some time, even though these companies have been investing heavily in the creation of powerful facial recognition technologies.
Facebook, for example, has an agreement with the Federal Trade Commission. Under this, the company has to first obtain "affirmative, express consent" before going beyond a user's specified privacy settings.
In July 2018, Microsoft President Brad Smith called for federal regulation for facial recognition software use and urged Congress to study it and oversee its implementation.
This unusual blog post illustrates how powerful technologies involving artificial intelligence — such as facial recognition — have set off a controversial battle among tech executives.
Identification without consent in 45 states
As of May 2020, it is legal in 45 states for software to identify an individual using images taken without consent while they are in public. New York, California, Washington, Illinois, and Texas don't allow it for commercial use.
A New York State law, the Stop Hacks and Improve Electronic Data Security (SHIELD) Act, took effect on 21 March 2020.
This evolution of New York State’s existing data security law defines private information as personal data including:
- Social security number
- Driver’s license or state ID number
- Financial account information
- Biometric data
- Username or email address in combination with a password or security question and answer.
The law (New York State bill S5575B, a.k.a. the SHIELD Act) requires businesses to implement a cybersecurity program and protective measures for NY State residents' data.
The act applies to businesses that collect the personal information of NY residents.
With the act, New York now stands beside California.
In 2020, California became the fourth state with a biometric privacy law in effect. It covers any business entity that collects biometric identifiers for commercial purposes.
So what's the situation in most states?
Facial recognition, for example, can be performed inconspicuously from a distance without the individual actively providing any information.
There's already facial recognition software that shops can use to flag pre-identified shoplifters or to identify customers who return goods too often. And it doesn't take much to imagine that - thanks to Facebook - these shops could quickly get immediate information on their customers when they enter the store: who they are, where they live, their income or credit score.
From a privacy perspective, these practices conflict with critical principles such as anonymity, consent, and purpose.
Let's dig a little deeper.
Many parties addressing the issue
The question of consent and how to manage biometric data is sensitive, and it seems as if virtually every agency in Washington is addressing at least part of the issue:
- The National Institute of Standards and Technology for the evaluation of biometric technologies.
- The Federal Trade Commission for data security with the FTC Act (15 U.S.C. §§41-58). This consumer protection law prohibits unfair or deceptive practices. It's been applied to offline and online privacy and data security policies.
- The Food and Drug Administration for the security of implants.
- The Department of Health and Human Services with the Health Insurance Portability and Accountability Act (42 U.S.C. §1301 et seq.) for medical information. The HIPAA Privacy Rule of 2003 regulates the use and disclosure of Protected Health Information (PHI) held by "covered entities." They may disclose protected health information to law enforcement officials for law enforcement purposes and administrative requests; or to identify or locate a suspect, fugitive, material witness, or missing person.
Four states have enacted a protection law for biometric identifiers, and several others are debating one.
So, US regulators increasingly have to focus on the use of biometric data.
Four significant steps in 2019-2020
Things have been moving fast in the last months in the US.
At least four significant privacy legislation fronts are worth mentioning:
- The California Consumer Privacy Act and NY's SHIELD
- The 2008 Illinois Biometric Information Privacy Act (BIPA) and the 25 January 2019 ruling in the Rosenbach v. Six Flags Entertainment Corporation case.
- Federal legislative hearings
- The anti-surveillance ordinance signed on 6 May 2019 by San Francisco's Board of Supervisors.
#1 California's new privacy law
The California Consumer Privacy Act (CCPA) is a bill passed in June 2018. It enhances privacy rights and consumer protection for residents of California. The CCPA became effective on 1 January 2020.
California is the fifth-largest economy in the world and home of many tech giants. It is also traditionally a trend-setting state for data protection and privacy in the US.
The law is frequently presented as a potential model for a US federal data privacy law. In that sense, the CCPA has the potential to become as consequential as the GDPR.
The GDPR, for its part, has inspired many national laws outside the EU, including in Chile, Japan, Brazil, South Korea, Argentina, and Kenya.
The CCPA's definition of biometric data is a bit broader than the GDPR's: “an individual’s physiological, biological or behavioral characteristics, including an individual’s DNA, that can be used, singly or in combination with each other or with other identifying data, to establish individual identity.”
The rights provided to California consumers to protect their personal information and biometric data include:
- Accessing the data (right of disclosure or access),
- Deleting them (right to be forgotten),
- Taking them (data portability – the data must be received in a commonly used and readable format),
- Requesting businesses not to sell their personal information,
- Opting out (opt-in is the primary consent standard mandated by the EU's GDPR),
- Right of action (penalties).
For a more detailed comparison between CCPA and GDPR, we suggest this excellent document.
#2 Illinois's BIPA and the Rosenbach v. Six Flags case
Illinois’ BIPA is the most robust biometric privacy law in the United States.
Rosenbach was significant because the Illinois Supreme Court ruled that a plaintiff doesn't need to show additional harm to impose penalties on a BIPA violator. A loss of statutory biometric privacy rights is enough.
In other words, when companies collect biometric data like fingerprints or faceprints without opt-in consent, they can be sued.
The Electronic Frontier Foundation praised the ruling, calling it a key privacy victory.
#3 Federal hearings and activity
It seems California has strongly motivated members of Congress.
Federal legislative hearings and activity aim to address the challenge created by a “patchwork” of separate, individual state privacy laws.
The House Oversight and Reform Committee held its third hearing on facial recognition on 15 January 2020.
But could California's privacy law be a model for the US, as Government Technology put it?
With the coronavirus pandemic, however, it's unlikely that legislators will return to this topic in 2020.
According to Fortune Magazine (May 2020 issue), at this point, we’re looking at something for next year.
#4 San Francisco's ban on facial recognition
The anti-surveillance ordinance signed on 6 May 2019 by San Francisco's Board of Supervisors is the first ban by a major city on the use of face recognition technology.
It prohibits city agencies, including the San Francisco Police Department (SFPD), from using facial recognition technology.
Since the passage of the ordinance, the debate is hot in many cities and states.
Should other localities follow this example? Is this a step backward for public safety? Or is the ban just a "pause button" to better analyze the risks of the technology?
Somerville (Massachusetts) in June 2019 and Oakland (California) in July 2019 made the same decision. San Diego followed in December 2019 with a three-year moratorium.
So, stay tuned for the outcome of all these discussions and, in the meantime, let's move on to India.
India and the emerging global consensus on biometric data protection
On 24 August 2017, India's Supreme Court ruled privacy a ‘fundamental right’ in a landmark case.
In September 2018, a Supreme Court judgment ruled that it is unconstitutional for private companies to use Aadhaar data, impacting the massive country’s biometric identification program.
Just think about the size of this project.
Aadhaar was first unveiled back in 2009. Today, some 1.27 billion people have an Aadhaar number, accounting for more than 99% of India’s total adult population.
The principle is simple.
Biographic and biometric data are captured from all Indian residents aged over 18: name, date of birth, gender, address, a photograph, ten fingerprints, and two iris scans.
Each resident is then issued their own unique 12-digit Aadhaar number. It is proof of residence, not citizenship, and is not compulsory so far. It’s a single, universal, digital identity number that any registered entity can use to “authenticate” an Indian resident. But the ID is not a card; it’s the number, and it’s purely digital and hence verifiable online.
You’re there. In India, it’s about you being the identity, not the card.
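To see how a bare 12-digit number can be self-checking and digitally verifiable, it may help to sketch the checksum behind it. Aadhaar numbers reportedly use the Verhoeff algorithm, in which the last digit is a check digit computed over the first eleven. Below is a minimal Python sketch using the standard Verhoeff tables; the 11-digit base in the example is made up, not a real Aadhaar number.

```python
# Standard Verhoeff tables: d is the multiplication table of the
# dihedral group D5, p is the position permutation table, and inv
# maps a running value to its group inverse.
d = [
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    [1, 2, 3, 4, 0, 6, 7, 8, 9, 5],
    [2, 3, 4, 0, 1, 7, 8, 9, 5, 6],
    [3, 4, 0, 1, 2, 8, 9, 5, 6, 7],
    [4, 0, 1, 2, 3, 9, 5, 6, 7, 8],
    [5, 9, 8, 7, 6, 0, 4, 3, 2, 1],
    [6, 5, 9, 8, 7, 1, 0, 4, 3, 2],
    [7, 6, 5, 9, 8, 2, 1, 0, 4, 3],
    [8, 7, 6, 5, 9, 3, 2, 1, 0, 4],
    [9, 8, 7, 6, 5, 4, 3, 2, 1, 0],
]
p = [
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    [1, 5, 7, 6, 2, 8, 3, 0, 9, 4],
    [5, 8, 0, 3, 7, 9, 6, 1, 4, 2],
    [8, 9, 1, 6, 0, 4, 3, 5, 2, 7],
    [9, 4, 5, 3, 1, 2, 6, 8, 7, 0],
    [4, 2, 8, 6, 5, 7, 3, 9, 0, 1],
    [2, 7, 9, 3, 8, 0, 6, 4, 1, 5],
    [7, 0, 4, 6, 9, 1, 3, 2, 5, 8],
]
inv = [0, 4, 3, 2, 1, 5, 6, 7, 8, 9]


def check_digit(base: str) -> str:
    """Compute the Verhoeff check digit to append to `base`."""
    c = 0
    for i, digit in enumerate(reversed(base)):
        c = d[c][p[(i + 1) % 8][int(digit)]]
    return str(inv[c])


def is_valid(number: str) -> bool:
    """True if the trailing digit is a correct Verhoeff checksum."""
    c = 0
    for i, digit in enumerate(reversed(number)):
        c = d[c][p[i % 8][int(digit)]]
    return c == 0


base = "23456789012"              # hypothetical 11-digit base
full = base + check_digit(base)   # 12-digit number with checksum
print(full, is_valid(full))       # the generated number validates
```

The point of Verhoeff over a simpler scheme is that it detects all single-digit typos and all transpositions of adjacent digits, which matters for a number people type by hand at enrolment desks and e-KYC terminals.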
So should this project be limited to a national ID scheme? It seems not.
On 28 February 2019, India’s Modi government approved changes to the law governing the country’s biometric ID program. In particular, the changes allow Aadhaar to be used by private entities, reversing the effect of the September 2018 Supreme Court judgment that had ruled such use unconstitutional.
New Aadhaar amendments were passed in July 2019. They allow private parties to use the Aadhaar number for verification (such as the electronic know-your-customer process, or e-KYC) and thereby do not prevent them from collecting an individual’s Aadhaar details.
A global consensus on privacy?
Privacy demands rigorous accountability. We see the emergence of a global consensus in many countries, its fundamental principle being that mismanagement of personal information will not be tolerated and that companies that do not protect data adequately could be hit with hefty fines.
Let’s hope that these new laws and regulations can keep pace with digital change.
Thales and digital security
An expert in strong identification with more than 200 civil ID, population registration, and law enforcement projects that incorporate biometrics, Thales can act as an independent authority in proposing and recommending the most suitable solution for each application.
Thales attaches a great deal of importance to the assessment of risks and to the capacity of private operators to manage such risks. Similarly, legal and social implications are also significant.
Although Thales keeps an open mind concerning biometric techniques, it remains no less convinced that, whatever the choice of biometric, this technology offers significant benefits for guaranteeing identity.
More resources on privacy laws