Sunday, 12 May 2013

Biometrics: will the Center for Global Development reconsider?

A recently published report on India's identity management scheme says that: "accurate, biometric-based, identification is quite feasible for large countries, including the US".

The suggestion below is that the conclusion should read: "subject to an annual audit, the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack and as long as we've got our maths right and as long as you realise that it's not identity that's being managed and as long as you're relaxed about the fact that anyone could have any number of entries on the population register and the fact that the discipline of biometrics is out of statistical control".

Will the authors consider issuing a revised version of their report?

-----  o  O  o  -----

On the rare occasions when trials have been conducted, the performance of biometrics technology has been disappointing. For example, when 10,000 of us took part in a UK government-run trial in 2004, about 20% of participants couldn't have their identity verified by their fingerprints.

That's useless. For example, the plan at the time was to use biometrics to confirm people's right to work in the UK. You can't tell 20% of the working population that it's illegal for them to work.

Ever optimistic, the biometrics industry is always announcing that the corner has been turned and that it's now safe to believe their promises. Is that true at last?

Consider Performance Lessons from India’s Universal Identification Program [Updated 13.12.18, change of web address, now here], a 12-page report by Alan Gelb and Julia Clark (Gelb and Clark, G&C).

It's about India's Unique Identification project (UID, also known as "Aadhaar") which relies on biometrics. UID/Aadhaar is the brainchild of UIDAI, the Unique Identification Authority of India. UIDAI are currently trying to register the biometrics of all 1.2 billion Indians.

G&C conclude that:
UID’s performance suggests that accurate, biometric-based, identification is quite feasible for large countries, including the US (p.8).

UID shows that countries with large populations can implement inclusive, precise, high-quality identity systems by using existing technology (p.9).
Those conclusions are electric.

If they're correct.

But are they?

Why do G&C conclude that biometrics is now ready for large-scale deployment?

-----  o  O  o  -----

They have identified "160 [biometrics] programs in 70 countries that together cover over 1 billion people and include a wide range of applications – financial access, public payroll management, social transfers [?], health insurance and tracking and voter rolls – as well as national identification systems" (p.1).

Do they say that biometrics is ready for the big time because UIDAI have successfully implemented financial access systems which depend on biometrics? Or public payroll management systems? Or ...

Certainly not.

In fact G&C are at pains to say that:
UID is still at a relatively early stage, and links to the delivery of public programs are only now getting under way (p.2).

It remains to be seen how robust the system is against active efforts to spoof it by providing faked fingerprints or iris images, to capture biometric data in transmission or to penetrate the database (p.2).

Having a unique Aadhaar number issued by UIDAI itself entitles the holder to no specific privileges or programs (p.3).

UID is still at an early stage. Only one fifth of the population has been enrolled and the linkage to public programs is just beginning (p.8).
Their logic doesn't depend on any practical successes of Aadhaar. There aren't any.

What G&C base their conclusions on is the performance of biometrics in the compilation of the Indian population register so far. If we are to answer the question whether their conclusions are correct, we need to look at the UIDAI statistics which measure the reliability of biometrics.

Before we do that, we need to update G&C's conclusions. There's a rider to add. Their p.2 warnings about spoofing and eavesdropping on telecommunications and burgling the population register need to be incorporated – the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack.

-----  o  O  o  -----

Slide rules ready? G&C say (p.5):
How many people would be denied enrolment because of a wrong determination that they had already enrolled? The False Rejection Rate (FRR) of the identity system is critical, especially with a large population. Since every new enrollment has to be checked against every existing enrollment, the number of comparisons increases with the square of the population ... Extrapolating this to our hypothetical Ughana population ...
Wrong.

Think in terms of ice cream. How many unique combinations of two ice cream flavours can you make from a choice of five flavours (A, B, C, D and E)? G&C suggest that the answer is 25, "the square of the population". It isn't. It's 10 (AB, AC, AD, AE, BC, BD, BE, CD, CE and DE), 5!/((5-2)! x 2!).

G&C have a peculiar habit. They're talking about India, with its population of 1.2 billion, but half the time when they use statistics they apply them to Ughana, a country they have invented. Why?

It confuses the readers. It may also confuse the writers. Forswearing Ughana and sticking to India, how many comparisons would have to be made to compare each one of 1.2 billion sets of biometrics against all the rest? Answer, 719,999,999,400,000,000 and not G&C's implied answer 1,440,000,000,000,000,000, which is out by a smidgeon over 100%.
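
For anyone who wants to check the sum, here is the arithmetic as a minimal Python sketch (the only input is the 1.2 billion population figure used above):

from math import comb

n = 1_200_000_000     # India's population
print(comb(n, 2))     # n*(n-1)/2 = 719,999,999,400,000,000 pairwise comparisons
print(n * n)          # 1,440,000,000,000,000,000 - "the square of the population", roughly double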

New rider on the conclusions –  the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack and as long as we've got our maths right.

-----  o  O  o  -----

How did G&C get themselves into this bind?

It was in the midst of a discussion about false accept rates and false reject rates.

Aadhaar is all about comparing the biometrics captured by a fingerprint scanner or an iris scanner with the biometrics stored in the population register. Either they match or they don't.

Say your Aadhaar number is 782474317884, that there's an election on and that you have turned up at a voting centre. The biometrics associated with 782474317884 are retrieved from the population register and checked to see if they match your freshly scanned biometrics. If they do, you can vote. It's a one-to-one comparison, an "authentication" process.

Two ways the process can go wrong (among others):
  • Either the process says the biometrics don't match, you are not who you claim to be, you are not President Lincoln according to Aadhaar, even though you are in reality. That's a false reject. 
  • Or the process can say that, yes, you are who you claim to be, you are President Lincoln, when, in fact, you're not, you're an impostor. That's a false accept.
The False Accept Rate (FAR) and False Reject Rate (FRR) are two measures of the reliability of any biometrics system. They are inversely related. This is the "Detection Error Tradeoff" that G&C talk about on p.4. As one goes up, the other goes down. You can't get them both low at the same time.

Take a look at UIDAI's 27 March 2012 report on authentication (p.4). Using one or two fingers to authenticate yourself, UIDAI expect the Aadhaar system to be between 93.5% and 99% accurate. I.e. FRR will be between 1% and 6.5%. That's with a FAR of 0.01%. FRR is high, FAR is low(ish).

Varying FAR from high to low and FRR the other way is achieved by changing the matching threshold. You can set the system to insist on a very high score before asserting that President Lincoln's freshly scanned fingerprints match the set already stored on the population register. That would give a low FAR and a high FRR. Or you can set a very low threshold and achieve the opposite. And all points in between.
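
A toy sketch of that dial, with made-up match scores purely for illustration (real biometric scoring is nothing like this simple, but the trade-off works the same way):

# Illustrative scores only. A comparison yields a score; the system declares a
# match if the score clears the threshold. Raising the threshold lowers FAR and
# raises FRR; lowering it does the opposite.
genuine_scores  = [0.91, 0.84, 0.62, 0.95, 0.78]   # same person, rescanned
impostor_scores = [0.40, 0.71, 0.33, 0.55, 0.48]   # different people

def rates(threshold):
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return far, frr

for t in (0.5, 0.7, 0.9):
    far, frr = rates(t)
    print(f"threshold {t}: FAR {far:.0%}, FRR {frr:.0%}")
# threshold 0.5: FAR 40%, FRR 0%
# threshold 0.7: FAR 20%, FRR 20%
# threshold 0.9: FAR 0%, FRR 60%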

This is odd.

In the world we're used to, if you are President Lincoln then you are President Lincoln and that's all there is to it. It doesn't depend on the matching threshold set by some state functionary.

In the world of Aadhaar, depending on the threshold chosen, sometimes you will be President Lincoln (low threshold, easy to achieve a match, low FRR, high FAR) and sometimes you won't (high threshold, hard to achieve a match, high FRR, low FAR). It all depends. At the limit, the functionary could fix it so that no-one was President Lincoln. Or that everyone was.

When we said above that "either they match or they don't", that was a tease. That's the way people imagine biometrics systems to work. All cut and dried. In fact, it's discretionary.

The concept of identity in Aadhaar is different from the concept in the real world. Identity becomes discretionary, something that can be granted or revoked by twiddling the dial on a gizmo.

There's another rider to add to G&C's conclusions – the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack and as long as we've got our maths right and as long as you realise that it's not identity that's being managed.

-----  o  O  o  -----

That's authentication.

Identification is different.

Identification is the process you go through when you are enrolled into Aadhaar. Before identification, you don't exist as far as Aadhaar is concerned. If public services in India ever start to depend on Aadhaar and you don't have an Aadhaar number, you won't get any public services. Why would the state provide benefits to someone who doesn't exist? At the very least you will look very suspicious.

Identification is a one-to-many process. When you first enrol someone in the register, their biometrics have to be checked for uniqueness. Instead of checking them against just one set of biometrics, they have to be checked against every set already registered.
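
The shape of the two processes can be sketched in a few lines of Python. Everything here is a toy – the "templates" are plain numbers, and match(), authenticate() and identify() are invented for illustration – but it shows the difference between checking one record and checking all of them:

# Toy illustration only: a "template" is just a number and matching is a
# similarity threshold. Real biometric matching is nothing this simple.
TOLERANCE = 0.1

def match(scan, template):
    return abs(scan - template) <= TOLERANCE

def authenticate(scan, stored_template):
    # one-to-one: check against the single record for the claimed Aadhaar number
    return match(scan, stored_template)

def identify(scan, register):
    # one-to-many: check against every record already on the register
    return [number for number, template in register.items() if match(scan, template)]

register = {"782474317884": 0.50, "100000000001": 0.81}
print(authenticate(0.52, register["782474317884"]))   # True - claimed identity verified
print(identify(0.80, register))                       # ['100000000001'] - possible duplicate found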

The errors that can be made by the biometrics system are very similar (among others, yes you are already enrolled when really you're not or no you're not already enrolled when really you are) but the process has such existential consequences that it's normal to talk of false negative identification rate (FNIR) and false positive identification rate (FPIR), rather than FAR and FRR, to distinguish it from mere quotidian authentication.

UIDAI talk of FRRs between 1% and 6.5% for authentication using fingerprints whereas, when it comes to identification, their FPIR figure is 0.057%. That's up to two orders of magnitude different. Identification is a strict process and, by comparison, authentication is flabby.

G&C unfortunately use FAR and FRR for both identification and authentication which obscures the important distinctions between the two processes.

-----  o  O  o  -----

FNIR and FPIR are inversely related, like FAR and FRR.

How good are the biometrics UIDAI are using at creating a reliable population register?

It's a problem Professor John Daugman has looked at. Not in connection with Aadhaar in particular. But in general. For any biometrics-based identity management scheme.

Remember, to establish uniqueness for every one of the 1.2 billion sets of biometrics on India's population register, you have to make 719,999,999,400,000,000 comparisons.

Suppose, says Professor Daugman, that there's a mistake 1 time in a million such that a false positive identification is made. Then Aadhaar will throw up 719,999,999,400 false matches.

These can't be resolved by the computer – it's the computer that threw up the false matches in the first place. They have to be resolved by human investigations.

Humans aren't going to complete 719,999,999,400 investigations. It's impractical. The identity management scheme will drown in a sea of false positives, as the professor puts it.
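
In round numbers, the professor's point looks like this (a sketch of the arithmetic, not his own workings; the one-in-a-million figure is the supposition above):

n = 1_200_000_000
comparisons = n * (n - 1) // 2            # 719,999,999,400,000,000
false_match_rate = 1 / 1_000_000          # suppose one mistake per million comparisons
print(comparisons * false_match_rate)     # 719,999,999,400 false matches to investigate by hand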

Is there a one-in-a-million chance of a mistake?

Professor Daugman thinks that it's a lot worse than that if you rely on face recognition as a biometric. There's far too little randomness in faces, there are far too few degrees of freedom, for face recognition to support enormous numbers like 719,999,999,400,000,000. (Never mind Ughana, that doesn't stop the UK government wasting money on face recognition.)

Fingerprinting is better in this sense than face recognition, but still not good enough to avoid drowning in a sea of false positives. (That doesn't stop the UK government wasting money on glitzy new fingerprinting systems.)

Irises on the other hand do have enough randomness, he says, there are enough degrees of freedom to stay afloat. Which is good news for UIDAI – Aadhaar uses a combination of both fingerprints and irises.

-----  o  O  o  -----

Is Aadhaar in the clear? Which is it? Sink or swim?

According to UIDAI's report on identification (p.4), on 31 December 2011 when there were 84 million sets of biometrics on the population register, the FPIR was 0.057%, the FNIR was 0.035% and "it is unnecessary and inaccurate to attempt to infer UIDAI system performance from other systems which are ten to thousand times smaller".

It may be unnecessary and it may be inaccurate but it's impossible to resist the temptation – compared to any other biometrics-based scheme known to man, these figures for Aadhaar are astonishing. Certainly no salesman worth his or her salt will ignore them.

It looks as if there would be only 684,000 false positive identifications to investigate by the time the population register is full, and not 719,999,999,400.

684,000 is manageable. As UIDAI say (p.18):
... at a run rate 10 lakhs enrolments a day, only about 570 cases need to be manually reviewed daily to ensure that no resident is erroneously denied an Aadhaar number. Although this number is expected to grow as the database size increases, it is not expected to exceed manageable values even at full enrolment of 120 crores. The UIDAI currently has a manual adjudication team that reviews and resolves such cases.
[1 lakh = 100,000 and 1 crore = 10,000,000]
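
Scaling UIDAI's reported rate gives the figures quoted above (a sketch; it assumes the 0.057% holds all the way to full enrolment, which is precisely the assumption at issue):

fpir = 0.057 / 100                   # UIDAI's reported false positive identification rate
print(round(1_200_000_000 * fpir))   # 684,000 false positives by full enrolment
print(round(1_000_000 * fpir))       # 570 cases a day at 10 lakh enrolments a day
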
How do UIDAI know that the FPIR was 0.057% when the register had 84 million entries?

Presumably they had recorded 47,880 cases of false positive identifications to date.

You'd think that. But you'd be wrong. UIDAI tell us that (p.18):
An FPIR of 0.057% was measured when the gallery size was 8.4 crore (84 million) and probe size was 40 lakhs (4 million). The false rejects (legitimate residents who are falsely rejected by the biometric system) were a count of 2309 out of the 40 lakh probes
They did a test. They probed the gallery with 4 million sets of biometrics and they got 2,309 false positive identifications.
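
If the probe test is the source of the figure, the sum appears to be simply this (a sketch using UIDAI's own numbers):

probes = 4_000_000                        # the 40 lakh probe sets
false_matches = 2_309                     # UIDAI's reported count
print(f"{false_matches / probes:.4%}")    # 0.0577%, which UIDAI report as 0.057%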

Funny way to do it.

Perhaps we shall be told that there's an agreed protocol in the biometrics industry such that this is an acceptable way of determining FPIR. Even so, why not report the actual number of false positive identifications recorded?

That statistic should be available in the case of Aadhaar – G&C tell us that (p.2):
UIDAI places a heavy emphasis on data quality throughout the process. It collects as much operational data as possible, including on the details of each individual enrolment as it is carried out, process by process. This is included, together with biometric and demographic data, in the packet of information sent from the enrollment point to the data center.
Why not tell us how many false positive identifications there were as well as the result of the test probe? Why were there 4 million sets of biometrics in the probe and not 5 million, or 3 million? How were the 4 million chosen?

The questions mount and the answer gradually comes into focus – in order to inspire confidence, UIDAI's figures need to be audited by independent experts and certified like a set of company accounts.

And, like company accounts, they should be audited every year. These figures from 31 December 2011 are getting very long in the tooth.

Another rider – subject to an annual audit, the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack and as long as we've got our maths right and as long as you realise that it's not identity that's being managed.

-----  o  O  o  -----

UIDAI say that the incidence of false positive identifications is manageable and that they expect it to remain manageable. I.e. they're not drowning in a sea of false positives.

G&C have this footnote, #7, on p.5 of their report:
For a huge population like India’s, even this small level of error would result in some 3.1 million false rejections if continued through the program. UIDAI plans to contain the numbers by eliminating some sources of error unearthed by the initial study, and also by relaxing the [FNIR] if needed to further reduce the [FPIR]. Handling false rejections has reportedly been a manageable problem to date.
"UIDAI plans to contain the numbers by ... relaxing the [FNIR] if needed to further reduce the [FPIR]". What? "Relaxing the [FNIR]"?

What does that mean? In order not to drown in false positives, UIDAI will let false negatives go up? UIDAI have got to get the population register completed and if that means tolerating lots of duplicate entries, too bad, so be it, let uniqueness go hang? If that isn't what it means, then what?

How relaxed? Very relaxed? What level does FNIR have to rise to, to keep FPIR down at 0.057%? Do UIDAI even know? Should they change their name to the Multiple Identification Authority of India?

"It is unnecessary and inaccurate to attempt to infer UIDAI system performance from other systems which are ten to thousand times smaller"? On the contrary, it is only sensible to question UIDAI's performance claims.

The riders are piling up now – subject to an annual audit, the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack and as long as we've got our maths right and as long as you realise that it's not identity that's being managed and as long as you're relaxed about the fact that anyone could have any number of entries on the population register.

-----  o  O  o  -----

If a supplicant turns up at an Aadhaar registration centre and is the victim of a false positive identification, you're going to know about it. They're going to demand their Aadhaar number and they're going to stay there and jump up and down until they get it. At least they will if they're legitimate and not impostors.

It's different with false negative identification. If an impostor turns up at the centre and his or her earlier registration is not detected by Aadhaar, then they're not going to tell you. You won't know. Impostors don't have the same desire to keep the performance statistics up to date as upright people do.

The upshot is that you can't measure FNIR. Not in the field.

You can submit a batch of sample biometrics and see how well the system performs. How successful it is at finding these deliberately seeded duplicates on the register. And that's what UIDAI did (pp.18-19):
To compute FNIR, 31,399 known duplicates were used as probe against gallery of 8.4 crore (84M). The biometric system correctly caught 31,388 duplicates (in other words, it did not catch 11 duplicates). The computed FNIR rate is 0.0352%. Assuming current 0.5% rate of duplicate submissions continues, there would only be a very small number of duplicate Aadhaars issued when the entire country of 120 crores is enrolled. Aadhaar expects to be able to increase the quality of all collections as the system matures. Consequently, we expect the potential number of false acceptances to decrease further below this already operationally satisfactory number.
That's fine. But if the actual number of "duplicate submissions" is higher than UIDAI assume and the "false acceptances" are more numerous than they expect, no-one will know. All UIDAI can say is, when we did this test, we got this result. Whether that is an accurate measure of FNIR out there in the operational system in the real world, nobody knows.
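
The FNIR sum itself checks out, and one way to read UIDAI's "very small number" can be put into figures (a sketch; the 0.5% duplicate-submission rate is UIDAI's assumption, and extrapolating the test FNIR to the whole register is exactly what can't be verified in the field):

known_duplicates = 31_399
missed = 11
fnir = missed / known_duplicates
print(f"{fnir:.4%}")                              # 0.0350%, close to the reported 0.0352%

duplicate_submissions = 1_200_000_000 * 0.005     # UIDAI's assumed 0.5% rate, at full enrolment
print(round(duplicate_submissions * fnir))        # about 2,100 duplicate Aadhaars slipping through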

What we do know – G&C tell us – is that UIDAI plan to "relax" the FNIR if needed to keep FPIR low. The confidence we can have in UIDAI's figure for FNIR is severely limited.

-----  o  O  o  -----

It's worse than that.

G&C tell us on p.1 of their report that:
Although there has been extensive laboratory testing of different hardware and software for a variety of biometrics, including fingerprints, iris, face and voice, testing under carefully controlled conditions does not provide adequate information on real-world performance, which can be affected by many factors (Wayman et al 2010).
The paper they cite, Fundamental issues in biometric performance testing: A modern statistical and philosophical framework for uncertainty assessment, is written by three world-class experts – James L. Wayman, Antonio Possolo and Anthony J. Mansfield.

As G&C tell us, the experts conclude that technology tests and scenario tests tell us nothing about how well or how badly a biometrics system will perform in the operational environment. As they put it, biometrics is out of "statistical control".

To put it another way, UIDAI's FNIR and FPIR test probes are a waste of time.

Tony Mansfield is the UK's top biometrics authority and Jim Wayman is the US's. And Antonio Possolo is the top man on measurement at the US National Institute of Standards and Technology (NIST). They're practitioners. They have decades of experience. They advise governments. Their own and others. They know what they're talking about.

And what they're talking about is biometrics being out of statistical control.

That implies many things. Among others, consider the following.

Messrs Wayman, Possolo and Mansfield refer to the USA PATRIOT Act in their paper (p.20). By law, NIST have to certify biometrics systems before they are deployed in the national defence.

That may be the law but, if the technology is out of control, then NIST have a problem obeying the law. They could refuse to certify any biometrics systems and then none would be deployed. That's one option. They have chosen another option. The certificate they issue says:
For purpose of NIST PATRIOT Act certification this test certifies the accuracy of the participating systems on the datasets used in the test. This evaluation does not certify that any of the systems tested meet the requirements of any specific government application. This would require that factors not included in this test such as image quality, dataset size, cost, and required response time be included.
That's the best they can manage in the circumstances. The result of the test is the result of the test and that's all we know. How the system will perform in the field is anyone's guess. According to three world-class experts, in biometrics, that is the state of the art.

Final rider – subject to an annual audit, the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack and as long as we've got our maths right and as long as you realise that it's not identity that's being managed and as long as you're relaxed about the fact that anyone could have any number of entries on the population register and the fact that the discipline of biometrics is out of statistical control.

-----  o  O  o  -----

It is premature to conclude that biometrics have proved themselves in Aadhaar:
  • Let's wait and see if any bank is confident enough to authorise payments on the basis of biometrics alone. No password. No PIN. No token. Just biometrics.
  • Let's wait and see if legitimate voter participation is increased by Aadhaar.
  • India's various food and fuel distribution programmes and its temporary employment programmes for the long-term unemployed are plagued by large-scale corruption. Let's wait and see if Aadhaar reduces the level of corruption or simply automates it.
  • And let's wait for an independent audit of UIDAI's results.
G&C have already identified 160 biometrics programmes in 70 countries. This latest report of theirs will be embraced by biometrics salesmen the world over as an unsolicited testimonial from a respected source and will be used to raise funds for more programmes. (G&C driving up the false accept rate?)

G&C work for the Center for Global Development, a Washington-based think-tank and lobbyist which aims to "reduce global poverty and inequality through rigorous research and active engagement with the policy community to make the world a more prosperous, just, and safe place for us all".

It's hard work finding good homes for aid money. There are legitimate doubts about the reliability of biometrics. Aid money isn't necessarily well spent on biometrics systems.

Michela Wrong, a journalist who has covered Africa for two decades, reported on the March 2013 elections in Kenya complete with biometric registration of electors and electronic voting. She had this to say:
I suddenly realised I was watching a fad hitting its stride: the techno-election as democratic panacea ... EU and Commonwealth election monitors hailed the system as a marvel of its kind, an advance certain to be rolled out across the rest of Africa and possibly Europe, too. The enthusiasm was baffling, because almost none of it worked.
The Economist magazine have let down their scepticism guard and become active in the unsolicited testimonials market – please see The Economist magazine sticks its nose into Indian politics, comes away with egg on its face and The Economist magazine's chickens, now on their way home to roost.

That was some time ago. They remain dazzled by technology to this day: "India has registered 275m of its 1.2 billion people in one of the world’s most sophisticated ID schemes (it includes iris scans and fingerprints)". Why do they think that the inclusion of biometrics is ipso facto "sophisticated"?

They should talk to Michela Wrong.

-----  o  O  o  -----

G&C have spotted what the Economist have missed:
  • The Wayman, Possolo and Mansfield paper.
  • UIDAI relaxing the FNIR.
  • The element of smoke and mirrors in biometrics – they talk about the "fiction of infallibility" (p.9) and the "pretense of uniqueness in the ID system" (p.10) and the possibility that "in the longer run, as its mystique evaporates, the identity system will no longer be trusted by anyone, eliminating any value" (p.10).
Above all, quite rightly, G&C call for more countries to release data on the performance of biometrics in the field – "distressingly little data is available on [biometrics] performance, either for identification or for authentication" (p.1) and "there is now no excuse for other countries not to share data—or for donors not to insist on it when financing identification programs" (p.10).

The biometrics salesmen won't like that conclusion of G&C's and they won't mention it, please see UIDAI and the textbook case study of how not to do it, one for the business schools. (Neither will the UK government.)

All that healthy scepticism, and yet G&C conclude that biometrics is ready for large-scale deployment:
  • Did they check with NIST or the FBI before publishing their report? Those organisations know quite a lot about biometrics and might have provided some useful input.
  • Did they contact Messrs Wayman, Possolo and Mansfield? If G&C believe them when they say that biometrics is out of statistical control, then there's not much point filling up their report with useless statistics, is there? If they don't believe them, why not?
  • Would G&C be so generous with their testimonials if Aadhaar was an aeroplane safety system, for example?
  • Would they feel qualified to comment if they were dealing with the pharmaceutical industry rather than the biometrics industry?
  • Would they be more sceptical if they were dealing with research funded by the tobacco industry?
  • Why does biometrics get the kid gloves treatment?
  • And what is this fake distinction G&C make between countries with a large population and a small one? The biometrics tested in the UK failed with a trial population of 10,000 participants. Biometrics is a technology. At least it's supposed to be. Either it works or it doesn't. Cars work in the US. And they work in India. If biometrics isn't good enough for the US, it's not good enough for India. Or Uganda or Ghana. Which are two different countries. Ask Michela Wrong.
All that healthy scepticism, and yet G&C conclude that: "UID shows that countries with large populations can implement inclusive, precise, high-quality identity systems by using existing technology".

No.

It shows nothing of the sort.

Is there any chance of G&C reissuing their report with revised conclusions?

1 comment:

Alan Gelb said...

We agree with a number of points raised by David Moss. One is the importance of releasing field performance data; other programs should be held to this standard. We recognize that biometrics is not a panacea. Our previous working paper that reviewed some 160 cases noted several problematic examples, particularly in the area of elections. It is far too early to assess the UID program record in delivering more effective and inclusive services. Where we differ from Moss is that we see the data that it has released on inclusion and accuracy as a very significant benchmark for biometric systems in developing countries, and a major advance on the use of laboratory data. These appear to be the most extensive field data released so far.

The UID data are of interest for other countries; the hypothetical example of Ughana illustrates what such a system should be able to achieve for a “typical” country with about 30 million people. It is easy to scale the results for country size. We estimated that for a country as large as India there would be somewhat over 3 million false positives during enrolment, a large number for manual follow-up but probably doable. For a small country like Haiti the number would only be around 300.

On multiple identities, no system will be able to guarantee 100 percent accuracy. Certainly not the systems in place in the rich countries where identity theft is hardly unknown! The question is not “whether it works or not” but the precision of one system versus another and relative cost-effectiveness. For some applications, such as access to a health insurance program, one might accept a modest level of duplicate or false identities. For others, such as access to a nuclear facility, we want to minimize them – just as we would want very high standards for aeroplane safety, to take the example cited by Moss. These might involve different biometrics and also passwords or other identifiers; the most demanding applications can apply whatever other additional checks they choose outside the scope of national identification. For a national ID system the reported rate of 0.035 percent for UID seems low enough to discourage most deliberate efforts to acquire multiple identities.

Any identification system will have to cope with people who are unable to enroll using biometrics and with identification and authentication errors. The UID data offer useful pointers to likely numbers.

UID does not, therefore, provide answers to every question -- it is far too early for that and we do not claim that it does. It remains to be seen how the program is or is not picked up by various applications and how it negotiates the political winds that arise with any system of identification. But we hold to our conclusion that the data released provides a very significant benchmark on the capabilities of biometric systems in developing country conditions and one that should be studied carefully by other countries.

To correct the record, we do not assert that the number of bilateral comparisons is the square of the population, n. It is 0.5*n*(n-1) which rises (as we note) with the square of n. As n becomes large, it approaches 0.5*n*n; since no identification system will cover 100% of population, we rounded n off to 1 billion for India. If we accept the field estimate of 0.057% false positive rate against a data base of 84 million, the rate for a 1:1 comparison would have to be very small, in the range of 7 in one trillion. The implied precision can only be possible with the combined use of multiple biometrics, which is another of the lessons from the UID exercise.

Alan Gelb,
Senior Fellow,
Center for Global Development
