Giving fraud the finger: Are biometrics the final word in preventing aid diversion?
11 July 2019 at 7:00 am
As Pro Bono News reports on a rise in fraud allegations in the NDIS, and the UN attempts to control diversion in Yemen with biometrics, technologies seem to offer simple fraud control solutions. But do they actually deliver, asks Oliver May.
In June, the World Food Programme threatened to suspend the distribution of food aid unless Houthi leaders agreed to the use of biometric technology to prevent diversion. This developing situation has focused fresh humanitarian attention on biometrics – and technology more generally – as a means to prevent fraud. That focus is welcome. While sexual exploitation and abuse have rightly commanded the headlines, many of the behavioural, cultural and operational factors driving that crisis also shape the sector’s relationship with fraud and corruption.
In this context, “biometrics” means the use of physical human attributes to verify identity. Fingerprints, facial recognition and voice recognition are amongst those which jump to mind for most people. Technologies like this suit tidy minds. They feel like elegant, absolute solutions for fraud. After all, the identity verification experience of the private sector has been compelling, and there are certainly potential applications in the social impact sector. But there are three key points that social impact organisations, whether working in Australia or beyond, need to bear in mind.
Firstly, the risk of overemphasis. There is no “silver bullet” solution for fraud. Not only are biometrics not immune to it (eye-catching examples of biometric fraud have included cloned passports and fingerprints, and the use of readily available retail products to defeat these systems), but a project with biometrics does not necessarily have a lower or higher fraud risk profile than one without – just a different profile.
Biometrics may offer a control for authenticating identities during distributions, for example, but fraud is agile and mutates. If a project uses biometrics, then expect the “other side” to shift to different fraud methods. Anticipate, for example, a potential rise in the taxing of beneficiaries by armed actors after the distribution, or greater pressure on beneficiary selection in the first place. That’s why all project managers need to consider fraud risk across the whole project lifecycle and value chain – and, in conjunction with an informed risk assessment, use a range of fraud deterrence, prevention, detection and response tools. This is known as the “holistic” approach. Technology is important, but never standalone.
Secondly, we need to be clear about how any tool lowers the risk of fraud. It can be hard to obtain robust evidence that any given technology or tactic in an aid project “prevents fraud”, partly because of the inherent difficulty of measuring undetected fraud in the first place. We have approaches (mainly variants on representative sampling, detected cases, or perception-based data), but they all have their own limitations. All tools purporting to reduce fraud should be scrutinised for their effectiveness through an evidence-based approach.
Finally, any technological or data solution – biometrics included – introduces its own integrity-related risks into aid work. Biometrics can involve capturing and storing the data of vulnerable populations like refugees or persecuted ethnic groups, creating clear risks of theft and exploitation that cannot be taken lightly. We need to be cautious about exchanging one set of integrity-related risks for another.
So, from a fraud control perspective, what are some of the considerations that your organisation should take into account, if it is contemplating using a technology like biometrics?
Firstly, understand the context. What are the risks and issues specific to the project and place in which any given technology is to be deployed? Badakhshan is not Brisbane. Yemen, Syria, Somalia and Afghanistan not only differ from each other, but are full of contextual diversity within their own borders too.
Secondly, understand your organisation. Have you explored the implications of using a technology, and do you have the capacity and capability to manage them? Biometrics might be right for your organisation, for example, but is your organisation right for biometrics?
And thirdly, understand integrity risks. Fraud is not a simple creature with simple solutions. Like all integrity risks, including sexual exploitation and abuse, it derives from people – and people are complicated, frail, and dynamic. Look for integrity risks across the whole project, and use technology – if appropriate – as one of several tools to control them.
Technologies like biometrics offer exciting opportunities to improve the integrity, effectiveness and efficiency of work in the social impact sector. But to capture their value, and avoid unintended consequences, they must only be used in the context of an informed, cautious and risk-based approach.
About the author: Oliver May was previously the head of counter-fraud for Oxfam GB. He is now a director in Deloitte’s forensic practice, where he helps not-for-profit, corporate and government clients to manage integrity risks. He blogs at Second Marshmallow and his book, Fighting Fraud and Corruption in the Humanitarian and Global Development Sector (Routledge, 2016), is out now. A follow-up book for international NGOs on managing terrorist financing risk is out in 2020.