Africa Could Become a Testing Ground for Tech-Enabled Social Engineering

Within a year, much of the world has adopted the practice of wearing masks to protect against COVID-19. Notwithstanding the political jostling that face coverings have come to represent, mask-wearing has become a social norm driven by circumstance.

Scholars have undertaken extensive work on the life cycle of norms to demonstrate how they cascade into society and eventually become internalised.

But to what extent does technology have a normative function – the power to shape human behaviour and deliver real-world consequences? In the absence of robust safeguards, and in states with fragile democracies, could Africa become a testing ground for tech-enabled social engineering – technology that shapes norms and beliefs, governs how we vote and who we love, and stirs up existing ethnic or religious cleavages?

Information disorders expert Eleonore Pauwels argues that the convergence of artificial intelligence and data-capture capability threatens to undermine institutions that form the bedrock of democracies.

The rapid emergence of artificial intelligence tools across Africa, coupled with powerful social media platforms such as Facebook, Reddit and Twitter, has made data a commodity. Some commentators describe it as the new oil. These tools include biometric databases for tracking population movements at borders, registering voters before elections or documenting key life events (births, marriages and deaths).

Machine learning technologies have the potential to override or shape human judgement and political agency

Besides capturing human behaviour, likes and preferences, technology potentially has the power to shape them, Pauwels argued at a webinar on surveillance and information disorder in Africa last month. Artificial intelligence and data capture technologies together form a powerful alliance that enables micro-targeting and precision messaging, she says.

Institute for Security Studies (ISS) research shows that the ‘digital exhaust’ we leave behind on the internet – and the personal biometric information captured on CCTV cameras in shops or from centralised databases when we register to vote or apply for a driving licence – provides the raw material for data manipulation in Africa.

According to Pauwels, human beings are rapidly becoming ‘data points’ or ‘digital bodies and minds’ whose exact location and biometric features can be matched in real-time. This can have profound implications for personal privacy and security.

She says that unless checked, machine learning technologies have the potential to override or shape human judgement and political agency. This is especially true in settings where democratic checks and balances are still fragile. For this reason, numerous African countries, including Zimbabwe and Kenya, have been the focus of her work.

The purpose of analysing our ‘digital bodies and minds’ is, among other things, to manipulate group conversations and behaviours for political or commercial gain. This can create chaos or assert control, particularly during elections or periods of national emergency such as a war or pandemic.

Policymakers need to consider the blind spots of mass capture technologies

The ISS has demonstrated how potent algorithms can help amplify xenophobic narratives. Its South African case study shows how messages can reach far beyond what might be expected in the ‘real’ (rather than virtual) world. Pauwels’s research builds on this idea, highlighting how those wishing to control a message use botnets to propagate it virally and to game search engines and algorithmic content regulation.

Such potent social manipulation tools are being monetised, enabling data capture companies such as the now-disbanded Cambridge Analytica to identify individuals’ ‘deepest fears, hatreds and prejudices to influence elections,’ Pauwels asserts.

Using Kenya’s 2013 and 2017 elections as a case study, she documents how similar commercial entities exploited existing ethnic tensions, explaining that ‘in 2017 WhatsApp groups, including non-political ones, were inundated with incendiary ethno-nationalist rhetoric, mis- and disinformation.’

This raw material – personal data – was illegally acquired by political parties and deployed as part of their communications strategies. The practice is outlined in more detail in a report by Strathmore University’s Centre for Intellectual Property and Information Technology Law.

Another form of social manipulation stems from technology’s ability to automatically ‘generate new content such as photographs, video and text’, creating so-called deepfakes. These have significant implications for the future of propaganda, deception and social engineering.

In the rush to develop centralised biometric databases, algorithms need to be open to inspection

Deepfakes can also generate fake intelligence scenarios, paving the way to what some scholars have described as digital dictatorships, and providing a pretext both for social control and for securitising legislation aimed at curbing their use.

Such fake scenarios can enable illiberal states to silence dissent. Social media platforms have already been shut down in Uganda and Ethiopia in recent months, with the justification that national security is under threat.

Social engineering may also take the form of determining what information citizens can access via the internet. Freedom House argues that the system’s current weaknesses have played into the hands of less democratic governments looking to increase their control of the internet.

And the very existence of such data monitoring, using equipment supplied by foreign powers such as China, can impose new surveillance norms on the populations that host the technology. This ‘cyber nationalism’ potentially normalises pervasive digital surveillance, and there is much scope for research on the role of foreign actors in this sphere.

For all these reasons, policymakers need to consider the blind spots of such mass capture technologies. Although new data laws are coming on stream – setting out strict rules on how data is captured and stored, and limiting its reuse – the enforcement of the new regulations will be severely tested.

In the rush to develop centralised biometric databases, algorithms need to be open to inspection, and a new culture of ethical technology (possibly backed by incentives and sanctions) must be developed.

Karen Allen, Senior Research Adviser, Emerging Threats in Africa, ISS Pretoria
