Arif Kornweitz, 2021

Function Creep

Originally published as part of the AI Now Institute’s “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI. The project is now offline.

Available as pdf here.

Illustration by Somnath Bhatt

Please see PDF for footnotes and references

Abstract
Changes in the functionality of algorithmic systems are often described as creep, a technical exception with unwanted and harmful consequences. However, algorithmic systems are built to be multifunctional, and creep is not merely a technical issue. Function creep is therefore better understood as a modus operandi of algorithmic systems, a testament to their technical mutability and political entanglement. To properly understand change in these systems, they need to be read in relation to existing spheres of power, from the ideological to the algorithmic level.

The functions of a system and the actions it is able to perform change over time. Function creep refers to a specific kind of change: the expansion of the functionality of an algorithmic system, a divergence between its initial purpose and its actual use.1 For example, a system intended for regulating workers’ access to company grounds may be repurposed to track attendance or to detect whether workers wear face masks and keep their distance to prevent the spread of Covid-19.

While there is no commonly agreed-upon definition, the term is found in AI engineering3 and policy4, but also specifically in the discourse around AI fairness, accountability, and transparency5. Often, function creep is linked to potential privacy infringements resulting from the use of an algorithmic system. However, the use of the term is not always precise. For example, Andrejevic & Selwyn warn of the potential of face recognition systems: “in terms of function creep, [face recognition is a] technology with a host of potential lucrative markets – from shopping centres wanting to deliver personally-targeted advertisements to shoppers, through to employers wanting to keep tabs on the whereabouts of their workforce.”

Such warnings miss a crucial point. Face recognition and other algorithmic systems are built to be repurposed. The possibility of expanding the set of functions of these systems is one of their core features. That a repurposing may cause rights infringements should be expected. As Bernal writes, the risk of function creep in commercial and law-enforcement contexts is present whenever data is held. Nonetheless, as will be shown below, function creep is regularly framed as an exception. Instead, function creep is better understood as the modus operandi of algorithmic systems, a testament to their technical mutability and political entanglement.

The Standard View
Most definitions of function creep describe the phenomenon unidimensionally, suggesting that it occurs due to technical properties of a system. For example, Koops states that “function creep denotes an imperceptibly transformative and therewith contestable change in a data-processing system's proper activity.”8 Here, creep refers to an impactful but previously imperceptible widening of a specific set of functions of a system. This widening may be the result of new functions introduced purposefully by designers or discovered accidentally by users of a system. It can thus refer to intentionally added or unintentionally enabled functions. In both cases, the metaphor of creep is invoked to describe an initially imperceptible change towards a new function, with unwanted effects such as privacy or other rights infringements.

To rein in creep, researchers suggest limiting or specifying the use cases of an algorithmic system. Koops proposes that function creep can be prevented by regularly assessing a system’s functioning, discussing the effects of possible creep and, if creep occurs, by asking those responsible to “argue why they think the change is in line with what the system is supposed to do, and where needed, take appropriate measures to make it more acceptable.” According to the author, this might mean limiting functions that cause privacy infringements or arguing why these infringements are necessary.

There are also legislative attempts at preventing function creep in algorithmic systems. These focus on the ‘purpose limitation principle,’ codified in Article 5(1)(b) of the European General Data Protection Regulation (GDPR). According to this principle, personal data shall be “collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes.” In other words, purpose limitation aims to prevent function creep by restricting the use of data to specific and compatible objectives. Yet it is not always clear what constitutes an incompatible purpose and how compatibility is negotiated.
Broad mandates of law enforcement and intelligence agencies allow overruling the principle in the name of national security interests. As a result, law enforcement agencies may be able to access data held by other agencies such as immigration services and may also draw on publicly available data. In these cases, policing measures may turn into surveillance efforts that comply with regulation because of legitimate security purposes. As Kak puts it, these data processing systems are structured to evade and remove the purpose limitations on data use.

Still, some proposals for limiting function creep suggest hard-coding purpose limitation by restricting functionality. For example, Fantin and Vogiatzoglou14 call for purpose limitation to be adopted in the design phase of new AI systems. The authors suggest that, by designing an AI for specific use cases only, function creep could be redressed. Specifically, they propose that restrictions be embedded in predictive analytics systems so that they pause in the case of compatibility issues and wait for a human assessment to clear the case. The authors do not take into account that the halting of a system may also have unwanted effects on its subjects, e.g. if data processing needs to take place in order for them to be issued documents. Furthermore, the authors propose that the restrictions for purpose limitation should be separately defined for different types of crime.
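A minimal sketch, in Python, of what such a hard-coded compatibility check might look like is given below. The Record structure, the purpose labels, and the review queue are assumptions made for illustration, not taken from the authors' proposal; the point is only that processing halts and defers to a human when the declared and requested purposes diverge.

```python
from dataclasses import dataclass


@dataclass
class Record:
    """A data record together with the purposes it was collected for (hypothetical)."""
    data: dict
    declared_purposes: set


def run_analytics(data, purpose):
    # Stand-in for the actual predictive model; returns a dummy result.
    return {"purpose": purpose, "result": "..."}


def process(record, requested_purpose, review_queue):
    """Process a record only if the requested purpose is compatible.

    On a mismatch the record is not processed automatically; it is queued
    for human assessment instead, mirroring the 'pause and wait' idea.
    """
    if requested_purpose not in record.declared_purposes:
        review_queue.append((record, requested_purpose))
        return None  # halted until a human clears the case
    return run_analytics(record.data, requested_purpose)


# Example: a record collected for access control, requested for attendance tracking.
queue = []
r = Record(data={"worker_id": 42}, declared_purposes={"access_control"})
print(process(r, "attendance_tracking", queue))  # None -> sent to human review
print(process(r, "access_control", queue))       # processed normally
```

Even in this toy form, the halting behaviour illustrates the trade-off noted above: the person whose record sits in the review queue may be the one waiting for documents to be issued.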

It is this approach that encapsulates the standard view on function creep: assuming benevolent use of, and complete control over, the design and effects of an algorithmic system, to the extent that any unintended consequences appear exceptional. But this belief, that the use of a system can be controlled by design, is precisely what enables function creep. Understanding and redressing function creep thus requires a different approach.

Functionality is Boundless

Understanding the functionality of an algorithmic system in terms of the actual output generated by that system at a specific point in time is insufficient to understand its effects. In particular, two convictions at the core of the standard view of function creep need to be refuted: that a system’s functionality can be clearly delineated and that creep is an exception. On closer inspection, the purposes for which an algorithmic system is used are not merely stipulated by its technical capabilities. A wider understanding of function creep is necessary.

In the case of machine learning, the functionality of a model is partly derived from its ability to classify a certain input to a satisfactory degree of statistical accuracy, e.g. as indicated by a high F-score.15 A model that classifies road signs accurately enough can function in a self-driving vehicle; a model that classifies most tents in satellite imagery correctly can be used to estimate camp population. From an engineering standpoint, it makes sense to describe functionality in these terms.
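For reference, the F-score in its most common form, the F1-score, is simply the harmonic mean of a classifier's precision and recall:

```latex
F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}},
\qquad
\text{precision} = \frac{TP}{TP + FP},
\qquad
\text{recall} = \frac{TP}{TP + FN}
```

where TP, FP and FN are the counts of true positives, false positives and false negatives. A high score certifies performance on the chosen classification task only, not on any of the purposes the model may later serve.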

Yet new functions dynamically arise as systems change.16 Whether those changes are designed or occur post-deployment matters much less than how flexibly the system can be deployed in real-world settings. This is why dual-use technology, suitable for both military and civilian use, is subject to debate and regulation. Furthermore, the functions of an algorithmic system, the purposes for which it could be utilised, can creep beyond strictly algorithmic activity. Hypothetical functions of an algorithmic system may well have real effects, without any data being processed.

The emphasis on hypothetical use cases is not a speculative exercise. Algorithmic systems are not only used for different purposes, they are used by different actors. Assessing a system’s changing functionality without taking into account the wider context is futile, as a case involving the United Nations World Food Programme (WFP) illustrates.

In order to estimate need and distribute food aid, the WFP partners with local actors in conflict areas to enter biometric data of aid recipients into a database called SCOPE.17 When the WFP announced a partnership with US data analytics and defense firm Palantir to streamline logistics,18 Red Crescent workers on the ground in Syria raised serious concerns about their safety.19 The association of the WFP with Palantir, known for its close ties to the US government and the intelligence sector,20 suggested the possibility of Syrian Red Crescent personnel supplying data to American agencies. As a result, Red Crescent workers feared retribution from the Assad government. Even though Palantir did not have access to personal data,21 a new potential use case created the real possibility of lives being harmed.

Here, function creep does not involve a change in a data-processing system’s proper activity. No actual algorithmic change led to a new function. Nor did any legislative attempt at purpose limitation matter, because there was no actual algorithmic activity. A new hypothetical function, an unrealised possibility for action, placed the system and its stakeholders in a high-stakes web of geopolitics and situated it firmly in a history of technology-driven humanitarian and military interventions.

Creep is not Exceptional

Function creep is commonly declared to mark exceptions in a system that is otherwise functioning according to its specified purpose. It serves as a rhetorical device that signals control over a system’s functionality, suggesting that the repurposing of a technology for unwanted ends is a technical shortcoming. Yet cases such as the WFP-Palantir partnership testify to how little information algorithmic functionality offers when assessing the actions a system can be used for. In reality, algorithmic systems are sociotechnical apparatuses with a dynamic array of functions, the conditioning of which is an exercise of control. These systems cannot be understood in isolation but need to be read in relation to existing spheres of power, from the ideological to the algorithmic level.

Rhetorically limiting the flexible functionality of algorithmic systems serves to uphold social hierarchies. As Mbembe22 puts it, technologies are increasingly tied in with complex networks of extraction and predation. It is this entanglement of algorithmic systems that drives the current fairness discourse in AI, and it is from this context that rhetorical relapses into function creep need to be refuted. In these dominant narratives, exceptions are creepy, systems are distinct, and change is controllable.

Function creep occurs when the use of an algorithmic system deviates from what is considered its purpose. Dominant narratives and interests define what is considered a purpose, a correct use. This dominant stance is mirrored in operationalisations of human norms in ethical standards for algorithmic systems. For example, Sambasivan et al. point out that AI fairness is largely measured according to “Western concerns and histories - the structural injustices (e.g. race and gender), the data (e.g. ImageNet) [..] and enlightenment values”.23 When an AI system’s performance is weighed in relation to its purpose, that process includes an array of assumptions about dominant norms.

Creep, a deviation from purpose, essentially serves as a pointer to how and with which norms a purpose is constructed. Discussions on creep thus imply the question of who defines which norms form the baseline. Looking at how and when creep is signalled allows us to understand how norms are constructed. When function creep occurs and purpose limitation fails, it reveals who would (not) have been protected by the dominant norms governing an algorithmic system and its use.

The above implies that a different understanding of function creep is possible. Rather than understanding creep as a phenomenon that goes unnoticed until it appears, creep can be actively sought out to trace where and for whom a system fails. An example is provided by a study about a machine learning system that determines health risk scores in medical settings. Obermeyer et al. show that an algorithm for assigning risk scores to patients exhibits racial bias because it uses health costs as a substitute for health needs.24 As a result, because less money is spent on patients who self-identify as black, they are assigned lower risk scores and are thus rated healthier than equally sick white patients. Crucially, the algorithm did not include race as a variable. The researchers inferred the variable, an action that was not built into the system but is a common method in statistical analysis. By repurposing the system, effectively using function creep, Obermeyer et al. were able to establish harmful effects of norms that were implicit in the algorithm’s design.
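To make this kind of repurposing concrete, the sketch below shows, under assumed column names and an assumed data file, how an external audit in the spirit of Obermeyer et al. might compare a deployed model's risk scores across a group attribute the model never receives as input, at matched levels of actual health need. It illustrates the auditing pattern only and is not the study's actual code.

```python
import pandas as pd

# Hypothetical audit table: one row per patient with the deployed model's
# risk score, a proxy for actual health need (number of active chronic
# conditions), and a group attribute inferred or self-reported outside
# the model. Column and file names are placeholders.
patients = pd.read_csv("audit_sample.csv")

# Mean risk score per group at each level of health need. If scores tracked
# need, groups would look similar within each stratum; systematic gaps
# suggest the cost proxy encodes unequal access to care.
by_need = (
    patients
    .groupby(["n_chronic_conditions", "group"])["risk_score"]
    .mean()
    .unstack("group")
)
print(by_need)

# Share of each group above an illustrative cut-off that would trigger
# referral to an extra-care programme.
cutoff = patients["risk_score"].quantile(0.975)
flagged = patients[patients["risk_score"] >= cutoff]
print(flagged["group"].value_counts(normalize=True))
```

Nothing in this audit is part of the system's specified functionality; it works precisely by using the system's outputs for a purpose its designers did not stipulate.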

Creep as Modus Operandi

While algorithmic systems are built to be repurposed,25 the harmful effects of this openness are compartmentalized as aberrations from norms. In dominant narratives, function creep serves as a rhetorical device to mark routinely occurring deviations as anomalies. But function creep is not an exception; it is the modus operandi of any algorithmic system.

Actively leveraging function creep allows tracing the ongoing negotiation and erosion of the functionality of algorithmic systems, as well as the entanglement of these systems with networks of power and their stakeholders. The ideological and algorithmic frameworks of purpose limitation reveal who does (not) matter according to the norms governing and embedded in an algorithmic system. Function creep is a pointer to this conditionality of fairness.
