Data Matters Privacy Blog

UK consults on algorithmic processing

Algorithms touch on multiple aspects of digital life, and their use potentially falls under several distinct, yet converging, regulatory regimes. More than ever, a joint approach is needed to assess them, and key UK regulators are working together to formulate a coherent policy, setting an interesting example that could serve as a model for comprehensive approaches to digital regulation.

On 28 April 2022, the UK Digital Regulation Cooperation Forum (DRCF) – a body of four UK regulators – published two discussion papers on algorithmic processing, focusing on risk/benefit analysis and the ways in which algorithms can be audited and regulated.

What is DRCF?

The DRCF is a body that brings together four British regulators responsible for regulating digital services: the Competition and Markets Authority (CMA), the Information Commissioner's Office (ICO), the Office of Communications (Ofcom) and the Financial Conduct Authority (FCA). The main objectives of the DRCF, as defined in its work plan for 2022 to 2023, include:

  1. Consistency between regimes – where regulatory regimes intersect, the DRCF works to resolve potential tensions, providing clarity for people and businesses;
  2. Collaboration on projects – the DRCF aims to work collaboratively on areas of common interest and to jointly address complex issues; and
  3. Capacity building between regulators – by working together, the DRCF believes it can more effectively develop and retain the skills, knowledge, expertise and organizational capacity needed to deliver effective digital regulation to people and businesses.

Key Takeaways from the DRCF Algorithm Discussion Papers

Algorithmic processing is one of several priority areas of strategic joint work among DRCF members. The DRCF produced two discussion papers based on feedback provided in a series of bilateral engagements with its members. Key takeaways from the DRCF discussion papers include:

  1. Algorithms provide many benefits, both for individuals and for society, and these benefits can increase with continued responsible innovation.
    • When consumers see evidence of and experience the benefits of algorithms (for example, increased productivity and better ways to search and summarize information), they trust the companies that facilitate these benefits, which drives markets and economic growth.
  2. Harm resulting from algorithms can occur both intentionally and inadvertently.
    • Examples of intentional harm include the automation of spear phishing attacks and the creation of “deepfake” content. More often, however, algorithm-induced harm is unintended – for example, when the underlying dataset used by the algorithm is not representative and leads to biased results.
  3. Those who obtain and/or use algorithms often know little about their origins and limitations.
    • Buyers and users of algorithms often misunderstand how they were developed and how they work in different contexts. As such, they face difficulties in mitigating the associated risks.
  4. A lack of visibility can undermine accountability.
    • Customers are often unaware of the use of algorithms, for example, as part of a loan assessment or when they are recommended online content. This may make it difficult for individuals to exercise their rights (for example, under the GDPR) and may mean that algorithm operators face insufficient scrutiny from data subjects, regulators, civil society and sometimes even the company's own management.
  5. Human oversight is not an infallible guarantee against harm.
    • Human operators often have difficulty interpreting the results of algorithmic processing. Some also place too much faith in its effectiveness and do not scrutinize its outputs sufficiently.
  6. There are limits to DRCF members’ current understanding of the risks associated with algorithmic processing.
    • Due to the pace of innovation and ever-increasing use cases of algorithmic processing, there are significant knowledge gaps among DRCF members, as well as many misconceptions.
  7. There are several problems in the current – nascent – audit landscape.
    • The algorithm audit landscape generally lacks specific rules and standards, it is unclear what standards audits should follow, and auditors are often constrained by a lack of access to systems. In addition, follow-up actions after audits are sometimes insufficient, and there are currently few remedies. Regulators could play an important role in developing and crafting solutions to address these issues.

Potential actions / next steps

These discussion papers identify a number of opportunities for DRCF members to coordinate and collaborate to foster a more robust regulatory environment. These could include developing algorithmic assessment practices, helping organizations communicate more clearly to consumers how and where algorithmic systems are used, and collaborating with researchers.

The DRCF consultation seeks feedback from stakeholders on the use of algorithms and how best its members can approach their use from a regulatory perspective. Specifically, it seeks stakeholder input on the following:

  1. general reflections on the conclusions of the article on algorithmic processing;
  2. other issues on which the DRCF could focus;
  3. the areas of interest on which the DRCF has the most potential to influence and which area of interest it should prioritize;
  4. results that consumers and individuals would find useful from the DRCF to help them navigate the algorithmic processing ecosystem in a way that serves their interests;
  5. evidence on the disadvantages and advantages of algorithmic systems;
  6. advantages and disadvantages of each of the hypotheses described in the algorithmic audit paper, related to the potential role of regulators in the algorithmic audit landscape;
  7. the hypotheses that the DRCF should test and explore further; and
  8. other actions the DRCF should consider taking in the area of algorithmic auditing.

It seems likely that this consultation will shape future government and regulatory policies, representing a crucial opportunity to contribute to the debate. The inaugural CMA Data, Technology and Analytics Conference, which will take place on June 15 and 16, will explore many of these questions and will be covered in an upcoming article.
