The case for global governance of AI: arguments, counter-arguments, and challenges ahead


Paper by Mark Coeckelbergh: “But why, exactly, is global governance needed, and what form can and should it take? The main argument for the global governance of AI, which is also applicable to digital technologies in general, is essentially a moral one: as AI technologies become increasingly powerful and influential, we have the moral responsibility to ensure that it benefits humanity as a whole and that we deal with the global risks and the ethical and societal issues that arise from the technology, including privacy issues, security and military uses, bias and fairness, responsibility attribution, transparency, job displacement, safety, manipulation, and AI’s environmental impact. Since the effects of AI cross borders, so the argument continues, global cooperation and global governance are the only means to fully and effectively exercise that moral responsibility and ensure responsible innovation and use of technology to increase the well-being for all and preserve peace; national regulation is not sufficient….(More)”.

Repository of 80+ real-life examples of how to anticipate migration using innovative forecast and foresight methods is now LIVE!


BD4M Announcement: “Today, we are excited to launch the Big Data For Migration Alliance (BD4M) Repository of Use Cases for Anticipating Migration Policy! The repository is a curated collection of real-world applications of anticipatory methods in migration policy. Here, policymakers, researchers, and practitioners can find a wealth of examples demonstrating how foresight, forecasting, and other anticipatory approaches are applied to anticipating migration for policymaking.

Migration policy is a multifaceted and constantly evolving field, shaped by a wide variety of factors such as economic conditions, geopolitical shifts or climate emergencies. Anticipatory methods are essential to help policymakers proactively respond to emerging trends and potential challenges. By using anticipatory tools, migration policymakers can draw from both quantitative and qualitative data to obtain valuable insights for their specific goals. The Big Data for Migration Alliance — a joint effort of The GovLab, the International Organization for Migration and the European Union Joint Research Centre that seeks to improve the evidence base on migration and human mobility — recognizes the important role of anticipatory tools and has worked on the creation of a repository of use cases that showcases the current landscape of anticipatory tool use in migration policymaking around the world. This repository aims to provide policymakers, researchers and practitioners with applied examples that can inform their strategies and ultimately contribute to the improvement of migration policies around the world.

As part of our work exploring innovative anticipatory methods for migration policy, throughout the year we have published a Blog Series that delved into various aspects of the use of anticipatory methods, exploring their value and challenges, proposing a taxonomy, and examining practical applications…(More)”.

The limits of state AI legislation


Article by Derek Robertson: “When it comes to regulating artificial intelligence, the action right now is in the states, not Washington.

State legislatures are often, like their counterparts in Europe, contrasted favorably with Congress — willing to take action where their politically paralyzed federal counterpart can’t, or won’t. Right now, every state except Alabama and Wyoming is considering some kind of AI legislation.

But simply acting doesn’t guarantee the best outcome. And today, two consumer advocates warn in POLITICO Magazine that most, if not all, state laws are overlooking crucial loopholes that could shield companies from liability when it comes to harm caused by AI decisions — or from simply being forced to disclose when it’s used in the first place.

Grace Gedye, an AI-focused policy analyst at Consumer Reports, and Matt Scherer, senior policy counsel at the Center for Democracy & Technology, write in an op-ed that while the use of AI systems by employers is screaming out for regulation, many of the efforts in the states are ineffectual at best.

Under the most important state laws now in consideration, they write, “Job applicants, patients, renters and consumers would still have a hard time finding out if discriminatory or error-prone AI was used to help make life-altering decisions about them.”

Transparency around how and when AI systems are deployed — whether in the public or private sector — is a key concern of the growing industry’s watchdogs. The Netherlands’ tax authority infamously immiserated tens of thousands of families by accusing them falsely of child care benefits fraud after an algorithm used to detect it went awry…

One issue: a series of jargon-filled loopholes in many bill texts saying the laws only cover systems “specifically developed” to be “controlling” or “substantial” factors in decision-making.

“Cutting through the jargon, this would mean that companies could completely evade the law simply by putting fine print at the bottom of their technical documentation or marketing materials saying that their product wasn’t designed to be the main reason for a decision and should only be used under human supervision,” they explain…(More)”

Potential competition impacts from the data asymmetry between Big Tech firms and firms in financial services


Report by the UK Financial Conduct Authority: “Big Tech firms in the UK and around the world have been, and continue to be, under active scrutiny by competition and regulatory authorities. This is because some of these large technology firms may have both the ability and the incentive to shape digital markets by protecting existing market power and extending it into new markets.
Concentration in some digital markets, and Big Tech firms’ key role in them, have been widely discussed, including in our DP22/05. This reflects both the characteristics of digital markets and the characteristics and behaviours of Big Tech firms themselves. Although Big Tech firms have different business models, common characteristics include their global scale and access to a large installed user base, rich data about their users, advanced data analytics and technology, influence over decision making and defaults, ecosystems of complementary products, and strategic behaviours, including acquisition strategies.
Through our work, we aim to mitigate the risk of competition in retail financial markets evolving in a way that results in some Big Tech firms gaining entrenched market power, as seen in other sectors and jurisdictions, while enabling the potential competition benefits that come from Big Tech firms challenging incumbent financial services firms…(More)”.

How do you accidentally run for President of Iceland?


Blog by Anna Andersen: “Content design can have real consequences — for democracy, even…

To run for President of Iceland, you need to be an Icelandic citizen, at least 35 years old, and have 1,500 endorsements.

For the first time in Icelandic history, this endorsement process is digital. Instead of collecting all their signatures on paper the old-fashioned way, candidates can now send people to https://island.is/forsetaframbod to submit their endorsement.

This change has, also for the first time in Icelandic history, given the nation a clear window into who is trying to run — and it’s a remarkably large number. To date, 82 people are collecting endorsements, including a comedian, a model, the world’s first double-arm transplant recipient, and my aunt Helga.

Many of these people are seriously vying for president (yep, my aunt Helga), some of them have undoubtedly signed up as a joke (nope, not the comedian), and at least 11 of them accidentally registered and had no idea that they were collecting endorsements for their candidacy.

“I’m definitely not about to run for president, this was just an accident,” one person told a reporter after having a good laugh about it.

“That’s hilarious!” another person said, thanking the reporter for letting them know that they were in the running.

As a content designer, I was intrigued. How could so many people accidentally start a campaign for President of Iceland?

It turns out the answer largely has to do with content design. Presidential hopefuls were sending people a link to a page where they could be endorsed, but instead of endorsing the candidate, some people accidentally registered to be a candidate…(More)”.

A Literature Review on the Paradoxes of Public Interest in Spatial Planning within Urban Settings with Diverse Stakeholders


Paper by Danai Machakaire and Masilonyane Mokhele: “The concept of public interest legitimises the planning profession, provides a foundational principle, and serves as an ethical norm for planners. However, critical discourses highlight the problems of the assumptions underlying the notion of public interest in spatial planning. Using an explorative literature review approach, the article aims to analyse various interpretations and applications of public interest in spatial planning. The literature search process, conducted between August and November 2023, targeted journal articles and books published in English and focused on the online databases of Academic Search Premier, Scopus, and Google Scholar. The final selected literature comprised 71 sources. The literature showed that diverse conceptualisations of public interest complicate the ways spatial planners and authorities incorporate it in planning tools, processes, and products. This article concludes by arguing that the prospects of achieving a single definition of the public interest concept are slim, and that such a definition may not be necessary given the heterogeneous conceptualisation and the multiple operational contexts of public interest. The article recommends the development of context-based analytical frameworks to establish linkages that would lead towards the equitable inclusion of public interest in spatial planning…(More)”.

Murky Consent: An Approach to the Fictions of Consent in Privacy Law


Paper by Daniel J. Solove: “Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic” – it transforms things that would be illegal and immoral into lawful and legitimate activities. As to privacy, consent authorizes and legitimizes a wide range of data collection and processing.

There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates; organizations post a notice of their privacy practices and people are deemed to consent if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.

Both approaches fail. The evidence of actual consent is non-existent under the notice-and-choice approach. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach also suffers from these problems – people are ill-equipped to decide about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent also is highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.

In this Article, I contend that most of the time, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious….(More)”. See also: The Urgent Need to Reimagine Data Consent

Russia Clones Wikipedia, Censors It, Bans Original


Article by Jules Roscoe: “Russia has replaced Wikipedia with a state-sponsored encyclopedia that is a clone of the original Russian Wikipedia but has conveniently been edited to omit anything that could cast the Russian government in a poor light. Editors of the real Russian Wikipedia used to refer to it as Ruwiki; the new clone is called Ruviki, has “ruwiki” in its URL, and has copied all Russian-language Wikipedia articles and strictly edited them to comply with Russian laws.

The new articles exclude mentions of “foreign agents,” the Russian government’s designation for any person or entity which expresses opinions about the government and is supported, financially or otherwise, by an outside nation. Prominent “foreign agents” have included a foundation created by Alexei Navalny, a famed Russian opposition leader who died in prison in February, and Memorial, an organization dedicated to preserving the memory of Soviet terror victims, which was liquidated in 2022. The news was first reported by Novaya Gazeta, an independent Russian news outlet that relocated to Latvia after Russia invaded Ukraine in 2022. It was also picked up by Signpost, a publication that follows Wikimedia goings-on.

Both Ruviki articles about these agents include disclaimers about their status as foreign agents. Navalny’s article states he is a “video blogger” known for “involvement in extremist activity or terrorism.” It is worth mentioning that his wife, Yulia Navalnaya, firmly believes he was killed. …(More)”.

People with Lived Experience and Expertise of Homelessness and Data Decision-Making


Toolkit by HUD Exchange: “People with lived experience and expertise of homelessness (PLEE) are essential partners for Continuums of Care (CoCs). Creating community models that acknowledge and practice inclusivity, while also valuing the agency of PLEE, is essential. CoCs should work together with PLEE to engage in the collection, review, analysis, and use of data to make collaborative decisions impacting their local community.

This toolkit offers suggestions on how PLEE, community partners, and CoCs can partner on data projects and additional local data decision-making efforts. It includes resources on partnership practices, compensation, and training…(More)”

The Crime Data Handbook


Book edited by Laura Huey and David Buil-Gil: “Crime research has grown substantially over the past decade, with a rise in evidence-informed approaches to criminal justice, statistics-driven decision-making and predictive analytics. The fuel that has driven this growth is data – and one of its most pressing challenges is the lack of research on the use and interpretation of data sources.

This accessible, engaging book closes that gap for researchers, practitioners and students. International researchers and crime analysts discuss the strengths, perils and opportunities of the data sources and tools now available and their best use in informing sound public policy and criminal justice practice…(More)”.