
The Alexandria Archive Institute

OPENING THE PAST, INSPIRING THE FUTURE


A Follow-Up on a NAIRR

November 12, 2021 by Paulina Przystupa

[Image: a book that looks like a neural network, captioned "Following up on a National Artificial Intelligence Research Resource"]

If you’re interested in AI applications, here are some of our thoughts on the responses posted to the Request for Information from the White House Office of Science and Technology Policy (OSTP) and the National Science Foundation (NSF).

The comment period for the National Artificial Intelligence Research Resource (NAIRR) ended on 1 October. Recently, we received confirmation that our response is now part of the public record for that request. You can read our full response here, but we also wanted to talk about the comments from other folks.

If you take a look at the full list, the Alexandria Archive Institute (AAI) appears alongside techie juggernauts like Google, Consumer Reports, Palantir, and IBM. A number of other commenters demonstrate the breadth of impact this resource could have, including BeeHero (which is interested in saving bees using advanced technologies), the American Psychological Association, and the Stanford Libraries.

An important response we’d like to highlight is that from the American Civil Liberties Union (ACLU). Their response focuses, from a legal standpoint, on the harms that could arise from a NAIRR. Importantly, the ACLU response points out the technochauvinistic tone of the request, as well as many of the request’s underlying assumptions. Specifically, it identifies the inherent assumption that a NAIRR would have value and benefit communities.

The ACLU’s response highlights that there is no guarantee that this resource will be valuable or beneficial to communities; indeed, judging by other existing AI technologies, the exact opposite is likely. The ACLU also points out that the information that preceded the request suggests that issues of bias and other community harms may only be addressed at the end of the process, rather than incorporated throughout. If that is the case, a NAIRR may do more harm than good.

It is also interesting to note the kinds of institutions that are not represented in the responses. While tech- and data-oriented industries (ourselves included) are well represented, institutions or groups like the ACLU are underrepresented. Few of the listed groups are explicitly oriented towards racial and gender equity, fairness, bias, civil rights, transparency, and accountability, even though these are issues the NAIRR RFI specifically requested comments on.

For example, in a random sample of the responses, none besides the ACLU’s came from institutions oriented specifically towards those issues. Instead, almost all came from groups oriented towards the sciences, AI, computing, and related fields. Although these responses did sometimes address issues of racial and gender equity, fairness, bias, civil rights, transparency, and accountability, none of the groups submitting them were exclusively dedicated to those issues.

This suggests that it may be valuable for those planning a NAIRR to specifically request input from such communities. As our response and the ACLU’s identify, without active participation from communities focused on those issues, this resource is likely to repeat the mistakes the technology industry as a whole makes, reproducing racist and other oppressive outcomes. And a resource that does that is not one that we need.


Categories: News, Publications, Technology · Tags: AI, Artificial intelligence



Contact

contact@alexandriaarchive.org
125 El Verano Way
San Francisco, CA 94127
415-425-7380