
Testing, testing, testing

July 15, 2021 by Paulina Przystupa

Digital Data Stories

As we mentioned in our quarterly report, things are mooving along with our first data story. However, before we let it roam on the range, we want to make sure it is up to a certain standard. The question is, what standard?

As students and instructors, we have a good idea of what makes a tutorial fall short. Sometimes exercises just don’t work, but it’s hard to know that until we’ve actually shared the thing with the world. And while folks understand that no tutorial is perfect, we want to ensure, before we share our story with collaborators in classrooms this fall, that it works as expected.

Thankfully, we don’t have to debut the whole thing all at once. Instead, the Digital Data Stories (DDS) team decided to use an Evaluation Preparation Plan (EPP) framework to guide our external testing.

[Screenshot of the beginning of our pre-survey for our first round of testers, which reads “Please answer the following questions BEFORE engaging with the Digital Data Story.”]

An EPP gives us a living guideline for how to design each stage of testing for the data story. It’s broken into five stages that outline all the things we need for testing and the problems we want evaluated. Breaking things down this way means that we consider each aspect of the testing process and nothing gets missed.

Those stages are:
1. Defining evaluation objectives
2. Selecting participants for the testing event
3. Deciding how we capture data about the testing event
4. Testing the plan with the local team
5. Assembling the testing event logistics

We can think of these as roughly the who, what, when, where, and why of testing, though in a slightly different order.

For evaluation, we start with the ‘why’ and the ‘what’. Why are we testing this now? What do we want to know? We want to test now because, in the eyes of the team, the story is complete and we want it to be well used. Furthermore, we want to know whether our data story works for our audience as intended. Our hope is that folks with no archaeological experience can learn something about cattle bones from archaeological sites.

With that objective in mind, we moved to the ‘who’ and decided to focus our first round of testing on people who are new to archaeology and people who may have archaeological experience but didn’t have computer programming as part of their training.

[Screenshot of a sampling of our text-based survey, showing one of the questions we’re asking testers.]

After that, we revisited the ‘what’ and added the ‘where’. We decided to collect data via a text-based survey, provided online with a testing packet. A text-based survey is one that collects text, as in typed-up comments, rather than recorded audio or video responses. Key during this stage was writing survey questions that evaluated the preparation of our participants and how well our data story served them.

With those decisions in place, we moved on to stage 4 and shared the plan with our local team to test the testing plan itself. Having wrapped that up in the last week, we sent the testing packets out and are ready to receive responses.

What’s really cool about this system is that the EPP is a living document. So, as we test, we reexamine the framework to capture ideas about testing that we missed. This means, if we forget something this round, we can make sure to include it in the next round of testing. It also means we get better as we go along, building on feedback for testing and for future data stories!

So with the plan for evaluation in motion, defined by what we want to see, we’ve released the cows for our first round of evaluation. Check back soon to learn what we find out and for the official debut of our first data story!


Categories: News, Projects | Tags: data story, Evaluation, testing
