
From the day the DLP Duo started working on creating Data Stories, we set out with a clear intention: to make sure the work we released was properly evaluated. We'd seen too many projects and resources created, released, and never tested, especially not for the things that underlie the heart of our work – usability, ethics, and reproducibility. So for each Data Story we designed a testing strategy, and we revised our 'alpha' and 'beta' products before 'final' release.
Sometimes our testing involved sending what were effectively pre-prints to a select group of people we knew had interest or expertise in some facet of the Data Story under design. Sometimes it involved closed groups who completed pre- and post-Data Story surveys. Sometimes it involved open distribution of Data Story iterations, with requested (but not required) open surveys. In many of these cases, especially with the open surveys, we relied heavily on social media, and on the extensive network of archaeologists of all career stages who take part in archaeology via social media.
"Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences!" – Elon Musk, October 20, 2022
Unfortunately, over the last year the social media landscape has changed drastically, and what we considered 'solid' communities have been sundered and split across a range of services. The takeover of Twitter (sorry, X) by Elon Musk resulted in an upswing in far-right behavior and a downswing in safety and moderation. A mass exodus followed, with the archaeology social media community splitting into groups on Threads, Mastodon, and Bluesky. Some participants turned back to Facebook, and others took the opportunity to leave social media altogether. Which site will end up being the 'home' of archaeology on social media is still unknown. The scattering of the community has made our tried-and-true methods for obtaining feedback more difficult, and we've seen serious drop-offs in evaluation engagement. So we've had to go back to the drawing board to figure out a new strategy. We've settled on open peer review.
The concept of open peer review is simple: the identities of authors, reviewers, and editors are known to one another, and their comments and feedback are made available to readers. It's a bit harder in practice. Finding researchers willing to put themselves out there as reviewers is a challenge, and choosing how to respond to non-anonymised feedback isn't something academics are typically trained to do.
For our upcoming Data Stories, we'll be using a house-made version of open peer review. In terms of method, this is closest to our original closed-group strategy, but we're hoping to take it further than before. Where we previously left the format for comment amorphous and up to the respondent to determine, we're now moving to a more formalized process. We're currently working on a rubric to guide our first cohort of open peer reviewers. It should give direction on the kind of evaluation we're looking for on our Data Stories, and it should create a framework for collaborative revision and dialogue in the revision process. Or at least, that's our hope!
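To give a rough sense of what we mean, here is a minimal sketch of how a review rubric could be captured as structured data, keeping with our habit of making evaluation material reusable. The criteria, prompts, and scale below are illustrative guesses only, not our finished rubric:

```python
# Illustrative sketch only: the criteria, prompts, and 1-5 scale are
# hypothetical stand-ins, not the DLP's actual open peer review rubric.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class RubricCriterion:
    name: str                      # e.g. "usability", "ethics", "reproducibility"
    prompt: str                    # question the reviewer responds to
    score: Optional[int] = None    # hypothetical 1-5 rating
    comment: str = ""              # free-text feedback, kept for open dialogue


@dataclass
class DataStoryReview:
    reviewer: str                  # open peer review: the reviewer is named
    story: str
    criteria: List[RubricCriterion] = field(default_factory=list)


# Example of a reviewer filling in the sketch rubric for one Data Story.
review = DataStoryReview(
    reviewer="A. Reviewer",
    story="Example Data Story",
    criteria=[
        RubricCriterion("usability", "Could you follow the steps and reuse the materials?"),
        RubricCriterion("ethics", "Are data sources, permissions, and communities handled responsibly?"),
        RubricCriterion("reproducibility", "Do the data and code reproduce the stated results?"),
    ],
)
```

Keeping reviews in a simple, shareable structure like this would also make it easier to publish the feedback alongside each Data Story, which is the whole point of doing the review in the open.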
If you’re interested in taking part as an open peer reviewer, or you have good examples of peer review structures or experiences, please get in touch! We’re always looking for ways to make our data, our process, and our results more open and available, and we’d love your help with it!