The New York Times’ Research & Development team is trialing IBM’s Hyperledger Fabric, a permissioned DLT (distributed ledger technology) system, to help verify the provenance of photographs distributed across digital platforms, where the risks of fake news and photo doctoring are manifold.
“The New York Times Research & Development team is launching The News Provenance Project to experiment with product design and user-facing tools to try to make the origins of journalistic content clearer to our audiences,” project lead Sasha Koren writes in a blog post regarding the launch.
Proper means of assuring the veracity of media content could not be more critical, says Koren:
“In a time of heightened political polarization and widespread social media use, the prevalence of misinformation online is a persistent problem, with increasingly serious effects on elections and the stability of governments around the world. In addition to false statements published as fact in text and photos that have been manipulated or republished out of context, instances of manipulated video are now on the rise. How should news organizations respond to this crisis?”
Permissioned distributed ledgers are often referred to as “blockchains,” but for various reasons the term is controversial.
Suffice it to say that the system in this case, if it passes the proof-of-concept stage, will allow multiple parties to access photo records and update them by consensus:
“Why blockchain? Its underlying structure as a ‘distributed ledger’ (a database that is not housed on one set of servers owned and operated by one entity, but by many entities and servers that are kept updated simultaneously) is useful for this project because it makes the records of each change traceable: files are not so much changed as built upon. Any updates to what is published are recorded in a sequential string (or ‘blocks’ in a ‘chain’) with the string of those changes adding up to create a provenance.”
“By experimenting with publishing photos on a blockchain, we might, in theory, provide audiences with a way to determine the source of a photo, or whether it had been edited after it was published.”
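The append-only structure Koren describes, in which records are “not so much changed as built upon,” can be illustrated with a minimal sketch. The code below is not The Times’ or Hyperledger Fabric’s actual implementation; it is a simplified, hypothetical hash chain showing how each photo-metadata update references the previous record, so that any after-the-fact tampering with earlier entries becomes detectable. All field names and file names are illustrative.

```python
import hashlib
import json

def record_update(chain, metadata):
    """Append a photo-metadata update to the chain.

    Each block stores the hash of its predecessor, so the sequence of
    updates "adds up to create a provenance": altering any earlier
    record invalidates every hash that follows it.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = {"metadata": metadata, "prev_hash": prev_hash}
    block = dict(payload)
    block["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    chain.append(block)
    return chain

def verify(chain):
    """Re-derive every hash in order; returns False if any record was
    modified after it was written."""
    prev = "0" * 64
    for block in chain:
        expected = hashlib.sha256(
            json.dumps({"metadata": block["metadata"], "prev_hash": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected or block["prev_hash"] != prev:
            return False
        prev = block["hash"]
    return True

# Illustrative usage: publish a photo record, then amend its caption.
chain = []
record_update(chain, {"photo": "img_001.jpg",
                      "caption": "Original caption",
                      "source": "Staff photographer"})
record_update(chain, {"photo": "img_001.jpg",
                      "caption": "Corrected caption",
                      "editor": "Photo desk"})
print(verify(chain))  # True: the chain is intact

# Tampering with the first record breaks every later hash.
chain[0]["metadata"]["source"] = "unknown"
print(verify(chain))  # False: the edit is detectable
```

In a real permissioned ledger the chain would be replicated across many organizations’ servers and updated by consensus rather than held in one process, but the tamper-evidence mechanism is the same idea.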
Koren says The Times’ solution was inspired partly by The Guardian’s decision to change how it displays dates on old articles, after it saw traffic on old articles spiking when they had been “shared as new, and with incorrect context, on Facebook.”
The Times joins a host of other media firms and non-profits trying to combat false reporting.
“As misinformation tools continue to evolve,” Koren writes, “so have strategies to identify and avoid it. Some recent standout examples include the Washington Post’s visual explainer of manipulated video and the Wall Street Journal’s creation of a team to help its journalists identify deepfakes. In addition, a number of news-centric nonprofits — including First Draft, which provides guidance in verifying content found on the social web, consortiums like Misinfocon, the Credibility Coalition, and many, many others — have emerged to study the issue from a variety of perspectives and provide much-needed research and training.”
Other news organizations are welcome to contribute to the trial:
“We’re working from within The New York Times Company, but not solely on behalf of it. A successful implementation will require collaboration and use among many organizations…(W)e’ll make what we learn publicly available in the hopes that it may be of interest and of use to other publishers. We would love to have more participants join us in this experimentation — particularly news outlets that publish original photos and that serve different audiences from The Times’s.”