New York Times unveils prototype system aimed at inspiring confidence in photojournalism

Photo: dpreview.com

Misinformation is a big issue online, especially given how quickly false stories are shared. The New York Times R&D team has worked with the Content Authenticity Initiative (CAI) on a prototype system 'exploring tools to give readers transparency into the source and veracity of news visuals.' If a picture is worth a thousand words, that picture must be verifiably truthful to its viewers.

As CAI points out, it used to be a given that a 'photograph never lies.' However, that hasn't been true for a long time. It's easy for an image to be manipulated and tell a story far from the truth. It's hard to tell fact from fiction, and a fake or doctored image can make the rounds so quickly that you see it many times in your feed before you ever see the original image, if you ever see the real image at all. A study by Adobe found that there's a lack of trust in images and that people are concerned about seeing doctored content. The study also found that photographers are concerned about image theft and plagiarism.

The NYT R&D team shows its 'secure sourcing' prototype visualization.

For photographers, it's not just about the honesty of an image; it's also about credit. Someone can screengrab a photographer's image and spread it around the web before the photographer ever has the opportunity to demand the financial compensation they deserve. Once the image has been seen all over the internet, the value of their work has already been irreparably damaged. Santiago Lyon, Head of Advocacy & Education at CAI, writes, 'Regardless of source, images are plucked out of the traditional and social media streams, quickly screen-grabbed, sometimes altered, posted and reposted extensively online, usually without payment or acknowledgment and often lacking the original contextual information that might help us identify the source, frame our interpretation, and add to our understanding.'

Scott Lowenstein of NYT R&D says, 'The more people are able to understand the true origin of their media, the less room there is for "fake news" and other deceitful information. Allowing everyone to provide and access media origins will protect against manipulated, deceptive, or out-of-context online media.'

Along with Adobe and Twitter, The New York Times Co. is a founding member of the CAI. The CAI and its partners 'are working to develop an open industry standard that will allow for more confidence in the authenticity of photographs (and then video and other file types). We are creating a community of trust, to help viewers know if they can believe what they see.' To this end, the new prototype outlines a 'secure sourcing' workflow that preserves metadata with secure signatures at each step as an image is captured, edited in Adobe Photoshop, and published. When the image is posted to social media, links to the original image are attached and signed by the platform.
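To make the idea of signing metadata at each step more concrete, the sketch below models a simplified provenance chain in Python: the camera, the editing tool, and the publisher each append a claim tied to a hash of the image and sign it. This is an illustrative assumption only, not the actual CAI/C2PA format, which embeds claims in the image file itself and uses certificate-based identities; every function and field name here is hypothetical.

# Illustrative sketch only: a simplified chain of signed provenance claims.
# The real CAI/C2PA specification differs; names and structure are hypothetical.
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def image_hash(image_bytes: bytes) -> str:
    """Content hash that ties each claim to a specific version of the image."""
    return hashlib.sha256(image_bytes).hexdigest()

def add_claim(chain: list, actor_key: Ed25519PrivateKey, action: str,
              image_bytes: bytes, metadata: dict) -> list:
    """Append a signed claim (capture, edit, or publish) to the provenance chain."""
    claim = {
        "action": action,                       # e.g. "captured", "edited", "published"
        "image_sha256": image_hash(image_bytes),
        "metadata": metadata,                   # e.g. device, editing tool, publisher
        "previous": chain[-1]["signature"] if chain else None,  # link to prior step
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = actor_key.sign(payload).hex()
    return chain + [claim]

def verify_chain(chain: list, public_keys: list) -> bool:
    """Check each claim's signature and that it links to the previous claim."""
    prev_sig = None
    for claim, pub in zip(chain, public_keys):
        if claim["previous"] != prev_sig:
            return False
        unsigned = {k: v for k, v in claim.items() if k != "signature"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        try:
            pub.verify(bytes.fromhex(claim["signature"]), payload)
        except Exception:  # cryptography raises InvalidSignature on tampering
            return False
        prev_sig = claim["signature"]
    return True

# Example: photographer -> Photoshop -> publisher, each signing in turn.
photo = b"...raw image bytes..."
camera_key, editor_key, publisher_key = (Ed25519PrivateKey.generate() for _ in range(3))

chain = add_claim([], camera_key, "captured", photo, {"device": "camera"})
edited = photo + b" (edited)"
chain = add_claim(chain, editor_key, "edited", edited, {"tool": "Photoshop"})
chain = add_claim(chain, publisher_key, "published", edited, {"outlet": "nytimes.com"})

keys = [k.public_key() for k in (camera_key, editor_key, publisher_key)]
print(verify_chain(chain, keys))  # True while signatures and links are intact

A reader-facing tool could then verify such a chain and surface who captured, edited, and published the image, which is roughly the kind of information the prototype's provenance display is meant to expose.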

Photograph by Niko Koppel for the NYT R&D project, integrated with CAI Content Credentials.

Lyon writes, 'This important work demonstrates how a well-respected news outlet like the NYT is experimenting with CAI technology, giving us a hint of what's possible at scale. This aligns with our goal of displaying a CAI logo next to images published in traditional or social media that gives the consumer more information about the provenance of the imagery, such as where and when it was first created and how it might have been altered or edited.'

Lyon continues, 'This will bolster trust in content among both consumers and capture partners (such as Qualcomm and Truepic), editing partners (in this case, our colleagues at Adobe Photoshop), and publishers, such as the New York Times and others.'

Eventually, the hope is that CAI logos can be placed next to images on traditional publishing and social media platforms, inspiring confidence in the provenance of images and explaining how an image was edited before being published. Ideally, viewers would be able to click on the CAI logo and find out about the image creator and see all the edits that have been made.

For the initiative and the NYT R&D prototype to work, widespread adoption is necessary. Overcoming the broad distrust of news and images will take considerable work, but reliable, secure, and accessible records of how images are created and edited would go a long way toward restoring confidence in them.
