Thiel said it was “a surprise” to get any hits on “a small Twitter dataset.” The researchers used PhotoDNA, a digital-signature matching tool, along with their own software to scan for the images, and did not view the images themselves.
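(PhotoDNA is a proprietary Microsoft technology that compares perceptual signatures of images against databases of known CSAM. The sketch below is only a simplified, hypothetical illustration of that general signature-matching workflow, not the researchers’ actual tooling: the directory name, the hash list, and the use of an ordinary cryptographic hash instead of a perceptual one are all assumptions made for the example. The point it illustrates is that matching is done entirely on signatures, so the underlying images never have to be viewed.)

```python
import hashlib
from pathlib import Path

# Hypothetical set of known-bad signatures. In practice these come from
# curated databases (e.g., hash lists maintained by child-safety
# organizations); the value here is a placeholder.
KNOWN_BAD_HASHES = {
    "placeholder_signature_value",
}

def signature(path: Path) -> str:
    """Compute a digital signature for an image file.

    A real pipeline such as PhotoDNA uses a perceptual hash that survives
    resizing and re-encoding; SHA-256 here only matches exact copies and is
    used purely to keep the sketch self-contained.
    """
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan(directory: Path) -> list[Path]:
    """Return files whose signatures match the known-bad list.

    Only signatures are compared; the images are never decoded or displayed.
    """
    return [p for p in directory.glob("*.jpg") if signature(p) in KNOWN_BAD_HASHES]

if __name__ == "__main__":
    for hit in scan(Path("dataset")):  # "dataset" is a placeholder path
        print(f"match: {hit}")
```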
Twitter has previously said it uses PhotoDNA and other tools to detect CSAM, but it did not comment to the Wall Street Journal on whether it still uses PhotoDNA. The Stanford researchers said Twitter told them the CSAM databases contain some false positives, which the platform’s operators manually filter out, and that researchers might see false positives going forward.
The platform has touted its efforts to combat child sexual exploitation, reporting that it suspended about 404,000 accounts in January for creating or engaging with CSAM.
Read the full article at www.catholicnewsagency.com.