What is CSAM? Child Safety Terms & Definitions

As you continue to explore the right technical solutions to help your organization combat CSAM, you will undoubtedly encounter new terminology and tools. This page provides a quick overview of the following areas, with links to helpful resources:


Key Issue Terms

  • Child Sexual Abuse Material (CSAM): Material that depicts a child engaged in real or simulated sexual activity, or that depicts the sexual parts of a child for primarily sexual purposes. It is often distributed within communities built specifically to normalize and request the sexual abuse of children, and it frequently includes extreme acts of violence against children under the age of 12. CSAM may also be referred to as child sexual abuse imagery (CSAI) or child sexual exploitation (CSE).
  • Grooming: The act by which an adult deliberately establishes trust and an emotional connection with a minor with the intention of manipulating the child and carrying out sexual abuse.
  • Revictimization: Also known as retraumatization, the recurrence of sexual victimization after the original abuse has occurred. For survivors of child sexual abuse, the continued distribution of content depicting their abuse exacerbates their trauma, fear, and vulnerability.
  • National Center for Missing and Exploited Children (NCMEC): The national clearinghouse in the United States for all issues related to the prevention of and recovery from child victimization.
  • CyberTipline: Centralized hotline for reporting the online exploitation of children to NCMEC. Information shared in a report is made available to law enforcement for possible investigation.

Technology

  • Hash Value: The output, or digital fingerprint, of a hashing algorithm applied to an image, video, or other file. Hash values can be compared to determine whether two files are identical (for cryptographic hashes) or similar (for perceptual hashes).
  • Cryptographic Hash: An algorithm that computes hash values for files such that two files with the same hash value are virtually guaranteed to be exactly the same file. A small change to a file results in a significant change to the hash value. Cryptographic hashes are often used to verify file integrity. Examples include MD5 and SHA-1.
  • Perceptual Hash: Commonly referred to as fuzzy or visually similar image matching, perceptual hashes generate hash values that can be compared for similarity. A small change to a file results in a small change to the hash value, so near-duplicate files can still be matched. Examples include PhotoDNA and pHash (see the sketch after this list).
  • PhotoDNA: Developed by Microsoft in partnership with Dr. Hany Farid of Dartmouth College, PhotoDNA is a perceptual hash that remains the industry standard for detecting known CSAM. To learn more about Microsoft PhotoDNA or apply for an on-premises license, please visit https://www.microsoft.com/en-us/photodna
  • Classifier: A classifier in machine learning is an algorithm that automatically orders or categorizes data into one or more of a set of “classes.” Safer's classifiers scan a file and assign it a score that indicates the likelihood that the file contains a child sexual abuse image or video.
  • Hash List: A list of known hash values that content can be scanned against to identify matches. CSAM hash lists are available for finding known CSAM. For more on which hash lists are available, please see the Hash Lists section.
  • False Positive: A result indicating that a particular condition is present or true when it is not. In the case of hashes and identifying CSAM, false positives are images that were incorrectly identified as CSAM because they happened to match the perceptual hash value for a CSAM image.
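
To make the difference between cryptographic and perceptual hashing concrete, here is a minimal, illustrative Python sketch. It is not PhotoDNA or Safer's matching code; the 8x8 grayscale thumbnail input, the simple "average hash" scheme, and the Hamming-distance comparison are generic assumptions chosen only to show how exact matching differs from similarity matching.

```python
import hashlib

def cryptographic_hash(data: bytes) -> str:
    # Exact-match fingerprint: changing even one byte produces a completely
    # different digest, so it can only detect identical files.
    return hashlib.sha1(data).hexdigest()

def average_hash(pixels: list) -> int:
    # Toy perceptual hash ("average hash") over an 8x8 grayscale thumbnail:
    # each bit records whether a pixel is brighter than the image's mean,
    # so visually similar images yield hashes that differ in only a few bits.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    # Number of differing bits; a small distance suggests visual similarity.
    return bin(a ^ b).count("1")

# Identical bytes -> identical cryptographic hashes; one changed byte -> no match.
print(cryptographic_hash(b"original") == cryptographic_hash(b"original"))   # True
print(cryptographic_hash(b"original") == cryptographic_hash(b"original!"))  # False

# Two thumbnails that differ only slightly land within a few bits of each other.
thumb_a = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
thumb_b = [[min(255, v + 3) for v in row] for row in thumb_a]
print(hamming_distance(average_hash(thumb_a), average_hash(thumb_b)))       # small, e.g. 0-2
```

In practice, perceptual matching uses a distance threshold: files whose hashes differ by only a few bits are treated as visually similar, and matches are escalated for review. Setting that threshold too loosely is one source of the false positives described above.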

Hash Lists

  • ESP lists: Hashes representing previously detected CSAM contributed by Electronic Service Providers (ESPs) as part of voluntary industry hash-sharing initiatives.
  • NGO lists: Hashes representing previously reported CSAM contributed by child-serving non-profit organizations as part of voluntary hash-sharing initiatives. The content represented by the hashes on these lists has usually been verified by a trained analyst to ensure the integrity of the list.
  • SE list: Hashes representing content that may not meet the legal definition of CSAM but is nonetheless sexually exploitative of children. Although not CSAM, the content represented by these hashes may be associated with a known CSAM series or otherwise used in a nefarious manner. Matches against this list should not be reported but should be removed from platforms to protect the children depicted in the images and videos (see the sketch after this list).
  • SaferList: Hashes contributed by the Safer community to enable industry knowledge sharing. For companies to be most efficient at detecting the content that harms and exploits children, detection intelligence needs to be multi-faceted. Hashes in SaferList may represent content that is CSAM, content that is otherwise sexually exploitative of children, or known false positives that can trigger matches against actual SE and CSAM hashes. To learn more about Safer, please visit our product blog.
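
As a rough illustration of how these lists might be used together, the sketch below checks a query hash against hypothetical CSAM and SE hash lists and routes the outcome accordingly (report and remove for CSAM matches, remove only for SE matches, as described above). The list contents, the MATCH_THRESHOLD value, and the function names are assumptions made for illustration; Safer and similar services provide their own matching APIs and hash data.

```python
from typing import Iterable

MATCH_THRESHOLD = 2  # assumed maximum bit difference to count as a perceptual match

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def matches(query: int, hash_list: Iterable[int]) -> bool:
    # A query matches a list if it falls within the threshold of any stored hash.
    return any(hamming_distance(query, h) <= MATCH_THRESHOLD for h in hash_list)

def route(query: int, csam_lists: dict, se_list: set) -> str:
    # CSAM matches (e.g., ESP or NGO list hashes) are reported and removed;
    # SE matches are removed but not reported, per the guidance above.
    for name, hashes in csam_lists.items():
        if matches(query, hashes):
            return f"report and remove (matched {name})"
    if matches(query, se_list):
        return "remove only (matched SE list)"
    return "no match"

# Hypothetical 8-bit hash values purely for demonstration.
csam_lists = {"ngo_list": {0b10110110}, "esp_list": {0b01011001}}
se_list = {0b11110000}
print(route(0b10110111, csam_lists, se_list))  # report and remove (matched ngo_list)
print(route(0b11110011, csam_lists, se_list))  # remove only (matched SE list)
print(route(0b00000000, csam_lists, se_list))  # no match
```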

For more information about Safer, please visit our product blog or contact us to learn more.