The screening process begins with scanning software that monitors chats for words or phrases that signal something might be amiss, such as an exchange of personal information or vulgar language.
The software pays closer attention to chats between users who don’t already have a well-established connection on the site and whose profile data suggest something may be wrong, such as a wide age gap. The scanning program is also “smart” — it’s trained to watch for certain phrases found in chat records previously obtained from criminals, including sexual predators.
If the scanning software flags a suspicious chat exchange, it notifies Facebook security employees, who can then determine if police should be notified.
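The process described above — phrase matching, extra weight for weakly connected users with wide age gaps, and a high bar before a human reviewer is notified — can be sketched roughly as follows. This is purely illustrative: the phrase list, scoring weights, and threshold are invented for the example, and Facebook’s actual system is not public.

```python
# Hypothetical sketch of keyword-based chat flagging. The phrases,
# weights, and threshold below are illustrative assumptions, not
# Facebook's real values.

FLAGGED_PHRASES = {"send me your address", "don't tell your parents"}

def risk_score(message, sender_age, recipient_age, mutual_friends):
    """Score a chat message; higher means more suspicious."""
    score = 0.0
    text = message.lower()
    # Match against phrases seen in previously obtained criminal chat logs
    for phrase in FLAGGED_PHRASES:
        if phrase in text:
            score += 1.0
    # Chats between users with no established connection get more weight
    if mutual_friends == 0:
        score *= 2.0
    # A wide age gap between the two users raises suspicion further
    if abs(sender_age - recipient_age) >= 10:
        score *= 1.5
    return score

def should_escalate(message, sender_age, recipient_age, mutual_friends,
                    threshold=1.5):
    """Only scores above a high threshold reach a human reviewer,
    keeping the false-positive rate low."""
    return risk_score(message, sender_age, recipient_age,
                      mutual_friends) >= threshold
```

In this sketch, an innocuous message between well-connected peers never crosses the threshold, so it is never surfaced to an employee — consistent with Sullivan’s stated goal of a very low false-positive rate.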
Keeping most of the scanned chats out of the eyes of Facebook employees may help Facebook deflect criticism from privacy advocates, but whether the scanned chats are deleted or stored permanently remains unknown.
The new details about Facebook’s monitoring system come from an interview that the company’s Chief Security Officer, Joe Sullivan, gave to Reuters. According to the Reuters report, at least one alleged child predator has been brought to trial as a direct result of Facebook’s chat scanning.
When asked for a comment, Facebook only repeated the remarks given by Sullivan to Reuters: “We’ve never wanted to set up an environment where we have employees looking at private communications, so it’s really important that we use technology that has a very low false-positive rate.”
Facebook works with law enforcement “where appropriate and to the extent required by law to ensure the safety of the people who use Facebook,” according to a page on its site.
“We may disclose information pursuant to subpoenas, court orders, or other requests (including criminal and civil matters) if we have a good faith belief that the response is required by law. This may include respecting requests from jurisdictions outside of the United States where we have a good faith belief that the response is required by law under the local laws in that jurisdiction, apply to users from that jurisdiction, and are consistent with generally accepted international standards.
“We may also share information when we have a good faith belief it is necessary to prevent fraud or other illegal activity, to prevent imminent bodily harm, or to protect ourselves and you from people violating our Statement of Rights and Responsibilities. This may include sharing information with other companies, lawyers, courts or other government entities.”
Indeed, Facebook has cooperated with police investigations in the past. In April, it complied with a subpoena from the Boston Police Department by sending printouts of a murder suspect’s wall posts, photos, and login/IP data.
Is Facebook doing a public service by monitoring chats for criminal behavior? Share your thoughts in the comments.