This article is from the WeChat public account Big Data Digest (ID: BigDataDigest). Authors: Niu Wanyang, Cao Peixin, Liu Junhuan, Mary. Original title: "South Korea's 'Nth Room' evil: the youngest victim is only 11 years old! With women and children facing violence online, can we pin our hopes on AI?" Picture from IC photo.

We may be closer to evil than we think.

On March 22, a man using the alias "Doctor" (real name Cho Ju-bin) was arrested by South Korean police, exposing a massive crime that had been hidden in a corner of the internet.

Since 2018, a man using the handle "Godgod" had been scouring Twitter for photos of women he considered "sexy," posing as police to intimidate the victims into taking nude photos, then using those photos to blackmail and sexually exploit them, posting the resulting footage in Telegram chat rooms where members paid dues to watch and collect it.

Paying members made ever more perverse demands, and "Godgod" met them one by one. As requests to join the chat rooms grew, Room 1 expanded into Room 2, then Room 3, and so on; this is the origin of the name "Nth Room."

As more and more people joined, the gang also set up different rooms according to their content: a "female teachers' room," a "female nurses' room," a "middle-school girls' room," and even a "little girls' room."

Each chat room had three to four victims and 300 to 700 viewers. When victims tried to resist, the operators used the personal information they had collected to threaten them, forcing them to "live in this fear for the rest of their lives."

In July of last year, Cho Ju-bin founded a second-generation Nth Room and refined the entire criminal system in his own way. A large number of "volunteers" worked for him, and the number of victims kept growing.

As of March 22, 2020, South Korean police had identified as many as 74 female victims, 16 of them minors; the youngest were elementary-school students just 11 years old. Some victims, unable to bear the humiliation, chose to take their own lives.

It was not until the beginning of this year that two undergraduate undercover reporters, who had been lurking inside the Nth Room, reported it to the police after collecting enough evidence, and then covered this heinous crime in detail.

Even more shocking, more than 260,000 viewers participated in the Telegram Nth Rooms, roughly 1% of South Korean men. Yet apart from the undercover reporters, over a period of as long as three years not a single participant came forward to report this terrible organization.

Some South Koreans believe that everyone watching in those rooms is an accomplice to murder: 260,000 members stood by and watched the crimes. They therefore petitioned Cheong Wa Dae (the Blue House) to disclose the identities and photos of the "Doctor" criminal group and of all members of the Nth Rooms.

Warning signs before the harm: what can AI technology do?

As the "Nth Room" incident continued to ferment, people saw a human evil more terrifying than any movie plot, and it triggered nationwide reflection on internet crime and the responsibility of technology platforms.

Although commonalities are hard to pin down, certain behavioral warning signs often precede online abuse:

  • Asking privacy-probing questions: frequently asking where you live and whom you live with deserves attention.

  • Complimenting you: flattery and promises can be a warning sign, for example someone claiming they can help you become a model or a star.

  • Persuading you to talk privately: offenders often try to move victims away from public chat rooms or mainstream social media into text messages or more private tools.

  • Asking you for a photo: once an offender has a photo, they can use it against the victim.
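As an illustration only, the warning signs above could be mirrored by a toy keyword filter. The patterns below are invented examples; a production system would rely on trained models rather than fixed rules.

```python
import re

# Hypothetical patterns loosely mirroring the four warning signs above;
# real detection systems use trained models, not keyword rules.
WARNING_PATTERNS = {
    "privacy_probing": r"\b(where do you live|who do you live with)\b",
    "flattery": r"\b(you could be a model|make you a star)\b",
    "move_private": r"\b(let's talk somewhere private|text me instead)\b",
    "photo_request": r"\bsend (me )?a (photo|pic|picture)\b",
}

def flag_message(text: str) -> list[str]:
    """Return the names of any warning signs matched in a message."""
    lowered = text.lower()
    return [name for name, pattern in WARNING_PATTERNS.items()
            if re.search(pattern, lowered)]

print(flag_message("You could be a model! Send me a photo."))
```

Even this crude sketch shows the basic idea: individual messages can be screened for combinations of risk signals rather than judged in isolation.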

For law enforcement and parents, the hidden nature of mobile phones and chat tools makes it hard to spot these signs directly, but if technology companies build a warning procedure into their platforms, they may be able to raise an alarm before a tragedy occurs.

In the second half of 2018, Microsoft organized a hackathon involving engineers and legal experts. The goal of the event was to build AI tools that could effectively detect "online predators" and the way they win victims' trust.

To better understand the routines online predators use against women and children, the team analyzed tens of thousands of conversations between child predators and minors. The resulting system sounds an alarm when it detects an attempt to get a child to leave the platform and enter a one-on-one conversation.

In the months after the hackathon, Microsoft worked further with other companies, including Roblox and instant-messaging provider Kik, to refine the AI tools so that they can evaluate a two-person conversation based on its features, rate it, and assign it a risk level; if the risk is high enough, the conversation is sent for human review.
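The evaluate-rate-escalate flow described above might be sketched roughly as follows. The feature names, weights, and threshold here are invented for illustration; Microsoft has not published its actual model.

```python
from dataclasses import dataclass

# Invented weights for illustration only; the real system is a trained
# model whose features and thresholds have not been published.
FEATURE_WEIGHTS = {
    "asks_personal_details": 0.2,
    "requests_photo": 0.3,
    "urges_private_channel": 0.3,
    "proposes_meeting": 0.4,
}
REVIEW_THRESHOLD = 0.5  # conversations scoring above this are escalated

@dataclass
class Conversation:
    id: str
    features: set  # names of risk features detected in the two-person chat

def risk_score(conv: Conversation) -> float:
    """Sum the weights of detected features, capped at 1.0."""
    return min(1.0, sum(FEATURE_WEIGHTS.get(f, 0.0) for f in conv.features))

def triage(convs: list) -> list:
    """Return the ids of conversations to escalate for human review."""
    return [c.id for c in convs if risk_score(c) > REVIEW_THRESHOLD]

convs = [
    Conversation("a", {"asks_personal_details"}),
    Conversation("b", {"requests_photo", "proposes_meeting"}),
]
print(triage(convs))  # only the high-risk conversation is escalated
```

The design point is the threshold: most conversations never reach a human reviewer, which keeps review workload (and privacy intrusion) limited to the highest-risk cases.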

If the system identifies a risk to a minor's safety, such as a proposed in-person meeting, the AI notifies local law enforcement. In most cases, flagged conversations are reported to NCMEC (the National Center for Missing & Exploited Children).

Courtney Gregoire, Microsoft's chief digital safety officer, said: "If we think about how to solve this problem, we can already respond and report, but we can't yet achieve prevention."

Another Microsoft spokesperson said the AI tool works in any text-based environment, including Apple's iMessage and WhatsApp. He added that Roblox, which already has a chat-filtering system, is integrating Microsoft's AI tools into its platform as a second layer of protection.

Apart from Microsoft, these companies are also trying to use technology to stop online abuse

Facebook's global head of safety, Antigone Davis, told the Financial Times last year that the company has spent the past few years working with child-protection organizations to develop safeguards for children on its platforms.

Davis said the measures include alerts when someone asks a child for a private chat on Messenger or Instagram DM, or tries to obtain contact information after a minor has repeatedly refused.

In addition to Facebook, Twitter is also actively enforcing its policies on reporting possible child sexual exploitation. Twitter said that once an account is found publishing child pornography, it will delete the content as soon as possible without notifying the account holder, and will review reports from other users in a timely manner.

Google moved even earlier: in 2013 it established a $5 million fund for its "eradication of child abuse imagery" initiative and launched a $2 million child-protection technology fund to encourage the development of better tools to stamp out child pornography.

In addition, Google has built a database of fingerprints of child pornography and abusive images that can be shared with other search engines and child-protection organizations. The database helps such content be removed automatically.
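A minimal sketch of how such a shared fingerprint database might block re-uploads. Real systems use perceptual hashes that survive resizing and re-encoding; the SHA-256 stand-in here only catches byte-identical copies and simply illustrates the idea.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Exact-match stand-in for a perceptual image fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

# Shared database of fingerprints of known abusive images (illustrative).
known_bad = {fingerprint(b"<bytes of a known abusive image>")}

def should_block(image_bytes: bytes) -> bool:
    """Block an upload if its fingerprint is in the shared database."""
    return fingerprint(image_bytes) in known_bad

print(should_block(b"<bytes of a known abusive image>"))  # True
print(should_block(b"<bytes of an unrelated image>"))     # False
```

Because only fingerprints are shared, partner organizations can match against the database without ever exchanging the images themselves.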

Jacquelline Fuller, then Google's Director of Giving, wrote in a blog post: "This will enable companies, law enforcement, and charities to work better together to find and delete these images in a timely manner, and to take proactive action against criminals."

Privacy and security vs. weak protection: Telegram's awkward position

Lindsey Olson, an executive director at the National Center for Missing & Exploited Children (NCMEC), said that although the law requires technology companies to report apparent child sexual abuse on their platforms, it does not require them to report suspected grooming.

This is the awkward position that the third-party platform Telegram faced in the Nth Room case.

As a cross-platform instant-messaging app, Telegram has long been known for encrypted, self-destructing messages and for transferring files of all types, such as photos and videos. Telegram's encryption is reportedly based on 256-bit symmetric AES, 2048-bit RSA, and the Diffie-Hellman secure key-exchange protocol. In addition, Telegram's message transport is exposed as an API of functions, which makes it highly extensible.
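The Diffie-Hellman exchange mentioned above lets two parties derive a shared secret over an open channel without ever transmitting it. A toy sketch with deliberately tiny, insecure parameters (real deployments use 2048-bit groups):

```python
import secrets

# Toy Diffie-Hellman with a deliberately small prime for readability.
# Real protocols use 2048-bit (or larger) safe primes; this is insecure.
p = 0xFFFFFFFB  # largest 32-bit prime (illustrative only)
g = 5           # generator

# Each side picks a private exponent and publishes g^x mod p.
a_priv = secrets.randbelow(p - 2) + 1
b_priv = secrets.randbelow(p - 2) + 1
a_pub = pow(g, a_priv, p)
b_pub = pow(g, b_priv, p)

# Each side raises the other's public value to its own private exponent;
# both arrive at the same shared secret g^(ab) mod p.
a_shared = pow(b_pub, a_priv, p)
b_shared = pow(a_pub, b_priv, p)
assert a_shared == b_shared
print(hex(a_shared))
```

An eavesdropper sees only `p`, `g`, and the two public values; recovering the shared secret from those requires solving the discrete logarithm problem, which is infeasible at real key sizes.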

Telegram has promised a $100,000 bounty to anyone who can crack its intercepted communications. So far only one person has received the bounty, and what he found was merely a potential weakness. In the words of Telegram founder Pavel Durov, "Telegram was born for privacy and security, and never compromises with any force."

However, the Nth Room thrived precisely on a security system that police could not monitor, which seems to slap the founder in the face: this "encrypted" information can also become a breeding ground for crime.

In the Nth Room case, some male members argued that they had only paid to watch adult videos, which did not constitute a crime: "they are the victims." At the same time, they accused the intimidated women of having uploaded revealing photos of themselves to Twitter in the first place. Telegram, too, faces such accusations.

    Picture from Weibo-Other Girls

As the platform that hosted the Nth Room, Telegram cannot shirk its responsibility. As one Weibo user put it, this kind of "absolute freedom" is like "falling back into the primitive jungle."

The cost is undoubtedly huge. Worldwide, the number of such cases voluntarily reported by technology companies and the public rose from 12,000 in 2018 to 19,000 last year. Olson believes these numbers are enough to show the seriousness of the problem.

Olson said: "Now that many minors and children have mobile phones, it is easier for predators to find the people they want to target, and easier for children to send explicit pictures of themselves. These people can cause great harm to a child without any physical contact."

    Can AI be smarter than “smart bad guys”?

This type of sexual assault through online deception and threats is not unique to South Korea; the Washington Post reported a similar case last week.

According to the victim, Rhiannon McDonald, one night when she was thirteen she chatted online with a "model scout." Within just a few hours she was coerced into sending many nude photos of herself, and she also gave out her home address. The next day, the "scout" appeared at her doorstep and sexually assaulted her.

McDonald, now 30, said she lived in the shadow of the incident for many years, suffering panic attacks and attempting suicide twice.

Recently, she contacted the Marie Collins Foundation, a UK non-profit dedicated to helping children who have suffered online sexual abuse and exploitation, and recounted her terrible experience.

McDonald didn't tell anyone what had happened until the police showed up at her house six months later. The man was linked to multiple similar cases; police eventually arrested him and found her information on his computer. He was ultimately sentenced to 11 years.

McDonald added that at the time she never told her father the whole story, afraid he would lose his temper. Later she realized her father had never blamed her. She said, "He told me he has blamed himself all along, because he feels he should have been watching over me. But this kind of thing is really hard for him to detect."

Although minors now use different online platforms (McDonald was an AOL Instant Messenger user at the time), the predators' tricks are basically the same: victims are usually sweet-talked first, and if they refuse to share photos or meet privately, threats follow immediately. Technology companies have long tried to prevent such cases, but before AI tools arrived it was almost impossible to detect these behaviors.

As McDonald said, even the most dedicated parents struggle to know what their children do online. She hopes technology can help: "If the AI recognizes it during the chat, there is a chance to save the child."

    The question is whether AI systems that protect children from chatting with strangers online can really evolve faster than the perpetrators.

Microsoft said that going forward it will keep teaching the AI the language of potential child predators, but it also pointed out the problem: sometimes even humans struggle to tell whether the other party's words are malicious. Courtney Gregoire said, "Their rhetoric is constantly changing. How do you tell what is genuine friendship? How can you see through their ugly faces at a glance?"

Reported by South Korea's Yonhap News Agency