What is coming will always come

The recent run of negative news about AI has been a bit much. If it isn't "AI-synthesized boss voice swindles 1.73 million," it's "over-collection of information infringes on student privacy"… As a technology outlet that has long followed AI's progress, we have watched these stories with growing unease, feeling that the other shoe has finally dropped: what was coming has come.

For a long time, most people have preferred to focus on AI's technical advances. Take AI speech synthesis: you have surely seen plenty of headlines like "chat app turns voice-changing into a star feature," "AI can imitate you from just one minute of audio," or "Google's voice cloning achieves emotional expression." All of them came wrapped in optimistic expectations for the technology, and researchers generously shared the underlying work on open-source platforms. Things looked rosy, which is exactly why this "AI voice fraud" incident is such a timely wake-up call:

The advancement and spread of the technology have far outpaced ordinary people's awareness of it. In today's highly intelligent society, the threshold for applying these tools keeps dropping; AI will inevitably become both a target and an accomplice of scammers, and it is only a matter of time before it endangers the security of personal assets.

For individuals, understanding in advance what AI can do is an indispensable lesson in avoiding being swindled. So today, let us walk through the AI scams that are hardest to identify…

Difficulty rating, one star: forged email

Phishing emails — messages from attackers impersonating legitimate senders, carrying malicious Trojans or fake content designed to steal information — are hardly a new attack. Detecting and defending against ordinary phishing with existing security technology is almost effortless, and similar scams have rarely made the news in the past two years.

However, when email is combined with artificial intelligence, allowing attackers to infiltrate a corporate network and persuade employees to authorize transfers, the consequences can be terrible.

In 2017, Southern Oregon University was tricked into making a $1.9 million transfer. The university believed the money was going to Andersen Construction, the contractor building its student recreation center, but it actually went to a scammer's bank account. The incident prompted the FBI to issue a risk warning to other universities and institutions. There had already been 78 similar scams; the cable maker Leoni and the technology company Ubiquiti Networks, among others, were tricked out of sums running into the hundreds of millions of dollars.

How is this business email compromise (BEC) scam carried out?

First, the scammer identifies an engineering contractor that does business with the target organization, then impersonates that established vendor and sends a payment invoice to the organization's finance department. Once the organization takes the invoice at face value, subsequent payments flow into the scammer's bank account. By the time the victim realizes they have been cheated, the money is usually unrecoverable.

Such a realistic effect is achieved not only because the scammer registers a domain name that closely resembles the official one to fake the email address; artificial intelligence also lends a very big hand.
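As an aside, lookalike domains of this kind can often be caught with a simple edit-distance check: a sender domain that is very close to, but not identical with, a trusted one is suspicious. A minimal sketch in Python (the domain names below are invented for illustration):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like(sender_domain: str, trusted_domain: str, threshold: int = 2) -> bool:
    """Flag a sender domain suspiciously close to, but not equal to, a trusted one."""
    d = levenshtein(sender_domain.lower(), trusted_domain.lower())
    return 0 < d <= threshold

# One swapped letter, just like a typical BEC spoof (hypothetical domains):
print(looks_like("andersen-construction.com", "anderson-construction.com"))  # True
print(looks_like("anderson-construction.com", "anderson-construction.com"))  # False (identical)
```

Real mail gateways use far richer heuristics (homoglyphs, newly registered domains, display-name mismatches), but the basic idea is the same.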

Attackers can build a full picture of the target's business from social media such as Twitter, LinkedIn, and Facebook. Many companies' and organizations' official websites also expose their structure and managers, and multi-dimensional data such as age, gender, and blog posts can be fed into a machine-learning model for training.

Just think of the N ways AI can exploit this

For example, if an executive publicizes his schedule, presentation plans, and travel itinerary on Twitter, the system can tell when he is in a meeting or in transit and adjust its attack strategy accordingly, then use an AI language model to generate coherent, convincing content. The most common ploy is to request a change of payment account or an urgent payment while the executive is hard to reach — on holiday or on a long flight — so that an unsuspecting victim, faced with an "emergency," readily obeys.

Attacks crafted this way can bypass some signature-based detection systems and successfully fool some current anti-spam telemetry. Worse, the system keeps learning: if an attack succeeds, that outcome is fed back into the model to further improve the accuracy of future attacks, and failed attempts are fed back too, so the machine learns which kinds of messages do not work.
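To see why freshly generated messages slip past exact signature matching, consider a toy filter that stores hashes of known scam messages. Any reworded variant produces a different hash and sails through (the messages here are invented for illustration):

```python
import hashlib

# A "signature database" of previously seen scam messages, stored as SHA-256 hashes.
KNOWN_SCAM = "Please wire the payment to the new account today."
SIGNATURES = {hashlib.sha256(KNOWN_SCAM.encode()).hexdigest()}

def signature_match(message: str) -> bool:
    """Return True only if this exact message body has been seen before."""
    return hashlib.sha256(message.encode()).hexdigest() in SIGNATURES

# A trivially reworded variant carries the same intent but a new signature:
variant = "Kindly wire today's payment to the new account."
print(signature_match(KNOWN_SCAM))  # True
print(signature_match(variant))     # False — same scam, different hash
```

This is why per-message generation defeats blocklists of known bad content, and why defenders have moved toward behavioral and semantic signals instead.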

Don't dismiss this routine as too simple to work. According to the FBI, losses from forged-email scams exceeded $12.5 billion in 2018 (the largest category accounting for $5 billion), more than double the 2017 figure. Most importantly, because there are no phishing pages or malicious attachments, such scams are hard for security software to flag: the addresses and content all appear "legitimate."

Whether you get fooled therefore depends entirely on individual alertness. If the person holding the company's purse strings is a naive soul who is not very good with technology, the result is easy to imagine…

Difficulty rating, two stars: forged handwriting

If vigilance and care can fend off email fraud with high probability, the personalized touch of AI-forged handwriting may fool even your own friends and relatives.

Researchers at University College London (UCL) developed an artificial-intelligence algorithm called "My Text in Your Handwriting," which can analyze a person's glyphs and their