Facebook Vice President of Integrity Guy Rosen speaking at Fighting Abuse @Scale 2019

Fighting abuse presents unique challenges for large-scale organizations working to keep the people on their platforms safe. At Fighting Abuse @Scale 2019, engineers, data scientists, product managers, and operations specialists gathered in Menlo Park for a day of technical talks focused on state-of-the-art technologies to fight fraud, spam, and abuse on platforms that serve millions or even billions of people. Speakers from Facebook, Google, LinkedIn, Microsoft, and YouTube discussed review systems, detection, mitigation, and more.

If you missed the event, you can view recordings of the presentations below. If you are interested in future events, visit the @Scale website or join the @Scale community.

Welcome and opening remarks

Guy Rosen, Vice President of Integrity, Facebook

In his opening remarks, Guy explains that the challenges we face in fighting online abuse are only getting bigger as bad actors become more sophisticated. The goal of these events is to learn how we can tackle and solve these issues together as an industry. He also shares our thinking about these issues at Facebook, our multiyear roadmap, and the progress we’ve made so far.

Detecting and thwarting targeted spyware abuse at internet scale

Bill Marczak, Research Scientist, International Computer Science Institute, UC Berkeley 

Nation-states and other well-resourced attackers increasingly abuse powerful commercial hacking and spyware tools to covertly surveil and invisibly sabotage entities they deem to be threats, including investigative journalists, human rights activists, and lawyers. Spear-phishing messages that convince targets to open malicious links or attachments have traditionally been a popular vector for this sort of compromise. In these cases, potential targets can be instructed to forward suspicious messages to researchers for analysis. When researchers can publicly attribute specific cases of spyware abuse to specific governments or corporations, this can be a powerful deterrent against future misuse. However, a new trend toward so-called zero-click exploits, such as infecting targets through disappearing WhatsApp missed calls, or network injection, may rob targets of the opportunity to notice the surveillance and alert researchers. Fortunately, we still have several powerful methods to detect the use and abuse of these tools. Bill describes how Citizen Lab maps out — and in some cases identifies — spyware victims by using server fingerprinting, internet scanning, and DNS cache probing.
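One of these techniques, DNS cache probing, asks open resolvers whether a known spyware domain is already in their cache without triggering a fresh lookup. Below is a minimal sketch using the open-source dnspython package; the resolver address and probed domain are illustrative placeholders, not real infrastructure.

```python
# Minimal sketch of DNS cache probing with dnspython (pip install dnspython).
# The resolver address and probed domain below are illustrative placeholders,
# not real spyware infrastructure.
import dns.flags
import dns.message
import dns.query
import dns.rcode


def is_cached(resolver_ip: str, domain: str) -> bool:
    """Ask an open resolver for `domain` with recursion disabled (RD=0).

    If the resolver answers from its cache, some client of that resolver
    recently looked the name up; if it returns no answer, the name is
    (probably) not cached.
    """
    query = dns.message.make_query(domain, "A")
    query.flags &= ~dns.flags.RD  # clear Recursion Desired: cache-only lookup
    try:
        response = dns.query.udp(query, resolver_ip, timeout=2.0)
    except Exception:
        return False
    return response.rcode() == dns.rcode.NOERROR and len(response.answer) > 0


if __name__ == "__main__":
    # Probe a hypothetical resolver for a hypothetical spyware domain.
    if is_cached("203.0.113.53", "c2.example.net"):
        print("Domain is cached: someone behind this resolver looked it up.")
```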

Building and scaling human review systems

Danielle Marlow, Product Manager, Facebook

Human labeling of content is used in the anti-abuse space for three main purposes: measuring abuse, producing training data for machine learning (ML) models, and determining enforcement actions. Ensuring that a human labeling system produces high-quality data can be a difficult task, especially when operating at a scale of hundreds or thousands of reviewers. Danielle discusses aspects of a successful human review system, key considerations for each of the above use cases, and effective ways to represent information to reviewers for evaluation. She also covers the trade-offs between cost, accuracy, and quality measurement, and details steps to better understand content review systems.
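One common building block in such systems is routing the same job to several reviewers and using agreement with the majority as a quality signal. The sketch below illustrates the idea in Python; the data layout and function names are hypothetical, not Facebook's review API.

```python
# Minimal sketch of a multi-review quality check: route each job to k
# reviewers, take the majority label, and track per-reviewer agreement with
# the majority as a simple quality signal. Field and function names are
# hypothetical, not Facebook's production API.
from collections import Counter, defaultdict


def majority_label(labels):
    """Return the most common label and whether it is a strict majority."""
    (label, votes), = Counter(labels).most_common(1)
    return label, votes > len(labels) / 2


def reviewer_agreement(jobs):
    """jobs: list of {reviewer_id: label} dicts, one per reviewed item."""
    agree = defaultdict(int)
    total = defaultdict(int)
    for job in jobs:
        label, strict = majority_label(list(job.values()))
        if not strict:
            continue  # no consensus: escalate to an expert queue instead
        for reviewer, decision in job.items():
            total[reviewer] += 1
            agree[reviewer] += int(decision == label)
    return {r: agree[r] / total[r] for r in total}


print(reviewer_agreement([
    {"r1": "abusive", "r2": "abusive", "r3": "benign"},
    {"r1": "benign", "r2": "benign", "r3": "benign"},
]))  # {'r1': 1.0, 'r2': 1.0, 'r3': 0.5}
```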

Deep Entity Classification: An abusive account detection framework

Sara Khodeir, Software Engineer, Facebook

Facebook uses machine learning (ML) to classify accounts as authentic or fake. Classifying at scale poses two challenges, however: adversaries can reverse-engineer hand-crafted features to evade detection, and high-precision ground truth labels are scarce. Sara introduces deep entity classification (DEC), an ML framework designed to detect abusive accounts. She demonstrates that while accounts in isolation may be difficult to classify, their embeddings in the social graph are difficult for attackers to replicate or evade at scale. Sara shares how Facebook has addressed the problem of limited labels by employing a multistage, multitask-learning paradigm that leverages a large number of medium-precision, automated labels and a small number of high-precision, human-labeled samples. The DEC system is responsible for the removal of hundreds of millions of fake accounts.
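The multistage labeling idea can be illustrated with a toy PyTorch sketch: pretrain the whole network on plentiful, medium-precision automated labels, then freeze the trunk and fine-tune only the head on scarce, high-precision human labels. All dimensions and data below are placeholders, not the production DEC model.

```python
# Illustrative PyTorch sketch of the two-stage labeling idea behind DEC.
# Dimensions and data are toy placeholders, not Facebook's production model.
import torch
import torch.nn as nn

FEATURE_DIM = 128  # aggregated social-graph features for an account

trunk = nn.Sequential(nn.Linear(FEATURE_DIM, 64), nn.ReLU())
head = nn.Linear(64, 1)  # fake-account logit
model = nn.Sequential(trunk, head)
loss_fn = nn.BCEWithLogitsLoss()

# Stage 1: large, medium-precision set (e.g., labels from automated rules).
weak_x = torch.randn(10_000, FEATURE_DIM)
weak_y = torch.randint(0, 2, (10_000, 1)).float()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):
    opt.zero_grad()
    loss_fn(model(weak_x), weak_y).backward()
    opt.step()

# Stage 2: small, high-precision human-labeled set; freeze the trunk and
# retrain only the head so the scarce labels are not overfit.
for p in trunk.parameters():
    p.requires_grad = False
human_x = torch.randn(200, FEATURE_DIM)
human_y = torch.randint(0, 2, (200, 1)).float()
opt = torch.optim.Adam(head.parameters(), lr=1e-4)
for _ in range(20):
    opt.zero_grad()
    loss_fn(model(human_x), human_y).backward()
    opt.step()
```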

Preventing abuse using unsupervised learning

Grace Tang, Senior Staff Machine Learning Engineer, LinkedIn
James Verbus, Staff Machine Learning Engineer, LinkedIn 

Detection of abusive activity on a large social network is an adversarial challenge with quickly evolving behavior patterns and imperfect ground truth labels. Additionally, there is often negligible signal from an individual fake account before it is used for abuse. These characteristics limit the use of supervised learning techniques but can be overcome using unsupervised methods. Grace and James describe how LinkedIn used unsupervised outlier detection and behavioral clustering to detect fake and compromised accounts as well as abusive automation.
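One widely used unsupervised technique for this kind of problem is the isolation forest, which scores points by how easily they can be isolated from the rest of the data and needs no labels at all. Below is a minimal scikit-learn sketch on synthetic per-account features; the features and cutoffs are illustrative, not LinkedIn's production setup.

```python
# Minimal sketch of unsupervised outlier scoring with an isolation forest
# (scikit-learn). The per-account features are hypothetical stand-ins for
# signals such as request rate, connection velocity, and content volume.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(5_000, 3))  # organic accounts
bots = rng.normal(loc=6.0, scale=0.5, size=(50, 3))       # scripted burst
accounts = np.vstack([normal, bots])

forest = IsolationForest(n_estimators=100, contamination="auto", random_state=0)
forest.fit(accounts)

# score_samples: lower means more anomalous; flag the tail for review.
scores = forest.score_samples(accounts)
flagged = np.argsort(scores)[:50]
print(f"{len(flagged)} most anomalous accounts queued for review")
```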

Temporal interaction embeddings: Incorporating context for finding abuse at scale

Aude Hofleitner, Manager and Research Scientist, Facebook

Machine learning (ML) models for detecting integrity issues have made significant progress, thanks to research in natural language processing and computer vision. However, using content alone may not always be sufficient (or even available) for accurate predictions, and these algorithms need to be complemented by hand-engineered features aiming to capture user behavior. Producing these features is labor-intensive, requires deep domain expertise, and may not capture all the important information about the entity being classified. Aude presents an alternative approach: Instead of relying on content alone or handcrafting features, Facebook extracts the entire sequence of interactions involving the entity into a low-dimensional feature vector (or embedding) that can be input into a classifier for any prediction task. The algorithm used, temporal interaction embeddings (TIEs), is a supervised deep learning model that captures static features around each interaction source and target, as well as temporal features of the interaction sequence.
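The general pattern can be sketched in PyTorch: embed each interaction's static attributes and time gap, run a sequence model over the interaction history, and feed the final state to a classifier. The toy model below illustrates the shape of the approach; its architecture and dimensions are illustrative, not the production TIE model.

```python
# Toy PyTorch sketch of the temporal-interaction-embedding pattern: embed
# each interaction (actor type, action type, time gap), run a sequence model
# over the history, and classify from the final state. Vocabulary sizes and
# dimensions are illustrative, not the production TIEs.
import torch
import torch.nn as nn


class InteractionEmbedder(nn.Module):
    def __init__(self, n_actor_types=50, n_action_types=20, dim=32):
        super().__init__()
        self.actor = nn.Embedding(n_actor_types, dim)
        self.action = nn.Embedding(n_action_types, dim)
        self.time_proj = nn.Linear(1, dim)  # time gap between interactions
        self.rnn = nn.GRU(3 * dim, dim, batch_first=True)
        self.classifier = nn.Linear(dim, 1)

    def forward(self, actor_ids, action_ids, time_gaps):
        x = torch.cat(
            [self.actor(actor_ids), self.action(action_ids),
             self.time_proj(time_gaps.unsqueeze(-1))],
            dim=-1,
        )
        _, h = self.rnn(x)             # h: final hidden state per sequence
        return self.classifier(h[-1])  # abuse logit for the entity


model = InteractionEmbedder()
logit = model(
    torch.randint(0, 50, (4, 10)),  # 4 entities, 10 interactions each
    torch.randint(0, 20, (4, 10)),
    torch.rand(4, 10),              # normalized time gaps
)
print(logit.shape)  # torch.Size([4, 1])
```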

Working together to fight abuse: How sharing platforms and hosting platforms can help each other

Jennifer Lin, Software Engineering Manager, Facebook
Jeff Piper, Program Manager, Google 

Video not available

Hate and violent extremist views aren’t shared only on the dark web. Malevolent actors often use the same websites and platforms we do to spread abusive content. Therefore, understanding how extremists use these platforms will go a long way toward helping us remove that content. Jennifer and Jeff propose a breakdown of internet services into sharing platforms, which are used to spread ideas and concepts, and hosting platforms, which provide storage for the core assets containing those ideas. Their key insight is that sharing patterns can help hosting platforms identify abusive content, while hosting platforms can help sharing platforms prevent the spread of abusive content. Building on this insight, they’re leading a collaboration between Facebook and Google to identify and remove terrorist content from the companies’ platforms. Their results demonstrate that working together as an industry can strengthen the capacity to find abusive content more quickly and remove it from the internet.
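Collaborations like this typically exchange digests of known abusive files rather than the files themselves. The sketch below shows the basic mechanism with exact SHA-256 hashes; the shared hash set is an illustrative stand-in for an industry hash-sharing feed.

```python
# Minimal sketch of hash-based matching, the usual mechanism behind
# cross-platform abuse signal sharing: platforms exchange digests of known
# abusive files, never the files themselves. The hash set below is an
# illustrative stand-in for an industry hash-sharing feed.
import hashlib

SHARED_ABUSE_HASHES = {
    # Digests contributed by partner platforms (placeholder value).
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def sha256_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def on_upload(path: str) -> str:
    """Block known abusive content at upload time via the shared hash set."""
    return "block" if sha256_file(path) in SHARED_ABUSE_HASHES else "allow"
```

In practice, exact hashes break as soon as a file is re-encoded, so production systems generally rely on perceptual hashes that still match near-duplicate copies; the exact-match version above just shows the sharing mechanism.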

Selective exposure: Countering noncommercial engagement fraud

Laurie Charpentier, Software Engineer, YouTube  

Video not available

When the internet became a big deal, videos of cute cats falling down stairs were trending alongside the divisive topics that we used to avoid talking about with strangers. Internet points went from being a fun niche currency to an important measure of popular opinion. Suddenly, manipulating numbers on a webpage wasn’t just another way to make money; it was a way to pick and choose which stories the world believed. Laurie covers key differences between economically motivated and ideologically motivated engagement fraud, how those differences challenge historical assumptions about abuse, and strategies for detection and mitigation.

Detecting and deflecting International Revenue Share Fraud with a little help from our friends

Tim Larson, Program Manager, Microsoft
Kevin Qi, Data Scientist, Microsoft 

Video not available

International Revenue Share Fraud (IRSF) is one of the most common fraud attacks in the telecom industry. Fraudsters abuse international phone numbers, drive fake telephony traffic, and split the resulting revenue with the number providers. IRSF attacks are often conducted through SMS/voice spam, Wangiri, second-factor authentication, and account takeovers. Because international numbers are easy for fraudsters to obtain and the cost-benefit ratio is extremely high, IRSF is hard to detect and prevent, and it has remained a major fraud vector in the industry for years. Tim and Kevin explain how Microsoft Identity converted investigation insights into a heuristic-based detection algorithm. They also share how Microsoft built an ML-based phone reputation system that prevented IRSF both in real time and offline.
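As a rough illustration of the heuristic approach, the sketch below scores a destination number against known premium-rate prefixes and recent per-prefix request velocity before sending a second-factor SMS. The prefixes, weights, and thresholds are invented for illustration, not Microsoft's production rules.

```python
# Minimal sketch of a heuristic IRSF screen: before placing an SMS/voice
# call for second-factor auth, score the destination against known
# premium-rate prefixes and recent per-prefix velocity. All prefixes,
# weights, and thresholds below are illustrative placeholders.
from collections import Counter

PREMIUM_PREFIXES = ("+882", "+883", "+970")  # illustrative high-risk ranges
recent_requests = Counter()  # requests per prefix in the current time window


def irsf_risk(e164_number: str) -> float:
    prefix = e164_number[:4]
    score = 0.0
    if e164_number.startswith(PREMIUM_PREFIXES):
        score += 0.6  # number sits in a known revenue-share range
    if recent_requests[prefix] > 100:
        score += 0.4  # burst of 2FA requests to one prefix: likely pumping
    recent_requests[prefix] += 1
    return score


def should_send_otp(e164_number: str) -> bool:
    return irsf_risk(e164_number) < 0.6  # block or challenge above threshold
```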

Preventing payment fraud with human and machine intelligence

Baoshi Yan, Engineering Manager, Uber

Video not available

Tackling payment fraud is incredibly important for a company’s financial bottom line and extremely challenging because of the strong financial incentives for fraudsters. The task is further complicated by Uber’s various product lines, its global operations, and its support of several payment methods across countries and regions. Baoshi gives an overview of Uber’s payment fraud prevention system. He touches on how machine learning is applied to various phases of the system, including feature engineering, fraud assessment, and potential actions taken against fraud. He further covers how Uber mitigates some of the limitations of ML with an efficient framework that leverages human intelligence. The combination of human and machine intelligence enables an effective solution that has greatly reduced payment fraud at Uber over the past few years.
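The human-plus-machine pattern can be sketched as score-based routing: the model auto-decides confident cases and sends the uncertain middle band to human review, whose verdicts feed back as training labels. Everything below (thresholds, names, interfaces) is hypothetical, not Uber's production system.

```python
# Sketch of score-based routing combining machine and human intelligence:
# the model auto-decides confident cases and routes the uncertain band to
# human reviewers, whose decisions feed back as training labels. Thresholds
# and object interfaces are hypothetical.
BLOCK_ABOVE = 0.9
ALLOW_BELOW = 0.2


def route_payment(txn_features, model, review_queue, label_store):
    score = model.predict_proba(txn_features)  # fraud probability in [0, 1]
    if score >= BLOCK_ABOVE:
        return "block"
    if score <= ALLOW_BELOW:
        return "allow"
    # Uncertain band: humans decide, and their verdict becomes a new label.
    verdict = review_queue.submit(txn_features)
    label_store.append((txn_features, verdict))
    return verdict
```

Keeping the auto-decision thresholds conservative keeps the human queue small while ensuring reviewers see exactly the cases where the model is least reliable.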
