Announcements · July 1, 2020

Sharing Our Actions on Stopping Hate

Facebook is committed to making sure everyone using our platforms can stay safe and informed. As part of our ongoing effort, we’re making changes to our policies, investing in system transparency and sharing the work we’re doing to address the 10 recommendations outlined by the Stop Hate For Profit boycott organizers. Below are some of the progress updates we’ve shared:



Latest News

Update on March 30 2023

Updated on March 30, 2023 at 6:00AM PT

Brand Suitability Controls and Third-Party Verification for Feed

We’re proud to announce that Meta’s new inventory filters for Facebook and Instagram Feeds are now rolling out to advertisers in English- and Spanish-speaking markets. Our third-party verification solution for Facebook Feed is also now available through Zefr and additional Meta Business Partners will onboard in the coming months. These developments highlight our ongoing collaboration with industry partners and the critical work we’re doing to meet the needs of advertisers today.

Learn More



Update on June 17 2021

Updated on June 17, 2021 at 6:00AM PT

Our Commitment to Safety

Over the past year, we hired Roy L. Austin, Jr. to serve as Facebook’s first VP of Civil Rights and continued to grow our team of civil rights experts. We’ve made substantial investments to keep hate off our platform, updated our policies to catch more implicit hate speech and banned Holocaust denial.

Before the US election last year, we also banned militarized social movements, took down tens of thousands of QAnon pages, groups, and accounts, and updated our policies prohibiting voter suppression.

We hold ourselves accountable through regular reports to the public on the progress we’re making and areas where we need to do more. While there is always more work to do, we are committed to doing it.

The document below offers a comprehensive look at what we have done to limit hate and harmful content on our platforms and make Facebook safer. We are committed to continuing our work to make Facebook a welcoming place for all.

Learn More



Update on July 31 2020

Updated on July 31, 2020 at 11:30AM PT

Action on Civil Rights Issues

Today, we’re sharing an infographic that summarizes the action we’ve taken on civil rights issues identified in the third and final civil rights audit report.

Facebook actions highlighted in Facebook's Civil Rights Audit



Update on July 30 2020

Updated on July 30, 2020 at 8:00AM PT

Stopping Hate: A Look Back and a Path Forward

Though July is coming to a close, the work to keep our platforms safe never ends.

We do not want hate on our platforms. We have outlined the investments we have made here, but we know we need to do more to address concerns. Hate is an all-of-society problem, and we will continue to work with experts and outside groups, including civil rights leaders, to effect change.

We’re sharing an overview of our efforts and how we’re moving forward, below. (Updated on September 17, 2020 at 1:45PM PT with a new link to Our Commitment to Safety.)

Our Years-Long Investment

The people using Facebook don’t want to see hate on our services, our advertisers don’t want to see it and we have no tolerance for it. We’ve invested billions of dollars over the years to combat hate and keep our community safe:

  • Through human review and the latest technologies - like advanced AI - we proactively find nearly 90% of the hate speech content we remove from Facebook before anyone even reports it.
  • We remove three million pieces of hate speech each month, or more than 4,000 per hour.
  • In the last few years we have tripled — to more than 35,000 — the number of people working on safety and security.
  • We have set the standard in our industry for transparency by publishing regular Community Standards Enforcement Reports (CSER) to track our progress.

Innovating and Making Improvements

To effectively tackle hate speech - and other harmful content like voter suppression and misinformation - we regularly review our policies and seek input from experts and affected communities.

We’ve expanded our voter suppression policies to prohibit threats that voting will result in law enforcement consequences and attempts to coordinate with others to interfere with the right to vote, both of which have been known to intimidate and demobilize voters. We take down misinformation that can cause real world harm - and for other types of misinformation, we work with over 70 fact-checkers globally to label false content and we dramatically reduce its distribution on Facebook.

When it comes to ads, we go beyond just banning hate speech and also ban ads that are divisive, inflammatory or fear-mongering.

Facebook does not benefit from hate; our only incentive is to remove it when we find it.

Taking Collective Action

We know that together we can drive greater progress. To take collective action toward our shared goal of stopping hate, we support the advertising community through the Global Alliance for Responsible Media (GARM) and continue to engage with civil rights leaders about our policies and practices.

GARM is taking actionable steps to tackle harmful content in media and social media and we've already committed to a number of concrete proposals through this work:

  • To align across the industry on shared definitions of hate speech
  • To begin measuring the prevalence of hate speech in our regular transparency reports (CSER), with a methodology for adjacency to follow
  • To undertake two third-party audits — one of the CSER, and another of our monetization policies and brand safety controls by the Media Rating Council
  • To look at new ways of expanding our current brand safety controls

We are changing for the better thanks to continued engagement with the civil rights community. We published our third civil rights audit report in July, bringing to a close an independent two-year review of our policies and practices. No other social media company has completed an audit of this kind. We have already made changes based on the auditors’ recommendations, and we are in the process of evaluating outstanding recommendations to determine how and where we can do more.

We should be held accountable for our work on this and invite continued scrutiny of our efforts so that we can make the best and fastest progress possible.



Update on July 24 2020

Updated on July 24, 2020 at 3:15PM PT

Boycott Recommendations and Facebook’s Ongoing Efforts

We’re sharing an overview of existing and ongoing work at Facebook along with the #StopHateForProfit recommendations.

There’s some overlap between what the boycott organizers have asked for and what we already do, which reaffirms that our end goal is the same: fighting online hate.

This doesn’t mean we won’t do more to address the delta between what we currently do and what the organizers have asked for, but the information below is meant to illustrate what we are doing and what we are exploring thanks to continued conversations with the civil rights community.

Accountability

#StopHateForProfit Recommendation 1:

Establish and empower permanent civil rights infrastructure, including a C-suite level executive with civil rights expertise, to evaluate products and policies for discrimination, bias and hate.

This person would make sure that the design and decisions of this platform considered the impact on all communities and the potential for radicalization and hate.

What We’re Doing and Exploring:

In 2019, we set up a Civil Rights Task Force, which is led by Sheryl Sandberg and includes other senior leaders. The Task Force meets monthly to surface, discuss and address civil rights issues.

We are working with the civil rights law firm Relman Colfax, PLLC to develop civil rights training for key groups of employees working in the early stages of relevant product and policy development, as well as for all members of the Civil Rights Task Force.

We will hire a civil rights leader at the VP level. That person will build out a team over time and help us establish a long-term civil rights infrastructure. While the search continues to hire this VP, the task force that includes senior leaders from the company continues to meet.

We will continue to consult with Laura W. Murphy and the team at Relman Colfax, PLLC, who led the civil rights audit of Facebook.

#StopHateForProfit Recommendation 2:

Submit to regular, third party, independent audits of identity-based hate and misinformation with summary results published on a publicly accessible website. We simply can no longer trust Facebook’s own claims on what they are or are not doing. A “transparency report” is only as good as its author is independent.

What We’re Doing and Exploring:

We will undertake an independent, third-party audit of our content moderation systems, including the work we do on identity-based hate. We agree that this is in the best interests of the community we serve and the industry. We welcome the opportunity to do this work and establish the standard which can apply to all digital platforms.

We have been working with a Big 4 consulting firm internally to prepare for an external audit. We are currently working on the scope of the audit, and will issue an RFP in Q3 2020. Our expectation is that the audit will be conducted in 2021. This is to ensure that the process can be done with the appropriate diligence. Facebook will be transparent in sharing updates and a timeline as they become available.

In the interim, we will continue to publish our Community Standards Enforcement Reports (CSER) with the next report scheduled for publication in August 2020.

#StopHateForProfit Recommendation 3:

Provide audit of and refund to advertisers whose ads were shown next to content that was later removed for violations of terms of service. We have documented many examples of companies’ advertisements running alongside the horrible content that Facebook permits. That is not what most advertisers pay for, and they shouldn’t have to.

What We’re Doing and Exploring:

In terms of audits for advertisers, we will undertake an independent brand safety audit, to be conducted by the Media Rating Council (MRC).

This audit will include, but not be limited to:

  • An evaluation of development and enforcement of our Partner Monetization Policies
  • An evaluation of development and enforcement of our Content Monetization Policies and how these policies enforce the 4A’s/GARM Brand Suitability Framework, and comply with MRC’s Standards for Brand Safety
  • An assessment of our ability to apply brand safety controls to ads shown within publisher content such as in-stream, Instant Articles or Audience Network
  • A determination of the accuracy of our available reporting in these areas

We will share an update on the scope of the audit by the end of Q3 2020, and will have an update on timing thereafter.

We refund advertisers when their ads run in videos or Instant Articles that are determined to violate our network policies.

We do not currently have a system in place to track ads that appear in News Feed next to content removed for violating our Community Standards. While we are prioritizing a hate speech prevalence metric first, we are exploring ways to provide additional reporting, such as when ads appear next to violating content.

For ads that appear within publisher content such as in-stream video, Instant Articles and Audience Network, we do provide advertisers controls and visibility into the way their ads show up on our services. Our Brand Safety Controls interface allows advertisers to review the publishers, individual in-stream videos and Instant Articles in which their ads appeared. We also provide advertisers with metadata such as the publisher name, video title and ad impressions for where their ads appeared.

Decency

#StopHateForProfit Recommendation 4:

Find and remove public and private groups focused on white supremacy, militia, antisemitism, violent conspiracies, Holocaust denialism, vaccine misinformation and climate denialism.

What We’re Doing and Exploring:

Our AI works across public and private Groups, and when we identify violating content, we remove it.

Violating content in a Group is not always grounds for removing the Group entirely, but if a Group repeatedly posts violating content, or if a Group admin participates in or encourages the violating behavior or content, we will remove the Group in full.

Our content policies prohibit content that celebrates, defends or attempts to justify the Holocaust; content that attacks people based on their religion; content that contains clear calls for violence; and misinformation and unverifiable rumors that contribute to the risk of imminent violence or physical harm.

By using a combination of the latest technology, human review and user reports, we identify and remove harmful content hosted in Groups, whether they are public, closed or secret. We also provide information and controls to Group admins, and if they are complicit in allowing violations, we’ll take action on their Groups, ultimately removing the Group if it continues to post harmful content.

#StopHateForProfit Recommendation 5:

Adopting common-sense changes to their policies that will help stem radicalization and hate on the platform.

What We’re Doing and Exploring:

We make regular updates to our policies – sometimes refining existing policies, other times writing wholly new policies to account for new types of abuse.

As part of our policy development process, we solicit input and expertise from a range of stakeholders, including civil society organizations, activist groups, thought leaders and external experts. Details about our stakeholder engagement process are available here.

#StopHateForProfit Recommendation 6:

Stop recommending or otherwise amplifying groups or content from groups associated with hate, misinformation or conspiracies to users.

What We’re Doing and Exploring:

We have policies in place that outline what is not recommendable on our services. This content may not violate our Community Standards, but it is nonetheless content that we do not want to amplify.

Non-recommendable content includes, but is not limited to:

  • Content that may depict violence.
  • Content that has been deemed false by independent third-party fact-checkers.
  • Profiles, Pages and Groups that have posted content violating Facebook’s Community Standards, such that individual pieces of content have been removed but the profile, Page or Group has not been removed in full from the platform.
  • Profiles, Pages and Groups that repeatedly share content that is not recommendable.
  • Profiles, Pages and Groups that post false information, deemed as such by independent third-party fact-checkers.

#StopHateForProfit Recommendation 7:

Create an internal mechanism to automatically flag hateful content in private groups for human review. Private groups are not small gatherings of friends - but can be hundreds of thousands of people large, which many hateful groups are.

What We’re Doing and Exploring:

Our Community Standards apply across Facebook, including in private Groups. To enforce these policies, we use a combination of people and technology — content reviewers and proactive detection. Our AI works across public and private Groups, and when we identify violating content, we remove it. And content that is reported as hate speech across our platform, including for Groups, is sent for human review.

For the past three years, we’ve been working on the Safe Communities Initiative, with the mission of protecting people from harmful groups and harm in Groups. By using a combination of the latest technology, human review and user reports, we identify and remove harmful Groups, whether they are public, closed or secret.

#StopHateForProfit Recommendation 8:

Ensure accuracy in political and voting matters by eliminating the politician exemption; removing misinformation related to voting; and prohibiting calls to violence by politicians in any format. Given the importance of political and voting matters for society, Facebook’s carving out an exception in this area is especially dangerous.

What We’re Doing and Exploring:

We agree that accuracy in political and voting matters is important, which is why we are introducing a Voting Information Center to give people authoritative, real-time information about voting and elections at the state and district level.

All posts that discuss voting, including those from politicians, will include a link to our Voting Information Center.

Our rules prohibiting voter interference and incitement of violence apply to everyone, including politicians. Content that breaks our rules against voter interference and inciting violence will never be deemed newsworthy.

Support

#StopHateForProfit Recommendation 9:

Create expert teams to review submissions of identity-based hate and harassment. Forty-two percent of daily users of Facebook have experienced harassment on the platform, and much of this harassment is based on the individual’s identity. Facebook needs to ensure that their teams understand the different types of harassment faced by different groups in order to adjudicate claims.

What We’re Doing and Exploring:

We send hate speech reports to reviewers who have training in identity-based hate policies (Updated on August 9, 2021 at 7PM PT to accurately reflect our practices).

We have a comprehensive training program that includes at least 80 hours of live instructor-led training, as well as hands-on practice for all of our reviewers. Training can be thought of in three areas:

  • Pre-training: In addition to receiving an introduction to Facebook, reviewers learn about what they can expect in the role. Part of this onboarding includes informing them of the resources available to them, such as resiliency and counseling as they begin work.
  • Hands-on Learning and Classroom Instruction: Over 80 hours of instructor-led training, practice and ongoing knowledge checks and quizzes. Following this in-depth training, we also ensure each reviewer spends time with veteran reviewers before working live reports, giving them time to get used to the tool and ask questions.
  • Ongoing Coaching and Training: Routine coaching sessions, learning huddles, refresher sessions focused on areas for improvement, and ongoing training on policy updates and clarifications.

All training materials are created in partnership with Content Policy and our in-market specialists or native speakers from the region. After starting, reviewers are regularly trained and tested with specific examples on how to uphold the Community Standards and take the correct action on a report. Additional training happens continuously, including when policies are clarified, or as they evolve.

We use AI to automatically filter and hide bullying comments. On Instagram, comments that our AI flags as potentially offensive are automatically filtered out. On Facebook, you can hide comments in your timeline that contain certain offensive words, and our AI will filter them. We also use AI to encourage people to reconsider what they post, which helps stop bullying or harassment before it happens.

In cases where people receive unwanted contact, we provide tools for people to control their experience and avoid additional unwanted contact.

  • Block accounts - You can block someone who is harassing you. The person you block will not be notified.
  • Restrict accounts - On Instagram you can restrict comments from certain people on your posts; the comments will only be visible to that person. You can choose to make a restricted person’s comments visible to others by approving their comments. Restricted people won’t be able to see when you’re active on Instagram or when you’ve read their direct messages. On Facebook you can add a Friend to your Restricted list and they won't see posts that you share only to Friends.
  • Message controls - On Facebook Messenger you can ignore a conversation, turn off notifications for a conversation, delete a conversation or block the person from sending you messages. On Instagram you have the ability to allow only people you follow to message and add you to group threads. People who enable this setting will no longer receive messages, group message requests or story replies from anyone they have not chosen to follow.
  • Comment controls - On Instagram you can block certain offensive comments, block someone from commenting, limit who can comment, turn off comments completely or block a large amount of comments at once. On Facebook you can hide comments in your timeline that contain certain offensive words.
  • Tag controls - On Instagram you can limit who tags you and on Facebook you can limit the audience who can see posts you’re tagged in.

#StopHateForProfit Recommendation 10:

Enable individuals facing severe hate and harassment to connect with a live Facebook employee. In no other sector does a company not have a way for victims of their product to seek help. The above are not sufficient, but they are a start. Facebook is a company of incredible resources. We hope that they finally understand that society wants them to put more of those resources into doing the hard work of transforming the potential of the largest communication platform in human history into a force for good.

What We’re Doing and Exploring:

We agree it is important to provide support to people who have been targeted on our platform.

While we can’t offer personalized support in every situation, we do want to make sure that we’re directing people to the best resources available for the problem they’re facing.

We are open to exploring ways to put people who report hate speech in touch with additional resources for support, as we do when people report content for suicide and self-injury. That could include helping people connect with organizations that counter hate speech and harassment in their communities.



Update on July 21 2020

Updated on July 21, 2020 at 6:30PM PT

Creating New Teams to Address Racial Inequality on Facebook and Instagram

We’re constantly working to build a more equitable experience for everyone. To continue this work, we’re creating new teams to ensure fairness and equity across everything we do on Facebook and Instagram.

“The racial justice movement is a moment of real significance for our company,” says Vishal Shah, VP of Product, Instagram. “Any bias in our systems and policies runs counter to providing a platform for everyone to express themselves. While we’re always working to create a more equitable experience, we are setting up additional efforts to continue this progress — from establishing the Instagram Equity Team to Facebook’s Inclusive Product Council.”

We’ll share more details on this work in the coming months.



Update on July 17 2020

Updated on July 17, 2020 at 11:00AM PT

Updates to Our Continued Investment in System Transparency

Last month we announced some important changes to prepare for the 2020 US elections — a direct result of feedback from the civil rights community collected through our civil rights audit.

As we make these changes, we are sharing updates and next steps as to how we are continuing to make our systems more transparent in the areas of content monetization and brand safety, the enforcement of our community standards, and our collaboration with industry partners.

Read the full article.



Update on July 15 2020

Updated on July 15, 2020 at 10:00AM PT

Facebook 2020 Diversity Report

Today we’re releasing our 2020 Diversity Report, which outlines many of the steps we’ve taken to improve the diversity of our own workforce. While we’ve made progress, we remain committed to building a company in which all of our employees are seen, heard and valued — and a global community where everyone has access to opportunity with dignity.

Facebook’s Chief Diversity Officer Maxine Williams penned an article summarizing the progress we’ve made as a company and the steps we’re taking for the road ahead.

“In every crisis, there are opportunities to help, to serve, and to bring people together — to stand for community and, more importantly, to act for community,” says Maxine. “This has been our mission from day one at Facebook, and we want inclusion to be a leading factor — not a lagging one — in everything we do.”

Read the full article, learn more about our investment in Black and diverse communities or read the 2020 Diversity Report. Below are some of the updates:

Overall Representation

We have increased representation of women, Black and Hispanic people across every category we track. Last year, we set the goal of “50 in 5,” meaning that by 2024, at least 50% of our workforce will be people from underrepresented groups. In doing this, we aim to double the number of women employees globally and double the number of Black and Hispanic employees in the US. When we announced this goal last year, people from underrepresented groups accounted for 43% of our staff. Today, that number is 45.3%.

Non-Technical Roles

From 2014 to 2020, US Black representation as a percentage of our workforce in non-technical roles grew from 2% to almost 9%. US Hispanic representation in similar roles grew from 6% to almost 11%.

Technical Roles

Progress in technical roles has been more stubborn. Our greatest gains have been among women, whose representation increased from 15% to 24.1%. While the representation of Black and Hispanic people in technical roles has also increased (from 1% to 1.7% and from 3% to 4.3%, respectively), progress has been slower than in non-technical roles. Increasing representation in technical roles, and overall, remains a major focus of our efforts in the immediate and long term.

Leadership Roles

This year we made an additional commitment: to increase the representation of people of color in leadership positions in the US by 30%, including a 30% increase in the representation of Black people in leadership by 2025.

These numbers, of course, are only one part of our story. We aim to create a workplace at Facebook where everyone has opportunity with dignity.



Update on July 09 2020

Updated on July 9, 2020 at 6:30AM PT

Meetings with Civil Rights Leaders

This week Mark Zuckerberg, Sheryl Sandberg and several other members of our management team met with the organizers of the Stop Hate for Profit campaign. They also met with other civil rights leaders who have worked closely with us on our efforts to address civil rights, including Vanita Gupta from the Leadership Conference on Civil & Human Rights, Sherrilyn Ifill from the NAACP Legal Defense Fund and Laura Murphy, a well-known and respected civil rights and civil liberties advocate who has also been leading our civil rights audit. The meeting was an opportunity for us to hear from the campaign organizers and reaffirm our commitment to combating hate on our platform. They want Facebook to be free of hate speech and so do we. That's why it's so important that we work to get this right.

We appreciate the leaders taking the time to talk through their recommendations, and we wanted to use this opportunity to explain how existing and ongoing work maps to the #StopHateForProfit recommendations. As demonstrated in this document, there is a lot of overlap, which reaffirms that our end goal is the same: fighting online hate. This doesn’t mean we won’t do more to address the delta between what we currently do and what the organizers have asked for, but we hope this document helps people understand where we are.

Changes We’re Making as a Result of Our Third Civil Rights Audit

Even beyond yesterday’s meetings, the civil rights community has been clear. In an interview with The Verge, Jade Magnus Ogunnaike, Color of Change’s Deputy Senior Campaign Director, says: “Companies need to actually undergo civil rights audits. They need to look at how racism and discrimination are showing up at every level in the company.”

We agree, and in May 2018, at the encouragement of the civil rights community, we voluntarily accepted the call to undertake a civil rights audit, with the goal of strengthening and advancing civil rights on our service. We brought on Laura Murphy to lead the work, and over the course of more than two years she has worked alongside the civil rights law firm Relman Colfax, in particular one of its lead partners, Megan Cacace.

Together, the two of them have engaged with 100 civil rights organizations to inform the focus of the audit, and have provided real-time input on policy and product decisions at the company.

As Laura writes in the introduction to the report, the civil rights audit has been “meaningful” and we have taken some “positive and affirmative steps” towards effecting change.

That said, we also know that there is very real disappointment from the civil rights community. Today, we want to acknowledge both.

There are no quick fixes to the issues and recommendations the auditors have surfaced. Becoming a better company requires a deep analysis of how we can strengthen and advance civil rights at every level of our company. That is what this audit has been – but it is the beginning of the journey, not the end. It is on us to seriously review the recommendations Laura and Megan have made and invest in ongoing civil rights infrastructure and long-term change.

We’ve started making changes throughout the audit process including:

  • Bringing much-needed civil rights expertise in-house, starting with a commitment to hire a civil rights leader at the VP level who will be able to build out a long-term civil rights infrastructure and team.
  • Expanding our voter suppression policies since the 2016 and 2018 elections so that we now prohibit threats that voting will result in law enforcement consequences and attempts to coordinate with others to interfere with the right to vote, both of which have been known to intimidate and demobilize voters.
  • Extending the protections we have in place for voting to the US 2020 census by launching a robust census interference policy, which benefited from the auditors’ input and months of consultation with the US Census Bureau, civil rights groups and census experts.
  • Going above and beyond existing hate speech protections to ban some types of divisive, fear-mongering ads.
  • Directing people to our Voting Information Center for all posts about voting, with the goal that we help make sure people have accurate, real-time information about voting processes in their districts.
  • Taking meaningful steps to build a more diverse and inclusive workforce — and to create economic opportunity for communities of color. In recent weeks, we announced a commitment to spend at least $100 million with Black-owned businesses, toward a goal of $1 billion in annual spend with diverse suppliers by the end of 2021. We also committed to bringing on 30% more people of color, including 30% more Black people, in leadership positions.

While we've made progress, we clearly have more to do. The auditors and the civil rights community feel we have fallen short in a number of areas, explicitly calling out our position on politicians’ speech and recent enforcement decisions on both our voter suppression and hate speech policies. Underpinning all of this, the auditors conclude that, in addition to creating a diverse and more inclusive culture, we must do more to build out a robust internal civil rights infrastructure which, in turn, will help improve the decisions we make about products and policies.

We know the road ahead is a long one, but we also know we are better and stronger emerging from this audit. Thanks to Laura and Relman’s leadership, and the continued advocacy of civil rights groups and leaders, we are in a different place today than we were two years ago when the audit first began. Employees are asking questions about civil rights issues and implications before launching policies and products, our engagement with the civil rights community is deeper and more meaningful, and leadership all the way up to Mark Zuckerberg and Sheryl Sandberg is listening and learning. We will continue to engage with Laura and Megan and the civil rights community as we work to improve on civil rights issues as a company and a community.

You can learn more about the changes that we have made to date as a result of the two-year audit and read the audit report in full at this link.



Update on July 03 2020

Update on July 3, 2020 at 12:45PM PT

Our Principles and Policies

Our mission is to give people the power to build community and bring the world closer together — a mission that stands against hate. And our principles are what we stand for — the values we hold deeply and make tradeoffs to pursue. We highlight these values on our website along with our company mission, culture and leadership, and share them again with you here:

  • Give People a Voice: People deserve to be heard and to have a voice — even when that means defending the right of people we disagree with.
  • Serve Everyone: We work to make technology accessible to everyone, and our business model is ads so our service can be free.
  • Promote Economic Opportunity: Our tools level the playing field so businesses grow, create jobs and strengthen the economy.
  • Build Connection and Community: Our services help people connect, and when they’re at their best, they bring people closer together.
  • Keep People Safe and Protect Privacy: We have a responsibility to promote the best of what people can do together by keeping people safe and preventing harm.

Terms and Policies

We also share our terms and policies on our site so people can find everything they need to know in one place. We believe in transparency and openness about our policies and organize them into three categories.

Our commitment to expression is paramount, but we recognize the internet creates new and increased opportunities for abuse. For these reasons, when we limit expression, we do it in service of one or more of the following values that drive our Community Standards:

  • Authenticity: We want to make sure the content people are seeing on Facebook is authentic. We believe that authenticity creates a better environment for sharing, and that’s why we don’t want people using Facebook to misrepresent who they are or what they’re doing.
  • Safety: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook.
  • Privacy: We are committed to protecting personal privacy and information. Privacy gives people the freedom to be themselves, and to choose how and when to share on Facebook and to connect more easily.
  • Dignity: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others.

Our Community Standards and our terms of service apply to everyone. In addition, we hold advertisers to even stricter principles and advertising policies that we created to protect people from things like discriminatory ads.

Advertising Principles

We have principles that guide our decision making when it comes to advertising across Facebook, Messenger and Instagram. In an effort to continue being open, we’re sharing them again below. You can also read the original post in the Facebook Newsroom.

  • We build for people first. We build products to help people connect with the people and content they care about, because we believe that when we create value for people, we also create value for businesses.
  • We don’t sell your data. Protecting people’s privacy is central to how we’ve designed our ad system.
  • People can control the ads they see. People can hide and block ads they don’t like. Anyone can visit their Ad Preferences to learn more about the interests and information that influence the ads they see, and manage this information so they get more relevant ads.
  • Advertising should be transparent. We built an ad library that lets people visit any Facebook Page and see the ads that advertiser is running. This will not only make advertising on Facebook more transparent; it will also hold advertisers accountable for the quality of ads they create.
  • Advertising should be safe and civil; it should not divide or discriminate. We don’t want advertising to be used for hate or discrimination, and our policies reflect that. We review many ads proactively using automated and manual tools, and reactively when people hide, block or mark ads as offensive.
  • Advertising should empower businesses big and small. As long as businesses follow our Community Standards and the policies that help keep people safe, our platform should empower advertisers of all sizes and voices to reach relevant audiences or build a community.
  • We’re always improving our advertising. We’re always making improvements and investing in what works. As people’s behaviors change, we’ll continue listening to feedback to improve the ads people see on our service.

Advertising Policies

Alongside our principles, we maintain advertising policies that make clear what types of ad content are allowed on Facebook. They cover:

  • The ad review process
  • Steps to take if an ad is disapproved
  • Prohibited content
  • Restricted content
  • Video ads
  • Targeting
  • Positioning
  • Text in ad images
  • Lead ads
  • Use of our brand assets
  • Data use restrictions
  • Things you should know

Because the world and our services are always evolving, we constantly review our policies in collaboration with outside experts so they stay up to date and relevant, and reflect how things change. Read yesterday’s announcement about updates we’re making to our business terms and a recent update from CEO Mark Zuckerberg about changes we’re making to our policies.



Update on July 01 2020

Originally published July 1, 2020 at 7:30AM PT on AdAge

Facebook Does Not Benefit From Hate

Nick Clegg, Facebook’s VP of Global Affairs and Communications, penned an article about the progress we’ve made in moderating harmful content and taking down hate speech both before and after someone reports it.

“A recent European Commission report found that Facebook assessed 95.7% of hate speech reports in less than 24 hours, faster than YouTube and Twitter,” says Nick. “Last month, we reported that we find nearly 90% of the hate speech we remove before someone reports it - up from 24% a little over two years ago. We took action against 9.6 million pieces of content in the first quarter of 2020 – up from 5.7 million in the previous quarter. And 99% of the ISIS & Al Qaeda content we remove is taken down before anyone reports it to us.”

Read the full article.



Update on July 01 2020

Updated on July 1, 2020 at 7:30AM PT

Addressing the Stop Hate For Profit Recommendations

The Stop Hate For Profit boycott organizers outlined nine overall recommendations that fall into three categories: provide more support to people who are targets of racism, antisemitism and hate; increase transparency and control around hate speech or misinformation; and improve the safety of private Groups on Facebook.

Below, we’re addressing the recommendations, describing the work that is underway and sharing areas where we’re exploring further changes.

Provide More Support to People Who Are Targets of Racism, Antisemitism and Hate

The boycott organizers have asked for three things within this category:

  1. Create a separate moderation pipeline staffed by experts on identity-based hate for users who express they have been targeted because of specific identity characteristics.
    Today, hate speech reports on Facebook are already automatically funneled to a set of reviewers with specific training in our identity-based hate policies in 50 markets covering 30 languages. In addition, we consult with experts on identity-based hate in developing and evolving the policies that these trained reviewers enforce.
  2. Connect individuals experiencing hate and harassment on Facebook with resources for support.
    Our approach, developed in consultation with experts, is to follow up with people who report hate speech and tell them about the actions we’ve taken. We also provide user controls that allow people to moderate comments on their posts, block other users and control the visibility of their posts by creating a restricted list. We’re exploring ways we can connect people with additional resources.
  3. Provide more information about hate speech in reports.
    We are committed to continuing to improve transparency about our Community Standards enforcement. We intend to include the prevalence of hate speech in future Community Standards Enforcement Reports (CSER), barring further complications from COVID-19.

Increase Transparency and Control Around Hate Speech or Misinformation

The next set of recommendations focuses on increasing transparency and controls around hate speech or misinformation, including preventing ads from appearing near content labeled as hate or misinformation, telling advertisers how often their ads have appeared next to this content, providing refunds to those advertisers and providing an audited transparency report.

Below are some of our ongoing efforts as well as areas we’re continuing to explore:

  • Third-Party Fact Checking: Helps identify misinformation, applies prominent labels, reduces the distribution of content and disapproves related ads when content is rated false or partly false on our platform.
  • Brand Safety Controls: Advertisers can review the publishers, individual in-stream videos and Instant Articles in which their ads were embedded. While there are substantial technical challenges to extending the Brand Safety Controls more broadly, we are exploring what is possible.
  • Advertiser Refunds: Issued when ads run in videos or Instant Articles that are determined to violate our network policies.
  • Community Standards Enforcement Reports: Provide extensive information about our efforts to keep our community safe. We will continue investing in this work and will commit whatever resources are necessary to improve our enforcement.
  • Certification From Independent Groups: Groups like the Digital Trading Standards Group have examined our advertising processes against JICWEBS’ Good Practice Principles.
  • Auditing Our Brand Safety Tools and Practices: We will continue to work with advertising industry bodies like the Global Alliance for Responsible Media and the Media Rating Council on these audits.

Improve the Safety of Private Groups on Facebook

Our team of 35,000 safety and security professionals actively review potentially violating content today, including content in private Groups. In addition:

  • Our proactive artificial intelligence-based detection tools are also used to identify hateful content and Groups that aren’t reported to us.
  • If moderators post or permit the posting of violating content, the Group incurs penalties that can result in the Group being removed from Facebook.
  • We are exploring providing moderators with even better tools for moderating content and membership.
  • We are exploring ways to make moderators more accountable for the content in groups they moderate, like providing more education on our Community Standards and increasing the requirements on moderating potential bad actors.

This isn’t work that ever finishes. We recognize our responsibility to help change the trajectory of hate speech.



Update on June 29 2020

Originally published on June 29, 2020 at 11:00AM PT

Our Continued Investment in System Transparency

We gave an update on how we’re investing in system transparency, which includes:

  • An audit run by the Media Rating Council (MRC) where we plan to evaluate our partner and content monetization policies and the brand safety controls we make available to advertisers.
  • A commitment to include hate speech prevalence in our quarterly Community Standards Enforcement Report (CSER).
  • Participation in the World Federation of Advertisers’ Global Alliance for Responsible Media (GARM) to align on brand safety standards and definitions, scaling education, common tools and systems, and independent oversight for the industry.



Update on June 26 2020

Originally published on June 26, 2020 at 11:25AM PT

CEO Mark Zuckerberg announced policy changes based on feedback from the civil rights community and our civil rights auditors. These include:
  • Changing our policies to better protect immigrants, migrants, refugees and asylum seekers. We’ve already prohibited dehumanizing and violent speech targeted at these groups — now we’re also banning ads that suggest these groups are inferior or that express contempt, dismissal or disgust toward them.
  • Banning posts that make false claims about ICE agents checking for immigration papers at polling places or other threats meant to discourage voting. We will use our Election Operations Center to work with state election authorities to remove false claims about polling conditions in the 72 hours leading up to election day.
  • Labeling content that we leave up because it is deemed newsworthy, so people can know when this is the case. We'll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what's acceptable in our society — but we'll add a prompt to tell people that the content they're sharing may violate our policies.



Update on June 21 2020

Originally published on June 21, 2020 at 5:00PM PT

Where We Stand

We stand against racism and in support of the Black community and all those fighting for equality and justice every single day. That’s why we’re taking action to advance racial justice in our company and on our platform. We are:

  • Improving our products, programs and policies.
  • Investing in communities of color.
  • Supporting organizations fighting for racial justice.
  • Elevating Black voices and stories.
  • Building a more diverse and inclusive workforce.

Read the full post.
