Are these big tech entities really your friend?
We all get creeped out by the articles about Amazon employees listening in on personal conversations via Alexa to “determine consumer behavior and match them up with the right ads.” But I’m more worried about data harvesting that’s less obviously invasive.
As an advertiser, I take part in the other side of the personal data exchange: the receiving end. I place ads on Facebook and Google’s platforms and target people based on demographic and behavioral data gleaned from their online activity. That is an essential part of what makes my firm a growth agency.
But Facebook, Google and the social media platforms are waging a war against the malefactors who made them the subject of widespread scorn after the 2016 election. We advertisers are caught in the crossfire, as ad accounts are often shut down for no apparent reason. I’m here to share insights into dealing with a ruthless, AI-enabled crackdown: how to avoid being flagged, and what to do if you are.
Freedom of Choice?
We are our clients’ “freedom” team––we let them focus on why they founded their company in the first place. The loneliest part of business ownership is the fact that everyone thinks “being your own boss” means having the most freedom. In fact, we have the least. Freedom to me is having more choice. When it comes to the platforms we use to reach an audience, there is no choice.
My privacy concerns aren’t related to what companies might know about me personally, or the potential for people at those companies to tap into private exchanges or sensitive files. What I’m worried about is how the personal data Facebook and other companies have amassed has made them indispensable to every marketing department in the free world. I can run my branding agency without them under a different business model, but ultimately my clients would have to contend with the tech behemoths. I remember when social and search marketing didn’t dominate our world, and media outlets had to compete for your business. Now it’s impossible to be independent of any platform when all of your clients come to you with those credentials in hand.
The power Facebook and Google hold over my company, and every media outlet, is disconcerting. It’s an affront to my autonomy. At any moment they could shut down all the ads we have running, putting my financial well-being in jeopardy. And they have.
What to Do in the AI’s Crosshairs
So how can advertisers protect themselves from the malevolent bots?
For one, remember that the AI is still simple-minded. It will produce false positives, censoring you based on a rigid set of rules, yet it won’t recognize when you’re using sly innuendo or allusion to leave unspoken what it might otherwise flag. That said, you can’t avoid getting stopped for a policy violation if you don’t know the policies. Keeping up with them is a full-time job in itself, so make sure whoever is running your ads is up to the task.
Furthermore, the policies may not be entirely clear in the first place. For example, somewhere out there is a banned-emoji list, but it seems to be ever-changing and no one can see it, so you have to guess. I’ve seen the eggplant and ax emojis cause problems; use your best judgment.
Also, don’t blindly trust Facebook’s Audience Network or the platform’s suggested audiences. Build your audience based on behaviors rather than interests. If you’re not specific about who you want to reach, your ad can end up on sites that aren’t appropriate for it.
If an ad gets flagged or your account gets suspended, don’t assume the platform got it right. Most importantly, don’t give up. Getting flagged once more than likely raises the chances of it happening again, so take the proper steps to appeal. Ask why the ad was flagged; you’ll either get an answer or the ad may get turned back on. On Facebook, however, do not try to delete or rerun an ad that’s already been flagged. Take the time to ask what was wrong and explain your reasons for running it. If you can get a human being to take a look, you have a good chance of a reasonable outcome.
You might initially wind up communicating with a Facebook chatbot. You may not realize it at first and could waste a lot of time with no resolution. A trick I use that can save you hours of hassle is to reply with an image attached to the message. This seems to be the fastest and easiest way to eventually connect with a human representative.
Misguided Censorship
As Facebook continues to ramp up its AI technologies, an effort that follows the public scrutiny of the Cambridge Analytica scandal, it has chosen to err on the side of censorship. We recently had an entire ad account for a well-regarded urologist banned because, of course, their practice deals with people’s sexual health and “naughty bits.” The sponsored content in question was an article teaching readers about the impact of hereditary factors on surface-level men’s health issues. An article full of untruths might show up on my timeline, but we’re misguidedly banning the ones featuring doctors who save lives.
There is no regard for the fact that this practice’s Facebook page is educating people on very real risks, like when you need to get checked for prostate cancer. Kidney stones can be an indicator of major health problems, but Facebook can’t distinguish between a serious health discussion and a scammer trying to sell cheap pills, even if your client is the go-to urologist for comment in magazines such as Prevention, Men’s Health and Glamour.
At first I thought I just needed to get better at playing by Facebook’s policies. But according to this worthwhile exposé on Facebook’s recent troubles, we aren’t the only ones running into issues like this. Even the article’s publisher was hit: “One day, traffic from Facebook suddenly dropped by 90 percent, and for four weeks it stayed there. After protestations, emails, and a raised eyebrow or two about the coincidence, Facebook finally got to the bottom of it. An ad run by a liquor advertiser, targeted at WIRED readers, had been mistakenly categorized as engagement bait by the platform. In response, the algorithm had let all the air out of WIRED’s tires.”
More recently, an Instagram model was banned from the service for offering to send nude photos to anyone who donated to nonprofits supporting wildfire relief in Australia. It’s notoriously difficult to characterize the difference between nudity and pornography, but it seems clear that context is being lost on the robots. Truth be told, I’ve had people at Facebook (when I get to talk to a real person) tell me they have no idea why an ad was declined or an account was shut down. A bot may have believed we violated the rules, but when we requested that a human review our ads and overturn the verdict, the reviewers almost always saw our side.
This all gets creepier, though. While going through the process of overruling a temporary halt on my own company’s ads, the system asked me a series of questions. It showed me a photo posted on my company’s Facebook page several years ago. In it, an employee is holding a red plastic cup. It asked me if it was a Solo cup and if there was alcohol in it. This was never run as an ad. These were photos simply meant to depict our company culture from way back in the day. It was in a photo album with no offending caption. Why does the bot need to know what brand of cup we were using and what we were drinking out of it?
The more accounts are disabled or given restricted access, the less support we’ll see for small businesses. Business owners are forced to reason with robots on the customer support system. Facebook is now disabling accounts whose ads lead to landing pages that lack image alt tags, for example, which suggests its collaboration with Google has become increasingly advanced. I suspect everyone with a Facebook or Google account has an individual pixel with unimaginable amounts of personal data attached.
Facebook’s ad policies might be well-intended, but in practice it’s hard to discern any rhyme or reason. An account for a new technology startup that helps people find affordable housing gets banned before we ever run an ad, but a land development group targeting the top 5% of the ultra-wealthy has no problems with “housing designations.” I do have to wonder if this is all about ad dollars. Social media marketing directors who run “native” ads soon find that those ads won’t get much visibility unless they make ample use of Facebook’s Audience Network.
I think human moderators can see that a lot of advertisers with perfectly decent intentions are being caught up in this “ban first, ask questions later” regime. From a performance-based agency’s point of view, people will start censoring themselves as they shift their thinking to fit Facebook’s guidelines, which in turn leads to less authenticity. The irony is that the company’s latest policies are supposedly in support of creating and fostering communities so users can make “true connections.” You can’t do that when what’s actually happening is censorship and AI gone awry.
Stepping back to a macro view of these vague policies, I have to question why Facebook scans items seemingly at random, with no apparent connection to its actual policies. The AI that is learning to distinguish one brand of party cup from another could be used to seek out troll farms, but it seems to be trained for something less righteous. When we see each other as online avatars or clusters of data points rather than as humans, we’re doing the work of our dystopian overlords for them.
It’s easy to get conspiratorial when discussing a monolithic tech giant that has massive amounts of information on you and everyone you know. It’s only made worse by the fact that there is really no way for advertisers to “quit” Facebook or Google. There is no recourse outside of hoping these mega-corporations are going to do the right thing. And what incentive do they have for that? So, do you think Facebook is our friend?
Illustration by Bill Murphy.