
The writing is on the wall for Facebook — the platform is losing market share, fast, among young users.

Edison Research’s Infinite Dial study from early 2019 showed that 62% of U.S. 12–34 year-olds are Facebook users, down from 67% in 2018 and 79% in 2017. The decrease is particularly notable because usage among the 35–54 and 55+ age groups has held constant or even increased.

There are many theories behind Facebook’s fall from grace among millennials and Gen Zers — an influx of older users that changes the dynamics of the platform, competition from more mobile- and visual-friendly platforms like Instagram and Snapchat, and the company’s privacy scandals are just a few.

We surveyed 115 of our Accelerated campus ambassadors to learn more about how they’re using Facebook today. It’s worth noting that this group skews older Gen Z (ages 18–24); we suspect you’d get different results if you surveyed younger teens.

Overall penetration is still high: 99% of our respondents have Facebook accounts. And most aren’t abandoning the platform entirely — 59% are on Facebook every day, and another 32% use it weekly. Daily Facebook usage is much lower than on Instagram, however, which 82% of our respondents use daily and 7% use weekly.

Data from our scouts also confirms that the shift in usage in the last few years is particularly dramatic among younger users. 66% report using Facebook less frequently over the past two years, compared to 11% who use it more frequently (23% say their usage hasn’t changed).

What’s most interesting is what college students are using Facebook for. When we were in high school and college in the early/mid 2010s, our friends used Facebook to post (broadcast) content via their status, photos, and posts on friends’ Walls. Today, very few students use Facebook to “broadcast” content. Only 5% of our respondents say they regularly upload photos to Facebook, 4% post on friends’ Walls, and 3.5% post content to the Newsfeed (statuses). What are they doing instead?


TechCrunch

Facebook added a correction notice to a post by a fringe news site that Singapore’s government said contained false information. It’s the first time the government has tried to enforce a new law against ‘fake news’ outside its borders.

The post, by fringe news site States Times Review (STR), contained “scurrilous accusations” about the arrest of an alleged whistleblower and election-rigging, according to the Singapore government.

Singapore authorities had previously ordered STR editor Alex Tan to correct the post, but the Australian citizen said he would “not comply with any order from a foreign government”.

Mr Tan, who was born in Singapore, said he was an Australian citizen living in Australia and was not subject to the law. In a follow-up post, he said he would “defy and resist every unjust law”. He also posted the article on Twitter, LinkedIn and Google Docs and challenged the government to order corrections there as well.

In the note, Facebook said it “is legally required to tell you that the Singapore government says this post has false information”. The note was embedded at the bottom of the original post, which was not otherwise altered, and only social media users in Singapore can see it.

In a statement, Facebook said it had applied the label as required under the “fake news” law. The law, known as the Protection from Online Falsehoods and Manipulation Act, came into effect in October.

According to Facebook’s “transparency report”, it often blocks content that governments allege violates local laws, with nearly 18,000 cases globally in the year to June.

Facebook — which has its Asia headquarters in Singapore — said it hoped assurances that the law would not impact free expression “will lead to a measured and transparent approach to implementation”.

Anyone who breaks the law could face heavy fines and a prison sentence of up to five years. The law also bans the use of fake accounts or bots to spread fake news, with penalties of up to S$1m (£563,000; $733,700) and a jail term of up to 10 years.

Critics say the law’s reach could jeopardize freedom of expression both in the city-state and outside its borders.


TechCrunch

Facebook has reached a settlement with the UK’s data protection watchdog, the ICO, agreeing to pay in full a £500,000 (~$643k) fine following the regulator’s investigation into the Cambridge Analytica data misuse scandal.

As part of the arrangement Facebook has agreed to drop its legal appeal against the penalty. But under the terms of the settlement it has not admitted any liability in relation to paying the fine, which is the maximum possible monetary penalty under the applicable UK data protection law. (The Cambridge Analytica scandal predates Europe’s GDPR framework coming into force.)

Facebook’s appeal against the ICO’s penalty was focused on a claim that there was no evidence that U.K. Facebook users’ data had been misused by Cambridge Analytica.

But there’s a further twist here: the company had secured a win from a first-tier legal tribunal, which held in June that “procedural fairness and allegations of bias” on the part of the ICO should be considered as part of its appeal.

The decision required the ICO to disclose materials relating to its decision-making process regarding the Facebook fine. The ICO, evidently less than keen for its emails to be trawled through, appealed last month. It is now withdrawing that action as part of the settlement, with Facebook having dropped its own appeal in turn.

In a statement laying out the bare bones of the settlement reached, the ICO writes: “The Commissioner considers that this agreement best serves the interests of all UK data subjects who are Facebook users. Both Facebook and the ICO are committed to continuing to work to ensure compliance with applicable data protection laws.”

An ICO spokeswoman did not respond to additional questions, telling us it had nothing further to add to its public statement.

As part of the settlement, the ICO writes that Facebook is being allowed to retain some (unspecified) “documents” that the ICO had disclosed during the appeal process — to use for “other purposes”, including for furthering its own investigation into issues around Cambridge Analytica.

“Parts of this investigation had previously been put on hold at the ICO’s direction and can now resume,” the ICO adds.

Under the terms of the settlement, the ICO and Facebook each pay their own legal costs, while the £500k fine is not kept by the ICO but is paid into HM Treasury’s consolidated fund.

Commenting in a statement, deputy commissioner James Dipple-Johnstone said:

The ICO welcomes the agreement reached with Facebook for the withdrawal of their appeal against our Monetary Penalty Notice and agreement to pay the fine. The ICO’s main concern was that UK citizen data was exposed to a serious risk of harm. Protection of personal information and personal privacy is of fundamental importance, not only for the rights of individuals, but also as we now know, for the preservation of a strong democracy. We are pleased to hear that Facebook has taken, and will continue to take, significant steps to comply with the fundamental principles of data protection. With this strong commitment to protecting people’s personal information and privacy, we expect that Facebook will be able to move forward and learn from the events of this case.

In its own supporting statement, attached to the ICO’s remarks, Harry Kinmonth, director and associate general counsel at Facebook, added:

We are pleased to have reached a settlement with the ICO. As we have said before, we wish we had done more to investigate claims about Cambridge Analytica in 2015. We made major changes to our platform back then, significantly restricting the information which app developers could access. Protecting people’s information and privacy is a top priority for Facebook, and we are continuing to build new controls to help people protect and manage their information. The ICO has stated that it has not discovered evidence that the data of Facebook users in the EU was transferred to Cambridge Analytica by Dr Kogan. However, we look forward to continuing to cooperate with the ICO’s wider and ongoing investigation into the use of data analytics for political purposes.

A charitable interpretation of what’s gone on here is that both Facebook and the ICO have reached a stalemate where their interests are better served by taking a quick win that puts the issue to bed, rather than dragging on with legal appeals that might also have raised fresh embarrassments. 

That’s quick wins in terms of PR (a paid fine for the ICO; and drawing a line under the issue for Facebook), as well as (potentially) useful data to further Facebook’s internal investigation of the Cambridge Analytica scandal.

We don’t know exactly what Facebook is getting from the ICO’s document stash. But we do know it’s facing a number of lawsuits and legal challenges over the scandal in the US.

The ICO announced its intention to fine Facebook over the Cambridge Analytica scandal just over a year ago.

In March 2018 it had raided the UK offices of the now defunct data company, after obtaining a warrant, taking away hard drives and computers for analysis. It had also earlier ordered Facebook to withdraw its own investigators from the company’s offices.

Speaking to a UK parliamentary committee a year ago the information commissioner, Elizabeth Denham, and deputy Dipple-Johnstone, discussed their (then) ongoing investigation of data seized from Cambridge Analytica — saying they believed the Facebook user data-set the company had misappropriated could have been passed to more entities than were publicly known.

The ICO said at that point it was looking into “about half a dozen” entities.

It also told the committee it had evidence that, even as recently as early 2018, Cambridge Analytica might have retained some of the Facebook data — despite having claimed it had deleted everything.

“The follow up was less than robust. And that’s one of the reasons that we fined Facebook £500,000,” Denham also said at the time. 

Some of this evidence will likely be very useful for Facebook as it prepares to defend itself in legal challenges related to Cambridge Analytica. It could also aid the company’s claimed platform audit: in the wake of the scandal, Facebook said it would run a historical app audit and challenge all developers it determined had downloaded large amounts of user data.

The audit, which it announced in March 2018, apparently remains ongoing.


TechCrunch

Submit campaign ads to fact checking, limit microtargeting, cap spending, observe silence periods, or at least warn users. These are the solutions Facebook employees put forward in an open letter pleading with CEO Mark Zuckerberg and company leadership to address misinformation in political ads.

The letter, obtained by the New York Times’ Mike Isaac, insists that “Free speech and paid speech are not the same thing . . . Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for.” The letter was posted to Facebook’s internal collaboration forum a few weeks ago.

The sentiments echo what I called for in a TechCrunch opinion piece on October 13th calling on Facebook to ban political ads. Unfettered misinformation in political ads on Facebook lets politicians and their supporters spread inflammatory and inaccurate claims about their views and their rivals while racking up donations to buy more of these ads.

The social network can still offer freedom of expression to political campaigns on their own Facebook Pages while limiting the ability of the richest and most dishonest to pay to make their lies the loudest. We suggested that if Facebook won’t drop political ads, they should be fact checked and/or use an array of generic “vote for me” or “donate here” ad units that don’t allow accusations. We also criticized how microtargeting of communities vulnerable to misinformation and instant donation links make Facebook ads more dangerous than equivalent TV or radio spots.


The Facebook CEO, Mark Zuckerberg, testified before the House Financial Services Committee on Wednesday, October 23, 2019, in Washington, D.C. (Photo by Aurora Samperio/NurPhoto via Getty Images)

Over 250 of Facebook’s 35,000 employees have signed the letter, which declares “We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.” It suggests the current policy undermines Facebook’s election integrity work, confuses users about where misinformation is allowed, and signals Facebook is happy to profit from lies.

The solutions suggested include:

  1. Don’t accept political ads unless they’re subject to third-party fact checks
  2. Use visual design to more strongly differentiate between political ads and organic non-ad posts
  3. Restrict microtargeting for political ads, including the use of Custom Audiences, since microtargeting hides ads from the public scrutiny that Facebook claims keeps politicians honest
  4. Observe pre-election silence periods for political ads to limit the impact and scale of misinformation
  5. Limit ad spending per politician or candidate, with spending by them and their supporting political action committees combined
  6. Make it more visually clear to users that political ads aren’t fact-checked

A combination of these approaches could let Facebook stop short of banning political ads without allowing rampant misinformation or having to police individual claims.


Zuckerberg had stood resolute on the policy despite backlash from the press and lawmakers, including Representative Alexandria Ocasio-Cortez (D-NY). She left him tongue-tied during congressional testimony when she asked exactly what kinds of misinformation were allowed in ads.

But then on Friday, Facebook blocked an ad designed to test its limits by claiming Republican Lindsey Graham had voted for Ocasio-Cortez’s Green New Deal, which he actually opposes. Facebook told Reuters it will fact-check PAC ads.

One sensible approach for politicians’ ads would be for Facebook to ramp up fact-checking, starting with presidential candidates, until it has the resources to scan more. Ads fact-checked as false should receive an interstitial warning blocking their content rather than just a “false” label. That could be paired with giving political ads a bigger disclaimer, without making them look too prominent in general, and allowing targeting only by state.

Deciding on potential spending limits and silence periods would be messier. Low limits could even the playing field, and broad silence periods, especially during voting, could prevent voter suppression. Perhaps these specifics should be left to Facebook’s upcoming independent Oversight Board, which acts as a supreme court for moderation decisions and policies.


Zuckerberg’s core argument for the policy is that over time, history bends toward more speech, not censorship. But that succumbs to the utopian fallacy that technology evenly advantages the honest and the dishonest. In reality, sensational misinformation spreads much further and faster than level-headed truth. Microtargeted ads with thousands of variants undercut and overwhelm the democratic apparatus designed to punish liars, while partisan news outlets counter attempts to call them out.

Zuckerberg wants to avoid Facebook becoming the truth police. But as we and Facebook’s own employees have put forward, there are progressive approaches to limiting misinformation if he’s willing to step back from his philosophical orthodoxy.

The full text of the letter from Facebook employees to leadership about political ads can be found below, via the New York Times:

We are proud to work here.

Facebook stands for people expressing their voice. Creating a place where we can debate, share different opinions, and express our views is what makes our app and technologies meaningful for people all over the world.

We are proud to work for a place that enables that expression, and we believe it is imperative to evolve as societies change. As Chris Cox said, “We know the effects of social media are not neutral, and its history has not yet been written.”

This is our company.

We’re reaching out to you, the leaders of this company, because we’re worried we’re on track to undo the great strides our product teams have made in integrity over the last two years. We work here because we care, because we know that even our smallest choices impact communities at an astounding scale. We want to raise our concerns before it’s too late.

Free speech and paid speech are not the same thing.

Misinformation affects us all. Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for. We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.

Allowing paid civic misinformation to run on the platform in its current state has the potential to:

— Increase distrust in our platform by allowing similar paid and organic content to sit side-by-side — some with third-party fact-checking and some without. Additionally, it communicates that we are OK profiting from deliberate misinformation campaigns by those in or seeking positions of power.

— Undo integrity product work. Currently, integrity teams are working hard to give users more context on the content they see, demote violating content, and more. For the Election 2020 Lockdown, these teams made hard choices on what to support and what not to support, and this policy will undo much of that work by undermining trust in the platform. And after the 2020 Lockdown, this policy has the potential to continue to cause harm in coming elections around the world.

Proposals for improvement

Our goal is to bring awareness to our leadership that a large part of the employee body does not agree with this policy. We want to work with our leadership to develop better solutions that both protect our business and the people who use our products. We know this work is nuanced, but there are many things we can do short of eliminating political ads altogether.

These suggestions are all focused on ad-related content, not organic.

1. Hold political ads to the same standard as other ads.

a. Misinformation shared by political advertisers has an outsized detrimental impact on our community. We should not accept money for political ads without applying the standards that our other ads have to follow.

2. Stronger visual design treatment for political ads.

a. People have trouble distinguishing political ads from organic posts. We should apply a stronger design treatment to political ads that makes it easier for people to establish context.

3. Restrict targeting for political ads.

a. Currently, politicians and political campaigns can use our advanced targeting tools, such as Custom Audiences. It is common for political advertisers to upload voter rolls (which are publicly available in order to reach voters) and then use behavioral tracking tools (such as the FB pixel) and ad engagement to refine ads further. The risk with allowing this is that it’s hard for people in the electorate to participate in the “public scrutiny” that we’re saying comes along with political speech. These ads are often so micro-targeted that the conversations on our platforms are much more siloed than on other platforms. Currently we restrict targeting for housing and education and credit verticals due to a history of discrimination. We should extend similar restrictions to political advertising.

4. Broader observance of the election silence periods

a. Observe election silence in compliance with local laws and regulations. Explore a self-imposed election silence for all elections around the world to act in good faith and as good citizens.

5. Spend caps for individual politicians, regardless of source

a. FB has stated that one of the benefits of running political ads is to help more voices get heard. However, high-profile politicians can out-spend new voices and drown out the competition. To solve for this, if you have a PAC and a politician both running ads, there would be a limit that would apply to both together, rather than to each advertiser individually.

6. Clearer policies for political ads

a. If FB does not change the policies for political ads, we need to update the way they are displayed. For consumers and advertisers, it’s not immediately clear that political ads are exempt from the fact-checking that other ads go through. It should be easily understood by anyone that our advertising policies about misinformation don’t apply to original political content or ads, especially since political misinformation is more destructive than other types of misinformation.

Therefore, the section of the policies should be moved from “prohibited content” (which is not allowed at all) to “restricted content” (which is allowed with restrictions).

We want to have this conversation in an open dialog because we want to see actual change.

We are proud of the work that the integrity teams have done, and we don’t want to see that undermined by policy. Over the coming months, we’ll continue this conversation, and we look forward to working towards solutions together.

This is still our company.


TechCrunch

Are we really doing this again? After the pivot to video. After Instant Articles. After news was deleted from the News Feed. Once more, Facebook dangles extra traffic, and journalism outlets leap through its hoop and into its cage.

Tomorrow, Facebook will unveil its News tab. About 200 publishers are already aboard including the Wall Street Journal and BuzzFeed News, and some will be paid. None seem to have learned the lesson of platform risk.


When you build on someone else’s land, don’t be surprised when you’re bulldozed. And really, given Facebook’s flawless track record of pulling the rug out from under publishers, no one should be surprised.

I could just re-run my 2015 piece on how “Facebook is turning publishers into ghost writers,” merely dumb content in its smart pipe. Or my 2018 piece on “how Facebook stole the news business” by retraining readers to abandon publishers’ sites and rely on its algorithmic feed.

Chronicling Facebook’s abuse of publishers

Let’s take a stroll back through time and check out Facebook’s past flip-flops on news that hurt everyone else:

-In 2007, before Facebook even got into news, it launched a developer platform with tons of free virality, fueling the build-up of companies like Zynga. Once that spam started drowning the News Feed, Facebook cut it and Zynga off, then largely abandoned gaming for half a decade as the company went mobile. Zynga never fully recovered.

-In 2011, Facebook launched the Open Graph platform with Social Reader apps that auto-shared to friends which news articles you were reading. Publishers like The Guardian and the Washington Post raced to build these apps and score viral traffic. But in 2012, Facebook changed the feed post design and prominence of Social Reader apps; the apps lost most of their users, those and other outlets shut them down, and Facebook largely abandoned the platform.


-In 2015, Facebook launched Instant Articles, hosting news content inside its app to make it load faster. But heavy-handed rules restricting advertising, subscription signup boxes and recirculation modules meant publishers got little out of Instant Articles. By late 2017, many publishers had largely abandoned the feature.


Decline of Instant Article use, via Columbia Journalism Review

-Also in 2015, Facebook started discussing “the shift to video,” citing 1 billion video views per day. As the News Feed algorithm prioritized video and daily views climbed to 8 billion within the year, newsrooms shifted headcount and resources from text to video. But a lawsuit later revealed Facebook already knew it was inflating view metrics by 150% to 900%. By the end of 2017 it had downranked viral videos, eliminated 50 million hours per day of viewing (over 2 minutes per user), and later pulled back on paying publishers for Live video as it largely abandoned publisher videos in favor of friend content.

-In 2018, Facebook announced it would decrease the presence of news in the News Feed from 5% to 4% of content while prioritizing friends and family. Referral traffic shrank sharply, with Google overtaking Facebook as the top referrer, and some outlets were hit hard; Slate lost 87% of its traffic from Facebook. You’d understand if some publishers felt…largely abandoned.


Facebook referral traffic to Slate plummeted 87% after a strategy change prioritized friends and family content over news.

Are you sensing a trend? 📉

Facebook typically defends the whiplash caused by its strategic about-faces by claiming it does what’s best for users, follows data on what they want, and tries to protect them. What it leaves out is how the rest of the stakeholders are prioritized.

Aggregated to death

I used to think of Facebook as being in a bizarre love quadrangle with its users, developers and advertisers. But increasingly it feels like the company is in an abusive love/hate relationship with users, catering to their attention while exploiting their privacy. Meanwhile, it dominates the advertisers thanks to its duopoly with Google that lets it survive metrics errors, and the developers as it alters their access and reach depending on if it needs their users or is backpedaling after a data fiasco.

Only recently after severe backlash does society seem to be getting any of Facebook’s affection. And perhaps even lower in the hierarchy would be news publishers. They’re not a huge chunk of Facebook’s content or, therefore, its revenue, they’re not part of the friends and family graph at the foundation of the social network, and given how hard the press goes on Facebook relative to Apple and Google, it’s hard to see that relationship getting much worse than it already is.


That’s not to say Facebook doesn’t philosophically care about news. It invests in its Journalism Project grants, news literacy efforts and its local news feature Today In. Facebook has worked diligently in the wake of the Instant Articles backlash to help publishers build out paywalls. Given how centrally news is featured, Facebook’s team surely reads plenty of it. And supporting the sector could win it some kudos between scandals.

But what’s not central to Facebook’s survival will never be central to its strategy. News is not going to pay the bills, and it probably won’t cause a major change in its hallowed growth rate. Remember that Twitter, which hinges much more on news, is 1/23rd of Facebook’s market cap.

So hopefully at this point we’ve established that Facebook is not an ally of news publishers.

At best it’s a fickle fair-weather friend. And even paying out millions of dollars, which can sound like a lot in journalism land, is a tiny fraction of the $22 billion in profit Facebook earned in 2018.

Whatever Facebook offers publishers is conditional. It’s unlikely to pay subsidies forever if the News tab doesn’t become sustainable. For newsrooms, changing game plans or reallocating resources means putting faith in Facebook it hasn’t earned.

What should publishers do? Double down, relentlessly, on the concept of an owned audience.

They should court direct traffic to their sites where they have the flexibility to point users to subscriptions or newsletters or podcasts or original reporting that’s satisfying even if it’s not as sexy in a feed.

Meet users where they are, but pull them back to where you live. Build an app users download or get them to bookmark the publisher across their devices. Develop alternative revenue sources to traffic-focused ads, such as subscriptions, events, merchandise, data and research. Pay to retain and recruit top talent with differentiated voices.

What scoops, opinions, analysis, and media can’t be ripped off or reblogged? Make that. What will stand out when stories from every outlet are stacked atop each other? Because apparently that’s the future. Don’t become generic dumb content fed through someone else’s smart pipe.


As Ben Thompson of Stratechery has proselytized, Facebook is the aggregator to which the spoils of attention and advertisers accrue as they’re sucked out of the aggregated content suppliers. To the aggregator, the suppliers are interchangeable and disposable. Publishers are essentially ghostwriters for the Facebook News destination. Becoming dependent upon the aggregator means forfeiting control of your destiny.

Surely, experimenting to become the breakout star of the News tab could pay dividends. Publishers can take what it offers if that doesn’t require uprooting their process. But with everything subject to Facebook’s shifting attitudes, it will be like publishers trying to play bocce during an earthquake.

[Featured Image: Russell Werges]


TechCrunch

Permitting falsehood in political advertising would work if we had a model democracy, but we don’t. Not only are candidates dishonest, but voters aren’t educated, and the media isn’t objective. And now, hyperlinks turn lies into donations and donations into louder lies. The checks don’t balance. What we face is a self-reinforcing disinformation dystopia.

That’s why if Facebook, Twitter, Snapchat and YouTube don’t want to be the arbiters of truth in campaign ads, they should stop selling them. If they can’t be distributed safely, they shouldn’t be distributed at all.

No one wants historically untrustworthy social networks becoming the honesty police, deciding what’s factual enough to fly. But the alternative of allowing deception to run rampant is unacceptable. Until voter-elected officials can implement reasonable policies to preserve truth in campaign ads, the tech giants should go a step further and refuse to run them.


This problem came to a head recently when Facebook formalized its policy of allowing politicians to lie in ads and refusing to send their claims to third-party fact-checkers. “We don’t believe, however, that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny,” Facebook’s VP of policy Nick Clegg wrote.

The Trump campaign was already running ads with false claims about Democrats trying to repeal the Second Amendment and weeks-long scams about a “midnight deadline” for a contest to win the one-millionth MAGA hat.


After the announcement, Trump’s campaign began running ads smearing potential opponent Joe Biden with widely debunked claims about his relationship with Ukraine. Facebook, YouTube and Twitter refused to remove the ad when asked by Biden.

In response to the policy, Elizabeth Warren is running ads claiming Facebook CEO Mark Zuckerberg endorses Trump because it’s allowing his campaign’s lies. She’s continued to press Facebook on the issue, saying “you can be in the disinformation-for-profit business, or you can hold yourself to some standards.”

It’s easy to imagine campaign ads escalating into an arms race of dishonesty.

Campaigns could advertise increasingly untrue and defamatory claims about each other tied to urgent calls for donations. Once all sides are complicit in the misinformation, lying loses its stigma, becomes the status quo, and ceases to have consequences. Otherwise, whichever campaign misleads more aggressively will have an edge.

“In open democracies, voters rightly believe that, as a general rule, they should be able to judge what politicians say themselves,” Facebook’s Clegg writes.

But as is emblematic of Facebook’s past mistakes, it’s putting too much idealistic faith in society. If all voters were well educated and we weren’t surrounded by hyperpartisan media from Fox News to far-left Facebook Pages, this hands-off approach might work. But in reality, juicy lies spread further than boring truths, and plenty of “news” outlets are financially incentivized to share sensationalism and whatever keeps their team in power.

Protecting the electorate should fall to legislators, but incumbents have few reasons to change the rules that got them their jobs. The FCC already has truth-in-advertising policies, but it exempts campaign ads, and a judge struck down a law mandating accuracy.

Granted, there have always been dishonest candidates, uninformed voters, and one-sided news outlets. But it’s all gotten worse. We’re in a post-truth era now where the spoils won through deceptive demagoguery are clear. Cable news and digitally native publications have turned distortion of facts into a huge business.

Most critically, targeted social network advertising combined with donation links create a perpetual misinformation machine. Politicians can target vulnerable demographics with frightening lies, then say only their financial contribution will let the candidate save them. A few clicks later and the candidate has the cash to buy more ads, amplifying more untruths and raising even more money. Without the friction of having to pick up the phone, mail a letter, or even type in a URL like TV ads request, the feedback loop is shorter and things spiral out of control.

Many countries and blocs, including the UK, Ireland, and the EU, ban or heavily restrict TV campaign ads. There’s plenty of precedent for policies keeping candidates’ money out of the most powerful communication mediums.

Campaign commercials on US television might need additional regulation as well. However, their lack of direct connections to donate buttons, microtargeting, and rapid variable testing weakens their potential for abuse. Individual networks can refuse ads containing falsehoods, as CNN recently did, without the same backlash over bias that an entity as powerful as Facebook receives.

This is why the social networks should halt sales of political campaign ads now. They’re the one set of stakeholders with the flexibility to make a united decision. You’ll never get all the politicians and media to be honest, or the public to understand, but just a few companies could set a policy that would protect democracy. And they could do it without having to pick sides or make questionable decisions on a case-by-case basis. Just block campaign ads from all candidates.

Facebook wrote in response to Biden’s request to block the Trump ads that “Our approach is grounded in Facebook’s fundamental belief in free expression, respect for the democratic process, and the belief that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is.”

But banning campaign ads would still leave room for open political expression that’s subject to public scrutiny. Social networks should continue to let politicians say what they want to their own followers, barring calls for violence. Tech giants can offer a degree of freedom of speech, just not freedom of reach. Whoever wants to listen can, but they shouldn’t be able to jam misinformation into the feeds of the unsuspecting.

If the tech giants want to stop short of completely banning campaign ads, they could introduce a format designed to minimize misinformation. Politicians could be allowed to simply promote themselves with a set of stock messages, but without the option to make claims about themselves or their opponents.

Campaign ads aren’t a huge revenue driver for social apps, nor are they a high-margin business nowadays. The Trump and Clinton campaigns spent only a combined $81 million on 2016 election ads, a fraction of Facebook’s $27 billion in revenue that year. A total of $284 million was spent on 2018 midterm election ads, versus Facebook’s $55 billion in revenue last year, says Tech For Campaigns. Zuckerberg even said that Facebook will lose money selling political ads because of all the moderators it hires to weed out election interference by foreign parties.

Surely, there would be some unfortunate repercussions from blocking campaign ads. New candidates in local to national elections would lose a tool for reducing the lead of incumbents, some of whom have already benefited from years of advertising. Some campaign ads might be pushed “underground,” where they’re not properly labeled, though the major spenders could be kept under watch.

If the social apps can still offer free expression through candidates’ own accounts, aren’t reliant on politicians’ cash to survive, won’t police specific lies in their promos, and would rather let the government regulate the situation, then they should respectfully decline to sell campaign advertising. Following the law isn’t enough until the laws adapt. This will be an ongoing issue through the 2020 election, and leaving the floodgates open is irresponsible.

If a game is dangerous, you don’t eliminate the referee. You stop playing until you can play safe.


TechCrunch

Facebook is buying CTRL-labs, a NY-based startup building an armband that translates movement and the wearer’s neural impulses into digital input signals, a company spokesperson tells TechCrunch.

CTRL-labs raised $67 million, according to Crunchbase. The startup’s investors include GV, Lux Capital, Amazon’s Alexa Fund, Spark Capital, and Founders Fund, among others. Facebook didn’t disclose how much it paid for the startup, but we’re digging around.

The acquisition, which has not yet closed, will bring the startup into the company’s Facebook Reality Labs division. CTRL-labs’ CEO and co-founder Thomas Reardon, a veteran technologist whose accolades include founding the team at Microsoft that built Internet Explorer, will be joining Facebook, while CTRL-labs’ employees will have the option to do the same, we are told.

Facebook has talked a lot about working on a non-invasive brain input device that can make things like text entry possible just by thinking. So far, most of the company’s progress on that project appears to be taking the form of university research that it has funded. With this acquisition, the company appears to be working more closely with technology that could one day be productized.

“We know there are more natural, intuitive ways to interact with devices and technology. And we want to build them,” Facebook AR/VR VP Andrew Bosworth wrote in a post announcing the deal. “It’s why we’ve agreed to acquire CTRL-labs. They will be joining our Facebook Reality Labs team where we hope to build this kind of technology, at scale, and get it into consumer products faster.”

CTRL-labs’ technology isn’t focused on text entry so much as on muscle and hand movements specifically. The startup’s progress was most recently distilled into a developer kit that paired multiple types of sensors to accurately determine the wearer’s hand position. The wrist-worn device offered developers an alternative to camera-based or glove-based hand-tracking solutions. The company has previously talked about AR and VR input as a clear use case for the kit. Facebook did not give details on what this acquisition means for developers currently using CTRL-labs’ kit.

This acquisition also brings the armband patents of North (formerly Thalmic Labs) to Facebook. CTRL-labs purchased the patents related to the startup’s defunct Myo armband earlier this year for an undisclosed sum.

The CTRL-labs acquisition brings more IP and talent under Facebook’s wing as competitors like Microsoft and Apple continue to build out augmented reality products. There is plenty of overlap with the technologies Oculus is building for Facebook’s virtual reality products like the Quest and Rift S, but CTRL-labs’ tech can help the company build input devices that are less bulky, less conspicuous and more robust.

“There are some fundamental advantages that we have over really any camera-based technology — including Leap Motion or Kinect — because we’re directly on the body sensing the signal that’s going from the brain to the hand,” CTRL-labs Head of R&D Adam Berenzweig told TechCrunch in an interview late last year. “There are no issues with occlusion or field-of-view problems — it doesn’t matter where your hands are, whether they’re in a glove or a spacesuit.”

Facebook is holding its Oculus Connect 6 developer conference later this week where the company will be delivering updates on its AR/VR efforts.

