
A security researcher said he matched 17 million phone numbers to Twitter user accounts by exploiting a flaw in Twitter’s Android app.

Ibrahim Balic found that it was possible to upload entire lists of generated phone numbers through Twitter’s contacts upload feature. “If you upload your phone number, it fetches user data in return,” he told TechCrunch.

He said Twitter’s contact upload feature doesn’t accept lists of phone numbers in sequential format — likely as a way to prevent this kind of matching. Instead, he generated more than two billion phone numbers, one after the other, then randomized the numbers, and uploaded them to Twitter through the Android app. (Balic said the bug did not exist in the web-based upload feature.)
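For illustration, here is a minimal Python sketch of the enumeration approach Balic described: generate a large block of sequential numbers, shuffle them so the list no longer looks sequential, then split them into upload-sized batches. The function names, country prefix, and batch size are hypothetical placeholders, and no real Twitter endpoint is used; the actual contact-upload API and its limits are not public.

```python
import random

def generate_candidates(prefix="+90", start=5_000_000_000, count=1_000_000):
    """Generate sequential candidate phone numbers for one country prefix.

    The prefix, starting value, and count are hypothetical placeholders.
    """
    numbers = [f"{prefix}{start + i}" for i in range(count)]
    # Balic said Twitter rejected lists in sequential format, so shuffling
    # the list is the step that sidestepped that safeguard.
    random.shuffle(numbers)
    return numbers

def batches(numbers, size=2_500):
    """Split the shuffled list into chunks for upload (size is a guess)."""
    for i in range(0, len(numbers), size):
        yield numbers[i:i + size]

# Each batch would then be submitted through the app's contact-upload feature,
# which returned matching account data for any number tied to a Twitter user.
```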

Over a two-month period, Balic said he matched records from users in Israel, Turkey, Iran, Greece, Armenia, France and Germany, but stopped after Twitter blocked the effort on December 20.

Balic provided TechCrunch with a sample of the phone numbers he matched. Using the site’s password reset feature, we verified his findings by comparing a random selection of usernames with the phone numbers that were provided.

In one case, TechCrunch was able to identify a senior Israeli politician using their matched phone number.

While he did not alert Twitter to the vulnerability, he took many of the phone numbers of high-profile Twitter users — including politicians and officials — to a WhatsApp group in an effort to warn users directly.

It’s not believed Balic’s efforts are related to a Twitter blog post published this week, which confirmed a bug could have allowed “a bad actor to see nonpublic account information or to control your account,” such as tweets, direct messages and location information.

A Twitter spokesperson told TechCrunch the company was working to “ensure this bug cannot be exploited again.”

“Upon learning of this bug, we suspended the accounts used to inappropriately access people’s personal information. Protecting the privacy and safety of the people who use Twitter is our number one priority and we remain focused on rapidly stopping spam and abuse originating from use of Twitter’s APIs,” the spokesperson said.

It’s the latest security lapse involving Twitter data in the past year. In May, Twitter admitted it gave account location data to one of its partners, even if the user had opted out of having their data shared. In August, the company said it inadvertently gave its ad partners more data than it should have. And just last month, Twitter confirmed it used phone numbers provided by users for two-factor authentication to serve targeted ads.

Balic is previously known for identifying a security flaw that affected Apple’s developer center in 2013.


TechCrunch

In less than two weeks, two major reports have been published that contain leaked Chinese government documents about the persecution of Uighurs and other Muslim minorities in China. Details include the extent to which technology enables mass surveillance, making it possible to track the daily lives of people at unprecedented scale.

The first was a New York Times article that examined more than 400 pages of leaked documents detailing how government leaders, including President Xi Jinping, developed and enforced policies against Uighurs. The latest comes from the International Consortium of Investigative Journalists, an independent non-profit, and reports on more than 24 pages of documents that show how the government is using new technologies to engage in mass surveillance and identify groups for arrest and detainment in Xinjiang region camps that may now hold as many as a million Uighurs, Kazakhs and other minorities, including people who hold foreign citizenship.

These reports are significant because leaks of this magnitude from within the Communist Party of China are rare and they validate reports from former prisoners and the work of researchers and journalists who have been monitoring the persecution of the Uighurs, an ethnic group with more than 10 million people in China.

As ICIJ reporter Bethany Allen-Ebrahimian writes, the classified documents, verified by independent experts and linguists, demonstrate “the power of technology to help drive industrial-scale human rights abuses.” The systems they describe also force members of targeted groups in the Xinjiang region to live in “a perpetual state of terror.”

The documents obtained by the ICIJ detail how the Integrated Joint Operations Platform (IJOP), an AI-based policing platform, is used by the police and other authorities to collect personal data, along with data from facial-recognition cameras and other surveillance tools, which is then fed into an algorithm to identify entire categories of Xinjiang residents for detention. Human Rights Watch began reporting on the IJOP’s police app in early 2018, and the ICIJ report shows how powerful the platform has become.

Human Rights Watch reverse-engineered the IJOP app used by police and found that it prompts them to enter a wide range of personal information about people they interrogate, including height, blood type, license plate numbers, education level, profession, recent travel and even household electric-meter readings, data that is then used by an algorithm to determine which groups of people should be viewed as “suspect.”

The documents also say that the Chinese government ordered security officials in Xinjiang to monitor users of Zapya, which has about 1.8 million users, for ties to terrorist organizations. Launched in 2012, the app was created by DewMobile, a Beijing-based startup that has received funding from InnoSpring Silicon Valley, Silicon Valley Bank and Tsinghua University and is meant to give people a way to download the Quran and send messages and files to other users without being connected to the Web.

According to the ICIJ, the documents show that since at least July 2016, Chinese authorities have been monitoring the app on some Uighurs’ phones in order to flag users for investigation. DewMobile did not respond to ICIJ’s repeated requests for comment. Uighurs who hold foreign citizenship or live abroad are not free from surveillance either, with directives in the leaked documents ordering them to be monitored as well.

Allen-Ebrahimian describes the “grinding psychological effects of living under such a system,” which Samantha Hoffman, an analyst at the Australian Strategic Policy Institute, says is deliberate: “That’s how state terror works. Part of the fear that this instills is that you don’t know when you’re not OK.”

The reports by the New York Times and the ICIJ are important because they counter the Xi administration’s insistence that the detention camps are “vocational education and training centers” meant to prevent extremist violence and help minority groups integrate into mainstream Chinese society, even though many experts now describe the persecution and imprisonment of Uighurs as cultural genocide. Former inmates have also reported torture, beatings and sexual violence, including rape and forced abortions.

But the Chinese government continues to push its narrative, even as evidence against it grows. The Chinese embassy in the United Kingdom told the Guardian, an ICIJ partner organization, that the leaked documents are “pure fabrication and fake news” and insisted that “the preventative measures have nothing to do with the eradication of religious groups.” (The Guardian published the embassy’s response in full.)

In October, the United States placed eight companies, including SenseTime and Megvii, on a trade blacklist for the role the Commerce Department says their technology has played in China’s campaign against Uighurs, Kazakhs and other Muslim minority groups. But the documents published by the New York Times and ICIJ show how deeply entrenched the Chinese government’s surveillance technology has become in the daily life of Xinjiang residents, and they underscore how imperative it is for the world to pay attention to the atrocities being carried out against minority groups there.


TechCrunch

A number of malicious websites used to hack into iPhones over a two-year period were targeting Uyghur Muslims, TechCrunch has learned.

Sources familiar with the matter said the websites were part of a state-backed attack — likely by China — designed to target the Uyghur community in the country’s Xinjiang region.

It’s the latest effort by the Chinese government to crack down on the Muslim minority community. In the past year, Beijing has detained more than a million Uyghurs in internment camps, according to a United Nations human rights committee.

Google security researchers found the malicious websites and disclosed them this week, but until now it wasn’t known who they were targeting.

The websites were part of a campaign to target the religious group by infecting an iPhone with malicious code simply when its owner visited a booby-trapped web page. By gaining unfettered access to the iPhone’s software, an attacker could read a victim’s messages and passwords, and track their location in near-real time.

Apple fixed the vulnerabilities in February in iOS 12.1.4, days after Google privately disclosed the flaws. News of the hacking campaign first emerged this week.

These websites had “thousands of visitors” per week for at least two years, Google said.

But it’s not immediately known if the same websites were used to target Android users.

Victims were tricked into opening a link that loaded one of the malicious websites, which then infected the device. It’s a common tactic used to target phone owners with spyware.

One of the sources told TechCrunch that the websites also infected non-Uyghurs who inadvertently accessed the domains because they were indexed in Google search. That prompted the FBI to alert Google and ask for the sites to be removed from its index to prevent further infections.

A Google spokesperson would not comment beyond the published research. An FBI spokesperson said the bureau could neither confirm nor deny any investigation, and did not comment further.

Google faced some criticism following its bombshell report for not releasing the websites used in the attacks. The researchers said the attacks were “indiscriminate watering hole attacks” with “no target discrimination,” noting that anyone visiting the site would have their iPhone hacked.

But the company would not say who was behind the attacks.

Apple did not comment. An email to the Chinese consulate in New York requesting comment went unreturned.


TechCrunch

Security researchers at Google say they’ve found a number of malicious websites which, when visited, could quietly hack into a victim’s iPhone by exploiting a set of previously undisclosed software flaws.

Google’s Project Zero said in a deep-dive blog post published late on Thursday that the websites were visited thousands of times per week by unsuspecting victims, in what they described as an “indiscriminate” attack.

“Simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant,” said Ian Beer, a security researcher at Project Zero.

He said the websites had been hacking iPhones over a “period of at least two years.”

The researchers found five distinct exploit chains involving 12 separate security flaws, including seven involving Safari, the built-in web browser on iPhones. Each chain allowed an attacker to gain “root” access to the device — the highest level of access and privilege on an iPhone. In doing so, an attacker could gain access to the device’s full range of features normally off-limits to the user. That means an attacker could quietly install malicious apps to spy on an iPhone owner without their knowledge or consent.

Google said that, based on its analysis, the vulnerabilities were used to steal a user’s photos and messages, as well as to track their location in near-real time. The “implant” could also access the user’s on-device bank of saved passwords.

The vulnerabilities affect iOS 10 through to the current iOS 12 software version.

Google privately disclosed the vulnerabilities in February, giving Apple only a week to fix the flaws and roll out updates to its users. That’s a fraction of the 90 days typically given to software developers, an indication of the severity of the vulnerabilities.

Apple issued a fix six days later with iOS 12.1.4 for iPhone 5s and iPad Air and later.

Beer said it’s possible other hacking campaigns are currently underway.

The iPhone and iPad maker generally has a good reputation on security and privacy matters. Recently the company increased its maximum bug bounty payout to $1 million for security researchers who find flaws that can silently target an iPhone and gain root-level privileges without any user interaction. Under Apple’s new bounty rules — set to go into effect later this year — Google would have been eligible for several million dollars in bounties.

When reached, a spokesperson for Apple declined to comment.


TechCrunch

Japan’s trade ministry said today that it will restrict the export of some tech materials to South Korea, including polyimides used in flexible displays made by companies like Samsung Electronics. The new rules come as the two countries argue over compensation for South Koreans forced to work in Japanese factories during World War II.

The list of restricted supplies, expected to go into effect on July 4, includes polyimides used in smartphone and flexible organic LED displays, and etching gas and resist used to make semiconductors. That means Japanese suppliers who wish to sell those materials to South Korean tech companies such as Samsung, LG and SK Hynix will need to submit each contract for approval.

Japan’s government may also remove South Korea from its list of countries that have fewer restrictions on trading technology that might have national security implications, reports Nikkei Asian Review.

Earlier this year, South Korea’s Supreme Court ruled that several Japanese companies, including Nippon Steel & Sumitomo Metal Corp. and Mitsubishi Heavy Industries, must pay compensation for their use of forced labor during World War II, and courts began seizing the companies’ assets for liquidation. But Japan’s government claims the issue was settled in 1965 as part of a treaty that restored basic diplomatic relations between the two countries, and is asking South Korea to put the matter before an international arbitration panel instead.


TechCrunch

Internet platforms like Google, Facebook, and Twitter are under intense pressure to reduce the proliferation of illegal and abhorrent content on their services.

Interestingly, Facebook’s Mark Zuckerberg recently called for the establishment of “third-party bodies to set standards governing the distribution of harmful content and to measure companies against those standards.” In a follow-up conversation with Axios, Kevin Martin of Facebook “compared the proposed standard-setting body to the Motion Picture Association of America’s system for rating movies.”

The ratings group, whose official name is the Classification and Rating Administration (CARA), was established in 1968 to stave off government censorship by educating parents about the contents of films. It has been in place ever since – and as longtime filmmakers, we’ve interacted with the MPAA’s ratings system hundreds of times, working closely with the board to maintain our filmmakers’ creative vision while keeping parents informed so that they can decide if those movies are appropriate for their children.

CARA is not a perfect system. Filmmakers do not always agree with the ratings given to their films, but the board strives to be transparent about why each film receives the rating it does. The system allows filmmakers to decide whether they want to make certain cuts in order to attract a wider audience. Additionally, there are occasions when parents may not agree with the ratings given to certain films based on their content. CARA strives to consistently strike the delicate balance between protecting a creative vision and informing people and families about the contents of a film.

CARA’s effectiveness is reflected in the fact that other creative industries, including television, video games, and music, have also adopted their own voluntary ratings systems.

While the MPAA’s ratings system works very well for pre-release review of content from a professionally produced and curated industry, including the MPAA member companies and independent distributors, we do not believe that the MPAA model can work for dominant internet platforms like Google, Facebook, and Twitter that rely primarily on post hoc review of user-generated content (UGC).


Here’s why: CARA is staffed by parents whose judgment is informed by their experiences raising families – and, most importantly, they rate most movies before they appear in theaters. Once rated by CARA, a movie’s rating will carry over to subsequent formats, such as DVD, cable, broadcast, or online streaming, assuming no other edits are made.

By contrast, large internet platforms like Facebook and Google’s YouTube primarily rely on user-generated content, which becomes available almost instantaneously to each platform’s billions of users with no prior review. UGC platforms generally do not pre-screen content – instead, they typically rely on users and content moderators, sometimes complemented by AI tools, to flag potentially problematic content after it is posted online.

The numbers are also revealing. CARA rates about 600-900 feature films each year, which translates to approximately 1,500 hours of content annually. That’s the equivalent of the amount of new content made available on YouTube every three minutes. Each day, uploads to YouTube total about 720,000 hours – that is equivalent to the amount of content CARA would review in 480 years!
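The arithmetic behind those comparisons checks out; here is a quick back-of-the-envelope verification using only the figures given in the text:

```python
# Back-of-the-envelope check of the scale comparison above.
cara_hours_per_year = 1_500        # ~600-900 films/year, roughly 1,500 hours of content
youtube_hours_per_day = 720_000    # daily uploads to YouTube, per the text

# Hours uploaded to YouTube per minute, and the minutes of uploads that
# equal a full year of CARA's review workload.
hours_per_minute = youtube_hours_per_day / (24 * 60)            # 500.0
minutes_per_cara_year = cara_hours_per_year / hours_per_minute  # 3.0

# Years of CARA-scale review represented by one day of YouTube uploads.
cara_years_per_day = youtube_hours_per_day / cara_hours_per_year  # 480.0

print(minutes_per_cara_year, cara_years_per_day)  # 3.0 480.0
```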

Another key distinction: premium video companies are legally accountable for all the content they make available, and it is not uncommon for them to have to defend themselves against claims based on the content of material they disseminate.

By contrast, as CreativeFuture said in an April 2018 letter to Congress: “the failure of Facebook and others to take responsibility [for their content] is rooted in decades-old policies, including legal immunities and safe harbors, that actually absolve internet platforms of accountability [for the content they host.]”

In short, internet platforms whose offerings consist mostly of unscreened user-generated content are very different businesses from media outlets that deliver professionally-produced, heavily-vetted, and curated content for which they are legally accountable.

Given these realities, the creative content industries’ approach to self-regulation does not provide a useful model for UGC-reliant platforms, and it would be a mistake to describe any post hoc review process as being “like MPAA’s ratings system.” It can never play that role.

This doesn’t mean there aren’t areas where we can collaborate. Facebook and Google could work with us to address rampant piracy. Indeed, the challenge of controlling illegal and abhorrent content on internet platforms is very similar to the challenge of controlling piracy on those platforms. In both cases, bad things happen – the platforms’ current review systems are too slow to stop them, and harm occurs before mitigation efforts are triggered.

Also, as CreativeFuture has previously said, “unlike the complicated work of actually moderating people’s ‘harmful’ [content], this is cut and dried – it’s against the law. These companies could work with creatives like never before, fostering a new, global community of advocates who could speak to their good will.”

Be that as it may, as Congress and the current Administration continue to consider ways to address online harms, it is important that those discussions be informed by an understanding of the dramatic differences between UGC-reliant internet platforms and creative content industries. A content-reviewing body like the MPAA’s CARA is likely a non-starter for the reasons mentioned above – and policymakers should not be distracted from getting to work on meaningful solutions.


TechCrunch
