
A number of malicious websites used to hack into iPhones over a two-year period were targeting Uyghur Muslims, TechCrunch has learned.

Sources familiar with the matter said the websites were part of a state-backed attack — likely by China — designed to target the Uyghur community in the country’s Xinjiang region.

It’s the latest effort by the Chinese government to crack down on the minority Muslim community. In the past year, Beijing has detained more than a million Uyghurs in internment camps, according to a United Nations human rights committee.

Google security researchers found the malicious websites and disclosed them this week, but until now it wasn’t known who they were targeting.

The websites were part of a campaign to target the religious group by infecting an iPhone with malicious code simply by visiting a booby-trapped web page. In gaining unfettered access to the iPhone’s software, an attacker could read a victim’s messages, passwords, and track their location in near-real time.

Apple fixed the vulnerabilities in February in iOS 12.1.4, days after Google privately disclosed the flaws. News of the hacking campaign was first disclosed this week.

These websites had “thousands of visitors” per week for at least two years, Google said.

It’s not immediately known whether the same websites were used to target Android users.

Victims were tricked into opening a link, which when opened would load one of the malicious websites used to infect the victim. It’s a common tactic to target phone owners with spyware.

One of the sources told TechCrunch that the websites also infected non-Uyghurs who inadvertently accessed the domains because they were indexed in Google search. That prompted the FBI to alert Google and ask for the sites to be removed from its index to prevent further infections.

A Google spokesperson would not comment beyond the published research. An FBI spokesperson said the bureau could neither confirm nor deny any investigation, and did not comment further.

Following its bombshell report, Google faced some criticism for not releasing the list of websites used in the attacks. The researchers said the attacks were “indiscriminate watering hole attacks” with “no target discrimination,” noting that anyone visiting the site would have their iPhone hacked.

But the company would not say who was behind the attacks.

Apple did not comment. An email requesting comment sent to the Chinese consulate in New York went unreturned.


TechCrunch

Security researchers at Google say they’ve found a number of malicious websites which, when visited, could quietly hack into a victim’s iPhone by exploiting a set of previously undisclosed software flaws.

Google’s Project Zero said in a deep-dive blog post published late on Thursday that the websites were visited thousands of times per week by unsuspecting victims, in what they described as an “indiscriminate” attack.

“Simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant,” said Ian Beer, a security researcher at Project Zero.

He said the websites had been hacking iPhones over a “period of at least two years.”

The researchers found five distinct exploit chains involving 12 separate security flaws, including seven involving Safari, the built-in web browser on iPhones. The five separate attack chains allowed an attacker to gain “root” access to the device — the highest level of access and privilege on an iPhone. In doing so, an attacker could gain access to the device’s full range of features, normally off-limits to the user. That means an attacker could quietly install malicious apps to spy on an iPhone owner without their knowledge or consent.

Google said that, based on its analysis, the vulnerabilities were used to steal a user’s photos and messages, as well as track their location in near-real time. The “implant” could also access the user’s on-device bank of saved passwords.

The vulnerabilities affect iOS 10 through the current iOS 12 software version.

Google privately disclosed the vulnerabilities in February, giving Apple only a week to fix the flaws and roll out updates to its users. That’s a fraction of the 90 days typically given to software developers, an indication of the severity of the vulnerabilities.

Apple issued a fix six days later with iOS 12.1.4 for iPhone 5s and iPad Air and later.

Beer said it’s possible other hacking campaigns are currently in action.

The iPhone and iPad maker in general has a good reputation on security and privacy matters. Recently the company increased its maximum bug bounty payout to $1 million for security researchers who find flaws that can silently target an iPhone and gain root-level privileges without any user interaction. Under Apple’s new bounty rules — set to go into effect later this year — Google would’ve been eligible for several million dollars in bounties.

When reached, a spokesperson for Apple declined to comment.



Japan’s trade ministry said today that it will restrict the export of some tech materials to South Korea, including polyimides used in flexible displays made by companies like Samsung Electronics. The new rules come as the two countries argue over compensation for South Koreans forced to work in Japanese factories during World War II.

The list of restricted supplies, expected to go into effect on July 4, includes polyimides used in smartphone and flexible organic LED displays, and etching gas and resist used to make semiconductors. That means Japanese suppliers who wish to sell those materials to South Korean tech companies such as Samsung, LG and SK Hynix will need to submit each contract for approval.

Japan’s government may also remove South Korea from its list of countries that have fewer restrictions on trading technology that might have national security implications, reports Nikkei Asian Review.

Earlier this year, South Korea’s Supreme Court ruled that several Japanese companies that had used forced labor during World War II, including Nippon Steel & Sumitomo Metal Corp. and Mitsubishi Heavy Industries, must pay compensation, and it began seizing assets for liquidation. But Japan’s government claims the issue was settled in 1965 as part of a treaty that restored basic diplomatic relations between the two countries, and it is asking South Korea to put the matter before an international arbitration panel instead.



Internet platforms like Google, Facebook, and Twitter are under incredible pressure to reduce the proliferation of illegal and abhorrent content on their services.

Interestingly, Facebook’s Mark Zuckerberg recently called for the establishment of “third-party bodies to set standards governing the distribution of harmful content and to measure companies against those standards.” In a follow-up conversation with Axios, Kevin Martin of Facebook “compared the proposed standard-setting body to the Motion Picture Association of America’s system for rating movies.”

The ratings group, whose official name is the Classification and Rating Administration (CARA), was established in 1968 to stave off government censorship by educating parents about the contents of films. It has been in place ever since, and as longtime filmmakers we’ve interacted with the MPAA’s ratings system hundreds of times, working closely with the board to maintain filmmakers’ creative vision while keeping parents informed so that they can decide if those movies are appropriate for their children.

CARA is not a perfect system. Filmmakers do not always agree with the ratings given to their films, but the board strives to be transparent as to why each film receives the rating it does. The system allows filmmakers to determine if they want to make certain cuts in order to attract a wider audience. Additionally, there are occasions where parents may not agree with the ratings given to certain films based on their content. CARA strives to consistently strike the delicate balance between protecting a creative vision and informing people and families about the contents of a film.

CARA’s effectiveness is reflected in the fact that other creative industries, including television, video games, and music, have also adopted their own voluntary ratings systems.

While the MPAA’s ratings system works very well for pre-release review of content from a professionally produced and curated industry, including the MPAA member companies and independent distributors, we do not believe that the MPAA model can work for dominant internet platforms like Google, Facebook, and Twitter that rely primarily on post hoc review of user-generated content (UGC).


Here’s why: CARA is staffed by parents whose judgment is informed by their experiences raising families — and, most importantly, they rate most movies before they appear in theaters. Once rated by CARA, a movie’s rating will carry over to subsequent formats, such as DVD, cable, broadcast, or online streaming, assuming no other edits are made.

By contrast, large internet platforms like Facebook and Google’s YouTube primarily rely on user-generated content (UGC), which becomes available almost instantaneously to each platform’s billions of users with no prior review. UGC platforms generally do not pre-screen content – instead they typically rely on users and content moderators, sometimes complemented by AI tools, to flag potentially problematic content after it is posted online.

The numbers are also revealing. CARA rates about 600-900 feature films each year, which translates to approximately 1,500 hours of content annually. That’s the equivalent of the amount of new content made available on YouTube every three minutes. Each day, uploads to YouTube total about 720,000 hours – that is equivalent to the amount of content CARA would review in 480 years!
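The scale comparison above is easy to verify. A minimal back-of-the-envelope check, using the article’s round figures (~1,500 hours reviewed by CARA per year, ~720,000 hours uploaded to YouTube per day; the exact totals vary):

```python
# Figures from the article (approximate, for illustration only).
CARA_HOURS_PER_YEAR = 1_500       # ~600-900 films rated annually
YOUTUBE_HOURS_PER_DAY = 720_000   # new uploads per day

# Hours of new YouTube content per minute.
youtube_hours_per_minute = YOUTUBE_HOURS_PER_DAY / (24 * 60)

# Minutes of YouTube uploads that equal CARA's entire annual workload.
minutes_to_match_cara = CARA_HOURS_PER_YEAR / youtube_hours_per_minute

# Years CARA would need to review a single day of YouTube uploads.
years_for_one_day = YOUTUBE_HOURS_PER_DAY / CARA_HOURS_PER_YEAR

print(youtube_hours_per_minute)  # 500.0 hours per minute
print(minutes_to_match_cara)     # 3.0 minutes
print(years_for_one_day)         # 480.0 years
```

The arithmetic bears out both claims: three minutes of YouTube uploads match CARA’s annual output, and one day of uploads would take CARA 480 years to review.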

 Another key distinction: premium video companies are legally accountable for all the content they make available, and it is not uncommon for them to have to defend themselves against claims based on the content of material they disseminate.

By contrast, as CreativeFuture said in an April 2018 letter to Congress: “the failure of Facebook and others to take responsibility [for their content] is rooted in decades-old policies, including legal immunities and safe harbors, that actually absolve internet platforms of accountability [for the content they host.]”

In short, internet platforms whose offerings consist mostly of unscreened user-generated content are very different businesses from media outlets that deliver professionally-produced, heavily-vetted, and curated content for which they are legally accountable.

Given these realities, the creative content industries’ approach to self-regulation does not provide a useful model for UGC-reliant platforms, and it would be a mistake to describe any post hoc review process as being “like MPAA’s ratings system.” It can never play that role.

This doesn’t mean there are not areas where we can collaborate. Facebook and Google could work with us to address rampant piracy. Interestingly, the challenge of controlling illegal and abhorrent content on internet platforms is very similar to the challenge of controlling piracy on those platforms. In both cases, bad things happen – the platforms’ current review systems are too slow to stop them, and harm occurs before mitigation efforts are triggered. 

Also, as CreativeFuture has previously said, “unlike the complicated work of actually moderating people’s ‘harmful’ [content], this is cut and dried – it’s against the law. These companies could work with creatives like never before, fostering a new, global community of advocates who could speak to their good will.”

Be that as it may, as Congress and the current Administration continue to consider ways to address online harms, it is important that those discussions be informed by an understanding of the dramatic differences between UGC-reliant internet platforms and creative content industries. A content-reviewing body like the MPAA’s CARA is likely a non-starter for the reasons mentioned above – and policymakers should not be distracted from getting to work on meaningful solutions.


