We want to sit down with you and, in an open conversation, find out which challenges and questions you are facing, so that together we can arrive at the best solution. In other words: how can technology support you, rather than you having to support the technology?

The CCPA regulations coming into effect in the US have put a renewed focus on how online companies handle data privacy and compliance. Today a startup that has built a platform to help them navigate those waters more easily is announcing a round of funding to meet that demand.

Ethyca, which lets organisations identify where sensitive data may be used and provides an easy set of API tools to create permissions, reporting and analytics around it, has raised $13.5 million in financing after picking up a number of major companies, including some high-profile tech companies, as customers.
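
To make the idea concrete, here is a minimal sketch of what a programmatic data-subject access request against a privacy-compliance API might look like. The endpoint, payload fields and token are hypothetical illustrations, not Ethyca's actual API.

    import requests  # third-party HTTP client (pip install requests)

    # Hypothetical endpoint and fields -- for illustration only,
    # not Ethyca's real API.
    API_BASE = "https://privacy.example.com/api/v1"
    TOKEN = "YOUR_API_TOKEN"

    def submit_access_request(email: str) -> dict:
        """File a data-subject access request and return the created job."""
        resp = requests.post(
            f"{API_BASE}/subject-requests",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"type": "access", "identity": {"email": email}},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        print(submit_access_request("user@example.com"))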

The crux of the problem Ethyca is tackling is that online privacy compliance has become critical, partly because of regulations but mainly because the online world has, before anyone had a chance to blink, become a core component of our lives, so getting things wrong can be disastrous.

“Move fast and break things sounds good on a T-shirt, but the web is effectively society infrastructure now,” explained co-founder and CEO Cillian Kieran, who hails from Ireland but now lives in New York. “If you met a bridge builder wearing a T-shirt saying that, you’d panic. So despite the omnipresence of tech we don’t have the tools to deal with privacy issues. The aim here is to build safe systems, and we provide the data and data maps to do that.”

The funding comes on the back of a seed round Ethyca raised in July 2019 and brings the total raised to about $20 million.

IA Ventures; Affirm and PayPal cofounder Max Levchin’s SciFi VC; CAA cofounder Michael Ovitz; Warby Parker cofounders Neil Blumenthal and Dave Gilboa; Harry’s cofounder Jeff Raider; Allbirds cofounder Joey Zwillinger; Behance cofounder Scott Belsky; DJ Patil, former chief data scientist of the US Office of Science and Technology Policy; Lachy Groom; and Abstract Ventures make up the long list of high-profile names and firms in this latest round, which speaks to some of the traction and attention that New York-based Ethyca has had to date.

On the enterprise side, the company works with a number of large businesses, including banks and some major tech companies that don’t want their names disclosed. It helps them map personal data within their systems and build better workflows for extracting that information when it’s requested, whether by a user or for compliance reporting, and, more often, to make sure that when new products are being built they take existing personal data into account and comply with the data policies around it.

If it sounds odd that a tech company might need to turn to a third-party startup for privacy services, it’s not so strange. Even at big tech companies, which have spent years and millions of dollars preparing for privacy regulations, the complexity means that not all use cases can be accounted for.

On the smaller end of the scale, it has a number of well-known brands, such as luggage company Away, Parachute Home and AspireIQ, as well as a number of other smaller businesses, implementing its tools.

As Kieran describes it, while others are already building tools to navigate data protection and privacy regulations like CCPA and GDPR in Europe (OneTrust and DataGuard being two in the startup arena that have raised big rounds), Ethyca’s aim is to make it quick and relatively easy to build a compliance layer into a system.

The company has APIs but has now also introduced a self-service version of its product for smaller businesses, which he says means that “any customer can turn it on and follow the automated process in a TurboTax type of way.”

CCPA compliance can take 8-10 weeks to implement, and you often need consultants and more technical talent to get the work done and run services afterwards, he said. “Now it can be done in as little as an hour for an average midsized business.” Larger companies may take a few days, he added.

Kieran and his co-founder Miguel Burger-Calderon know first-hand about some of the issues that brands and other online businesses might face when it comes to identifying what kind of data might fall under these newer regulations, and the challenges of navigating that once you do. BrandCommerce, a previous company that the two founded, helps brands and businesses build and run D2C operations online. (You can also see, therefore, why Ethyca may have in part picked up the particular investors that it has.)

“Companies can no longer simply strive to be compliant and get by – enterprises need to think long-term and show their customers that they can be trusted with their data,” said Roger Ehrenberg of IA Ventures in a statement. “Forward-thinking companies have recognised the value of Ethyca’s product to their bottom line as you can see from looking at the growing set of blue-chip brands and technology customers so far.”

 


TechCrunch

In a public health emergency that relies on people keeping an anti-social distance from each other, to avoid spreading a highly contagious virus for which humans have no pre-existing immunity, governments around the world have been quick to look to technology companies for help.

Background tracking is, after all, what many Internet giants’ ad-targeting business models rely on, while in the US telcos were recently exposed sharing highly granular location data for commercial ends.

Some of these privacy-hostile practices face ongoing challenges under existing data protection laws in Europe — and/or have at least attracted regulator attention in the US, which lacks a comprehensive digital privacy framework — but a pandemic is clearly an exceptional circumstance. So we’re seeing governments turn to the tech sector for help.

US president Donald Trump was reported last week to have summoned a number of tech companies to the White House to discuss how mobile location data could be used for tracking citizens.

In another development this month he announced Google was working on a nationwide coronavirus screening site — in fact it’s Verily, a different division of Alphabet. But concerns were quickly raised that the site requires users to sign in with a Google account, suggesting users’ health-related queries could be linked to other online activity the tech giant monetizes via ads. (Verily has said the data is stored separately and not linked to other Google products, although the privacy policy does allow data to be shared with third parties including Salesforce for customer service purposes.)

In the UK the government has also been reported to be in discussions with telcos about mapping mobile users’ movements during the crisis — though not at an individual level. It was reported to have held an early meeting with tech companies to ask what resources they could contribute to the fight against COVID-19.

Elsewhere in Europe, Italy — which remains the European nation worst hit by the virus — has reportedly sought aggregated, anonymized data on users’ movements from Facebook and local telcos to help with contact tracing or other forms of monitoring.

While there are clear public health imperatives to ensure populations are following instructions to reduce social contact, the prospect of Western democracies making like China and actively monitoring citizens’ movements raises uneasy questions about the long-term impact of such measures on civil liberties.

Plus, if governments seek to expand state surveillance powers by directly leaning on the private sector to keep tabs on citizens it risks cementing a commercial exploitation of privacy — at a time when there’s been substantial push-back over the background profiling of web users for behavioral ads.

“Unprecedented levels of surveillance, data exploitation, and misinformation are being tested across the world,” warns civil rights campaign group Privacy International, which is tracking what it dubs the “extraordinary measures” being taken during the pandemic.

A couple of examples include telcos in Israel sharing location data with state agencies for COVID-19 contact tracing and the UK government tabling emergency legislation that relaxes the rules around intercept warrants.

“Many of those measures are based on extraordinary powers, only to be used temporarily in emergencies. Others use exemptions in data protection laws to share data. Some may be effective and based on advice from epidemiologists, others will not be. But all of them must be temporary, necessary, and proportionate,” it adds. “It is essential to keep track of them. When the pandemic is over, such extraordinary measures must be put to an end and held to account.”

At the same time employers may feel under pressure to be monitoring their own staff to try to reduce COVID-19 risks right now — which raises questions about how they can contribute to a vital public health cause without overstepping any legal bounds.

We talked to two lawyers from Linklaters to get their perspective on the rules that wrap extraordinary steps such as tracking citizens’ movements and their health data, and how European and US data regulators are responding so far to the coronavirus crisis.

Bear in mind it’s a fast-moving situation — with some governments (including the UK and Israel) legislating to extend state surveillance powers during the pandemic.

The interviews below have been lightly edited for length and clarity.

Europe and the UK

Dr Daniel Pauly, technology, media & telecommunications partner at Linklaters in Frankfurt 

Data protection law has not been suspended, at least when it comes to Europe, so data protection law still applies — without any restrictions. This is the baseline from which we need to work and start. Then we need to differentiate between what the government can do and what employers can do in particular.

It’s very important to understand that when we look at governments they do have the means to allow themselves a certain use of data. Because there are opening clauses, flexibility clauses, in particular in the GDPR, when it comes to public health concerns, cross-border threats.

By using the legislative process they may introduce further powers. To give you one example of what the German government did to respond: we already have in place a law governing the use of personal data in respect of certain serious infections, and with a special regulation — the coronavirus notification regulation — they simply added the coronavirus infection to that list, which now means that hospitals and doctors must notify the competent authority of any COVID-19 infection.

This is pretty far reaching. They need to transmit names, contact details, sex, date of birth and many other details to allow the competent authority to gather that data and to analyze that data.

Another important topic in that field is the use of telecommunications data — in particular mobile phone data. Efficient use of that data might be one of the reasons why China apparently was quite successful in reducing the threat from the virus.

In Europe the government may not simply use mobile phone data and movement data — it has to be anonymized first. This is what has happened in Germany and other European jurisdictions, including the UK: anonymized mobile phone data has been handed over to organizations who are analyzing it to get a better view of how people behave, how they move, and what needs to be done to restrict further movement or restrict public life. That is the picture for governments, at least in Europe and the UK.
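
As a rough illustration of the kind of anonymization described here, the sketch below aggregates individual location pings into coarse per-region counts and suppresses any region with fewer than k users, a common safeguard against re-identification. The data layout and threshold are assumptions for illustration, not a description of what any telco actually does.

    from collections import defaultdict

    # Each ping: (user_id, coarse_region). Synthetic sample data.
    pings = [
        ("u1", "Mitte"), ("u2", "Mitte"), ("u3", "Mitte"),
        ("u4", "Mitte"), ("u5", "Mitte"), ("u6", "Kreuzberg"),
    ]

    K = 5  # suppress regions with fewer than K distinct users

    def aggregate(pings, k=K):
        """Return {region: user_count}, dropping small groups so no
        individual's presence can be inferred from the output."""
        users_per_region = defaultdict(set)
        for user_id, region in pings:
            users_per_region[region].add(user_id)
        return {
            region: len(users)
            for region, users in users_per_region.items()
            if len(users) >= k
        }

    print(aggregate(pings))  # {'Mitte': 5} -- Kreuzberg suppressed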

Transparency obligations [related to government use of personal data] stem from the GDPR [General Data Protection Regulation]. When they would like to make use of mobile phone data, it is the ePrivacy Directive that applies. This is not as transparent as the GDPR, and the EU did not succeed in replacing that piece of legislation with a new regulation. So the ePrivacy Directive again gives the various Member States, including the UK, the possibility to introduce further and more restrictive laws [for public health reasons].

[If Internet companies such as Google were to be asked by European governments to pass data on users for a coronavirus tracking purpose] it has to be taken into consideration that they have not included this in their records of processing activities — in their data protection notifications and information.

So, at least from a pure legal perspective, it would be a huge step — and I’m wondering whether it would be feasible without governments introducing special laws for it.

If [EU] governments were to make use of private companies to provide them with data which has not been collected for such purposes, that would be a huge step, from the perspective of the GDPR at least. I’m not aware of anything like this. I’ve certainly read there are ongoing discussions with Netflix about reducing network traffic, but I haven’t heard anything about making use of the data Google has.

I wouldn’t expect it in Europe — and particularly not in Germany. Tracking people and monitoring what they are doing is almost a last resort — so I wouldn’t expect that in the next couple of weeks. And I hope it’s over by then.

[So far], from my perspective, the European regulators have responded [to the coronavirus crisis] in a pretty reasonable manner by saying that, in particular, any response to the virus must be proportionate.

We still have that law in place, and we need to consider that the data we’re talking about is health data — the most protected data of all. Having said that, there are some ways the GDPR allows the government and employers to make use of that data: in particular when it comes to processing for substantial public interest, or if it’s required for the purposes of preventive medicine or necessary for reasons of public interest.

So the legislator was wise enough to include clauses allowing the use of such data under certain circumstances, and a number of supervisory authorities have already published guidelines on how to make use of these statutory permissions. What they basically said is that it always needs to be figured out on a case-by-case basis whether the data is really required in the specific case.

To give you an example, it was made clear that an employer may not ask an employee where he has been during his vacation — but he may ask: have you been in any of the risk areas? Then a sufficient answer is yes or no; they do not need any further data. So it’s always [about approaching this in] a smart way — by being smart you get the information you need; it’s not that the floodgates have suddenly opened.

You really need to look at the specific case and see how to get the data you need. Usually it’s a yes or no which is sufficient in the particular case.

The US

Caitlin Potratz Metcalf, senior U.S. associate at Linklaters and a Certified Information Privacy Professional (CIPP/US)

Even though you don’t have a structured privacy framework in the US — or one specific regulator that covers privacy — you’ve got some of the same issues. The FTC [Federal Trade Commission] will go after companies that take any action that is inconsistent with their privacy policies, as that would be misleading to consumers. Its initial focus is on consumer protection, not privacy, but in the last couple of years it’s been wearing two hats. So there is a focus on privacy, even though we don’t have a national privacy law [equivalent in scope to the EU’s GDPR], but it’s coming from a consumer protection point of view.

So, for example, the FCC back in February actually announced potential sanctions against four major telecoms companies in the US with respect to sharing data related to cell phone tracking — it wasn’t geolocation in an app but actually pinging off cell towers — and sharing that data with third parties without proper safeguards, because that wasn’t disclosed in their privacy policies.

They haven’t actually issued those fines, but it was announced that they may pursue a $208 million fine in total against these four companies: AT&T, Verizon*, T-Mobile and Sprint. So they do take very seriously how that data is safeguarded and how it’s being shared. And the fact that we have a state of emergency doesn’t change that emphasis on consumer protection.

You’ll see the same is true for the Department of Health and Human Services (HHS) — that’s responsible for any medical or health data.

That is really limited to entities that are covered entities under HIPAA [Health Insurance Portability and Accountability Act] or their business associates. So it doesn’t apply to everybody across the board. But if you are a hospital, a health plan provider — whether you’re an employer with a group health plan or an insurer — or a business associate supporting one of those covered entities, then you have to comply with HIPAA to the extent you’re handling protected health information. And that’s a bit narrower than the definition of personal data that you’d have under GDPR.

So you’re really looking at identifying information for that patient: their medical status, their birth date, address, things like that which are very identifiable and related to the person. But you could share things that are more general. For example: you have a middle-aged man from this county who’s tested positive for COVID and is at XYZ facility being treated, and his condition is stable, or his condition is critical. So you could share that level of detail — but not further.
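
As a loose illustration of that "minimum necessary" idea, the sketch below strips a patient record down to generalized, non-identifying fields before sharing. The field names and generalization rules are assumptions for illustration; HIPAA's actual de-identification standards (safe harbor, expert determination) are more involved.

    from datetime import date

    # Synthetic patient record -- field names are illustrative only.
    record = {
        "name": "John Q. Public",
        "birth_date": date(1972, 4, 1),
        "address": "12 Main St, Springfield",
        "county": "Springfield County",
        "status": "positive",
        "condition": "stable",
    }

    def generalize(rec: dict) -> dict:
        """Keep only coarse, non-identifying fields for disclosure."""
        age = date.today().year - rec["birth_date"].year
        band = "middle-aged" if 40 <= age < 60 else "other"
        return {
            "age_band": band,          # not the birth date
            "county": rec["county"],   # not the street address
            "status": rec["status"],
            "condition": rec["condition"],
        }

    print(generalize(record))
    # {'age_band': 'middle-aged', 'county': 'Springfield County',
    #  'status': 'positive', 'condition': 'stable'}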

And so HHS in February had issued a bulletin stressing that you can’t set aside the privacy and security safeguards under HIPAA during an emergency. They stressed to all covered entities that you still have to comply with the law — sanctions are still in place. And to the extent that you do have to disclose some of the protected health information, it has to be to the minimum extent necessary. It can be disclosed to other hospitals or to a regulator in order to help stem the spread of COVID, and also in order to provide treatment to a patient. So they listed a couple of different exceptions for how you can share that information, but really stressed the minimum necessary.

The same would be true for an employer — say, of a group health plan — if they’re trying to share information about employees, but it’s going to be very narrow in what they can actually share, and they can’t just cite as an exception that it’s in the public health interest. You don’t necessarily have to disclose what country someone has been to; it’s just: have they been to a region that’s on a restricted list for travel? So it’s finding creative ways to relay the necessary information, and if there’s anything less intrusive you’re required to go that route.

That said, just last week HHS also issued another bulletin saying that it would waive HIPAA sanctions and penalties during the nationwide public health emergency. But it was only directed at hospitals — so it doesn’t apply to all covered entities.

They also issued another bulletin saying that they would relax restrictions on sharing data using electronic means. There are very heightened restrictions on how you can share medical and health data electronically, and so this allows doctors to communicate with patients by FaceTime or video chat and other methods that may not be encrypted or secure. So they’re giving a waiver, or just softening some of the restrictions, related to transferring health data electronically.

So you can see it’s an evolving situation, but they’ve still taken a very reserved, conservative approach — really emphasizing that you do need to comply with your obligation to protect health data. So that’s where you see the strongest protections. And then there’s the FTC coming at it from a consumer protection point of view.

Going back to the point you made earlier about Google sharing data [with governments] — you could get there, it just depends on how their privacy policies are structured.

In terms of tracking individuals, we don’t have a national statute like GDPR that would prevent that, but it would also be very difficult to anonymize that data because it’s so tied to individuals — it’s like your DNA. You can map a person leaving home, going to work or school, going to a doctor’s office, coming back home. It really is very sensitive information, and because of all the specific data points it’s very difficult to anonymize it and provide it in a format that wouldn’t violate someone’s privacy without their consent. And so while you may not need full consent in the US, you would still need to have notice and transparency about the policies.

Then it would be slightly different if you’re a California resident — under the new California law [CCPA] you need to provide disclosures and give individuals the opportunity to opt out if you were to share their information. So in that case, where the telecoms companies potentially face FCC fines for sharing data with third parties, that in particular would also violate the new California law if consumers weren’t given the opportunity to opt out of having their information sold.

So there’s a lot of different puzzle pieces that fit together since we have a patchwork quilt of data protection — depending on the different state and federal laws.

The government, I guess, could issue other mandates or regulations [to requisition telco tracking data for a COVID-related public health purpose] — I don’t know that they will. I would envisage more of a call to arms requesting support and assistance from the private sector. Not a mandate that you must share your data, given the way our government is structured. Unless things get incredibly dire I don’t really see a mandate to companies that they have to share certain data in order to be able to track patients.

[If Google makes use of health-related searches/queries to enrich user profiles it uses for commercial purposes] that in and of itself wouldn’t be protected health information.

Google is not a [HIPAA] covered entity. And depending on what type of support it’s providing for covered entities, in limited circumstances it could be considered a business associate subject to HIPAA, but in the context of just collecting data on consumers it wouldn’t be governed by that.

So as long as it’s not doing anything outside the scope of what’s already in its privacy policies then it’s fine — the fact that it’s collecting data based on searches that you run on Google should be in the privacy policy anyway. It doesn’t need to be specific to the type of search that you’re running. So the fact that someone is looking up how to get COVID testing or treatment, or what the symptoms of COVID are, things like that — that can all be tied to the data [it holds on users] and enriched. And that can also be shared and sold to third parties — unless you’re a California resident. They have a separate privacy policy for California residents… They just have to be consistent with their privacy policy.

The interesting thing to me is maybe the approach that Asia has taken — where they have a lot more influence over the commercial sector and data tracking — and so you actually have the regulator stepping in and doing more tracking, not just private companies. But private companies are able to provide tracking information.

You see it actually with Uber. They’ve issued additional privacy notices to consumers — saying that to the extent we become aware of a passenger or a driver that has had COVID, we will notify people who have come into contact with that Uber over a given time period. They’re trying to take the initiative to do their own tracking to protect workers and consumers.

And they can do that — they just have to be careful about how much detail they share about personal information. Not naming names of who was impacted [but rather saying something like] ‘in the last 24 hours you may have ridden in an Uber that was impacted or known to have an infected individual in the Uber’.

[When it comes to telehealth platforms and privacy protections] it depends on whether they’re considered a business associate of a covered entity. So they may not be a covered entity themselves, but if they are a business associate supporting a covered entity — for example a hospital, clinic or insurer sharing that data and relying on a telehealth platform — then in that context they would be governed by some of the same privacy and security regulations under HIPAA.

Some of them are slightly different for a business associate compared to a covered entity but generally you step in the shoes of the covered entity if you’re handling the covered entity’s data and have the same restrictions apply to you.

Aggregate data wouldn’t be considered protected health information — so they could [for example] share a symptom heat map that doesn’t identify specific individuals or patients and their health data.

[But] standalone telehealth apps that are collecting data directly from the consumer are not covered by HIPAA.

That’s actually a big loophole in terms of consumer protection and privacy protections related to health data. You have the same issue for all the health and fitness apps — whether it’s your Fitbit or other health apps, or if you’re pregnant and you have an app that tracks your pregnancy or your period, things like that. Any of that data that’s collected is not protected.

The only protections you have are whatever disclosures are in the privacy policies, and the requirement that companies be transparent and act within those policies. If they don’t, they can face an enforcement action by the FTC, but that is not regulated by the Department of Health and Human Services under HIPAA.

So it’s a very different approach than under GDPR which is much more comprehensive.

That’s not to say we won’t see a tightening of restrictions on that in the future, but individuals are freely giving that information — and in theory should read the privacy policy that’s provided when they log into the app. But most users probably don’t read it, and then that data can be shared with other third parties.

They could share it with a regulator, or sell it to other third parties, so long as they have the proper disclosure that they may sell your personal information or share it with third parties. It depends on how their privacy policy is crafted, so long as it covers those specific actions. And for California residents it’s a more specific test — there are more disclosures that are required.

For example: the type of data you’re collecting, the purpose you’re collecting it for, how you intend to process that data, and who you intend to share it with and why. So it’s tighter for California residents, but for the rest of the US you just have to be consistent with your privacy policy, and you aren’t required to have the same level of disclosures.

More sophisticated, larger companies, though, definitely are already complying with GDPR — or endeavouring to comply with the California law — and so they have more sophisticated, detailed privacy notices than are maybe required by law in the US. But they’re kind of operating on a global platform and trying to have a global privacy policy.

*Disclosure: Verizon is TechCrunch’s parent company


TechCrunch

This week California’s attorney general, Xavier Becerra, published draft guidance for enforcing the state’s landmark privacy legislation.

The draft text of the regulations under the California Consumer Privacy Act (CCPA) will undergo a public consultation period, including a number of public hearings, with submissions open until December 6 this year.

The CCPA itself will take effect in the state on January 1, with a further six months’ grace period before enforcement of the law begins.

“The proposed regulations are intended to operationalize the CCPA and provide practical guidance to consumers and businesses subject to the law,” writes the State of California’s Department of Justice in a press release announcing the draft text. “The regulations would address some of the open issues raised by the CCPA and would be subject to enforcement by the Department of Justice with remedies provided under the law.”

Translation: Here’s the extra detail we think is needed to make the law work.

The CCPA was signed into law in June 2018 — enshrining protections for a subset of US citizens against their data being collected and sold without their knowledge.

The law requires businesses over a certain user and/or revenue threshold to disclose what personal data they collect, the purposes they intend to use the data for, and any third parties it will be shared with, as well as requiring that they provide a discrimination-free opt-out from having personal data sold or shared.

Businesses must also comply with consumer requests for their data to be deleted.
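
To make those obligations concrete, here is a minimal sketch of the kind of disclosure record and request handling a business might maintain. The structure is a hypothetical illustration, not language from the statute or the draft regulations.

    # Hypothetical CCPA disclosure record and request handling --
    # illustrative only, not statutory language.
    DISCLOSURES = {
        "categories_collected": ["identifiers", "geolocation", "browsing history"],
        "purposes": ["service delivery", "analytics", "advertising"],
        "third_parties": ["ad networks", "analytics vendors"],
    }

    user_db = {"user-42": {"email": "a@example.com", "do_not_sell": False}}

    def handle_request(user_id: str, kind: str):
        """Handle a consumer request: 'disclose', 'opt_out' or 'delete'."""
        if kind == "disclose":
            return DISCLOSURES
        if kind == "opt_out":          # opt out of sale, discrimination-free
            user_db[user_id]["do_not_sell"] = True
            return {"do_not_sell": True}
        if kind == "delete":           # deletion on verified request
            user_db.pop(user_id, None)
            return {"deleted": True}
        raise ValueError(f"unknown request type: {kind}")

    print(handle_request("user-42", "opt_out"))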


TechCrunch

Amazon’s lead data regulator in Europe, Luxembourg’s National Commission for Data Protection, has raised privacy concerns about its use of manual human reviews of Alexa AI voice assistant recordings.

A spokesman for the regulator confirmed in an email to TechCrunch it is discussing the matter with Amazon, adding: “At this stage, we cannot comment further about this case as we are bound by the obligation of professional secrecy.” The development was reported earlier by Reuters.

We’ve reached out to Amazon for comment.

Amazon’s Alexa voice AI, which is embedded in a wide array of hardware — from the company’s own brand Echo smart speaker line to an assortment of third party devices (such as this talkative refrigerator or this oddball table lamp) — listens pervasively for a trigger word which activates a recording function, enabling it to stream audio data to the cloud for processing and storage.

However, trigger-word activated voice AIs have been shown to be prone to accidental activation, and a device may be in use in a multi-person household. So there’s always a risk of these devices recording any audio in their vicinity, not just intentional voice queries…

In a nutshell, the AIs’ inability to distinguish between intentional interactions and stuff they overhear means they are natively prone to eavesdropping — hence the major privacy concerns.
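
The underlying mechanism is a simple gate: audio is only buffered and sent onward once a wake word is detected, and false detections open the gate unintentionally. The sketch below simulates that gating over a text stream; it is a toy illustration of the control flow, not how any vendor's detector actually works.

    WAKE_WORD = "alexa"
    RECORD_WINDOW = 3  # number of utterances captured after a trigger

    def gate(utterances):
        """Yield only the audio that would be streamed to the cloud:
        everything within RECORD_WINDOW of a (possibly false) trigger."""
        remaining = 0
        for utterance in utterances:
            if WAKE_WORD in utterance.lower():  # detector fires -- maybe falsely
                remaining = RECORD_WINDOW
            if remaining > 0:
                yield utterance
                remaining -= 1

    stream = [
        "private chat about dinner",
        "Alexa, what's the weather",   # intentional trigger
        "cloudy with rain",
        "unrelated background talk",
        "I read about Alexandria",     # false trigger: contains 'alexa'
        "sensitive conversation",
    ]
    print(list(gate(stream)))  # the false trigger captures private speech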

These concerns have been dialled up by recent revelations that tech giants — including Amazon, Apple and Google — use human workers to manually review a proportion of audio snippets captured by their voice AIs, typically for quality purposes such as trying to improve the performance of voice recognition across different accents or environments. But that means actual humans are listening to what might be highly sensitive personal data.

Earlier this week Amazon quietly added an option to the settings of the Alexa smartphone app to allow users to opt out of their audio snippets being added to a pool that may be manually reviewed by people doing quality control work for Amazon — having not previously informed Alexa users of its human review program.

The policy shift followed rising attention on the privacy of voice AI users — especially in Europe.

Last month thousands of recordings of users of Google’s AI assistant were leaked to the Belgian media, which was able to identify some of the people in the clips.

A data protection watchdog in Germany subsequently ordered Google to halt manual reviews of audio snippets.

Google responded by suspending human reviews across Europe, while its lead data watchdog in the region, the Irish DPC, told us it’s “examining” the issue.

Separately, in recent days, Apple has also suspended human reviews of Siri snippets — doing so globally, in its case — after a contractor raised privacy concerns in the UK press over what Apple contractors are privy to when reviewing Siri audio.

The Hamburg data protection agency which intervened to halt human reviews of Google Assistant snippets urged its fellow EU privacy watchdogs to prioritize checks on other providers of language assistance systems — and “implement appropriate measures” — naming both Apple and Amazon.

In the case of Amazon, scrutiny from European watchdogs looks to be fast dialling up.

At the time of writing it is the only one of the three tech giants not to have suspended human reviews of voice AI snippets, either regionally or globally.

In a statement provided to the press at the time it changed Alexa settings to offer users an opt-out from the chance of their audio being manually reviewed, Amazon said:

We take customer privacy seriously and continuously review our practices and procedures. For Alexa, we already offer customers the ability to opt-out of having their voice recordings used to help develop new Alexa features. The voice recordings from customers who use this opt-out are also excluded from our supervised learning workflows that involve manual review of an extremely small sample of Alexa requests. We’ll also be updating information we provide to customers to make our practices more clear.


TechCrunch

Google has responded to a report this week from Belgian public broadcaster VRT NWS, which revealed that contractors were given access to Google Assistant voice recordings, including those which contained sensitive information — like addresses, conversations between parents and children, business calls, and others containing all sorts of private information. As a result of the report, Google says it’s now preparing to investigate and take action against the contractor who leaked this information to the news outlet.

The company, by way of a blog post, explained that it partners with language experts around the world who review and transcribe a “small set of queries” to help Google better understand various languages.

Only around 0.2 percent of all audio snippets are reviewed by language experts, and these snippets are not associated with Google accounts during the review process, the company says. Other background conversations or noises are not supposed to be transcribed.
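
A rough sketch of what sampling a small fraction of snippets and dissociating them from accounts could look like in practice: the 0.2 percent rate comes from Google's post; everything else (field names, the random review IDs) is an assumption for illustration.

    import random
    import uuid

    REVIEW_RATE = 0.002  # ~0.2 percent, per Google's stated figure

    snippets = [
        {"account_id": f"acct-{i}", "audio_ref": f"clip-{i}.ogg"}
        for i in range(10_000)
    ]

    def sample_for_review(snippets, rate=REVIEW_RATE):
        """Randomly select a small fraction of snippets and replace the
        account identifier with an unlinked random ID before review."""
        selected = []
        for snip in snippets:
            if random.random() < rate:
                selected.append({
                    "review_id": uuid.uuid4().hex,  # not traceable to the account
                    "audio_ref": snip["audio_ref"],
                })
        return selected

    batch = sample_for_review(snippets)
    print(len(batch), "snippets queued for human review")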

The leaker had listened to over 1,000 recordings, and found 153 were accidental in nature — meaning, it was clear the user hadn’t intended to ask for Google’s help. In addition, the report found that determining a user’s identity was often possible because the recordings themselves would reveal personal details. Some of the recordings contained highly sensitive information, like “bedroom conversations,” medical inquiries, or people in what appeared to be domestic violence situations, to name a few.

Google defended the transcription process as being a necessary part of providing voice assistant technologies to its international users.

But instead of focusing on its lack of transparency with consumers over who’s really listening to their voice data, Google says it’s going after the leaker themselves.

“[Transcription] is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant,” writes David Monsees, Product Manager for Search at Google, in the blog post. “We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again,” he said.

As voice assistant devices become a more common part of consumers’ everyday lives, there’s increased scrutiny on how tech companies are handling the voice recordings, who’s listening on the other end, what records are being stored, and for how long, among other things.

This is not an issue that only Google is facing.

Earlier this month, Amazon responded to a U.S. senator’s inquiry over how it was handling consumers’ voice records. The inquiry had followed a CNET investigation which discovered Alexa recordings were kept unless manually deleted by users, and that some voice transcripts were never deleted. In addition, a Bloomberg report recently found that Amazon workers and contractors during the review process had access to the recordings, as well as an account number, the user’s first name, and the device’s serial number.

Further, a coalition of consumer privacy groups recently lodged a complaint with the U.S. Federal Trade Commission which claims Amazon Alexa is violating the U.S. Children’s Online Privacy Protection Act (COPPA) by failing to obtain proper consent over the company’s use of the kids’ data.

Neither Amazon nor Google has gone out of its way to alert consumers as to how the voice recordings are being used.

As Wired notes, the Google Home privacy policy doesn’t disclose that Google is using contract labor to review or transcribe audio recordings. The policy also says that data only leaves the device when the wake word is detected. But these leaked recordings indicate that’s clearly not true — the devices accidentally record voice data at times.

The issues around the lack of disclosure and transparency could be yet another signal to U.S. regulators that tech companies aren’t able to make responsible decisions on their own when it comes to consumer data privacy.

The timing of the news isn’t great for Google. According to reports, the U.S. Department of Justice is preparing for a possible antitrust investigation of Google’s business practices, and is watching the company’s behavior closely. Given this increased scrutiny, one would think Google would be going over its privacy policies with a fine-toothed comb — especially in areas that are newly coming under fire, like policies around consumers’ voice data — to ensure that consumers understand how their data is being stored, shared, and used.

Google also notes today that people do have a way to opt-out of having their audio data stored. Users can either turn off audio data storage entirely, or choose to have the data auto-delete every 3 months or every 18 months.
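
A minimal sketch of what such a rolling auto-delete might look like: the 3- and 18-month windows mirror the options described above, but the record layout and implementation are purely illustrative.

    from datetime import datetime, timedelta

    RETENTION_CHOICES = {"3_months": 90, "18_months": 540, "off": None}

    recordings = [
        {"audio_ref": "clip-a.ogg", "created": datetime(2019, 1, 5)},
        {"audio_ref": "clip-b.ogg", "created": datetime(2019, 7, 1)},
    ]

    def apply_retention(recordings, choice: str, now=None):
        """Drop recordings older than the user's chosen retention window."""
        days = RETENTION_CHOICES[choice]
        if days is None:       # storage turned off entirely: keep nothing
            return []
        now = now or datetime.utcnow()
        cutoff = now - timedelta(days=days)
        return [r for r in recordings if r["created"] >= cutoff]

    kept = apply_retention(recordings, "3_months", now=datetime(2019, 7, 15))
    print([r["audio_ref"] for r in kept])  # ['clip-b.ogg']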

The company also says it will work to better explain how this voice data is used going forward.

“We’re always working to improve how we explain our settings and privacy practices to people, and will be reviewing opportunities to further clarify how data is used to improve speech technology,” said Monsees.


TechCrunch

Court strikes down data retention obligation

Telecom companies and internet providers do not have to retain metadata on internet and telephone use. The data retention obligation, a law covering the collection and storage of data about internet and telephone use, …

Read more …
