
Israel has passed an emergency law to use mobile phone data to track people infected with COVID-19, including identifying and quarantining others they have come into contact with and may have infected.

The BBC reports that the emergency law was passed during an overnight sitting of the cabinet, bypassing parliamentary approval.

Israel also said it will step up testing substantially as part of its response to the pandemic.

In a statement posted to Facebook, prime minister Benjamin Netanyahu wrote: “We will dramatically increase the ability to locate and quarantine those who have been infected. Today, we started using digital technology to locate people who have been in contact with those stricken by the Corona. We will inform these people that they must go into quarantine for 14 days. These are expected to be large – even very large – numbers and we will announce this in the coming days. Going into quarantine will not be a recommendation but a requirement and we will enforce it without compromise. This is a critical step in slowing the spread of the epidemic.”

“I have instructed the Health Ministry to significantly increase the number of tests to 3,000 a day at least,” he added. “It is very likely that we will reach a higher figure, even up to 5,000 a day. To the best of my knowledge, relative to population, this is the highest number of tests in the world, even higher than South Korea. In South Korea, there are around 15,000 tests a day for a population five or six times larger than ours.”

On Monday an Israeli parliamentary subcommittee on intelligence and secret services discussed a government request to authorize Israel’s Shin Bet security service to assist in a national campaign to stop the spread of the novel coronavirus — but declined to vote on the request, arguing more time was needed to assess it.

Civil liberties campaigners have warned the move to monitor citizens’ movements sets a dangerous precedent.

According to WHO data, Israel had 200 confirmed cases of the coronavirus as of yesterday morning. Today the country’s health ministry reported cases had risen to 427.

Details of exactly how the tracking will work have not been released — but, per the BBC, the location data of people’s mobile devices will be collected from telcos by Israel’s domestic security agency and shared with health officials.

It also reports the health ministry will be involved in monitoring the location of infected people to ensure they are complying with quarantine rules — saying it can also send text messages to people who have come into contact with someone with COVID-19 to instruct them to self-isolate.

In recent days Netanyahu has expressed frustration that Israeli citizens have not been paying enough mind to calls to combat the spread of the virus via voluntary social distancing.

“This is not child’s play. This is not a vacation. This is a matter of life and death,” he wrote on Facebook. “There are many among you who still do not understand the magnitude of the danger. I see the crowds on the beaches, people having fun. They think this is a vacation.”

“According to the instructions that we issued yesterday, I ask you not to leave your homes and to stay inside as much as possible. At the moment, I say this as a recommendation. It is still not a directive but that can change,” he added.

Since the Israeli government’s intent behind the emergency mobile tracking powers is to combat the spread of COVID-19 by enabling state agencies to identify people whose movements need to be restricted to avoid them passing the virus to others, it seems likely law enforcement agencies will also be involved in enacting the measures.

That will mean citizens’ smartphones being not just a tool of mass surveillance but also a conduit for targeted containment — raising questions about the impact such intrusive measures might have on people’s willingness to carry mobile devices everywhere they go, even during a pandemic.

Yesterday the Wall Street Journal reported that the US government is considering similar location-tracking technology measures in a bid to check the spread of COVID-19 — with discussions ongoing between tech giants, startups and White House officials on measures that could be taken to monitor the disease.

Last week the UK government also held a meeting with tech companies to ask for their help in combating the coronavirus. Per Wired some tech firms offered to share data with the state to help with contact tracing — although, at the time, the government was not pursuing a strategy of mass restrictions on public movement. It has since shifted position.


TechCrunch

Amazon’s lead data regulator in Europe, Luxembourg’s National Commission for Data Protection, has raised privacy concerns about its use of manual human reviews of Alexa AI voice assistant recordings.

A spokesman for the regulator confirmed in an email to TechCrunch it is discussing the matter with Amazon, adding: “At this stage, we cannot comment further about this case as we are bound by the obligation of professional secrecy.” The development was reported earlier by Reuters.

We’ve reached out to Amazon for comment.

Amazon’s Alexa voice AI, which is embedded in a wide array of hardware — from the company’s own brand Echo smart speaker line to an assortment of third party devices (such as this talkative refrigerator or this oddball table lamp) — listens pervasively for a trigger word which activates a recording function, enabling it to stream audio data to the cloud for processing and storage.

However, trigger-word activated voice AIs have been shown to be prone to accidental activation, and a device may be in use in a multi-person household. So there’s always a risk of these devices recording any audio in their vicinity, not just intentional voice queries…
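
To make that risk concrete, here is a minimal, hypothetical sketch of the wake-word gating loop such devices broadly implement — the detector, buffer sizes and upload step below are invented stand-ins for illustration, not Amazon’s actual code:

```python
import collections
import random  # stands in for a real microphone and ML detector

PRE_ROLL_FRAMES = 2   # audio kept from just before the trigger
RECORD_FRAMES = 16    # how long to stream after activation

def capture_frame():
    """Placeholder for reading ~0.5s of microphone audio."""
    return b"audio-frame"

def wake_word_score(frame):
    """Placeholder for the small always-on detector real assistants
    run on-device; returns a confidence in [0, 1]."""
    return random.random()

def stream_to_cloud(frames):
    """Placeholder for uploading audio for speech recognition."""
    print(f"uploading {len(frames)} frames")

def run_assistant(threshold=0.95):
    # A rolling pre-roll buffer means speech from just *before* the
    # trigger word is uploaded too.
    buffer = collections.deque(maxlen=PRE_ROLL_FRAMES)
    for _ in range(100):  # stand-in for an infinite listening loop
        frame = capture_frame()
        buffer.append(frame)
        # A false positive here -- ordinary speech scoring above the
        # threshold -- silently turns the device into a recorder.
        if wake_word_score(frame) > threshold:
            recorded = list(buffer)
            recorded += [capture_frame() for _ in range(RECORD_FRAMES)]
            stream_to_cloud(recorded)

if __name__ == "__main__":
    run_assistant()
```

Because detection is probabilistic, any threshold trades missed commands against false activations that capture bystander audio — which is exactly the eavesdropping risk described above.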

In a nutshell, the AIs’ inability to distinguish between intentional interactions and stuff they overhear means they are natively prone to eavesdropping — hence the major privacy concerns.

These concerns have been dialled up by recent revelations that tech giants — including Amazon, Apple and Google — use human workers to manually review a proportion of audio snippets captured by their voice AIs, typically for quality purposes, such as trying to improve the performance of voice recognition across different accents or environments. But that means actual humans are listening to what might be highly sensitive personal data.

Earlier this week Amazon quietly added an option to the settings of the Alexa smartphone app to allow users to opt out of their audio snippets being added to a pool that may be manually reviewed by people doing quality control work for Amazon — having not previously informed Alexa users of its human review program.

The policy shift followed rising attention on the privacy of voice AI users — especially in Europe.

Last month thousands of recordings of users of Google’s AI assistant were leaked to the Belgian media, which was able to identify some of the people in the clips.

A data protection watchdog in Germany subsequently ordered Google to halt manual reviews of audio snippets.

Google responded by suspending human reviews across Europe, while its lead data watchdog in Europe, the Irish DPC, told us it’s “examining” the issue.

Separately, in recent days, Apple has also suspended human reviews of Siri snippets — doing so globally, in its case — after a contractor raised privacy concerns in the UK press over what Apple contractors are privy to when reviewing Siri audio.

The Hamburg data protection agency which intervened to halt human reviews of Google Assistant snippets urged its fellow EU privacy watchdogs to prioritize checks on other providers of language assistance systems — and “implement appropriate measures” — naming both Apple and Amazon.

In the case of Amazon, scrutiny from European watchdogs looks to be dialling up fast.

At the time of writing it is the only one of the three tech giants not to have suspended human reviews of voice AI snippets, either regionally or globally.

In a statement provided to the press at the time it changed Alexa settings to offer users an opt-out from the chance of their audio being manually reviewed, Amazon said:

We take customer privacy seriously and continuously review our practices and procedures. For Alexa, we already offer customers the ability to opt-out of having their voice recordings used to help develop new Alexa features. The voice recordings from customers who use this opt-out are also excluded from our supervised learning workflows that involve manual review of an extremely small sample of Alexa requests. We’ll also be updating information we provide to customers to make our practices more clear.


TechCrunch

Five billion dollars. That’s the apparent size of Facebook’s latest fine for violating data privacy. 

While many believe the sum is simply a slap on the wrist for a behemoth like Facebook, it’s still the largest amount the Federal Trade Commission has ever levied on a technology company. 

Facebook is clearly still reeling from Cambridge Analytica, after which trust in the company dropped 51%, searches for “delete Facebook” reached 5-year highs, and Facebook’s stock dropped 20%.

While incumbents like Facebook are struggling with their data, startups in highly-regulated, “Third Wave” industries can take advantage by using a data strategy one would least expect: ethics. Beyond complying with regulations, startups that embrace ethics look out for their customers’ best interests, cultivate long-term trust — and avoid billion dollar fines. 

To weave ethics into the very fabric of their business strategies and tech systems, startups should adopt “agile” data governance systems. Often combining law and technology, these systems will become a key weapon of data-centric Third Wave startups to beat incumbents in their field. 

Established, highly-regulated incumbents often use slow and unsystematic data compliance workflows, operated manually by armies of lawyers and technology personnel. Agile data governance systems, in contrast, simplify both these workflows and the use of cutting-edge privacy tools, allowing resource-poor startups both to protect their customers better and to improve their services.

In fact, 47% of customers are willing to switch to startups that protect their sensitive data better. Yet 80% of customers highly value more convenience and better service. 

By using agile data governance, startups can balance protection and improvement. Ultimately, they gain a strategic advantage by obtaining more data, cultivating more loyalty, and being more resilient to inevitable data mishaps. 

Agile data governance helps startups obtain more data — and create more value 

With agile data governance, startups can address their critical weakness: data scarcity. Customers share more data with startups that make data collection a feature, not a burdensome part of the user experience. Agile data governance systems simplify compliance with this data practice. 

Take Ally Bank, which the Ponemon Institute rated as one of the most privacy-protecting banks. In 2017, Ally’s deposit base grew 16%, while incumbents’ declined 4%.

One key principle to its ethical data strategy: minimizing data collection and use. Ally’s customers obtain services through a personalized website, rarely filling out long surveys. When data is requested, it’s done in small doses on the site — and always results in immediate value, such as viewing transactions. 

This is on purpose. Ally’s Chief Marketing Officer publicly calls the industry mantra of “more data” dangerous to brands and consumers alike.

A critical way to minimize data use is to adopt advanced privacy tools like differential privacy. A favorite of organizations like Apple, differential privacy limits your data analysts’ access to summaries of data, such as averages. By injecting noise into those summaries, differential privacy creates provable guarantees of privacy and prevents scenarios where malicious parties can reverse-engineer sensitive data. And because differential privacy uses summaries, instead of completely masking the data, companies can still draw meaning from it and improve their services.
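
For illustration, here is a minimal sketch of the Laplace mechanism, the textbook way differential privacy adds calibrated noise to a summary statistic like an average; the balances and parameters below are made up for the example:

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon):
    """Differentially private mean via the Laplace mechanism.

    Clipping each value to [lower, upper] bounds how much any one
    person can shift the mean -- (upper - lower) / n -- which is the
    sensitivity used to calibrate the noise.
    """
    values = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

# Hypothetical data: analysts see only a noisy average account
# balance, never the individual balances.
balances = np.array([1200.0, 560.0, 8900.0, 150.0, 3075.0])
print(dp_mean(balances, lower=0.0, upper=10_000.0, epsilon=0.5))
```

Smaller values of epsilon mean more noise and stronger privacy; the point is that the analyst still gets a usable average without ever touching raw rows.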

With tools like differential privacy, organizations move beyond governance patterns where data analysts either gain unrestricted access to sensitive data (think: Uber’s controversial “god view”) or face multiple barriers to data access. Instead, startups can use differential privacy to share and pool data safely, helping them overcome data scarcity. The most agile data governance systems let startups use differential privacy without writing code or hiring the large engineering teams that only incumbents can afford.

Ultimately, better data means better predictions — and happier customers.

Agile data governance cultivates customer loyalty

According to Deloitte, 80% of consumers are more loyal to companies they believe protect their data. Yet far fewer leaders at established, incumbent companies — the respondents of the same survey — believed this to be true. Customers care more about their data than the leaders at incumbent companies think. 

This knowledge gap is an opportunity for startups. 

Furthermore, big enterprise companies — themselves customers of many startups — say data compliance risks prevent them from working with startups. And rightly so. Over 80% of data incidents are actually caused by errors from insiders, like third-party vendors who mishandle sensitive data by sharing it with inappropriate parties. Yet over 68% of companies do not have good systems to prevent these types of errors. In fact, Facebook’s Cambridge Analytica firestorm — and resulting $5 billion fine — was sparked by a third party inappropriately sharing personal data with a political consulting firm without user consent.

As a result, many companies — both startups and incumbents — are holding a ticking time bomb of customer attrition. 

Agile data governance defuses these risks by simplifying the ethical data practices of understanding, controlling, and monitoring data at all times. With such practices, startups can prevent and correct the mishandling of sensitive data quickly.

Cognoa is a good example of a Third Wave healthcare startup adopting these three practices at a rapid pace. First, it understands where all of its sensitive health data lies by connecting all of its databases. Second, Cognoa can control all connected data sources at once from one point by using a single access-and-control layer, as opposed to relying on data silos. When this happens, employees and third parties can only access and share the sensitive data sources they’re supposed to. Finally, data queries are always monitored, allowing Cognoa to produce audit reports frequently and catch problems before they escalate out of control. 
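
A minimal sketch of what a single access-and-control layer with built-in monitoring can look like — the roles, data sources and policy below are illustrative assumptions, not Cognoa’s actual stack:

```python
import datetime
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

class DummySource:
    """Stand-in for a real database connection."""
    def execute(self, sql):
        return f"rows for: {sql}"

class GovernedWarehouse:
    """One chokepoint in front of every data source: each query is
    checked against an access policy and written to an audit trail."""

    def __init__(self, sources, policy):
        self.sources = sources  # name -> queryable connection
        self.policy = policy    # role -> set of allowed source names

    def query(self, user, role, source, sql):
        permitted = source in self.policy.get(role, set())
        # Monitoring: every attempt is logged, allowed or not, so
        # audit reports can catch problems before they escalate.
        audit_log.info("%s user=%s role=%s source=%s ok=%s",
                       datetime.datetime.utcnow().isoformat(),
                       user, role, source, permitted)
        if not permitted:
            raise PermissionError(f"{role} may not read {source}")
        return self.sources[source].execute(sql)

warehouse = GovernedWarehouse(
    sources={"claims": DummySource()},
    policy={"analyst": {"claims"}},
)
print(warehouse.query("alice", "analyst", "claims", "SELECT count(*)"))
```

Because every source sits behind the same layer, revoking a vendor’s access or producing an audit report becomes one policy change or one log query, not a hunt across silos.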

With tools that simplify these three practices, even low-resourced startups can make sure sensitive data is tightly controlled at all times to prevent data incidents. Because key workflows are simplified, these same startups can maintain the speed of their data analytics by sharing data safely with the right parties. With better and safer data sharing across functions, startups can develop the insight necessary to cultivate a loyal fan base for the long-term.

Agile data governance can help startups survive inevitable data incidents

In 2018, Panera mistakenly shared 37 million customer records on its website and took 8 months to respond. Panera’s data incident is a taste of what’s to come: Gartner predicts that 50% of business ethics violations will stem from data incidents like these. In the era of “Big Data,” billion dollar incumbents without agile data governance will likely continue to violate data ethics. 

Given the inevitability of such incidents, startups that adopt agile data governance will likely be the most resilient companies of the future. 

Case in point: Harvard Business Review reports that the stock prices of companies without strong data governance practices drop 150% more than those of companies that do adopt strong practices. Despite this difference, only 10% of Fortune 500 companies actually employ the data transparency practices identified in the report, which include clearly disclosing how data is used and giving users control over their privacy settings.

Sure, data incidents are becoming more common. But that doesn’t mean startups don’t suffer from them. In fact, up to 60% of startups fold after a cyber attack. 

Startups can learn from WebMD, which Deloitte named as one standout in applying data transparency. With a readable privacy policy, customers know how data will be used, helping them feel comfortable about sharing it. More informed about the company’s practices, customers are less often surprised by incidents. Surprises, BCG found, can reduce consumer spending by one-third. Through a self-service platform on WebMD’s site, customers can control their privacy settings and how their data is shared, further cultivating trust.

Self-service tools like WebMD’s are part of agile data governance. These tools allow startups to simplify manual processes, like responding to customer requests to control their data. Instead, startups can focus on safely delivering value to their customers. 

Get ahead of the curve

For so long, the public seemed to care little about their data.

That’s changing. Senior executives at major companies have been publicly interrogated for not taking data governance seriously. Some, like Facebook and Apple, are even claiming to lead with privacy. Ultimately, data privacy risks significantly rise in Third Wave industries where errors can alter access to key basic needs, such as healthcare, housing, and transportation.

While many incumbents have well-resourced legal and compliance departments, agile data governance goes beyond the “risk mitigation” missions of those functions. Agile governance means that time-consuming and error-prone workflows are streamlined so that companies serve their customers more quickly and safely.

Case in point: even after being advised by an army of lawyers, Zuckerberg’s 30,000-word Senate testimony about Cambridge Analytica included “ethics” only once, and it excluded “data governance” completely.

And even if companies do have legal departments, most don’t make their commitment to governance clear. Less than 15% of consumers say they know which companies protect their data the best. Startups can take advantage of this knowledge gap by adopting agile data governance and educating their customers about how to protect themselves in the risky world of the Third Wave.

Some incumbents may always be safe. But those in highly-regulated Third Wave industries, such as automotive, healthcare, and telecom, should be worried; customers trust these incumbents the least. Startups that adopt agile data governance, however, will be trusted the most, and the time to act is now.


TechCrunch

Google has responded to a report this week from Belgian public broadcaster VRT NWS, which revealed that contractors were given access to Google Assistant voice recordings, including those which contained sensitive information — like addresses, conversations between parents and children, business calls, and others containing all sorts of private information. As a result of the report, Google says it’s now preparing to investigate and take action against the contractor who leaked this information to the news outlet.

The company, by way of a blog post, explained that it partners with language experts around the world who review and transcribe a “small set of queries” to help Google better understand various languages.

Only around 0.2 percent of all audio snippets are reviewed by language experts, and these snippets are not associated with Google accounts during the review process, the company says. Other background conversations or noises are not supposed to be transcribed.

The leaker had listened to over 1,000 recordings, and found 153 were accidental in nature — meaning, it was clear the user hadn’t intended to ask for Google’s help. In addition, the report found that determining a user’s identity was often possible because the recordings themselves would reveal personal details. Some of the recordings contained highly sensitive information, like “bedroom conversations,” medical inquiries, or people in what appeared to be domestic violence situations, to name a few.

Google defended the transcription process as being a necessary part of providing voice assistant technologies to its international users.

But instead of focusing on its lack of transparency with consumers over who’s really listening to their voice data, Google says it’s going after the leaker themselves.

“[Transcription] is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant,” writes David Monsees, Product Manager for Search at Google, in the blog post. “We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again,” he said.

As voice assistant devices become a more common part of consumers’ everyday lives, there’s increased scrutiny on how tech companies are handling the voice recordings, who’s listening on the other end, what records are being stored, and for how long, among other things.

This is not an issue that only Google is facing.

Earlier this month, Amazon responded to a U.S. senator’s inquiry over how it was handling consumers’ voice records. The inquiry had followed a CNET investigation which discovered Alexa recordings were kept unless manually deleted by users, and that some voice transcripts were never deleted. In addition, a Bloomberg report recently found that Amazon workers and contractors during the review process had access to the recordings, as well as an account number, the user’s first name, and the device’s serial number.

Further, a coalition of consumer privacy groups recently lodged a complaint with the U.S. Federal Trade Commission which claims Amazon Alexa is violating the U.S. Children’s Online Privacy Protection Act (COPPA) by failing to obtain proper consent for the company’s use of kids’ data.

Neither Amazon nor Google has gone out of its way to alert consumers as to how the voice recordings are being used.

As Wired notes, the Google Home privacy policy doesn’t disclose that Google is using contract labor to review or transcribe audio recordings. The policy also says that data only leaves the device when the wake word is detected. But these leaked recordings indicate that’s clearly not true — the devices accidentally record voice data at times.

The issues around the lack of disclosure and transparency could be yet another signal to U.S. regulators that tech companies aren’t able to make responsible decisions on their own when it comes to consumer data privacy.

The timing of the news isn’t great for Google. According to reports, the U.S. Department of Justice is preparing for a possible antitrust investigation of Google’s business practices, and is watching the company’s behavior closely. Given this increased scrutiny, one would think Google would be going over its privacy policies with a fine-toothed comb — especially in areas that are newly coming under fire, like policies around consumers’ voice data — to ensure that consumers understand how their data is being stored, shared, and used.

Google also notes today that people do have a way to opt-out of having their audio data stored. Users can either turn off audio data storage entirely, or choose to have the data auto-delete every 3 months or every 18 months.

The company also says it will work to better explain how this voice data is used going forward.

“We’re always working to improve how we explain our settings and privacy practices to people, and will be reviewing opportunities to further clarify how data is used to improve speech technology,” said Monsees.


TechCrunch

Dataform, a U.K. company started by ex-Googlers that wants to make it easier for businesses to manage their data warehouses, has picked up $2 million in funding. Leading the round is LocalGlobe, with participation from a number of unnamed angel investors. The startup is also an alumnus of Silicon Valley accelerator Y Combinator, having graduated in late 2018.

Founded by former Google employees Lewis Hemens and Guillaume-Henri Huon, Dataform has set out to help data-rich companies draw insights from the data stored in their data warehouses. Mining data for insights and business intelligence typically requires a team of data engineers and analysts. Dataform wants to simplify this task and in turn make it faster and cheaper for organisations to take full advantage of their data assets.

“Businesses are generating more and more data that they are now centralising into cloud data warehouses like Google BigQuery, AWS Redshift or Snowflake. [However,] to exploit this data, such as conducting analytics or using BI tools, they need to convert the vast amount of raw data into a list of clean, reliable and up-to-date datasets,” explains Dataform co-founder Guillaume-Henri Huon.

“Data teams don’t have the right tools to manage data in the warehouse efficiently. As a result, they have to spend most of their time building custom infrastructure and making sure their data pipelines work”.

Huon says Dataform solves this by offering a complete toolkit to manage data in data warehouses. Data teams can build new datasets and set them to update automatically every day, or more frequently. The entire process is managed via a single interface and setting up a new dataset is said to take as little as 5 minutes. “On top of this, we have an open source framework that helps manage data using engineering best practices, including reusable functions, testing and dependency management.”
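
Dataform’s own framework is SQL-based, so the following is a loose conceptual sketch only — the core idea of datasets declared with dependencies and tests, rebuilt automatically, rendered in Python with invented names and data:

```python
datasets = {}  # name -> (dependencies, build function)

def dataset(name, deps=()):
    """Register a dataset definition with its dependencies."""
    def register(fn):
        datasets[name] = (deps, fn)
        return fn
    return register

@dataset("raw_orders")
def raw_orders():
    return [{"id": 1, "amount": 25.0}, {"id": 2, "amount": -1.0}]

@dataset("clean_orders", deps=("raw_orders",))
def clean_orders(raw_orders):
    rows = [r for r in raw_orders if r["amount"] >= 0]
    # A data-quality test, akin to an assertion on the output.
    assert all(r["amount"] >= 0 for r in rows), "negative amounts"
    return rows

def build(name, cache=None):
    """Resolve dependencies recursively, building each dataset once."""
    cache = {} if cache is None else cache
    if name not in cache:
        deps, fn = datasets[name]
        cache[name] = fn(*(build(d, cache) for d in deps))
    return cache[name]

print(build("clean_orders"))  # a scheduler would rerun this daily
```

The dependency graph is the point: downstream datasets always rebuild from fresh upstream data, which is what keeps the warehouse’s derived tables clean, reliable and up to date.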

Meanwhile, Dataform says the seed funding will help the company continue to grow both its sales and engineering teams. It will also be used to further develop its product. The startup generates revenue based on a classic SaaS model: typically charging per number of users.


TechCrunch

The UK’s Information Commissioner is starting off the week with a GDPR bang: this morning, it announced that it has fined British Airways and its parent International Airlines Group (IAG) £183.39 million ($230 million) in connection with a data breach that took place last year and affected a whopping 500,000 customers browsing and booking tickets online. In an investigation, the ICO said that it found “that a variety of information was compromised by poor security arrangements at [BA], including log in, payment card, and travel booking details as well as name and address information.”

The fine — 1.5% of BA’s total revenues for the year that ended December 31, 2018 — is the highest-ever that the ICO has levelled at a company over a data breach (previous “record holder” Facebook was fined a mere £500,000 last year by comparison).

And it is significant for another reason: it shows that data breaches can be not just a public relations liability, destroying consumer trust in the organization, but a financial liability, too. IAG is currently seeing volatile trading in London, with shares down 1.5% at the moment.

In a statement to the market, the two leaders of IAG defended the company and said that its own investigations found no evidence of fraudulent activity on accounts linked to the theft (although, as you may know, data from breaches is not always used in the place where it was stolen).

“We are surprised and disappointed in this initial finding from the ICO,” said Alex Cruz, British Airways chairman and chief executive. “British Airways responded quickly to a criminal act to steal customers’ data. We have found no evidence of fraud/fraudulent activity on accounts linked to the theft. We apologise to our customers for any inconvenience this event caused.”

Willie Walsh, International Airlines Group chief executive, added in his own comment that “British Airways will be making representations to the ICO in relation to the proposed fine. We intend to take all appropriate steps to defend the airline’s position vigorously, including making any necessary appeals.”

The degree to which companies are going to be held accountable for these kinds of breaches is going to be a lot more transparent going forward: the ICO’s announcement is part of a new directive to disclose the details of its fines and investigations to the public.

“People’s personal data is just that – personal,” said Information Commissioner Elizabeth Denham in a statement. “When an organisation fails to protect it from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.”

The ICO said in a statement this morning that the fine is related to infringements of the General Data Protection Regulation (GDPR), which went into effect last year prior to the breach. More specifically, the incident involved malware on BA.com that diverted user traffic to a fraudulent site, where customer details were subsequently harvested by the malicious hackers.

BA notified the ICO of the incident in September, but the breach was believed to have first started in June. Since then, the ICO said that British Airways “has cooperated with the ICO investigation and has made improvements to its security arrangements since these events came to light.” But it should be pointed out that even before this breach, there were other examples of the company treating data protection lightly. (Now, it seems BA has learned its lesson the hard way.)

From the statement issued by IAG today, it sounds like BA will choose to try to appeal the fine and overall ruling.

While there are a lot of question marks over how the UK will interface with the rest of Europe over regulatory cases such as this one after it leaves the EU, for now it’s working in concert with the bigger group.

The ICO says it has been “lead supervisory authority on behalf of other EU Member State data protection authorities” in this case, liaising with other regulators in the process. This also means that the authorities of other member states whose residents were affected by the breach will have a chance to provide input on the ruling before it is completely final.


TechCrunch

Microsoft today announced that its first data center regions in the Middle East are now online. The data centers are located in Abu Dhabi and Dubai and will offer local access to the usual suite of services, including Azure’s cloud computing services and Office 365. Support for Dynamics 365 and Microsoft’s Power Platform will arrive later this year.

“In our experience, local datacenter infrastructure supports and stimulates economic development for both customers and partners alike, enabling companies, governments and regulated industries to realize the benefits of the cloud for innovation and new projects, as well as bolstering the technology ecosystem that supports these projects,” Microsoft’s corporate VP of Azure Global writes in today’s announcement. “We anticipate the cloud services delivered from UAE to have a positive impact on job creation, entrepreneurship and economic growth across the region.”

The company first announced these new regions last March. Back in 2017, Microsoft’s cloud rival, Amazon’s AWS, said it would offer a region in Bahrain in early 2019. This region is not online yet, but is still listed as “coming soon” on the service’s infrastructure map. Google currently has no data center presence in the Middle East and hasn’t announced any plans to change this.


TechCrunch
