
Saudi Arabian officials allegedly paid at least two employees of Twitter to access personal information on users the government there was interested in, according to recently unsealed court documents. Those users were warned of the attempt in 2015, but the full picture is only now emerging.

According to an AP report citing the federal complaint, Ahmad Abouammo and Ali Alzabarah were both approached by the Saudi government, which promised “a designer watch and tens of thousands of dollars” if they could retrieve personal information on certain users.

Abouammo worked for Twitter in media partnerships in the Middle East, and Alzabarah was an engineer; both are charged with acting as unregistered Saudi agents — spies.

Alzabarah reportedly met with a member of the Saudi royal family in Washington, D.C. in 2015, and within a week he had begun accessing data on thousands of users, including at least 33 that Saudi Arabia had officially contacted Twitter to request information on. These users included political activists and journalists critical of the royal family and Saudi government.

This did not go unnoticed and Alzabarah, when questioned by his supervisors, reportedly said he had only done it out of curiosity. But when he was forced to leave work, he flew to Saudi Arabia with his family literally the next day, and now works for the government there.

The attempt resulted in Twitter alerting thousands of users that they were the potential targets of a state-sponsored attack, but that there was no evidence their personal data had actually been exfiltrated. Last year, the New York Times reported that this event had been prompted by a Twitter employee groomed by Saudi officials for the purpose. And now we learn there was another employee engaged in similar activity.

The cases in question are still open and as such more information will likely come to light soon. I asked Twitter for comment on the events and what specifically it had done to prevent similar attacks in the future. It did not respond directly to these queries, instead providing the following statement:

We would like to thank the FBI and the U.S. Department of Justice for their support with this investigation. We recognize the lengths bad actors will go to try and undermine our service. Our company limits access to sensitive account information to a limited group of trained and vetted employees. We understand the incredible risks faced by many who use Twitter to share their perspectives with the world and to hold those in power accountable. We have tools in place to protect their privacy and their ability to do their vital work. We’re committed to protecting those who use our service to advocate for equality, individual freedoms, and human rights.


TechCrunch

Facebook has reached a settlement with the UK’s data protection watchdog, the ICO, agreeing to pay in full a £500,000 (~$643k) fine following the latter’s investigation into the Cambridge Analytica data misuse scandal.

As part of the arrangement Facebook has agreed to drop its legal appeal against the penalty. But under the terms of the settlement it has not admitted any liability in relation to paying the fine, which is the maximum possible monetary penalty under the applicable UK data protection law. (The Cambridge Analytica scandal predates Europe’s GDPR framework coming into force.)

Facebook’s appeal against the ICO’s penalty was focused on a claim that there was no evidence that U.K. Facebook users’ data had been misused by Cambridge Analytica.

But there’s a further twist here in that the company had secured a win from a first-tier legal tribunal — which held in June that “procedural fairness and allegations of bias” on the part of the ICO should be considered as part of its appeal.

The decision required the ICO to disclose materials relating to its decision-making process regarding the Facebook fine. The ICO, evidently less than keen for its emails to be trawled through, appealed last month. It’s now withdrawing that appeal as part of the settlement, Facebook having dropped its own legal action.

In a statement laying out the bare bones of the settlement reached, the ICO writes: “The Commissioner considers that this agreement best serves the interests of all UK data subjects who are Facebook users. Both Facebook and the ICO are committed to continuing to work to ensure compliance with applicable data protection laws.”

An ICO spokeswoman did not respond to additional questions, telling us it has nothing further to add beyond its public statement.

As part of the settlement, the ICO writes that Facebook is being allowed to retain some (unspecified) “documents” that the ICO had disclosed during the appeal process — to use for “other purposes”, including for furthering its own investigation into issues around Cambridge Analytica.

“Parts of this investigation had previously been put on hold at the ICO’s direction and can now resume,” the ICO adds.

Under the terms of the settlement the ICO and Facebook each pay their own legal costs, while the £500k fine is not kept by the ICO but paid into HM Treasury’s consolidated fund.

Commenting in a statement, deputy commissioner James Dipple-Johnstone said:

The ICO welcomes the agreement reached with Facebook for the withdrawal of their appeal against our Monetary Penalty Notice and agreement to pay the fine. The ICO’s main concern was that UK citizen data was exposed to a serious risk of harm. Protection of personal information and personal privacy is of fundamental importance, not only for the rights of individuals, but also as we now know, for the preservation of a strong democracy. We are pleased to hear that Facebook has taken, and will continue to take, significant steps to comply with the fundamental principles of data protection. With this strong commitment to protecting people’s personal information and privacy, we expect that Facebook will be able to move forward and learn from the events of this case.

In its own supporting statement, attached to the ICO’s remarks, Harry Kinmonth, director and associate general counsel at Facebook, added:

We are pleased to have reached a settlement with the ICO. As we have said before, we wish we had done more to investigate claims about Cambridge Analytica in 2015. We made major changes to our platform back then, significantly restricting the information which app developers could access. Protecting people’s information and privacy is a top priority for Facebook, and we are continuing to build new controls to help people protect and manage their information. The ICO has stated that it has not discovered evidence that the data of Facebook users in the EU was transferred to Cambridge Analytica by Dr Kogan. However, we look forward to continuing to cooperate with the ICO’s wider and ongoing investigation into the use of data analytics for political purposes.

A charitable interpretation of what’s gone on here is that both Facebook and the ICO have reached a stalemate where their interests are better served by taking a quick win that puts the issue to bed, rather than dragging on with legal appeals that might also have raised fresh embarrassments. 

That’s a quick win in terms of PR (a paid fine for the ICO; a line drawn under the issue for Facebook), as well as (potentially) useful data to further Facebook’s internal investigation of the Cambridge Analytica scandal.

We don’t know exactly what it’s getting from the ICO’s document stash. But we do know it’s facing a number of lawsuits and legal challenges over the scandal in the US.

The ICO announced its intention to fine Facebook over the Cambridge Analytica scandal just over a year ago.

In March 2018 it had raided the UK offices of the now defunct data company, after obtaining a warrant, taking away hard drives and computers for analysis. It had also earlier ordered Facebook to withdraw its own investigators from the company’s offices.

Speaking to a UK parliamentary committee a year ago the information commissioner, Elizabeth Denham, and deputy Dipple-Johnstone, discussed their (then) ongoing investigation of data seized from Cambridge Analytica — saying they believed the Facebook user data-set the company had misappropriated could have been passed to more entities than were publicly known.

The ICO said at that point it was looking into “about half a dozen” entities.

It also told the committee it had evidence that, even as recently as early 2018, Cambridge Analytica might have retained some of the Facebook data — despite having claimed it had deleted everything.

“The follow up was less than robust. And that’s one of the reasons that we fined Facebook £500,000,” Denham also said at the time. 

Some of this evidence will likely be very useful for Facebook as it prepares to defend itself in legal challenges related to Cambridge Analytica. It may also aid the platform audit Facebook promised in the wake of the scandal, when it said it would run a historical app audit and challenge all developers it determined had downloaded large amounts of user data.

The audit, which it announced in March 2018, apparently remains ongoing.


TechCrunch

Amazon’s lead data regulator in Europe, Luxembourg’s National Commission for Data Protection, has raised privacy concerns about its use of manual human reviews of Alexa AI voice assistant recordings.

A spokesman for the regulator confirmed in an email to TechCrunch it is discussing the matter with Amazon, adding: “At this stage, we cannot comment further about this case as we are bound by the obligation of professional secrecy.” The development was reported earlier by Reuters.

We’ve reached out to Amazon for comment.

Amazon’s Alexa voice AI, which is embedded in a wide array of hardware — from the company’s own brand Echo smart speaker line to an assortment of third party devices (such as this talkative refrigerator or this oddball table lamp) — listens pervasively for a trigger word which activates a recording function, enabling it to stream audio data to the cloud for processing and storage.

However, trigger-word-activated voice AIs have been shown to be prone to accidental activation, and a device may well be in use in a multi-person household. So there’s always a risk of these devices recording any audio in their vicinity, not just intentional voice queries…

In a nutshell, the AIs’ inability to distinguish between intentional interactions and stuff they overhear means they are natively prone to eavesdropping — hence the major privacy concerns.

These concerns have been dialled up by recent revelations that tech giants — including Amazon, Apple and Google — use human workers to manually review a proportion of audio snippets captured by their voice AIs, typically for quality purposes, such as trying to improve the performance of voice recognition across different accents or environments. But that means actual humans are listening to what might be highly sensitive personal data.

Earlier this week Amazon quietly added an option to the settings of the Alexa smartphone app to allow users to opt out of their audio snippets being added to a pool that may be manually reviewed by people doing quality control work for Amazon — having not previously informed Alexa users of its human review program.

The policy shift followed rising attention on the privacy of voice AI users — especially in Europe.

Last month thousands of recordings of users of Google’s AI assistant were leaked to the Belgian media, which was able to identify some of the people in the clips.

A data protection watchdog in Germany subsequently ordered Google to halt manual reviews of audio snippets.

Google responded by suspending human reviews across Europe, while its lead data watchdog in Europe, the Irish DPC, told us it’s “examining” the issue.

Separately, in recent days, Apple has also suspended human reviews of Siri snippets — doing so globally, in its case — after a contractor raised privacy concerns in the UK press over what Apple contractors are privy to when reviewing Siri audio.

The Hamburg data protection agency which intervened to halt human reviews of Google Assistant snippets urged its fellow EU privacy watchdogs to prioritize checks on other providers of language assistance systems — and “implement appropriate measures” — naming both Apple and Amazon.

In the case of Amazon, scrutiny from European watchdogs looks to be fast dialling up.

At the time of writing it is the only one of the three tech giants not to have suspended human reviews of voice AI snippets, either regionally or globally.

In a statement provided to the press at the time it changed Alexa settings to offer users an opt-out from the chance of their audio being manually reviewed, Amazon said:

We take customer privacy seriously and continuously review our practices and procedures. For Alexa, we already offer customers the ability to opt-out of having their voice recordings used to help develop new Alexa features. The voice recordings from customers who use this opt-out are also excluded from our supervised learning workflows that involve manual review of an extremely small sample of Alexa requests. We’ll also be updating information we provide to customers to make our practices more clear.


TechCrunch

Five billion dollars. That’s the apparent size of Facebook’s latest fine for violating data privacy. 

While many believe the sum is simply a slap on the wrist for a behemoth like Facebook, it’s still the largest amount the Federal Trade Commission has ever levied on a technology company. 

Facebook is clearly still reeling from Cambridge Analytica, after which trust in the company dropped 51%, searches for “delete Facebook” reached 5-year highs, and Facebook’s stock dropped 20%.

While incumbents like Facebook are struggling with their data, startups in highly-regulated, “Third Wave” industries can take advantage by using a data strategy one would least expect: ethics. Beyond complying with regulations, startups that embrace ethics look out for their customers’ best interests, cultivate long-term trust — and avoid billion dollar fines. 

To weave ethics into the very fabric of their business strategies and tech systems, startups should adopt “agile” data governance systems. Often combining law and technology, these systems will become a key weapon of data-centric Third Wave startups to beat incumbents in their field. 

Established, highly-regulated incumbents often use slow and unsystematic data compliance workflows, operated manually by armies of lawyers and technology personnel. Agile data governance systems, in contrast, simplify both these workflows and the use of cutting-edge privacy tools, allowing resource-poor startups both to protect their customers better and to improve their services.

In fact, 47% of customers are willing to switch to startups that protect their sensitive data better. Yet 80% of customers highly value more convenience and better service. 

By using agile data governance, startups can balance protection and improvement. Ultimately, they gain a strategic advantage by obtaining more data, cultivating more loyalty, and being more resilient to inevitable data mishaps. 

Agile data governance helps startups obtain more data — and create more value 

With agile data governance, startups can address their critical weakness: data scarcity. Customers share more data with startups that make data collection a feature, not a burdensome part of the user experience. Agile data governance systems simplify compliance with this data practice. 

Take Ally Bank, which the Ponemon Institute rated as one of the most privacy-protecting banks. In 2017, Ally’s deposit base grew 16%, while incumbents’ deposit bases declined 4%.

One key principle to its ethical data strategy: minimizing data collection and use. Ally’s customers obtain services through a personalized website, rarely filling out long surveys. When data is requested, it’s done in small doses on the site — and always results in immediate value, such as viewing transactions. 

This is on purpose. Ally’s Chief Marketing Officer publicly calls the industry-mantra of “more data” dangerous to brands and consumers alike.

A critical way to minimize data use is to adopt advanced privacy tools like differential privacy. A favorite of organizations like Apple, differential privacy limits data analysts’ access to summaries of data, such as averages. By injecting noise into those summaries, differential privacy creates provable guarantees of privacy and prevents scenarios where malicious parties can reverse-engineer sensitive data. But because differential privacy uses summaries, instead of completely masking the data, companies can still draw meaning from it and improve their services.
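To make the mechanism concrete, here is a minimal sketch in Python — not any particular vendor’s implementation — using the Laplace mechanism to release a noisy average. The balance figures, the bounds, and the epsilon value are all invented for illustration.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Return a differentially private estimate of the mean.

    The true mean is computed over values clipped to [lower, upper],
    then Laplace noise calibrated to the mean's sensitivity is added,
    so no single customer's record can noticeably shift the result.
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    true_mean = clipped.mean()
    # Sensitivity of the mean of n bounded values is (upper - lower) / n.
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

# Hypothetical example: an average account balance released to analysts.
balances = [1200.0, 950.0, 30000.0, 4100.0, 780.0]
print(dp_mean(balances, lower=0.0, upper=50000.0, epsilon=0.5))
```

In practice each such query consumes some of a privacy budget, which is why production systems also track and cap how many summaries an analyst can request.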

With tools like differential privacy, organizations move beyond governance patterns where data analysts either gain unrestricted access to sensitive data (think: Uber’s controversial “god view”) or face multiple barriers to data access. Instead, startups can use differential privacy to share and pool data safely, helping them overcome data scarcity. The most agile data governance systems allow startups to use differential privacy without code and the large engineering teams that only incumbents can afford.

Ultimately, better data means better predictions — and happier customers.

Agile data governance cultivates customer loyalty

According to Deloitte, 80% of consumers are more loyal to companies they believe protect their data. Yet far fewer leaders at established, incumbent companies — the respondents of the same survey — believed this to be true. Customers care more about their data than the leaders at incumbent companies think. 

This knowledge gap is an opportunity for startups. 

Furthermore, big enterprise companies — themselves customers of many startups — say data compliance risks prevent them from working with startups. And rightly so. Over 80% of data incidents are actually caused by errors from insiders, like third-party vendors who mishandle sensitive data by sharing it with inappropriate parties. Yet over 68% of companies do not have good systems to prevent these types of errors. In fact, Facebook’s Cambridge Analytica firestorm — and the resulting $5 billion fine — was sparked by a third party inappropriately sharing personal data with a political consulting firm without user consent.

As a result, many companies — both startups and incumbents — are holding a ticking time bomb of customer attrition. 

Agile data governance defuses these risks by simplifying the ethical data practices of understanding, controlling, and monitoring data at all times. With such practices, startups can prevent and correct the mishandling of sensitive data quickly.

Cognoa is a good example of a Third Wave healthcare startup adopting these three practices at a rapid pace. First, it understands where all of its sensitive health data lies by connecting all of its databases. Second, Cognoa can control all connected data sources at once from one point by using a single access-and-control layer, as opposed to relying on data silos. When this happens, employees and third parties can only access and share the sensitive data sources they’re supposed to. Finally, data queries are always monitored, allowing Cognoa to produce audit reports frequently and catch problems before they escalate out of control. 
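Cognoa’s internals aren’t public, so the following is only a conceptual sketch, with invented names, of what a single access-and-control layer with always-on query monitoring can look like: every request passes through one gate that checks a policy and appends an audit record, which is what makes frequent audit reports cheap to produce.

```python
import datetime
import logging

logging.basicConfig(level=logging.INFO)

# Hypothetical policy: which roles may query which sensitive data sources.
ACCESS_POLICY = {
    "clinician": {"patient_records"},
    "analyst": {"usage_metrics"},
}

AUDIT_LOG = []  # In practice this would be an append-only store.

class AccessDenied(Exception):
    pass

def execute(source, query):
    # Placeholder for the real call to the underlying data store.
    return f"results of {query!r} against {source}"

def run_query(user, role, source, query):
    """Single gate every data request passes through: check, log, execute."""
    allowed = source in ACCESS_POLICY.get(role, set())
    AUDIT_LOG.append({
        "time": datetime.datetime.utcnow().isoformat(),
        "user": user,
        "role": role,
        "source": source,
        "query": query,
        "allowed": allowed,
    })
    if not allowed:
        logging.warning("Denied %s (%s) access to %s", user, role, source)
        raise AccessDenied(f"{role} may not query {source}")
    logging.info("Running query for %s on %s", user, source)
    return execute(source, query)

# Usage: an analyst is blocked from patient data, and the attempt is recorded.
print(run_query("alice", "analyst", "usage_metrics", "SELECT count(*)"))
try:
    run_query("bob", "analyst", "patient_records", "SELECT *")
except AccessDenied as err:
    print("blocked:", err)
print(len(AUDIT_LOG), "audit records")
```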

With tools that simplify these three practices, even low-resourced startups can make sure sensitive data is tightly controlled at all times to prevent data incidents. Because key workflows are simplified, these same startups can maintain the speed of their data analytics by sharing data safely with the right parties. With better and safer data sharing across functions, startups can develop the insight necessary to cultivate a loyal fan base for the long-term.

Agile data governance can help startups survive inevitable data incidents

In 2018, Panera mistakenly shared 37 million customer records on its website and took 8 months to respond. Panera’s data incident is a taste of what’s to come: Gartner predicts that 50% of business ethics violations will stem from data incidents like these. In the era of “Big Data,” billion dollar incumbents without agile data governance will likely continue to violate data ethics. 

Given the inevitability of such incidents, startups that adopt agile data governance will likely be the most resilient companies of the future. 

Case in point: Harvard Business Review reports that the stock prices of companies without strong data governance practices drop 150% more than companies that do adopt strong practices. Despite this difference, only 10% of Fortune 500 companies actually employ the data transparency principle identified in the report. Practices include clearly disclosing data practices and giving users control over their privacy settings. 

Sure, data incidents are becoming more common. But that doesn’t mean startups don’t suffer from them. In fact, up to 60% of startups fold after a cyber attack. 

Startups can learn from WebMD, which Deloitte named as one standout in applying data transparency. With a readable privacy policy, customers know how data will be used, helping customers feel comfortable about sharing their data. More informed about the company’s practices, customers are surprised less by incidents. Surprises, BCG found, can reduce consumer spending by one-third. On a self-service platform on WebMD’s site, customers can control their privacy settings and how to share their data, further cultivating trust. 

Self-service tools like WebMD’s are part of agile data governance. These tools allow startups to simplify manual processes, like responding to customer requests to control their data. Instead, startups can focus on safely delivering value to their customers. 

Get ahead of the curve

For a long time, the public seemed not to care much about their data.

That’s changing. Senior executives at major companies have been publicly interrogated for not taking data governance seriously. Some, like Facebook and Apple, are even claiming to lead with privacy. Ultimately, data privacy risks significantly rise in Third Wave industries where errors can alter access to key basic needs, such as healthcare, housing, and transportation.

While many incumbents have well-resourced legal and compliance departments, agile data governance goes beyond the “risk mitigation” missions of those functions. Agile governance means that time-consuming and error-prone workflows are streamlined so that companies serve their customers more quickly and safely.

Case in point: even after being advised by an army of lawyers, Zuckerberg’s 30,000-word Senate testimony about Cambridge Analytica included “ethics” only once, and it excluded “data governance” completely.

And even if companies do have legal departments, most don’t make their commitment to governance clear. Less than 15% of consumers say they know which companies protect their data the best. Startups can take advantage of this knowledge gap by adopting agile data governance and educating their customers about how to protect themselves in the risky world of the Third Wave.

Some incumbents may always be safe. But those in highly-regulated Third Wave industries, such as automotive, healthcare, and telecom should be worried; customers trust these incumbents the least. Startups that adopt agile data governance, however, will be trusted the most, and the time to act is now. 


TechCrunch

Google has responded to a report this week from Belgian public broadcaster VRT NWS, which revealed that contractors were given access to Google Assistant voice recordings, including those which contained sensitive information — like addresses, conversations between parents and children, business calls, and others containing all sorts of private information. As a result of the report, Google says it’s now preparing to investigate and take action against the contractor who leaked this information to the news outlet.

The company, by way of a blog post, explained that it partners with language experts around the world who review and transcribe a “small set of queries” to help Google better understand various languages.

Only around 0.2 percent of all audio snippets are reviewed by language experts, and these snippets are not associated with Google accounts during the review process, the company says. Other background conversations or noises are not supposed to be transcribed.

The leaker had listened to over 1,000 recordings, and found 153 were accidental in nature — meaning, it was clear the user hadn’t intended to ask for Google’s help. In addition, the report found that determining a user’s identity was often possible because the recordings themselves would reveal personal details. Some of the recordings contained highly sensitive information, like “bedroom conversations,” medical inquiries, or people in what appeared to be domestic violence situations, to name a few.

Google defended the transcription process as being a necessary part of providing voice assistant technologies to its international users.

But instead of focusing on its lack of transparency with consumers over who’s really listening to their voice data, Google says it’s going after the leaker themselves.

“[Transcription] is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant,” writes David Monsees, Product Manager for Search at Google, in the blog post. “We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again,” he said.

As voice assistant devices become a more common part of consumers’ everyday lives, there’s increased scrutiny on how tech companies are handling the voice recordings, who’s listening on the other end, what records are being stored, and for how long, among other things.

This is not an issue that only Google is facing.

Earlier this month, Amazon responded to a U.S. senator’s inquiry over how it was handling consumers’ voice records. The inquiry had followed a CNET investigation which discovered Alexa recordings were kept unless manually deleted by users, and that some voice transcripts were never deleted. In addition, a Bloomberg report recently found that Amazon workers and contractors during the review process had access to the recordings, as well as an account number, the user’s first name, and the device’s serial number.

Further, a coalition of consumer privacy groups recently lodged a complaint with the U.S. Federal Trade Commission which claims Amazon Alexa is violating the U.S. Children’s Online Privacy Protection Act (COPPA) by failing to obtain proper consent over the company’s use of the kids’ data.

Neither Amazon nor Google has gone out of its way to alert consumers as to how the voice recordings are being used.

As Wired notes, the Google Home privacy policy doesn’t disclose that Google is using contract labor to review or transcribe audio recordings. The policy also says that data only leaves the device when the wake word is detected. But these leaked recordings indicate that’s clearly not true — the devices accidentally record voice data at times.

The issues around the lack of disclosure and transparency could be yet another signal to U.S. regulators that tech companies aren’t able to make responsible decisions on their own when it comes to consumer data privacy.

The timing of the news isn’t great for Google. According to reports, the U.S. Department of Justice is preparing for a possible antitrust investigation of Google’s business practices, and is watching the company’s behavior closely. Given this increased scrutiny, one would think Google would be going over its privacy policies with a fine-toothed comb — especially in areas that are newly coming under fire, like policies around consumers’ voice data — to ensure that consumers understand how their data is being stored, shared, and used.

Google also notes today that people do have a way to opt-out of having their audio data stored. Users can either turn off audio data storage entirely, or choose to have the data auto-delete every 3 months or every 18 months.

The company also says it will work to better explain how this voice data is used going forward.

“We’re always working to improve how we explain our settings and privacy practices to people, and will be reviewing opportunities to further clarify how data is used to improve speech technology,” said Monsees.


TechCrunch

Dataform, a U.K. company started by ex-Googlers that wants to make it easier for businesses to manage their data warehouses, has picked up $2 million in funding. Leading the round is LocalGlobe, with participation from a number of unnamed angel investors. The startup is also an alumnus of Silicon Valley accelerator Y Combinator, having graduated in late 2018.

Founded by former Google employees Lewis Hemens and Guillaume-Henri Huon, Dataform has set out to help data-rich companies draw insights from the data stored in their data warehouses. Mining data for insights and business intelligence typically requires a team of data engineers and analysts. Dataform wants to simplify this task and in turn make it faster and cheaper for organisations to take full advantage of their data assets.

“Businesses are generating more and more data that they are now centralising into cloud data warehouses like Google BigQuery, AWS Redshift or Snowflake. [However,] to exploit this data, such as conducting analytics or using BI tools, they need to convert the vast amount of raw data into a list of clean, reliable and up-to-date datasets,” explains Dataform co-founder Guillaume-Henri Huon.

“Data teams don’t have the right tools to manage data in the warehouse efficiently. As a result, they have to spend most of their time building custom infrastructure and making sure their data pipelines work”.

Huon says Dataform solves this by offering a complete toolkit to manage data in data warehouses. Data teams can build new datasets and set them to update automatically every day, or more frequently. The entire process is managed via a single interface and setting up a new dataset is said to take as little as five minutes. “On top of this, we have an open source framework that helps managing data using engineering best practices, including reusable functions, testing and dependency management.”
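Dataform’s own definitions are written in SQL rather than Python, so the snippet below is just a language-agnostic sketch of the underlying idea — datasets declare what they depend on, and a toolkit refreshes them in dependency order on a schedule. All dataset names here are invented.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dataset definitions: each entry lists the datasets it reads from.
DATASETS = {
    "raw_orders": [],
    "raw_customers": [],
    "clean_orders": ["raw_orders"],
    "customer_ltv": ["clean_orders", "raw_customers"],
}

def refresh(dataset):
    # Placeholder for running the dataset's SQL against the warehouse.
    print(f"refreshing {dataset}")

def daily_run():
    """Rebuild every dataset in dependency order, upstream tables first."""
    for dataset in TopologicalSorter(DATASETS).static_order():
        refresh(dataset)

daily_run()
```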

Meanwhile, Dataform says the seed funding will help the company continue to grow both its sales and engineering teams. It will also be used to further develop its product. The startup generates revenue based on a classic SaaS model: typically charging per number of users.


TechCrunch

The UK’s Information Commissioner is starting off the week with a GDPR bang: this morning, it announced that it has fined British Airways and its parent International Airlines Group (IAG) £183.39 million ($230 million) in connection with a data breach that took place last year and affected a whopping 500,000 customers browsing and booking tickets online. In an investigation, the ICO said that it found “that a variety of information was compromised by poor security arrangements at [BA], including log in, payment card, and travel booking details as well as name and address information.”

The fine — 1.5% of BA’s total revenues for the year that ended December 31, 2018 — is the highest-ever that the ICO has levelled at a company over a data breach (previous “record holder” Facebook was fined a mere £500,000 last year by comparison).

And it is significant for another reason: it shows that data breaches can be not just a public relations liability, destroying consumer trust in the organization, but a financial liability, too. IAG is currently seeing volatile trading in London, with shares down 1.5% at the moment.

In a statement to the market, the two leaders of IAG defended the company and said that its own investigations had found no evidence of fraudulent activity on accounts linked to the theft (although, as you may know, data from breaches may not always be used in the place where it was stolen).

“We are surprised and disappointed in this initial finding from the ICO,” said Alex Cruz, British Airways chairman and chief executive. “British Airways responded quickly to a criminal act to steal customers’ data. We have found no evidence of fraud/fraudulent activity on accounts linked to the theft. We apologise to our customers for any inconvenience this event caused.”

Willie Walsh, International Airlines Group chief executive, added in his own comment that “British Airways will be making representations to the ICO in relation to the proposed fine. We intend to take all appropriate steps to defend the airline’s position vigorously, including making any necessary appeals.”

The degree to which companies are going to be held accountable for these kinds of breaches is going to be a lot more transparent going forward: the ICO’s announcement is part of a new directive to disclose the details of its fines and investigations to the public.

“People’s personal data is just that – personal,” said Information Commissioner Elizabeth Denham in a statement. “When an organisation fails to protect it from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.”

The ICO said in a statement this morning that the fine is related to infringements of the General Data Protection Regulation (GDPR), which went into effect last year prior to the breach. More specifically, the incident involved malware on BA.com that diverted user traffic to a fraudulent site, where customer details were subsequently harvested by the malicious hackers.

BA notified the ICO of the incident in September, but the breach was believed to have first started in June. Since then, the ICO said that British Airways “has cooperated with the ICO investigation and has made improvements to its security arrangements since these events came to light.” But it should be pointed out that even before this breach, there were other examples of the company treating data protection lightly. (Now, it seems BA has learned its lesson the hard way.)

From the statement issued by IAG today, it sounds like BA will choose to try to appeal the fine and overall ruling.

While there are a lot of question marks over how the UK will interface with the rest of Europe over regulatory cases such as this one after it leaves the EU, for now it’s working in concert with the bigger group.

The ICO says it has been “lead supervisory authority on behalf of other EU Member State data protection authorities” in this case, liaising with other regulators in the process. This also means that authorities in member states whose residents were affected by the breach will have a chance to provide input on the ruling before it is completely final.


TechCrunch
