We would like to sit down with you and, in an open conversation, find out which challenges and questions you are facing, so that together we can arrive at the best solution.
In other words: how can technology support you, rather than you having to support the technology?
Amazon’s lead data regulator in Europe, Luxembourg’s National Commission for Data Protection, has raised privacy concerns about its use of manual human reviews of Alexa AI voice assistant recordings.
A spokesman for the regulator confirmed in an email to TechCrunch it is discussing the matter with Amazon, adding: “At this stage, we cannot comment further about this case as we are bound by the obligation of professional secrecy.” The development was reported earlier by Reuters.
We’ve reached out to Amazon for comment.
Amazon’s Alexa voice AI, which is embedded in a wide array of hardware — from the company’s own brand Echo smart speaker line to an assortment of third party devices (such as this talkative refrigerator or this oddball table lamp) — listens pervasively for a trigger word which activates a recording function, enabling it to stream audio data to the cloud for processing and storage.
However, trigger-word activated voice AIs have been shown to be prone to accidental activation, and a device may be used in a multi-person household. So there’s always a risk of these devices recording any audio in their vicinity, not just intentional voice queries.
In a nutshell, the AIs’ inability to distinguish between intentional interactions and stuff they overhear means they are natively prone to eavesdropping — hence the major privacy concerns.
These concerns have been dialled up by recent revelations that tech giants — including Amazon, Apple and Google — use human workers to manually review a proportion of audio snippets captured by their voice AIs, typically for quality purposes, such as improving the performance of voice recognition across different accents or environments. But that means actual humans are listening to what might be highly sensitive personal data.
Earlier this week Amazon quietly added an option to the settings of the Alexa smartphone app to allow users to opt out of their audio snippets being added to a pool that may be manually reviewed by people doing quality control work for Amazon — having not previously informed Alexa users of its human review program.
The policy shift followed rising attention on the privacy of voice AI users — especially in Europe.
Last month thousands of recordings of users of Google’s AI assistant were leaked to the Belgian media which was able to identify some of the people in the clips.
Google responded by suspending human reviews across Europe, while its lead data watchdog in Europe, the Irish DPC, told us it’s “examining” the issue.
Separately, in recent days, Apple has also suspended human reviews of Siri snippets — doing so globally, in its case — after a contractor raised privacy concerns in the UK press over what Apple contractors are privy to when reviewing Siri audio.
The Hamburg data protection agency which intervened to halt human reviews of Google Assistant snippets urged its fellow EU privacy watchdogs to prioritize checks on other providers of language assistance systems — and “implement appropriate measures” — naming both Apple and Amazon.
In the case of Amazon, scrutiny from European watchdogs looks to be fast dialling up.
At the time of writing it is the only one of the three tech giants not to have suspended human reviews of voice AI snippets, either regionally or globally.
In a statement provided to the press at the time it changed Alexa settings to offer users an opt-out from the chance of their audio being manually reviewed, Amazon said:
We take customer privacy seriously and continuously review our practices and procedures. For Alexa, we already offer customers the ability to opt-out of having their voice recordings used to help develop new Alexa features. The voice recordings from customers who use this opt-out are also excluded from our supervised learning workflows that involve manual review of an extremely small sample of Alexa requests. We’ll also be updating information we provide to customers to make our practices more clear.
Facebook is clearly still reeling from Cambridge Analytica, after which trust in the company dropped 51%, searches for “delete Facebook” reached 5-year highs, and Facebook’s stock dropped 20%.
While incumbents like Facebook are struggling with their data, startups in highly-regulated, “Third Wave” industries can take advantage by using a data strategy one would least expect: ethics. Beyond complying with regulations, startups that embrace ethics look out for their customers’ best interests, cultivate long-term trust — and avoid billion dollar fines.
To weave ethics into the very fabric of their business strategies and tech systems, startups should adopt “agile” data governance systems. Often combining law and technology, these systems will become a key weapon of data-centric Third Wave startups to beat incumbents in their field.
Established, highly-regulated incumbents often use slow and unsystematic data compliance workflows, operated manually by armies of lawyers and technology personnel. Agile data governance systems, in contrast, simplify both these workflows and the use of cutting-edge privacy tools, allowing resource-poor startups both to protect their customers better and to improve their services.
In fact, 47% of customers are willing to switch to startups that protect their sensitive data better. At the same time, 80% of customers highly value convenience and better service.
By using agile data governance, startups can balance protection and improvement. Ultimately, they gain a strategic advantage by obtaining more data, cultivating more loyalty, and being more resilient to inevitable data mishaps.
Agile data governance helps startups obtain more data — and create more value
With agile data governance, startups can address their critical weakness: data scarcity. Customers share more data with startups that make data collection a feature, not a burdensome part of the user experience. Agile data governance systems simplify compliance with this data practice.
Take Ally Bank, which the Ponemon Institute rated as one of the most privacy-protecting banks. In 2017, Ally’s deposit base grew 16%, while those of incumbents declined 4%.
One key principle to its ethical data strategy: minimizing data collection and use. Ally’s customers obtain services through a personalized website, rarely filling out long surveys. When data is requested, it’s done in small doses on the site — and always results in immediate value, such as viewing transactions.
This is on purpose. Ally’s Chief Marketing Officer publicly calls the industry-mantra of “more data” dangerous to brands and consumers alike.
A critical tool for minimizing data use is advanced privacy technology like differential privacy. A favorite of organizations like Apple, differential privacy limits your data analysts’ access to summaries of data, such as averages. And by injecting noise into those summaries, differential privacy creates provable guarantees of privacy and prevents scenarios where malicious parties can reverse-engineer sensitive data. But because differential privacy uses summaries, instead of completely masking the data, companies can still draw meaning from it and improve their services.
With tools like differential privacy, organizations move beyond governance patterns where data analysts either gain unrestricted access to sensitive data (think: Uber’s controversial “god view”) or face multiple barriers to data access. Instead, startups can use differential privacy to share and pool data safely, helping them overcome data scarcity. The most agile data governance systems allow startups to use differential privacy without code and the large engineering teams that only incumbents can afford.
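To make the idea concrete, here is a minimal sketch of the Laplace mechanism that underpins differential privacy — releasing a noisy average rather than raw records. The function name and parameters are illustrative, not any vendor’s actual API:

```python
import math
import random

def dp_average(values, lower, upper, epsilon):
    """Release a differentially private average via the Laplace mechanism.

    Clipping each value to [lower, upper] bounds any one person's
    influence on the mean to (upper - lower) / n -- the "sensitivity".
    Adding Laplace noise scaled to sensitivity / epsilon then yields a
    provable epsilon-differential-privacy guarantee for the output.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) noise via the inverse-CDF transform.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return sum(clipped) / n + noise
```

The smaller the privacy budget `epsilon`, the noisier (and more private) the released average — which is exactly the trade-off between protection and utility described above.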
According to Deloitte, 80% of consumers are more loyal to companies they believe protect their data. Yet far fewer leaders at established, incumbent companies — the respondents of the same survey — believed this to be true. Customers care more about their data than the leaders at incumbent companies think.
This knowledge gap is an opportunity for startups.
Furthermore, big enterprise companies — themselves customers of many startups — say data compliance risks prevent them from working with startups. And rightly so. Over 80% of data incidents are actually caused by errors from insiders, like third-party vendors who mishandle sensitive data by sharing it with inappropriate parties. Yet over 68% of companies do not have good systems to prevent these types of errors. In fact, Facebook’s Cambridge Analytica firestorm — and resulting $5 billion fine — was sparked by a third party inappropriately sharing personal data with a political consulting firm without user consent.
As a result, many companies — both startups and incumbents — are holding a ticking time bomb of customer attrition.
Agile data governance defuses these risks by simplifying the ethical data practices of understanding, controlling, and monitoring data at all times. With such practices, startups can prevent and correct the mishandling of sensitive data quickly.
Cognoa is a good example of a Third Wave healthcare startup adopting these three practices at a rapid pace. First, it understands where all of its sensitive health data lies by connecting all of its databases. Second, Cognoa can control all connected data sources at once from one point by using a single access-and-control layer, as opposed to relying on data silos. When this happens, employees and third parties can only access and share the sensitive data sources they’re supposed to. Finally, data queries are always monitored, allowing Cognoa to produce audit reports frequently and catch problems before they escalate out of control.
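A minimal sketch of what such a single access-and-control layer with always-on monitoring might look like — the roles, data sources, and policy table below are hypothetical illustrations, not Cognoa’s actual system:

```python
from datetime import datetime, timezone

# Hypothetical policy: which roles may read which connected data sources.
POLICY = {
    "clinician": {"health_records"},
    "analyst": {"usage_metrics"},
}

AUDIT_LOG = []  # every query attempt, allowed or denied, is recorded here

def run_query(user, role, source, sql):
    """Single choke point: check policy, log the attempt, then execute."""
    allowed = source in POLICY.get(role, set())
    AUDIT_LOG.append({
        "user": user,
        "role": role,
        "source": source,
        "allowed": allowed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    if not allowed:
        raise PermissionError(f"role {role!r} may not query {source!r}")
    return f"executed on {source}: {sql}"  # stand-in for a real query engine
```

Because every query, permitted or denied, lands in one audit log, producing frequent audit reports becomes a matter of filtering that log rather than reconciling scattered silos.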
With tools that simplify these three practices, even low-resourced startups can make sure sensitive data is tightly controlled at all times to prevent data incidents. Because key workflows are simplified, these same startups can maintain the speed of their data analytics by sharing data safely with the right parties. With better and safer data sharing across functions, startups can develop the insight necessary to cultivate a loyal fan base for the long-term.
Agile data governance can help startups survive inevitable data incidents
In 2018, Panera mistakenly shared 37 million customer records on its website and took 8 months to respond. Panera’s data incident is a taste of what’s to come: Gartner predicts that 50% of business ethics violations will stem from data incidents like these. In the era of “Big Data,” billion dollar incumbents without agile data governance will likely continue to violate data ethics.
Given the inevitability of such incidents, startups that adopt agile data governance will likely be the most resilient companies of the future.
Case in point: Harvard Business Review reports that the stock prices of companies without strong data governance practices drop 150% more than those of companies that do adopt strong practices. Despite this difference, only 10% of Fortune 500 companies actually employ the data transparency principles identified in the report, which include clearly disclosing data practices and giving users control over their privacy settings.
Sure, data incidents are becoming more common. But that doesn’t mean startups don’t suffer from them. In fact, up to 60% of startups fold after a cyber attack.
Self-service tools like WebMD’s are part of agile data governance. These tools allow startups to simplify manual processes, like responding to customer requests to control their data. Instead, startups can focus on safely delivering value to their customers.
Get ahead of the curve
For a long time, the public seemed to care little about their data.
That’s changing. Senior executives at major companies have been publicly interrogated for not taking data governance seriously. Some, like Facebook and Apple, are even claiming to lead with privacy. Ultimately, data privacy risks rise significantly in Third Wave industries, where errors can alter access to key basic needs such as healthcare, housing, and transportation.
While many incumbents have well-resourced legal and compliance departments, agile data governance goes beyond the “risk mitigation” missions of those functions. Agile governance means that time-consuming and error-prone workflows are streamlined so that companies serve their customers more quickly and safely.
Case in point: even after being advised by an army of lawyers, Zuckerberg’s 30,000-word Senate testimony about Cambridge Analytica included “ethics” only once, and it excluded “data governance” completely.
And even if companies do have legal departments, most don’t make their commitment to governance clear. Less than 15% of consumers say they know which companies protect their data the best. Startups can take advantage of this knowledge gap by adopting agile data governance and educating their customers about how to protect themselves in the risky world of the Third Wave.
Some incumbents may always be safe. But those in highly-regulated Third Wave industries, such as automotive, healthcare, and telecom, should be worried; customers trust these incumbents the least. Startups that adopt agile data governance, however, will be trusted the most, and the time to act is now.
Google has responded to a report this week from Belgian public broadcaster VRT NWS, which revealed that contractors were given access to Google Assistant voice recordings, including those which contained sensitive information — like addresses, conversations between parents and children, business calls, and others containing all sorts of private information. As a result of the report, Google says it’s now preparing to investigate and take action against the contractor who leaked this information to the news outlet.
The company, by way of a blog post, explained that it partners with language experts around the world who review and transcribe a “small set of queries” to help Google better understand various languages.
Only around 0.2 percent of all audio snippets are reviewed by language experts, and these snippets are not associated with Google accounts during the review process, the company says. Other background conversations or noises are not supposed to be transcribed.
The leaker had listened to over 1,000 recordings, and found 153 were accidental in nature — meaning, it was clear the user hadn’t intended to ask for Google’s help. In addition, the report found that determining a user’s identity was often possible because the recordings themselves would reveal personal details. Some of the recordings contained highly sensitive information, like “bedroom conversations,” medical inquiries, or people in what appeared to be domestic violence situations, to name a few.
Google defended the transcription process as being a necessary part of providing voice assistant technologies to its international users.
But instead of focusing on its lack of transparency with consumers over who’s really listening to their voice data, Google says it’s going after the leaker themselves.
“[Transcription] is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant,” writes David Monsees, Product Manager for Search at Google, in the blog post. “We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again,” he said.
As voice assistant devices become a more common part of consumers’ everyday lives, there’s increased scrutiny on how tech companies are handling the voice recordings, who’s listening on the other end, what records are being stored, and for how long, among other things.
This is not an issue that only Google is facing.
Earlier this month, Amazon responded to a U.S. senator’s inquiry over how it was handling consumers’ voice records. The inquiry had followed a CNET investigation which discovered Alexa recordings were kept unless manually deleted by users, and that some voice transcripts were never deleted. In addition, a Bloomberg report recently found that Amazon workers and contractors during the review process had access to the recordings, as well as an account number, the user’s first name, and the device’s serial number.
Further, a coalition of consumer privacy groups recently lodged a complaint with the U.S. Federal Trade Commission which claims Amazon Alexa is violating the U.S. Children’s Online Privacy Protection Act (COPPA) by failing to obtain proper consent over the company’s use of the kids’ data.
Neither Amazon nor Google have gone out of their way to alert consumers as to how the voice recordings are being used.
The issues around the lack of disclosure and transparency could be yet another signal to U.S. regulators that tech companies aren’t able to make responsible decisions on their own when it comes to consumer data privacy.
Google also notes today that people do have a way to opt out of having their audio data stored. Users can either turn off audio data storage entirely, or choose to have the data auto-delete every 3 months or every 18 months.
The company also says it will work to better explain how this voice data is used going forward.
“We’re always working to improve how we explain our settings and privacy practices to people, and will be reviewing opportunities to further clarify how data is used to improve speech technology,” said Monsees.
Dataform, a U.K. company started by ex-Googlers that wants to make it easier for businesses to manage their data warehouses, has picked up $2 million in funding. Leading the round is LocalGlobe, with participation from a number of unnamed angel investors. The startup is also an alumnus of Silicon Valley accelerator Y Combinator, graduating in late 2018.
Founded by former Google employees Lewis Hemens and Guillaume-Henri Huon, Dataform has set out to help data-rich companies draw insights from the data stored in their data warehouses. Mining data for insights and business intelligence typically requires a team of data engineers and analysts. Dataform wants to simplify this task and in turn make it faster and cheaper for organisations to take full advantage of their data assets.
“Businesses are generating more and more data that they are now centralising into cloud data warehouses like Google BigQuery, AWS Redshift or Snowflake. [However,] to exploit this data, such as conducting analytics or using BI tools, they need to convert the vast amount of raw data into a list of clean, reliable and up-to-date datasets,” explains Dataform co-founder Guillaume-Henri Huon.
“Data teams don’t have the right tools to manage data in the warehouse efficiently. As a result, they have to spend most of their time building custom infrastructure and making sure their data pipelines work”.
Huon says Dataform solves this by offering a complete toolkit to manage data in data warehouses. Data teams can build new datasets and set them to update automatically every day, or more frequently. The entire process is managed via a single interface, and setting up a new dataset is said to take as little as 5 minutes. “On top of this, we have an open source framework that helps manage data using engineering best practices, including reusable functions, testing and dependency management.”
Meanwhile, Dataform says the seed funding will help the company continue to grow both its sales and engineering teams. It will also be used to further develop its product. The startup generates revenue based on a classic SaaS model: typically charging per number of users.
The UK’s Information Commissioner’s Office is starting off the week with a GDPR bang: this morning, it announced that it has fined British Airways and its parent International Airlines Group (IAG) £183.39 million ($230 million) in connection with a data breach that took place last year and affected a whopping 500,000 customers browsing and booking tickets online. In an investigation, the ICO said that it found “that a variety of information was compromised by poor security arrangements at [BA], including log in, payment card, and travel booking details as well as name and address information.”
The fine — 1.5% of BA’s total revenues for the year that ended December 31, 2018 — is the highest-ever that the ICO has levelled at a company over a data breach (previous “record holder” Facebook was fined a mere £500,000 last year by comparison).
And it is significant for another reason: it shows that data breaches can be not just a public relations liability, destroying consumer trust in the organization, but a financial liability, too. IAG is currently seeing volatile trading in London, with shares down 1.5% at the moment.
In a statement to the market, the two leaders of IAG defended the company and said that its own investigations found that no evidence of fraudulent activity was found on accounts linked to the theft (although as you may know, data from breaches may not always be used in the place where it’s been stolen).
“We are surprised and disappointed in this initial finding from the ICO,” said Alex Cruz, British Airways chairman and chief executive. “British Airways responded quickly to a criminal act to steal customers’ data. We have found no evidence of fraud/fraudulent activity on accounts linked to the theft. We apologise to our customers for any inconvenience this event caused.”
Willie Walsh, International Airlines Group chief executive, added in his own comment that “British Airways will be making representations to the ICO in relation to the proposed fine. We intend to take all appropriate steps to defend the airline’s position vigorously, including making any necessary appeals.”
The degree to which companies are going to be held accountable for these kinds of breaches is going to be a lot more transparent going forward: the ICO’s announcement is part of a new directive to disclose the details of its fines and investigations to the public.
“People’s personal data is just that – personal,” said Information Commissioner Elizabeth Denham in a statement. “When an organisation fails to protect it from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.”
The ICO said in a statement this morning that the fine is related to infringements of the General Data Protection Regulation (GDPR), which went into effect last year prior to the breach. More specifically, the incident involved malware on BA.com that diverted user traffic to a fraudulent site, where customer details were subsequently harvested by the malicious hackers.
BA notified the ICO of the incident in September, but the breach was believed to have first started in June. Since then, the ICO said that British Airways “has cooperated with the ICO investigation and has made improvements to its security arrangements since these events came to light.” But it should be pointed out that even before this breach, there were other examples of the company treating data protection lightly. (Now, it seems BA has learned its lesson the hard way.)
From the statement issued by IAG today, it sounds like BA will choose to try to appeal the fine and overall ruling.
While there are a lot of question marks over how the UK will interface with the rest of Europe over regulatory cases such as this one after it leaves the EU, for now it’s working in concert with the bigger group.
The ICO says it has been “lead supervisory authority on behalf of other EU Member State data protection authorities” in this case, liaising with other regulators in the process. This also means that authorities in other member states whose residents were affected by the breach will have a chance to provide input on the ruling before it is final.
Microsoft today announced that its first data center regions in the Middle East are now online. The data centers are located in Abu Dhabi and Dubai and will offer local access to the usual suite of services, including Azure’s cloud computing services and Office 365. Support for Dynamics 365 and Microsoft’s Power Platform will arrive later this year.
“In our experience, local datacenter infrastructure supports and stimulates economic development for both customers and partners alike, enabling companies, governments and regulated industries to realize the benefits of the cloud for innovation and new projects, as well as bolstering the technology ecosystem that supports these projects,” Microsoft’s corporate VP Azure Global writes in today’s announcement. “We anticipate the cloud services delivered from UAE to have a positive impact on job creation, entrepreneurship and economic growth across the region.”
The company first announced these new regions last March. Back in 2017, Microsoft’s cloud rival, Amazon’s AWS, said it would offer a region in Bahrain in early 2019. This region is not online yet, but is still listed as “coming soon” on the service’s infrastructure map. Google currently has no data center presence in the Middle East and hasn’t announced any plans to change this.
Cao Xudong turned up on the side of the road in jeans and a black T-shirt printed with the word “Momenta,” the name of his startup.
Before founding the company — which last year topped $1 billion in valuation to become China’s first autonomous driving “unicorn” — he’d already led an enviable life, but he was convinced that autonomous driving would be the real big thing.
Cao isn’t just going for the moonshot of fully autonomous vehicles, which he says could be 20 years away. Instead, he’s taking a two-legged approach of selling semi-automated software while investing in research for next-gen self-driving tech.
Cao, pronounced ‘tsao’, was pursuing his Ph.D. in engineering mechanics when an opportunity came up to work at Microsoft’s fundamental research arm in Asia, putatively the “West Point” for China’s first generation of artificial intelligence experts. He held out there for more than four years before quitting to get his hands on something more practical: a startup.
“Academic research for AI was getting quite mature at the time,” said now 33-year-old Cao in an interview with TechCrunch, reflecting on his decision to quit Microsoft. “But the industry that puts AI into application had just begun. I believed the industrial wave would be even more extensive and intense than the academic wave that lasted from 2012 to 2015.”
In 2015, Cao joined SenseTime, now the world’s highest-valued AI startup, thanks in part to the lucrative face-recognition technology it sells to the government. During his 17-month stint, Cao built the company’s research division from zero staff into a team of 100.
Before long, Cao found himself craving a new adventure again. The founder said he doesn’t care about the result as much as the chance to “do something.” That tendency was already evident during his time at the prestigious Tsinghua University, where he was a member of the outdoors club. He wasn’t particularly drawn to hiking, he said, but the opportunity to embrace challenges and be with similarly resilient, daring people was enticing enough.
And if making driverless vehicles would allow him to leave a mark in the world, he’s all in for that.
Make the computer, not the car
Cao walked me up to a car outfitted with the cameras and radars you might spot on an autonomous vehicle, its computers tucked out of sight in the trunk. We hopped in. Our driver picked a route from the high-definition map that Momenta had built, and as soon as we approached the highway, the autonomous mode switched on by itself. The sensors then started feeding real-time data about the surroundings into the map, with which the computer could make decisions on the road.
Momenta staff installing sensors to a testing car. / Photo: Momenta
Momenta won’t make cars or hardware, Cao assured. Rather, it gives cars autonomous features by making their brains, or deep-learning capacities. It’s in effect a so-called Tier 2 supplier, akin to Intel’s Mobileye, that sells to Tier 1 suppliers who actually produce the automotive parts. It also sells directly to original equipment manufacturers (OEMs) that design cars, order parts from suppliers and assemble the final product. Under both circumstances, Momenta works with clients to specify the final piece of software.
Momenta believes this asset-light approach would allow it to develop state-of-the-art driving tech. By selling software to car and parts makers, it not only brings in income but also sources mountains of data, including how and when humans intervene, to train its models at relatively low cost.
The company declined to share who its clients are but said they include top carmakers and Tier 1 suppliers in China and overseas. There won’t be many of them because a “partnership” in the auto sector demands deep, resource-intensive collaboration, so less is believed to be more. What we do know is Momenta counts Daimler AG as a backer. It’s also the first Chinese startup that the Mercedes-Benz parent had ever invested in, though Cao would not disclose whether Daimler is a client.
“Say you operate 10,000 autonomous cars to reap data. That could easily cost you $1 billion a year. 100,000 cars would cost $10 billion, which is a terrifying number for any tech giant,” Cao said. “If you want to acquire seas of data that have a meaningful reach, you have to build a product for the mass market.”
Highway Pilot, the semi-autonomous solution that was controlling our car, is Momenta’s first mass-produced software. More will launch in the coming seasons, including a fully autonomous parking solution and a self-driving robotaxi package for urban use.
In the long run, the startup said it aims to tackle inefficiencies in China’s $44 billion logistics market. People hear about warehousing robots built by Alibaba and JD.com, but overall, China is still on the lower end of logistics efficiency. In 2018, logistics costs accounted for nearly 15 percent of national gross domestic product. In the same year, the World Bank ranked China 26th in its logistics performance index, a global benchmark for efficiency in the industry.
Cao Xudong, co-founder and CEO of Momenta / Photo: Momenta
Cao, an unassuming CEO, raised his voice as he explained the company’s two-legged strategy. The twin approach forms a “closed loop,” a term that Cao repeatedly summoned to talk about the company’s competitive edge. Instead of picking between the present and the future, as Waymo does with Level 4 — a designation given to cars that can operate under basic situations without human intervention — and Tesla with semi-autonomous driving, Momenta works on both. It uses revenue-generating businesses like Highway Pilot to fund research in robotaxis, and the sensor data collected from real-life scenarios to feed models in the lab. Results from the lab, in turn, could soup up what gets deployed on public roads.
Human or machine
During the 40-minute ride in midday traffic, our car was able to change lanes, merge into traffic, and create distance from reckless drivers by itself, except for one brief moment. Toward the end of the trip, our driver decided to grab the wheel for a lane change as we approached a car dangerously parked in the middle of the exit ramp. Momenta calls this an “interactive lane change,” which it claims is designed to be part of its automated system and by its strict definition is not a human “intervention.”
“Human-car interaction will continue to dominate for a long time, perhaps for another 20 years,” Cao noted, adding the setup brings safety to the next level because the car knows exactly what the driver is doing through its inner-cabin cameras.
“For example, if the driver is looking down at their cellphone, the [Momenta] system will alert them to pay attention,” he said.
I wasn’t allowed to film during the ride, so here’s some footage from Momenta to give a sneak peek of its highway solution.
Human beings are already further along the autonomous spectrum than many of us think. Cao, like a lot of other AI scientists, believes robots will eventually take over the wheel. Alphabet-owned Waymo has been running robotaxis in Arizona for several months now, and smaller startups like Drive.ai are also offering a similar service in Texas.
Despite all the hype and boom in the industry, there remain thorny questions around passenger safety, regulatory schema and a host of other issues for the fast-moving tech. Uber’s fatal self-driving crash last year delayed the company’s future projects and prompted a public backlash. As a Shanghai-based venture capitalist recently suggested to me: “I don’t think humanity is ready for self-driving.”
The biggest problem of the industry, he argued, is not tech-related but social. “Self-driving poses challenges to society’s legal system, culture, ethics and justice.”
Cao is well aware of the contention. He acknowledged that as a company with the power to steer future cars, Momenta has to “bear a lot of responsibility for safety.” As such, he required all executives in the company to ride a certain number of autonomous miles so if there’s any loophole in the system, the managers will likely stumble across it before the customers do.
“With this policy in place, the management will pay serious attention to system safety,” Cao asserted.
Momenta’s new headquarters in Suzhou, China / Photo: Momenta
In terms of actually designing the software to be reliable and to trace accountability, Momenta appoints an “architect of system research and development,” who essentially is in charge of analyzing the black box of autonomous driving algorithms. A deep learning model has to be “explainable,” said Cao, which is key to finding out what went wrong: Is it the sensor, the computer, or the navigation app that’s not working?
Going forward, Cao said the company is in no rush to make a profit as it is still spending heavily on R&D, but he assured that margins on the software it sells “are high.” The startup is also blessed with sizable funding, which Cao’s resume certainly helped attract, as did those of his co-founders Ren Shaoqing and Xia Yan, also alumni of Microsoft Research Asia.
As of last October, Momenta had raised at least $200 million from big-name investors including GGV Capital, Sequoia Capital, Hillhouse Capital, Kai-Fu Lee’s Sinovation Ventures, Lei Jun’s Shunwei Capital, electric vehicle maker NIO’s investment arm, WeChat operator Tencent and the government of Suzhou, which will house Momenta’s new 4,000 sq-meter headquarters right next to the city’s high-speed rail station.
When a bullet train speeds past Suzhou, passengers are able to see from their windows Momenta’s recognizable M-shaped building, which, in the years to come, might become a new landmark of the historic city in eastern China.