
A massive database storing tens of millions of SMS text messages, most of which were sent by businesses to potential customers, has been found online.

The database is run by TrueDialog, a business SMS provider for businesses and higher education providers, which lets companies, colleges, and universities send bulk text messages to their customers and students. The Austin, Texas-based company says one of the advantages to its service is that recipients can also text back, allowing them to have two-way conversations with brands or businesses.

The database stored years of text messages sent and received by TrueDialog’s customers and processed by the company. But because the database was left unprotected on the internet without a password and none of the data was encrypted, anyone could look inside.

Security researchers Noam Rotem and Ran Locar found the exposed database earlier this month as part of their internet scanning efforts.

TechCrunch examined a portion of the data, which contained detailed logs of messages sent by customers who used TrueDialog’s system, including phone numbers and SMS message contents. The database contained information about university finance applications, marketing messages from businesses with discount codes, and job alerts, among other things.

But the data also contained sensitive text messages, such as two-factor codes and other security messages, which may have allowed anyone viewing the data to gain access to a person’s online accounts. Many of the messages we reviewed contained codes to access online medical services, as well as password reset and login codes for sites including Facebook and Google.

The data also contained the usernames and passwords of TrueDialog’s customers, which could have been used to access and impersonate their accounts.

Because some of the two-way message conversations contained a unique conversation code, it’s possible to read entire chains of conversations. One table alone had tens of millions of messages, many of which were message recipients trying to opt-out of receiving text messages.

TechCrunch contacted TrueDialog about the exposure, and the company promptly pulled the database offline. Despite several attempts to reach him, TrueDialog’s chief executive John Wright would not acknowledge the breach or respond to requests for comment. Wright also did not answer any of our questions, including whether the company would inform customers of the security lapse, and whether he plans to inform regulators, such as state attorneys general, per state data breach notification laws.

The company is just one of many SMS providers that have in recent months left systems, and the sensitive text messages within them, on the internet for anyone to access. It is also another example of why SMS text messages may be convenient but are not a secure way to communicate, particularly for sensitive data like two-factor codes.



TechCrunch

If you’ve ever bought an Android phone, there’s a good chance you booted it up to find it pre-loaded with junk you definitely didn’t ask for.

These pre-installed apps can be clunky, annoying to remove, rarely updated… and, it turns out, full of security holes.

Security firm Kryptowire built a tool to automatically scan a large number of Android devices for signs of security shortcomings and, in a study funded by the U.S. Department of Homeland Security, ran it on phones from 29 different vendors. Now, the majority of these vendors are ones most people have never heard of — but a few big names like Asus, Samsung and Sony make appearances.

Kryptowire says they found vulnerabilities of all different varieties, from apps that can be forced to install other apps, to tools that can be tricked into recording audio, to those that can silently mess with your system settings. Some of the vulnerabilities can only be triggered by other apps that come pre-installed (thus limiting the attack vector to those along the supply chain); others, meanwhile, can seemingly be triggered by any app the user might install down the road.

Kryptowire has a full list of observed vulnerabilities here, broken down by type and manufacturer. The firm says it found 146 vulnerabilities in all.

As Wired points out, Google is well aware of this potential attack route. In 2018 it launched a program called the Build Test Suite (or BTS) that all partner OEMs must pass. BTS scans a device’s firmware for any known security issues hiding amongst its pre-installed apps, flagging these bad apps as Potentially Harmful Applications (or PHAs). As Google puts it in its 2018 Android security report:

OEMs submit their new or updated build images to BTS. BTS then runs a series of tests that look for security issues on the system image. One of these security tests scans for pre-installed PHAs included in the system image. If we find a PHA on the build, we work with the OEM partner to remediate and remove the PHA from the build before it can be offered to users.

During its first calendar year, BTS prevented 242 builds with PHAs from entering the ecosystem.

Anytime BTS detects an issue we work with our OEM partners to remediate and understand how the application was included in the build. This teamwork has allowed us to identify and mitigate systemic threats to the ecosystem.

Alas, one automated system can’t catch everything — and when an issue does sneak by, there’s no certainty that a patch or fix will ever arrive (especially on lower-end devices, where long-term support tends to be limited).

We reached out to Google for comment on the report, but have yet to hear back.

Update — Google’s response:

We appreciate the work of the research community who collaborate with us to responsibly fix and disclose issues such as these.



Zamna — which uses a blockchain to securely share and verify data between airlines and travel authorities to check passenger identities — has raised a $5 million seed funding round led by VC firms LocalGlobe and Oxford Capital, alongside Seedcamp, the London Co-Investment Fund (LCIF), Telefonica, and a number of angel investors.

Participation has also come from existing investor IAG (International Airlines Group), which is now Zamna’s first commercial client. The company has also changed its name from VChain Technology to Zamna.

When VChain-now-Zamna first appeared, I must admit I was confused. Using blockchain to verify passenger data seemed like a hammer to crack a nut. But it turns out to have some surprisingly useful applications.

The idea is to use it to verify and connect the passenger data sets that are currently siloed between airlines, governments and security agencies. By doing this, says Zamna, you can reduce the need for manual or other checks by up to 90 percent. If that’s the case, then it’s quite a leap in efficiency.

In theory, as more passenger identities are verified digitally over time and shared securely between parties, using a blockchain in the middle to maintain data security and passenger privacy, the airport security process could become virtually seamless and allow passengers to sail through airports without needing physical documentation or repeated ID checks. Sounds good to me.

Zamna says its proprietary Advance Passenger Information (API) validation platform for biographic and biometric data is already being deployed by some airlines and immigration authorities. It recently started working with Emirates Airline and the UAE’s General Directorate of Residency and Foreigners (GDRFA) to deliver check-in and transit checks.

Here’s how it works: Zamna’s platform is built on algorithms that check the accuracy of Advance Passenger Information or biometric data without having to share any of that data with third parties, because it attaches an anonymous token to the already-verified data. Airlines, airports and governments can then access that secure, immutable and distributed network of validated tokens without actually needing to ‘see’ the data an agency, or a competing airline, holds. Zamna’s technology can then be used by any of these parties to validate passengers’ biographic and biometric data, using cryptography to check you are who you say you are.
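Zamna hasn’t published the details of its protocol, but the core idea of ‘sharing without sharing’ can be sketched: each party derives a token (a keyed hash) from the passenger record it holds, and matching tokens confirm matching data without the records themselves ever being exchanged. Everything below (the field names, the shared salt) is an illustrative assumption, not Zamna’s actual scheme:

```python
import hashlib
import hmac
import secrets

def derive_token(passport_no: str, name: str, dob: str, salt: bytes) -> str:
    """Derive an anonymous token from a passenger record (illustrative fields)."""
    record = "|".join([passport_no, name, dob]).encode()
    return hmac.new(salt, record, hashlib.sha256).hexdigest()

# In this sketch the salt is agreed out of band; a real system would need
# a proper key-agreement step between the parties.
shared_salt = secrets.token_bytes(16)

# The airline and the border agency each tokenize the record they hold.
airline_token = derive_token("X1234567", "A. TRAVELLER", "1980-01-01", shared_salt)
border_token = derive_token("X1234567", "A. TRAVELLER", "1980-01-01", shared_salt)

# Matching tokens mean both parties hold the same data, yet neither ever
# saw the other's copy of it.
print(airline_token == border_token)  # True
```

A mismatched field (a typo in the passport number, say) yields a completely different token, which is what would let a discrepancy be flagged before the passenger reaches the airport.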

So, what was wrong with the previous security measures in airports for airlines and border control that Zamna might be fixing?

Speaking to TechCrunch, Irra Ariella Khi, co-founder and CEO of Zamna, says: “There is a preconception that when you arrive at the airport somehow – as if by magic – the airline knows who you are, the security agencies know who you are, and the governments of departure and destination both know that you are flying between their countries and have established that it is both legitimate and secure for you to do so. You may even assume that the respective security authorities have exchanged some intelligence about you as a passenger, to establish that both you and your fellow passengers are safe to board the same plane.”

“However,” she says, “the reality is far from this. There is no easy and secure way for airlines and government agencies to share or cross-reference your data – which remains siloed (for valid data protection reasons). They must, therefore, repeat manual one-off data checks each time you travel. Even if you have provided your identity data and checked in advance, and if you travel from the same airport on the same airline many times over, you will find that you are still subject to the same one-off passenger processing (which you have probably already experienced many times before). Importantly, there is an ‘identity verification event’, whereby the airline must check both the document of identity which you carry, as well as establish that it belongs to your physical identity.”

There are three main trends in this space. First, governments are demanding more accurate passenger data from airlines (for both departure and destination) and increasing the regulatory fines imposed for incorrect data provided to them by the airlines. Second, airlines have to manage the repatriation of passengers and luggage if they are refused entry by a government due to incorrect data, which is costly. And third, electronic travel authorizations (ETAs, such as eVisas) are on the rise, and governments and airlines will need to satisfy themselves that a passenger’s data exactly matches that of their relevant ETA in order to establish that they have the correct status to travel. This is already the case with ESTAs for all US-bound travelers, and many other countries have similar requirements. Critically for UK travelers, this will also be the case for all passengers traveling into Europe under the incoming ETIAS regulations.

The upshot is that airlines are imposing increased document and identity checks at airports, regardless of whether the passenger is a regular flier, and irrespective of whether they have checked in in advance.

Zamna’s data verification platform pulls together multiple stakeholders (airlines, governments, security agencies) with a way to validate and revalidate passenger identity and data (both biographic and biometric), and to securely establish data ownership – before passengers arrive at the airport.

It doesn’t require any new infrastructure at the airport, and none of these entities have to share data, because the ‘sharing without sharing’ is performed by Zamna’s blockchain platform in the middle of all the data sources.

Remus Brett, Partner at LocalGlobe, says: “With passenger numbers expected to double in the next 20 years, new technology-driven solutions are the only way airlines, airports and governments will be able to cope. We’re delighted to be working with the Zamna team and believe they can play a key role in addressing these challenges.” Dupsy Abiola, Global Head of Innovation at International Airlines Group, adds: “Zamna is working with IAG on a digital transformation project involving British Airways and the other IAG carriers. It’s very exciting.”

Zamna is a strategic partner to the International Air Transport Association (IATA) and an active member of IATA’s “One ID” working group.



U.S. security experts are conceding that China has won the race to develop and deploy the 5G telecommunications infrastructure seen as underpinning the next generation of technological advancement and warn that the country and its allies must develop a response — and quickly.

“The challenge we have in the development of the 5G network, at least in the early stage, is the dominance of the Huawei firm,” said Tom Ridge, the former U.S. Secretary of Homeland Security and governor of Pennsylvania, on a conference call organized recently by Global Cyber Policy Watch. “To embed that technology into a critical piece of infrastructure which is telecom is a huge national security risk.”

Already some $500 million is being allocated to the development of end-to-end encryption software and other technologies through the latest budget for the U.S. Department of Defense, but these officials warn that the money is too little and potentially too late, unless more drastic moves are made.

(You can also hear more about this at TechCrunch Disrupt in SF next week, where we’ll be interviewing startup founders and investors who build businesses by working with governments.)

The problems posed by China’s dominance in this critical component of new telecommunications technologies cut across public and private sector security concerns. They range from intellectual property theft to theft of state secrets and could curtail the ways the U.S. government shares critical intelligence information with its allies, along with opening up the U.S. to direct foreign espionage by the Chinese government, Ridge and other security experts warned.



Last week, users around the world found Wikipedia down after the online, crowdsourced encyclopedia became the target of a massive, sustained DDoS attack — one that it is still actively fighting several days later (even though the site is now back up). Now, in a coincidental twist of timing, Wikipedia’s parent, the Wikimedia Foundation, is announcing a donation aimed at helping the group better cope with situations just like this: Craig Newmark Philanthropies, a charity funded by the Craigslist founder, is giving $2.5 million to Wikimedia to help it improve its security.

The gift had been in the works before the attack last week, and it underscores a persistent paradox. The non-profit runs one of the 10 most popular sites on the web, accessed from some 1 billion different devices each month, with upwards of 18 billion visits in that period (the latter figure is from 2016, so likely higher now). Wikipedia is used as a reference point by millions every day to get the facts on everything from Apple to Zynga, mushrooms and Myanmar history, and as a wiki, it was built from the start for interactivity.

But in this day and age, when anything is fair game for malicious hackers, it’s an easy target, sitting out in the open and generally lacking the kinds of funds that private companies and other for-profit entities have to protect themselves from security breaches. Alongside the networks of volunteers who put in free time to contribute security work to Wikimedia, the organization had only two people on its security staff two years ago — one of them part-time.

That has been getting fixed, very gradually, by John Bennett, the Wikimedia Foundation’s director of security, who joined the organization in January 2018. He told TechCrunch in an interview that he has been working on a more centralized and coherent system: bringing on more staff to help build tools to combat nefarious activity both on the site and in Wikimedia’s systems and, crucially, putting policies in place to help prevent breaches in the future.

“We’ve lived in this bubble of ‘no one is out to get us,’” he said of the general goodwill that surrounds not-for-profit, public organizations like the Wikimedia Foundation. “But we’re definitely seeing that change. We have skilled and determined attackers wishing to do harm to us. So we’re very grateful for this gift to bolster our efforts.

“We weren’t a sitting duck before the breach last week, with a lot of security capabilities built up. But this gift will help improve our posture and build upon what we started and have been building these last two years.”

The security team collaborates with other parts of the organization to handle some of the more pointed issues. Bennett notes that Wikimedia uses a lot of machine learning developed to monitor pages for vandalism, and an anti-harassment team also works alongside them. (Newmark’s contribution today, in fact, is not the first donation he has made to the organization. In the past he has donated around $2 million towards various projects, including the Community Health Initiative, the anti-harassment program, and the more general Wikimedia Endowment.)

The DDoS attack itself is being handled by the site reliability engineering team, which is still engaged and monitoring the situation; Bennett declined to comment further on that.

You can support Wikipedia and Wikimedia, too.



Zao went viral in China this weekend for its realistic face-swapping videos, but after controversy about its user policy, WeChat restricted access to the app on its messaging platform.

Users can still upload videos they created with Zao to WeChat, but if they try to download the app or send an invite link to another WeChat user, a message is displayed that says “this web page has been reported multiple times and contains security risks. To maintain a safe online environment, access to this page has been blocked.”

Developed by a unit of Momo, one of China’s most popular dating apps, Zao creates videos that replace the faces of celebrities in scenes from popular movies, shows and music videos with a selfie uploaded by the user.

The app, currently available only in China, went viral as users shared their videos through WeChat and other social media platforms in China. But concerns about the potential misuse of deepfake technology coupled with a clause (now deleted) in Zao’s terms of use that gave it full ownership and copyright to content uploaded or created on it, in addition to “completely free, irrevocable, perpetual, transferrable, and re-licensable rights,” caused controversy.

By going viral quickly and being very easy to use (Zao’s videos can be generated from a single selfie, though it suggests that users upload photos from several angles for better results), the app has also focused more attention on deepfake technology and how it can potentially be used to spread misinformation or harass people.

Zao was released last Friday and quickly became the top free iOS app in China, according to App Annie. A statement posted on Sept. 1 to Zao’s Weibo account says “we completely understand everybody’s concerns about the privacy issue. We are aware of the issue and we are thinking about how to fix the problems, we need a little time.” Its terms and conditions now say user-generated content will only be used by the company to improve the app and that all deleted content will be removed from its servers.

TechCrunch has contacted Zao for comment.



If you can’t trust your bank, government or your medical provider to protect your data, what makes you think students are any safer?

Turns out, according to one student security researcher, they’re not.

Eighteen-year-old Bill Demirkapi, a recent high school graduate in Boston, Massachusetts, spent much of his later school years with an eye on his own student data. Through self-taught pen testing and bug hunting, Demirkapi found several vulnerabilities in his school’s learning management system, Blackboard, and in his school district’s student information system, known as Aspen and built by Follett, which centralizes student data, including performance, grades, and health records.

The former student reported the flaws and revealed his findings at the Def Con security conference on Friday.

“I’ve always been fascinated with the idea of hacking,” Demirkapi told TechCrunch prior to his talk. “I started researching but I learned by doing,” he said.

Among one of the more damaging issues Demirkapi found in Follett’s student information system was an improper access control vulnerability, which if exploited could have allowed an attacker to read and write to the central Aspen database and obtain any student’s data.

Blackboard’s Community Engagement platform had several vulnerabilities, including an information disclosure bug. A debugging misconfiguration allowed him to discover two subdomains, which spat back the credentials for Apple app provisioning accounts for dozens of school districts, as well as the database credentials for most, if not every, Blackboard Community Engagement deployment, said Demirkapi.

“School data or student data should be taken as seriously as health data. The next generation should be one of our number one priorities, who looks out for those who can’t defend themselves.”
Bill Demirkapi, security researcher

Another set of vulnerabilities could have allowed an authorized user — like a student — to carry out SQL injection attacks. Demirkapi said six databases could be tricked into disclosing data by injecting SQL commands, including grades, school attendance data, punishment history, library balances, and other sensitive and private data.

Some of the SQL injection flaws were blind attacks, meaning dumping the entire database would have been more difficult but not impossible.
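As a sketch of the bug class (the table and queries below are invented for illustration, not Follett’s or Blackboard’s actual schema): a query built by string concatenation lets a crafted input rewrite the SQL, while a parameterized query treats the same input strictly as data.

```python
import sqlite3

# Toy stand-in for a student-records table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grades (student TEXT, grade TEXT)")
conn.executemany("INSERT INTO grades VALUES (?, ?)",
                 [("alice", "A"), ("bob", "B")])

def lookup_vulnerable(student: str) -> list:
    # Attacker-controlled input is spliced directly into the SQL text.
    query = f"SELECT grade FROM grades WHERE student = '{student}'"
    return conn.execute(query).fetchall()

def lookup_safe(student: str) -> list:
    # Parameterized query: the driver never interprets the input as SQL.
    return conn.execute(
        "SELECT grade FROM grades WHERE student = ?", (student,)).fetchall()

payload = "nobody' OR '1'='1"        # classic injection payload
print(lookup_vulnerable(payload))    # dumps every row: [('A',), ('B',)]
print(lookup_safe(payload))          # []
```

A blind variant of the same flaw returns no rows directly; the attacker instead infers data bit by bit from how the query behaves, which is slower but, as Demirkapi notes, not impossible.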

In all, over 5,000 schools and over five million students and teachers were impacted by the SQL injection vulnerabilities alone, he said.

Demirkapi said he was mindful to not access any student records other than his own. But he warned that any low-skilled attacker could have done considerable damage by accessing and obtaining student records, not least thanks to the simplicity of the database’s password. He wouldn’t say what it was, only that it was “worse than ‘1234’.”

But finding the vulnerabilities was only one part of the challenge. Disclosing them to the companies turned out to be just as tricky.

Demirkapi admitted that his disclosure with Follett could have been better. He found that one of the bugs gave him improper access to create his own “group resource,” such as a snippet of text, which was viewable to every user on the system.

“What does an immature 11th grader do when you hand him a very, very, loud megaphone?” he said. “Yell into it.”

And that’s exactly what he did. He sent out a message to every user, displaying each user’s login cookies on their screen. “No worries, I didn’t steal them,” the alert read.

“The school wasn’t thrilled with it,” he said. “Fortunately, I got off with a two-day suspension.”

He conceded it wasn’t one of his smartest ideas. He wanted to show his proof-of-concept but was unable to contact Follett with details of the vulnerability. He later went through his school, which set up a meeting, and disclosed the bugs to the company.

Blackboard, however, ignored Demirkapi’s responses for several months, he said. He knows because after the first month of being ignored, he included an email tracker, allowing him to see how often the email was opened — which turned out to be several times in the first few hours after sending. And yet the company still did not respond to the researcher’s bug report.
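The article doesn’t name the tracker Demirkapi used, but the standard technique is simple enough to sketch: the email embeds a one-pixel image whose URL carries a unique ID, and every fetch of that URL is logged as an open. The names and URL below are illustrative:

```python
import uuid

open_log = {}  # tracking_id -> number of times the pixel was fetched

def make_tracked_email(body_html: str) -> tuple:
    """Append a tracking pixel to an outgoing email body."""
    tracking_id = uuid.uuid4().hex
    open_log[tracking_id] = 0
    pixel = (f'<img src="https://tracker.example/p/{tracking_id}.gif" '
             'width="1" height="1" alt="">')
    return tracking_id, body_html + pixel

def serve_pixel(tracking_id: str) -> bytes:
    """Called when the recipient's mail client fetches the image URL."""
    if tracking_id in open_log:
        open_log[tracking_id] += 1
    return b"GIF89a"  # a real server would return a full 1x1 transparent GIF

tid, body = make_tracked_email("<p>Following up on my vulnerability report.</p>")
serve_pixel(tid)
serve_pixel(tid)      # the email was opened twice...
print(open_log[tid])  # 2 -- ...yet never answered
```

This only works when the recipient’s mail client loads remote images, which is exactly why many clients now block or proxy them by default.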

Blackboard eventually fixed the vulnerabilities, but Demirkapi said he found that the companies “weren’t really prepared to handle vulnerability reports,” despite Blackboard ostensibly having a published vulnerability disclosure process.

“It surprised me how insecure student data is,” he said. “School data or student data should be taken as seriously as health data,” he said. “The next generation should be one of our number one priorities, who looks out for those who can’t defend themselves.”

He said if a teenager had discovered serious security flaws, it was likely that more advanced attackers could do far more damage.

Heather Phillips, a spokesperson for Blackboard, said the company appreciated Demirkapi’s disclosure.

“We have addressed several issues that were brought to our attention by Mr. Demirkapi and have no indication that these vulnerabilities were exploited or that any clients’ personal information was accessed by Mr. Demirkapi or any other unauthorized party,” the statement said. “One of the lessons learned from this particular exchange is that we could improve how we communicate with security researchers who bring these issues to our attention.”

Follett spokesperson Tom Kline said the company “developed and deployed a patch to address the web vulnerability” in July 2018.

The student researcher said he was not deterred by the issues he faced with disclosure.

“I’m 100% set already on doing computer security as a career,” he said. “Just because some vendors aren’t the best examples of good responsible disclosure or have a good security program doesn’t mean they’re representative of the entire security field.”


