How conscious companies can thread IoT’s security and privacy needles

Could the secret to building Internet-connected devices that balance utility, safety, security, and privacy reside in the offices of Forcite, an upstart motorcycle helmet maker in Australia?

The Forcite MK1 helmet, currently in development in Sydney and expected to ship to its early crowd-funding supporters this March, is the latest attempt to thread the peculiar needle of adding Internet-driven safety features to motorcycle gear.

A successful debut would be no small feat, given bikers’ general skepticism of new technology and reluctance to adopt it. Experienced riders know that while riding, even the most trivial distraction can lead to a crash. And despite a plethora of available audio and camera add-ons, most motorcycle helmets today still hit stores without them.

The carbon-fiber, 3.4-pound MK1 features a 166-degree Sony HD 1080p 30 FPS video camera built into the chin guard. It supports up to five hours of continuous recording, which riders can download to their phones and back up on Forcite’s servers. It has built-in speakers and microphones for listening to music, hearing GPS audio cues, taking phone calls, and communicating with other Forcite riders via a built-in VoIP system. And it has a handlebar-mounted controller that the company hopes is as easy to use as flicking on a turn signal. But Forcite is betting that one particular feature will prove a game changer for the biker community.

It’s a patented, cloud-based machine-learning system dubbed Raydar that connects to the rider’s phone and takes input from mobile apps, GPS signals, and traffic cameras to inform riders in real time about current road conditions through color-coded, in-helmet LEDs. Using a system similar in feel to those used by Formula 1 racers, it guides riders through road directions, inclement weather, and traffic jams, while alerting them to nearby police officers.
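As a rough illustration of how an alert pipeline like this could work (the names, thresholds, and colors below are hypothetical assumptions, not Forcite’s actual design), a cloud service might fuse hazard reports from several feeds and reduce them to a single color-coded LED state in the helmet:

```python
# Hypothetical sketch of a Raydar-style alert pipeline: fuse hazard reports
# from several feeds and reduce them to one color-coded LED state.
# All names, thresholds, and colors here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Hazard:
    source: str        # e.g. "traffic_camera", "gps_feed", "mobile_app"
    kind: str          # e.g. "traffic_jam", "weather", "police"
    severity: int      # 0 (informational) to 3 (urgent)
    distance_m: float  # distance ahead of the rider, in meters

# Map the worst nearby hazard to an in-helmet LED color.
LED_COLORS = {0: "green", 1: "yellow", 2: "orange", 3: "red"}

def led_state(hazards: list[Hazard], horizon_m: float = 500.0) -> str:
    """Return the LED color for the most severe hazard within the horizon."""
    nearby = [h for h in hazards if h.distance_m <= horizon_m]
    if not nearby:
        return LED_COLORS[0]
    worst = max(h.severity for h in nearby)
    return LED_COLORS[worst]

# Example: a traffic jam 300 m ahead outranks a distant weather advisory.
print(led_state([
    Hazard("traffic_camera", "traffic_jam", 2, 300.0),
    Hazard("gps_feed", "weather", 1, 2000.0),
]))  # -> "orange"
```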

Forcite CEO and co-founder Alfred Boyadgis hopes that his company can lead a broader industry in building devices that both safely integrate technology and value user privacy.

Safeguarding privacy is a bold claim for a helmet that is constantly sending telemetry data such as rider location back to the manufacturer, but Boyadgis says Forcite takes its responsibility for user information seriously. It plans to anonymize data such as abrupt slowdowns at specific locations, for example, and use it in two ways: The first is to warn riders of unexpected road hazards or poorly designed intersections. The second is to sell it to municipalities, as social road condition-monitoring apps like Google’s Waze have been doing for years.
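A minimal sketch of the kind of anonymized aggregation Boyadgis describes (illustrative only; Forcite has not published its pipeline) might drop rider identifiers and bucket hard-braking events into coarse location cells before anything is shared:

```python
# Illustrative sketch only: aggregate abrupt-slowdown events by coarse
# location, dropping rider identifiers so the output holds no per-rider data.
from collections import Counter

def anonymize_slowdowns(events):
    """events: iterable of dicts like
    {"rider_id": "...", "lat": -33.8688, "lon": 151.2093, "decel_ms2": 7.2}
    Returns counts of hard-braking events per ~100 m grid cell."""
    counts = Counter()
    for e in events:
        if e["decel_ms2"] < 6.0:   # assumed threshold for an "abrupt" slowdown
            continue
        # Round coordinates to 3 decimal places (~100 m) and discard rider_id.
        cell = (round(e["lat"], 3), round(e["lon"], 3))
        counts[cell] += 1
    return dict(counts)

sample = [
    {"rider_id": "a1", "lat": -33.86881, "lon": 151.20930, "decel_ms2": 7.5},
    {"rider_id": "b2", "lat": -33.86884, "lon": 151.20929, "decel_ms2": 8.1},
    {"rider_id": "a1", "lat": -33.87010, "lon": 151.21100, "decel_ms2": 3.0},
]
print(anonymize_slowdowns(sample))  # {(-33.869, 151.209): 2}
```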

Boyadgis notably doesn’t approve of everything Waze is known for doing.

“What’s not ethical is tracking people’s specific locations, and then targeting them with ads based on their location. Riding past a motorcycle store, and the store has paid us to say, ‘Hey, we have a jacket sale today—here’s a discount.’ That’s not ethical,” Boyadgis says. “If someone asks, ‘What information do you have on us?’ I should be able to generate a report on that. There is ethical conduct related to wearable technology, and being sneaky in the background is not going to benefit the user or the company in the long run.”

Boyadgis says Forcite has to juggle its responsibilities of ensuring rider safety and privacy with knowing how the helmet is functioning. Unless the rider disables the helmet’s Bluetooth connection to the phone, which is an option, Forcite knows when riders have powered on or off a helmet, or have dropped it on the ground. (Dropping a helmet decreases its effectiveness.)
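As a hedged illustration of how a helmet could flag a drop from motion data alone (thresholds and names here are assumptions, not Forcite’s firmware), a drop typically looks like a brief spell of near-zero g followed by an impact spike:

```python
# Illustrative sketch of flagging a drop event from raw accelerometer samples:
# a brief free fall (near 0 g) followed by a sharp impact spike.
# Thresholds and names are assumptions, not Forcite's firmware.
FREE_FALL_G = 0.3   # below this, the helmet is effectively in free fall
IMPACT_G = 4.0      # above this shortly afterward, it probably hit the ground

def detect_drop(samples_g, max_gap=20):
    """samples_g: sequence of total acceleration magnitudes in g, at a fixed
    sample rate. Returns True if free fall is followed by an impact within
    `max_gap` samples."""
    for i, g in enumerate(samples_g):
        if g < FREE_FALL_G:
            window = samples_g[i + 1 : i + 1 + max_gap]
            if any(x > IMPACT_G for x in window):
                return True
    return False

# A stationary helmet reads ~1 g; a drop shows near-0 g, then a spike.
print(detect_drop([1.0, 1.0, 0.1, 0.05, 0.1, 6.2, 1.0]))  # True
print(detect_drop([1.0, 1.0, 1.1, 0.9, 1.0]))             # False
```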

While optional video footage, which can be backed up to Forcite for insurance purposes, is not anonymized (Forcite says anonymization is in the works), rider location information is anonymized. Law enforcement attempting to subpoena that information would come away empty-handed, as the company doesn’t have it. And while the helmet can raise or lower the volume of music and notifications based on rider speed or sudden braking, Boyadgis says Forcite does not record metadata on music, calls, and other interactions between helmet and rider.

When it comes to the moment-to-moment behaviors of riders wearing the MK1, Boyadgis says, “We don’t know what they’re doing, and we don’t care to know.”

While some companies in the so-called Internet of Things realm take their security seriously, too few do. Thoughtful change is needed, experts say, because regulations on IoT manufacturers may prove ineffective without workable enforcement.

“IoT are the computers you don’t realize you’re using every day.”—Stephen Ridley, founder and chief technology officer, Senrio.

Many of today’s most popular connected devices were designed and released without such concern for user privacy. Amazon.com’s Ring doorbell and home security camera, for example, ironically pairs privacy violations with lax cybersecurity standards. The connected-camera marketplace has become so notorious for poor cybersecurity that Consumer Reports on Monday published a stern letter to the 25 top manufacturers, admonishing them to improve their security and privacy practices.

Hacked Wi-Fi routers and other devices employing reused or easy-to-guess security credentials get sucked into botnets running YouTube ad fraud schemes. Known botnets (such as Mirai) continue to grow. The growing popularity of digital voice assistants exposes security risks for users of devices running Apple’s Siri, Amazon’s Alexa, and Google’s Home. And connected cars are vulnerable to new variants of old hacks (not to mention new hacks).

IoT manufacturers pass security buck to consumers

Studies from cybersecurity and antivirus software companies Kaspersky Lab and F-Secure show that hackers targeted connected devices 12 times more frequently in the first half of 2019 than they did during the same period of 2018, a strong indication that hackers increasingly see them as targets worthy of exploitation.

At this point, IoT-based attacks are beyond theoretical. A Microsoft report presented at the Black Hat cybersecurity conference in August showed how a malicious Russian hacking organization exploited IoT vulnerabilities to attack and gain access to corporate networks. And when vendors fail to patch known vulnerabilities, or ensure that existing patches are installed, consumers and businesses are left exposed.

Rick Ramgattie, who was a senior security analyst at Independent Security Evaluators in September, when it published a study with the Cyber Independent Testing Lab citing unpatched vulnerabilities in routers (a reprise of a 2013 ISE Wi-Fi router vulnerability study), told The Parallax that of the 13 popular routers the researchers tested, none “stood out as being good on security.”

Carbon-fiber helmets on Forcite’s production line. Photo courtesy Forcite.

“The number of vulns we found in these devices was higher than we expected,” said Ramgattie, a co-author of the report who is now a senior security engineer at cryptocurrency company Gemini. Based on the evidence of ongoing, unpatched vulnerabilities putting device users at risk of getting hacked, the most logical conclusion was that manufacturers are not conducting even basic security tests of their products, he said. “When there’s a lot of really common issues, then their security assessment wasn’t sufficient—or they didn’t get one.”

Long-term solutions for improving IoT security are in far shorter supply than IoT security vulnerabilities, says Brian Knopf, who focuses on the subject at SecureWorks. Knopf, an avid connected-device user at his Southern California home, as well as an IoT security researcher and advocate, has grown frustrated with consumers who don’t purchase more secure options when they’re available, and with device manufacturers that drag their feet on implementing security standards. During his tenure as director of application security for Linksys and Belkin, he says, the company fought with him over fixing critical vulnerabilities in its Wi-Fi routers.

“I was under the false impression that if I was inside [a manufacturer], I could change things. Turns out, I got more of an education into the reality of security than they got into the threats,” he says. “The problem we’ve had, from a security standpoint, for a long time is that we think you secure everything you can. The reality is that margins are very slim to begin with, so when I try to add hardware security [such as] Secure Elements, burying traces, potting the PCB, and testing firmware with SAST and DAST, that adds to our BOM [bill of materials] costs.”

Progress made, but at a glacial pace

Part of the problem in securing IoT devices is that none of the stakeholders—including manufacturing partners, security experts, and government agencies—know how to motivate manufacturers to build more secure devices, or encourage consumers to spend extra on them. Manufacturers employ security experts like him, Knopf says, but they aren’t listening to them. And in the wake of broadly applicable, hard-to-change laws like the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act, and of antiquated interpretations of the Fourth and Fifth Amendments of the U.S. Constitution, experts are hesitant to rely on laws to force tech companies to adhere to cybersecurity guidelines.

“If someone asks, ‘What information do you have on us?’ I should be able to generate a report on that. There is ethical conduct related to wearable technology, and being sneaky in the background is not going to benefit the user or the company in the long run.”—Alfred Boyadgis, Forcite CEO and co-founder.

Nevertheless, one move that some say is a step in the right direction is California’s new IoT cybersecurity law, which went into effect on January 1. Device manufacturers must outfit their connected devices with “reasonable” security precautions, such as unique default passwords or forcing users to set their own passwords, with the intent of stopping hackers from accessing the device without permission and using it to leak user or network data.
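In firmware terms, one way a vendor might satisfy the law’s password provision is to keep remote access disabled until the owner replaces the factory credential. A minimal sketch, assuming a simple on-device credential store (not any particular vendor’s implementation):

```python
# Illustrative first-boot flow: keep remote management disabled until the
# owner sets a credential of their own. One way a vendor might satisfy the
# California law's password provision; not any specific vendor's firmware.
import hashlib
import os
import secrets

CRED_FILE = "owner_credential.hash"  # stands in for secure on-device storage

def _hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def owner_password_is_set() -> bool:
    return os.path.exists(CRED_FILE)

def set_owner_password(password: str) -> None:
    if len(password) < 12:
        raise ValueError("password too short")
    salt = secrets.token_bytes(16)
    with open(CRED_FILE, "wb") as f:
        f.write(salt + _hash_password(password, salt))

def remote_access_allowed() -> bool:
    # The device refuses remote logins until a unique owner credential exists,
    # so it never operates with a shared factory password.
    return owner_password_is_set()

if __name__ == "__main__":
    print(remote_access_allowed())            # False on first boot
    set_owner_password("a-long-unique-pass")
    print(remote_access_allowed())            # True once the owner sets one
```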

Many cybersecurity experts, including Stephen Ridley, the founder and chief technology officer of Senrio, which helps organizations detect and track devices on their computer networks, say the law is “a good start” but will have to be amended over time.

Ridley compares today’s challenges in securing IoT to Microsoft’s journey from its release of Windows 95 to developing its most secure operating system to date, Windows 10. “To lock it down, we’re looking at 10 to 20 years out. And that’s on computers we know we’re using every day. IoT are the computers you don’t realize you’re using every day.”

Given connected devices’ often personal uses—tracking our heartbeats, lighting our homes, monitoring our milk consumption—Ridley says consumers should think of IoT security as a form of personal hygiene. Yet most manufacturers aren’t doing the IoT equivalent of washing hands and brushing teeth. They rarely implement essential basics, such as creating unique passwords for devices or detailing software components, he says.

Perhaps not coincidentally, medical-device makers have made more progress toward securing their products than any other category of IoT vendor. After years of discussions and debates with manufacturers, security experts, and medical administrators, the U.S. Food and Drug Administration has recently taken steps to push for more secure devices without potentially cumbersome legislation. It now offers pre- and post-market guidance to medical-device makers, to which some major device manufacturers appear to be adhering.

A hacker looks for cybersecurity vulnerabilities on medical devices at DefCon’s BioHacking Village, August 11, 2018. Photo by Seth Rosenblatt/The Parallax

Stephanie Domas, executive vice president at medical-device cybersecurity company MedSec, says the FDA was instrumental in getting hospitals and device makers to agree on standards.

“In the beginning, it felt like I was mediating couples therapy, there was so much misinformation on both sides,” she says. “We’re trying to get more towards baseline features and expectations. The way devices update [with security patches] or having unique passwords becomes the norm.”

One security expert who has worked in the medical industry for more than a decade, and requested anonymity to speak without their employer’s permission, says the FDA’s guidance has played a hard-to-quantify but integral role in improving medical-device security.

“When I started in this industry, there was very little thought of how to protect a network-connected device. Now, several companies have changed that thought process entirely and are doing positive things to make protecting the devices easier. Many have even created product security officers or offices,” the expert said in a written statement. “While I can’t explicitly cite this as caused by the guidance, I’m sure this has played a part [in] these companies’ decisions. The industry as a whole seems to be moving in a much better direction.”

“By making these decisions visible, decision makers can encourage their suppliers to, in turn, make better decisions.”—Allan Friedman, director of cybersecurity initiatives, National Telecommunications and Information Administration.

Concurrent with the FDA’s approach have been efforts at the Commerce Department’s National Telecommunications and Information Administration to create community-driven guidance for a software bill of materials (SBOM), which would encourage manufacturers to share details of the software components that power large programs, as well as connected devices.

Allan Friedman, the National Telecommunications and Information Administration’s director of cybersecurity initiatives, applauds significant work accomplished in 2019 toward defining what goes into an SBOM, how to create it, what it should be used for, and how it could be used with a specific class of devices—in this case, medical devices.
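To make the idea concrete, an SBOM is essentially a machine-readable inventory of the components and versions baked into a device’s firmware. A rough sketch (field names are illustrative, loosely echoing common SBOM formats rather than any mandated schema):

```python
# Rough illustration of what a minimal software bill of materials might record
# for one firmware image. Field names are illustrative, not a mandated schema.
import json

sbom = {
    "device": "example-router-firmware",
    "version": "2.4.1",
    "components": [
        {"name": "busybox", "version": "1.31.1", "supplier": "BusyBox project"},
        {"name": "openssl", "version": "1.1.1d", "supplier": "OpenSSL project"},
        {"name": "dnsmasq", "version": "2.80", "supplier": "Simon Kelley"},
    ],
}

# Downstream buyers or auditors could scan this list against known-vulnerable
# versions without ever unpacking the firmware itself.
print(json.dumps(sbom, indent=2))
```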

Open questions to confront this year include how to automate the process, and how to encourage manufacturers to adopt SBOMs to better secure their products. Public transparency might encourage better security practices by creating incentives to use up-to-date, secure, high-quality components throughout the supply chain, Friedman says.

“By making these decisions visible, decision makers can encourage their suppliers to, in turn, make better decisions,” Friedman argues. “An SBOM can be helpful because devices can be black boxes. It doesn’t require a large testing model, and doesn’t require you paying [for] consultants or certifications or testing labs to improve your security.”

Ultimately, consumers will feel the impact of any IoT security improvements most directly, whether they’re enforced by government or endorsed by manufacturers.

For Steve James, a Brisbane, Australia-based motorcyclist who has ordered Forcite’s MK1 helmet, that security comes down to trust. He’d be “wary” of a similar helmet from a Chinese company. “I cannot be sure how secure it is,” he says, or feel safe from “having my info hijacked or sold.” He trusts Forcite, a company that doesn’t yet have products on the market, he says, because of how it has engaged with early adopters.

“When they design something, they put it to us and say, ‘What do you think?’” Many other device makers, he says, reveal the bare minimum about how they’re integrating technology and using the data it generates. “It’s all a big secret.”

Update, January 16 at 2:00 p.m. PST: Clarified the specifics of Brian Knopf’s comments on security testing.
