Top 7 Security Mistakes When Designing a Mobile App

Feb. 10, 2017
As transit systems embrace mobile apps as a new way of connecting with customers, it is crucial that they prioritize security.

Mobile apps — and in particular mobile-based payment systems — can be a double-edged sword for mass transit systems. While mobile apps offer new opportunities for riders and transit authorities, they may also increase the risks of hacking, identity theft, fraud, extortion and service disruption if not handled correctly.

Transit authorities must consider the full slate of risks with mobile apps before launching these products into the market. Attacks on mobile devices, apps and the back-end servers that run this software are increasingly common, and these platforms will continue to be a top target for cybercriminals. In fact, the majority of Internet security reports (McAfee, Symantec, TrendMicro, Kaspersky, Verizon, etc.) have noted a steady increase in mobile attacks and mobile malware since at least 2012, and most predict this trend will worsen in the coming years as mobile technology becomes even more ubiquitous.

The risks to transit users should be obvious, not only because of the recent “ransomware” attack on the San Francisco Municipal Transportation Agency (SFMTA), but also due to the steady rise of high-profile attacks, fraud attempts and other problems involving popular mobile apps in recent years, including Starbucks, Samsung’s LoopPay, Venmo and others.

Central to this threat are failures in mobile app design, as security mistakes are rampant in the mobile development space. Consider NowSecure’s 2016 Mobile Security Report, which found that 25 percent of mobile apps include at least one high risk security flaw. Additionally, Symantec’s 2016 Internet Security Threat Report found a 214 percent increase in new mobile vulnerabilities since 2013.

It is critical that mass transit authorities address these common security mistakes in order to protect their riders, as well as their own networks and systems, when making mobile apps available.

Here are seven security mistakes to avoid:

Failing to understand how the app puts users, devices and systems at risk.

The first step is to understand the full set of risks the transit authority and its customers may encounter through the mobile app. “Threat modeling” is an exercise that will help the organization to understand potential threats and attacks, allowing it to develop both mitigations and contingency measures up front.

Riders are primarily at risk of personally identifiable information (PII) theft, financial theft/fraud and credential theft (i.e., login and password). The transit system itself is at risk of attacks on the app’s back-end or cloud-based services, where attackers seek to steal data or disrupt services. Attacks such as denial-of-service, data theft, ransomware and defacement are all possible, depending on the hacker’s motivations.

Not baking security into the app’s design.

Security often takes a backseat to other considerations like cost, usability, functionality and time-to-market when it comes to mobile app design.

This is the exact opposite of what should happen. Within the information security community, it is well known that an ad hoc and reactive security program (i.e., patching vulnerabilities instead of avoiding them in the first place) is more expensive than designing secure code from the start.

When designing an app, refer to the OWASP Top 10 list of mobile app vulnerabilities to make sure every single one of these is accounted for in planning and design. Additionally, consider using a Security Development Lifecycle (SDL) process to ensure the app’s code is secure by design, by default and in deployment.

Inadequate security testing.

Companies often fail to undertake rigorous security testing of a new app before making it public. These tests are necessary in all cases, even when security has been baked into the app.

Testing should include thorough vulnerability scans as well as a “penetration test” of the app by skilled professionals, often run as a “black box” test in which the testers start with no inside knowledge of the system. A penetration test simulates real-world attacks by criminal hackers and is the best way to make sure the app is sufficiently secure. However, this type of testing requires specialized expertise, which is why most companies will hire outside security testing firms.

Using weak or no encryption to protect user data.

Developers often make basic mistakes with encryption (35 percent of mobile device communications go unencrypted, according to NowSecure), which in this case could expose transit riders to PII theft, financial fraud, account takeovers and more.

To avoid this pitfall, make sure the app provides “end-to-end” SSL/TLS encryption of all data as it is transmitted between the phone and the back-end server/cloud. (Note: In the past few years, a number of SSL vulnerabilities have come to light. To alleviate this, follow current industry best practices for the most secure SSL/TLS configuration, such as disabling legacy protocol versions.) Data that does not leave the phone (“data at rest”) must also be protected, preferably with encryption that is built directly into the device platform itself.
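As one illustration of the transport-side half of this advice, a back-end client in Python can enforce certificate validation and a modern protocol floor before any data is sent. This is a minimal sketch, not a prescription from the article: the specific minimum version and trust store are deployment decisions.

```python
import ssl

def make_secure_context():
    """Build a client-side TLS context that refuses legacy SSL/TLS
    versions and requires certificate validation. A sketch; real
    deployments may also add certificate pinning on the device."""
    ctx = ssl.create_default_context()            # verifies certs against the system CA store
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3/TLS 1.0/1.1
    ctx.check_hostname = True                     # reject mismatched hostnames
    ctx.verify_mode = ssl.CERT_REQUIRED           # never connect unverified
    return ctx
```

A context built this way can be passed to the standard library’s HTTPS machinery (or an equivalent option in the HTTP client of choice), so every connection to the back-end inherits the same floor.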

Poor secrets management that exposes credentials, API keys and private certificate keys.

Mobile apps regularly expose sensitive data, which criminal hackers can use to compromise the user or the app itself.

Attackers will use a number of tricks to try to pull out these secrets, so it is important that the app is fully vetted against them. For instance, does the app accidentally store logins/passwords in a plain-text file on the phone? Does it reveal too much information through its logs or crash reports, which hackers can use to find weaknesses in the code? Are credentials hard-coded directly into the app’s code? Is an API key baked into the binary?

The best recommendation is to avoid storing any secrets, keys, passwords or certificates in source code or configuration files as these could be leaked to the public.
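To make that recommendation concrete, the sketch below reads a key from the process environment at startup instead of embedding it in source. `TRANSIT_API_KEY` is a hypothetical name, and on a real device the platform keystore (e.g., Android Keystore or iOS Keychain) is the better home for such secrets; this simply illustrates the principle of keeping them out of the codebase.

```python
import os

def load_api_key():
    """Fetch the API key from the environment rather than hard-coding
    it in source or a config file checked into version control.
    TRANSIT_API_KEY is an illustrative, assumed variable name."""
    key = os.environ.get("TRANSIT_API_KEY")
    if not key:
        # Fail loudly at startup instead of running with no credential.
        raise RuntimeError("TRANSIT_API_KEY is not set; refusing to start")
    return key
```

The same pattern applies to certificate keys and database passwords: the code references a name, and the secret itself lives in the deployment environment or a dedicated secrets store.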

Unnecessary features that add risks.

Limit the app’s features and permission requests to only what is necessary.

For instance, a transit authority might be tempted to require access to GPS data on the user’s phone, in order to alert them to nearby transit stops. It might also choose to add web content inside the app, using a feature like UIWebView.

However, by increasing the app’s features, a company also increases its “attack surface” (i.e., more code = more problems), which weakens its security. Additionally, the more private user data the app accesses, stores or uses, the more damaging any subsequent breach will be.
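One way to keep the permission list honest is an automated release check that compares what the app actually requests against an approved minimal set and flags anything extra. The permission names below are illustrative assumptions, not taken from a real manifest:

```python
def excess_permissions(requested, required):
    """Return, sorted, any requested permissions beyond the approved
    minimal set -- candidates for removal before release."""
    return sorted(set(requested) - set(required))

# Hypothetical example: the app truly needs only network access,
# but the build also requests fine-grained location.
approved = {"INTERNET"}
requested = {"INTERNET", "ACCESS_FINE_LOCATION"}
extras = excess_permissions(requested, approved)  # ["ACCESS_FINE_LOCATION"]
```

Wiring a check like this into the build pipeline turns “limit the app’s permissions” from a one-time review into a guardrail that catches scope creep in later releases.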

Failing to develop a security incident response plan.

Unfortunately, there is no such thing as 100 percent secure code. Even when security is built into the app from the start, and everything else is done right by the developer team, future vulnerabilities will emerge.

It is therefore critical for transit authorities to develop solid contingency plans. These should include three components: (1) have incident monitoring tools in place (such as IDS/IPS, SIEM, exfiltration monitoring, etc.) that can detect unusual activity on the back-end network as early as possible; (2) designate an internal or external incident response team that can react immediately to any discovered threats; and (3) have procedures and policies in place to limit the damage from a successful attack, such as shutting down parts of the network.

In conclusion, transit systems should not be afraid to embrace mobile apps as a new way of connecting with customers, but it is important that they prioritize security. The best approach is to assume the app will eventually be breached: developing and operating from this mindset will ensure a high level of preventive and contingency defenses that will limit an organization’s risks. By building security in from the start, testing for weaknesses and planning for the worst, transit authorities can achieve a high degree of confidence in their mobile apps.

Chris Weber is co-founder of Casaba Security.