Can a Programming Language Reduce Vulnerabilities?

Rust offers a safer programming language, but adoption is still a problem despite recent signs of increasing popularity.

When Microsoft wanted to rewrite a security-critical network processing agent to eliminate the memory-safety vulnerabilities causing recurring headaches for the Microsoft Security Response Center (MSRC), the company tasked an intern with rewriting the code in Rust.

Rust, a programming language that has claimed the title of “most loved” among developers for five years in a row, could change the vulnerability landscape by practically eliminating certain types of memory-safety errors. The language’s claim to fame is that it provides the speed and control of C and C++ while delivering the security and safety guarantees of higher-level languages, such as Go and Python. Nearly 70% of the vulnerabilities the MSRC processes are classified as memory-safety issues, so eliminating that class of vulnerabilities is critical.

Discussing his newly found preference for Rust, Alexander Clarke, the MSRC software intern, stated in a blog post that, while it may be easier to write a program that will compile in C++, the resulting program is more likely to have errors and vulnerabilities.

“The [Rust] compiler’s error messages are justly famous for how useful they are,” he says. “Through the error messages, Rust enforces safe programming concepts by telling you exactly why the code isn’t correct, while providing possible suggestions on how to fix it.”

More than a decade after Mozilla adopted and began rewriting code for its Firefox browser using Rust, the language may be ready to take off. While adoption continues to be anemic — only 5.1% of developers use the Rust language, according to the “StackOverflow 2020 Developer Survey” — a number of large companies have committed to using Rust in specific development projects. 

The Mozilla Foundation shipped code developed using the language in its Firefox browser starting in 2016. In 2019, Microsoft stated its intention to adopt Rust more widely for writing system software in Windows. And in February, Mozilla spun off the project to be managed by the new Rust Foundation, with founding sponsors Microsoft, Google, Amazon, and Huawei.

Why the increasing popularity? It’s not just about speed and security, at least not for developers, says Ashley Williams, interim executive director of the Rust Foundation.

“My joke answer is that we have an animal mascot,” she laughs. “In reality, when people talk about loving Rust, there is the language and the compiler, but also the notion that the community should be welcoming and the package management should be first-class. There are all these values that people appreciate.”

For companies, the decision boils down to what Rust does not allow. When the language is properly used, the compiler alerts on, and refuses to compile, certain coding patterns that lead to buffer overflows, use-after-free vulnerabilities, double-free memory issues, and null-pointer dereferences.

“You make a blood pact with the compiler,” says Williams. “You write your code in a specific way so the compiler knows your code is correct.”
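That pact can be seen in miniature in Rust’s ownership rules. The sketch below (the “consume” function is a hypothetical helper, not from the article) shows the pattern the compiler enforces: once a value’s ownership moves, the old binding can no longer be used, which rules out use-after-free and double-free bugs at build time rather than at runtime.

```rust
// Minimal sketch of Rust's ownership rules; `consume` is a
// hypothetical helper that takes ownership of its argument.
fn consume(s: String) -> usize {
    s.len()
} // `s` is dropped here, and its memory is freed exactly once

fn main() {
    let greeting = String::from("hello");
    let n = consume(greeting); // ownership of `greeting` moves into `consume`
    // println!("{}", greeting); // compile error: borrow of moved value
    println!("{}", n);
}
```

Uncommenting the second use of “greeting” turns what would be a latent runtime memory bug in C or C++ into a compile failure, which is the trade the “blood pact” describes.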

For Microsoft, the errors that Rust can prevent account for the majority of vulnerabilities for which the company assigns Common Vulnerabilities and Exposures (CVE) identifiers. Using the programming language to build its core system components can help reduce a major source of vulnerabilities, said Ryan Levick, principal cloud developer advocate at Microsoft, in a blog post.

“We believe Rust changes the game when it comes to writing safe systems software,” he said. “Rust provides the performance and control needed to write low-level systems, while empowering software developers to write robust, secure programs.”

Yet programming languages promising extra security have not always done so.

In January 1996, Sun Microsystems announced Java 1.0. The language boasted portable code — as in “write once, run anywhere” — but Sun also touted a number of security attributes, such as automated memory management — that is, “garbage collection” — as well as type safety and the ability to isolate applets from modifying system resources.

Fast forward to today. With adoption at about 40%, Java is the fifth most-used language — behind JavaScript, HTML/CSS, SQL, and Python, according to the StackOverflow survey. However, Java programs accounted for 15% of the more than 6,000 vulnerabilities found in open source components in 2019, behind C, which accounted for 30%, and PHP, which accounted for 27%, according to “The 2020 State of Open Source Security” report published by software security firm WhiteSource.

Java shows that developers, in the name of efficiency, often will not use security features and instead continue to create insecure code. 

Rust is more opinionated in its approach than Java, but it likely will not escape having its security guarantees undermined by developers. While Rust provides memory safety, it also offers a way around it: the “unsafe” keyword. The keyword lets a developer override the compiler, preventing it from checking a block of code, ostensibly because the developer asserts the code is safe.
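As a rough illustration (the “read_raw” helper here is hypothetical, not from the article), dereferencing a raw pointer is one operation Rust permits only inside an unsafe block, where the developer, not the compiler, vouches for memory safety:

```rust
// Hypothetical example: reading through a raw pointer requires `unsafe`.
fn read_raw(x: &i32) -> i32 {
    let p: *const i32 = x; // creating a raw pointer is ordinary, safe code
    unsafe { *p } // dereferencing it is only allowed in an `unsafe` block
}

fn main() {
    let answer = read_raw(&42);
    println!("{}", answer);
}
```

Here the dereference happens to be valid because the reference guarantees the value is live, but the compiler does not verify that. If the developer’s assertion were wrong, the unsafe block is exactly where a memory error would slip in.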

Many Rust enthusiasts — “Rustaceans,” as they are called — argue that overuse of the keyword undermines the Rust model. While the debate is nuanced, Williams understands the point.

“There are people who use the unsafe block in a way that is unsafe,” she says. “If you put something in the unsafe block, the compiler won’t check it, and if you are wrong then you could introduce a memory error.”

Yet, she points out, even when developers use the override correctly, vulnerabilities will likely creep into their programs, and, because security researchers and hackers tend to find the problems developers leave behind, those vulnerabilities will be found. Case in point: The Rust-focused security site RustSec lists more than 250 vulnerabilities in Rust packages — or “crates” — and in the language itself.

“The vulnerability landscape is not an absolute one, so there are always new vulnerability areas,” says Williams. “Some languages can be safer than others, but … there is no such thing as a fully secure system, especially if your target language has a lot of hackers looking at it.”

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …
