Researchers design new solution to widespread side-channel attacks

The proposal provides a chip-level safeguard against sensitive data being transmitted after it’s accessed.


A team of Michigan researchers has designed a new technique that could put a wide category of computer processor vulnerabilities to rest. Called speculative execution attacks, these exploits have been shown to expose sensitive information on Intel chips and cloud servers through a loophole in how modern processors handle instructions. The researchers’ proposal, called Non-speculative Data Access, or NDA, provides a chip-level safeguard against sensitive data being transmitted after it’s accessed.

The team comprises Profs. Baris Kasikci and Thomas Wenisch and PhD students Ofir Weisse, Ian Neal, and Kevin Loughlin. They presented their work, titled “NDA: Preventing Speculative Execution Attacks at Their Source,” at the ACM International Symposium on Microarchitecture (MICRO 2019).

In an effort to increase processor efficiency, the order in which a program’s operations are performed has been made “smarter” over time. While early processors performed all of a program’s steps strictly in the order provided (so-called in-order processors), innovations like pipelining and out-of-order execution allow, for example, the data needed by one step to be retrieved while a previous step is still being executed.

Speculative out-of-order execution took this approach to a new level, allowing a program to retrieve and operate on data it hasn’t confirmed should actually be accessed. Specifically, when the program reaches a branch, such as an if-statement, the processor guesses which way it will go (true or false). Many common program branches have predictable outcomes, and the processor boosts performance by speculatively assuming the program’s flow. If the speculation was wrong, the processor can revert its steps and perform a do-over.
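
As a concrete illustration (a hypothetical sketch, not code from the paper), the C snippet below shows the kind of bounds-checked read a processor may run ahead of; the names and sizes are made up for the example.

```c
/* A bounds-checked read of the kind a processor may run ahead of.  If the
 * branch predictor guesses "in bounds," the load below can execute before
 * the comparison actually resolves.  On a misprediction the architectural
 * result is discarded, but microarchitectural side effects, such as the
 * cache line the load pulled in, are not undone. */
#include <stddef.h>
#include <stdint.h>

static uint8_t table[16];
static size_t  table_size = 16;

static uint8_t checked_read(size_t i)
{
    if (i < table_size)        /* the "if-statement" the processor predicts */
        return table[i];       /* may be fetched speculatively              */
    return 0;
}

int main(void)
{
    return (int)checked_read(3);   /* returns 0; the point is the pattern above */
}
```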

When the odds of guessing correctly are high, performing the work speculatively offers a large performance boost despite the occasional do-over it requires.

“It turns out if you’re right 99% of the time, even with the throwaway penalty, you’re still more efficient than if you weren’t guessing in the first place,” says Loughlin.
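
The payoff is easy to observe with a small experiment. The C program below (an illustration, not from the paper) does identical work on the same values twice; sorting the data first makes the branch highly predictable, and on most modern CPUs the second pass runs noticeably faster.

```c
/* Same work, same data: when the data is sorted, the branch inside
 * sum_big() becomes highly predictable and the loop runs faster,
 * which is the payoff Loughlin describes for guessing correctly. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static int cmp(const void *a, const void *b)
{
    return *(const unsigned char *)a - *(const unsigned char *)b;
}

static long long sum_big(const unsigned char *data)
{
    long long sum = 0;
    for (int pass = 0; pass < 100; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)     /* hard to predict on random data */
                sum += data[i];
    return sum;
}

int main(void)
{
    unsigned char *data = malloc(N);
    if (!data)
        return 1;
    for (int i = 0; i < N; i++)
        data[i] = rand() & 0xff;

    clock_t t0 = clock();
    long long s1 = sum_big(data);
    clock_t t1 = clock();

    qsort(data, N, 1, cmp);        /* same values, now the branch is predictable */

    clock_t t2 = clock();
    long long s2 = sum_big(data);
    clock_t t3 = clock();

    printf("unsorted: %.2fs  sorted: %.2fs  (sums %lld %lld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t3 - t2) / CLOCKS_PER_SEC, s1, s2);
    free(data);
    return 0;
}
```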

This process has been found susceptible to a range of hardware-based attacks that work around software safeguards. In particular, performance can be boosted by speculatively retrieving data from memory and saving it in a local on-chip cache, where subsequent operations on that data run much more quickly. If attackers observe the behavior of a processor and how long it takes to perform certain operations, they can determine which data is stored on-chip (moved nearby for faster operations) and which is stored further away in memory. From there, the attacker can infer the values held in the processor’s on-chip cache.
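
The timing difference that makes this observation possible is straightforward to measure. The sketch below is illustrative only, x86-specific, and relies on compiler intrinsics available in gcc and clang; it times an access to the same memory location once while it sits in the cache and once after flushing it.

```c
/* The timing primitive behind cache side channels: a cached access is
 * much faster than one served from memory, so timing an access reveals
 * whether the data was already in the cache. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

static uint8_t probe[4096];

static uint64_t time_access(volatile uint8_t *p)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*p;                      /* the access being timed */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    probe[0] = 1;                              /* bring the line into the cache */
    uint64_t hot = time_access(&probe[0]);     /* fast: cache hit */

    _mm_clflush(&probe[0]);                    /* evict the line */
    uint64_t cold = time_access(&probe[0]);    /* slow: served from memory */

    printf("cached: %llu cycles, flushed: %llu cycles\n",
           (unsigned long long)hot, (unsigned long long)cold);
    return 0;
}
```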

The implication of this exploit is that one application can steal data from another. Further compounding the risk, cloud servers become vulnerable to a malicious client accessing data that belongs to other clients on the same machine.

There are different solutions to this, ranging from the worst for performance (reverting to in-order execution) to those that prioritize performance (patching specific exploitation techniques as they become public). To this point, the latter approach has been greatly preferred.

“The first attack variants were all using the processor’s cache to leak information, so then you started seeing research papers proposing a redesign of the cache to make it impossible to leak information that way,” says Weisse. He and the team say that this approach doesn’t address the fundamental problem, since the cache is only one of many ways to transmit a secret.

“The fundamental problem is that attackers can speculatively access secrets, and then can try to transmit them in numerous ways.”

And of course, other papers have demonstrated that microarchitectural structures besides the cache can be used just as easily for these attacks. As part of their paper’s motivation, the team behind NDA demonstrated leaking secrets through the TLB as well as the cache.

The authors present NDA as one option for that fundamental solution. The technique works by allowing a system to perform only one speculative operation on a piece of data. This means that an instruction to fetch data speculatively will be allowed, but no further processing can be performed with the speculatively-fetched data — this breaks the chain of operations needed to leak secrets.

This preserves some of the performance gain of a fully speculative out-of-order processor, because the program still gets the benefit of fetching data speculatively. But after that, the result of that access can’t be sent anywhere else until the access has been confirmed and is no longer speculative.

“That way we don’t completely inhibit speculation, but the inputs of speculative steps can’t be based on other speculative steps,” says Weisse. “They can be based only on non-speculative steps.”
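
Applied to the bounds-checked pattern shown earlier, extended with the transmission step, the rule plays out as in the annotated sketch below. This is illustrative only; NDA is a hardware policy rather than something written in source code, and the names and sizes here are assumptions.

```c
/* The classic two-load pattern used in speculative execution attacks,
 * annotated with how a propagation rule like NDA's treats each step. */
#include <stddef.h>
#include <stdint.h>

static uint8_t secret_region[16];
static uint8_t oracle[256 * 64];       /* one cache line per possible byte value */
static size_t  region_size = 16;

static uint8_t gadget(size_t x)
{
    if (x < region_size) {
        uint8_t v = secret_region[x];  /* ACCESS: this speculative load is still
                                          allowed under NDA                       */
        return oracle[v * 64];         /* TRANSMIT: this load depends on v, so
                                          under NDA it must wait until the branch
                                          and the first load are no longer
                                          speculative; a mispredicted access
                                          therefore leaves no trace in the cache  */
    }
    return 0;
}

int main(void)
{
    return (int)gadget(0);             /* returns 0; the point is the annotation */
}
```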

This solution essentially halts the process of a speculative execution attack between its first two steps — access (of secret data) and transmission (via cache, TLB, or others). The exploit is useless if speculatively-accessed data can’t be transmitted outside the program. In doing so, NDA overcomes the key problem with defenses that tackle specific technical faults — the solution is agnostic to the transmission channel used by the attacker.

While the technique comes with a performance cost of 10-20%, the system’s partially speculative execution still greatly outperforms naive solutions (i.e., reverting to in-order execution), which can be over 400% slower than current processors.

As to whether this solution or something like it will ultimately put an end to speculative execution attacks as we know them today, the authors are open to speculation.

“I think at the end of the day there are going to be provably secure ways to mitigate all of these attacks,” says Loughlin, “but I think people will make a conscious security-performance tradeoff to leave themselves open to certain things that they view as not feasible. I think that there will be an end from a conceptual standpoint; I have no idea in practice.”
