Why we don’t generate elliptic curves every day

Original link: https://words.filippo.io/dispatches/parameters/

Cryptography engineer and writer Filippo Valsorda published a blog post titled “Why we don’t generate elliptic curves every day.” Valsorda explains that while custom parameters may seem attractive in theory, they introduce security problems and offer little benefit relative to the work involved in creating and validating them. He highlights the complexity that validating custom parameters adds during protocol negotiation, and stresses that every extra degree of control handed to an attacker is an opportunity to build a better attack. He also notes that system administrators must generate and configure custom parameters as a separate operation, whereas fixed parameter sets let implementers optimize their code for performance and safety. Ultimately, the post argues that standardization and uniformity provide greater consistency and reliability for cryptographic systems, and that the theoretical benefits of per-user parameter generation are negligible compared to the considerable risks they carry.

The post traces concrete failures caused by custom parameters. The 2015 Logjam attack showed that a nation-state attacker could precompute against popular 1024-bit Diffie-Hellman parameters, but its worst component was a downgrade enabled precisely by custom parameter negotiation: a man-in-the-middle could get the server to sign weak “export” parameters that the client had no standardized grounds to reject. A 2020 Windows vulnerability similarly allowed TLS man-in-the-middle attacks and X.509 spoofing by exploiting attacker-controlled custom curves. By contrast, fixed parameter sets make whole classes of attacks impossible, reduce negotiation to choosing among named curves or groups, and let implementers generate specialized, constant-time arithmetic for a fixed prime instead of relying on slower, more complex generic big-integer libraries. Valsorda frames the lesson as a corollary of Kerckhoffs’s principle: a cryptosystem should be secure even if all the parameters, except the key, are shared across every user.

Original text

With all the talk recently of how the NIST curve parameters were selected, a reasonable observer could wonder why we all use the same curves instead of generating them along with keys, like we do for Diffie-Hellman parameters. (You might have memories of waiting around for openssl dhparam to run and then configuring the result in a web server for TLS.)

Thing is, user-generated parameters (such as custom elliptic curves) are not safe, and have no significant benefits. This is one of the lessons learned of modern cryptography engineering, and it contradicts conventional wisdom from the ‘90s.

Generating parameters is supposed to help with two things: first, it solves the question of how to pick parameters we can all agree on; second, there’s the idea that if we’re all using different parameters we are not putting all our eggs in the same basket and there isn’t a juicy precomputation target for attackers.

Picking trustworthy standard parameters is not prohibitively hard, and most importantly it is a job for the relatively few people whose job is specifying cryptography, instead of falling on the many many more who use it. Given the opportunity to make some people do a lot of extra work to save a lot of people some work, we should always take it.

Not putting all our eggs in one basket is a consideration that might have made sense in a thankfully gone-by era of cryptography when primitives were somewhat regularly weakened and broken. Back then it might have been reassuring that yeah, an attacker might be able to break one key, but maybe they won’t get to break them all, and hopefully the damage will be limited. Today, we consider it completely unacceptable for even a single key to fall to cryptanalysis (as opposed to implementation error or side channel analysis), and we design systems accordingly. For example, device manufacturers embed the same public key in all their devices, and every mailbox user is protected by the same certificate (and really by the same root certificate authority keys), and so on.

Even more generally, it’s really not of any consolation to hear that not everyone’s key is broken if your key is broken. Especially when whose key gets broken depends only on who the attacker concentrates their resources on, rather than on random chance.

The last time I can remember when custom parameters helped in practice was in 2015, for the Logjam attack. The researchers pointed out that a nation-state attacker could do a large pre-computation to target some very popular 1024-bit Diffie-Hellman parameters. However, the better takeaway was that 1024-bit Diffie-Hellman was just too weak to be used at all. Also, as we will see later, the custom parameter negotiation introduced complexity that led to the worst parts of the attack.

In modern times, if a scheme is so close to the brink of failure that you need to hedge by saying that not all keys will fall at once, we just call that broken. It could be a corollary of Kerckhoffs’s Principle, which says that a cryptosystem should be secure even if everything about the system, except the key, is public knowledge:

A cryptosystem should be secure even if all the parameters, except the key, are shared across every user.

Ok, so generating parameters doesn’t help much, but isn’t it better than nothing? No, custom parameters are much worse than nothing.

First, it’s usually a very slow process: openssl dhparam 2048 takes more than 17 seconds on my M2 machine, and the docs of dsa.GenerateParameters say

This function can take many seconds, even on fast machines.

This means it can’t be done on the fly, but needs to be a separate operation handled and configured by the system administrator.

Second, and most importantly, verifying the validity of parameters is even harder than generating them. For example, picking a random prime is way easier than adversarially checking if a given number is prime. This adds a tremendous amount of complexity to the security-critical, attacker-reachable hot path. Any degree of freedom given to the attacker is an opportunity to build a better attack, any required runtime check is an opportunity for an implementation bug.

There are whole classes of attacks that are just impossible given fixed parameters, such as the 2020 Windows vulnerability that allowed complete TLS MitM and X.509 spoofing by exploiting custom curves. The beauty of that attack is that the parameters weren’t even invalid, but simply controlling the parameters allowed the attacker to fake signatures. On the lower end of the severity spectrum, there’s been a string of DoS vulnerabilities because uncaught parameter edge cases could break expectations of surrounding code and cause crashes or extremely slow operations.

This is ultimately a big part of what made DSA much less popular and safe than RSA and ECDSA. ECDSA is not the best signature algorithm, by far, but at least it (usually!) doesn’t require generating and validating parameters.

Moreover, when doing negotiation in a protocol, it’s much simpler (and hence safer) to pick between curves A, B, or C or groups 1, 2, or 3 than it is to pick arbitrary parameters. For the former there’s the tried and proven method of having the client advertise support and the server pick. It’s not foolproof and can lead to downgrades without a transcript, but (unfortunately but sometimes unavoidably) most protocols already do many dimensions of parameter negotiation like that. For arbitrary parameters the client expresses some complex or incomplete preferences (if you are lucky), the server produces the parameters, and the client has to check they are valid and compliant with the preferences.

For example, the worst part of the Logjam Attack was a downgrade where a MitM convinced the server to pick and sign weak Diffie-Hellman parameters (by requesting “export” cipher suites, even if the client didn’t support them), and then broke them and retroactively fixed the transcript. Had the DH groups been fixed and standardized, the client would have just rejected the unsupported groups injected by the MitM, but instead here the client had to just say “huh, I guess the server really likes these weak parameters, at this point I either go along with it or break the connection”. This hints at an even deeper issue in how DH parameters are negotiated in TLS 1.0–1.2, which is part of why finite field DH is being deprecated in favor of elliptic curve DH: there is no way for the client to express any opinions on the group selection, it can only accept the server’s choice or disconnect, too late in the handshake to select an alternative key exchange. This is also a direct consequence of the lack of standardized groups: with standardized groups the client could have listed the ones it supports, and the server could have refrained from picking DH if there was no acceptable overlap, like ECDH curves always worked. None of these are really intrinsic flaws of the finite field Diffie-Hellman primitive: DH is somewhat less efficient than ECDH, but otherwise perfectly serviceable. The issue is that DH was traditionally specified with custom parameters (groups) while ECDH was almost always specified with standardized curves, so the former ended up much less safe than the latter.

Finally, always operating over the same parameters allows implementers to target and optimize code, using tools like fiat-crypto to generate arithmetic code specifically for operations modulo a fixed prime, instead of having to resort to generic big integer libraries, which are necessarily slower and often more complex and not constant time. Fixed fields let us optimize memory allocations, multiplication chains for inversions, low-level carry arithmetic, and so on. An optimized P-256 curve implementation will always be faster than a generic Weierstrass curve implementation, and often safer, too.

In conclusion, user-generated parameters are a legacy design that proved to be much more trouble than they’re worth, and modern cryptography is better off with fixed parameter sets.

If you got this far, you might want to follow me on Bluesky at @filippo.abyssdomain.expert or on Mastodon at @[email protected].

Il Ponte Rotto, the Broken Bridge of Rome, seen from Tiber Island. This easily overlooked structure in the middle of the river, hidden by vegetation, is all that's left of what was two thousand years ago the longest and most important bridge over the Tiber. It was destroyed many times over, to the point that there are legends about it being cursed (article in Italian, but well worth a read, Google Translate does a good job). It hosted at times an aqueduct, a chapel, and even a hanging garden. One of my favorite spots.


My awesome clients—Sigsum, Protocol Labs, Latacora, Interchain, Smallstep, Ava Labs, and Tailscale—are funding all my work for the community and through our retainer contracts they get face time and unlimited access to advice on Go and cryptography.

Here are a few words from some of them!

Latacora — Latacora bootstraps security practices for startups. Instead of wasting your time trying to hire a security person who is good at everything from Android security to AWS IAM strategies to SOC2 and apparently has the time to answer all your security questionnaires plus never gets sick or takes a day off, you hire us. We provide a crack team of professionals prepped with processes and power tools, coupling individual security capabilities with strategic program management and tactical project management.

Ava Labs — We at Ava Labs, maintainer of AvalancheGo (the most widely used client for interacting with the Avalanche Network), believe the sustainable maintenance and development of open source cryptographic protocols is critical to the broad adoption of blockchain technology. We are proud to support this necessary and impactful work through our ongoing sponsorship of Filippo and his team.
