(Comments)

Original link: https://news.ycombinator.com/item?id=40044665

This article discusses a well-known problem in cryptography: the hidden number problem in DSA (Digital Signature Algorithm) and ECDSA (Elliptic Curve Digital Signature Algorithm) signatures. The problem stems from confusion between the randomly chosen number and the modulus in a cryptosystem. While a longer modulus may intuitively seem more secure, its extra bits can actually introduce a weakness if only part of the value is random. When generating a digital signature with DSA or ECDSA, a random nonce "k" must be chosen for each signature. However, due to a distinctive pattern in the nonce-generation process, certain high-order bits of the nonce were always zero, leaving the signatures vulnerable to attack. Such attacks can recover the private key, potentially allowing unauthorized access to protected data. The text cites a concrete instance of this flaw found in PuTTY, a popular SSH client whose wide deployment gives the issue broad impact. The problem arose from the application's attempt to derive sufficient entropy from the limited sources available on Microsoft Windows, which ultimately produced a predictable pattern in the generated numbers. By analyzing multiple signatures produced with the same private key, an attacker can identify these biases and exploit the underlying vulnerability. It is essential to use robust, reliable methods for generating truly random numbers in cryptographic applications, to keep sensitive information transmitted or processed with these techniques secure. In addition, ongoing vigilance and testing are necessary to maintain the integrity and security of these critical systems.

Related articles

Original article


This is one of the all-time cryptography footguns, an absolutely perfect example of how systems development intuition fails in cryptography engineering.

The problem here is the distinction between an n-bit random number and an n-bit modulus. In DSA, if you're working with a 521-bit modulus, and you need a random k value for it, k needs to be random across all 521 bits.

Systems programming intuition tells you that a 512-bit random number is, to within mind-boggling tolerances, as unguessable as a 521-bit random number. But that's not the point. A 512 bit modulus leaves 9 zero bits, which are legible to cryptanalysis as bias. In the DSA/ECDSA equation, this reduces through linear algebra to the Hidden Number Problem, solvable over some number of sample signatures for the private key using CVP.
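A minimal sketch of that bias in Python (using 2^521 - 1 as a stand-in for the actual 521-bit group order; the real order is a different 521-bit number, but the arithmetic effect is identical):

  import secrets

  q = 2**521 - 1                       # stand-in for the 521-bit group order
  for _ in range(1000):
      k = secrets.randbits(512) % q    # "reduce" a 512-bit value mod a 521-bit q
      assert k >> 512 == 0             # bits 512..520 are always zero: 9 bits of bias

Since every 512-bit value is already smaller than q, the reduction is a no-op and the top 9 bits of k never vary.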

Later

Here you go, from Sean Devlin's Cryptopals Set 8:

https://cryptopals.com/sets/8/challenges/62.txt



What's really interesting to me is that there was a known solution to the DSA/ECDSA nonce generation problem, RFC 6979, which was published 4 years before the vulnerability was introduced into PuTTY. And it sounds like the developer knew about this RFC at the time but didn't implement it because the much earlier version of deterministic nonce generation that PuTTY already had seemed similar enough and the differences were assessed to not be security critical.

So I think the other lesson here is that deviating from a cryptographic right answer is a major footgun unless you understand exactly why the recommendation works the way it does and exactly what the implications are of you doing it differently.



I think 6979 is a bit of a red herring here. 6979 is about deterministic nonce generation, which is what you do to dodge the problem of having an insecure RNG. But the problem here isn't that the RNG is insecure; it's more fundamentally a problem of not understanding what the rules of the nonce are.

But I may be hair-splitting. Like, yeah, they freelanced their own deterministic nonce generation. Either way, I think this code long predates 6979.



I was going based off the commit link posted further down the thread (https://git.tartarus.org/?p=simon/putty.git;a=commitdiff;h=c...). You're right that the PuTTY deterministic nonce generating code does appear to significantly predate the RFC, but it sounds like the developer made a conscious decision when 6979 came out not to switch from what PuTTY already had because they looked similar enough. The PuTTY release that introduced support for elliptic curve cryptography (and introduced this vulnerability) was 0.68, which came out in 2017, 4 years after the RFC.

You're right that this was not an RNG security problem and instead was a problem with not understanding ECDSA nonce rules. However the notable fact for me was that the developer was apparently aware of the recommended way to deterministically generate nonces prior to the vulnerability being introduced and made a choice not to implement the RFC because what PuTTY was already doing seemed close enough, without fully understanding the implications of doing so. To put it another way, understanding ECDSA nonce rules would have avoided this vulnerability, but so too would implementing the RFC recommended way even if the developer did not fully understand why it was better than the existing implementation.



Right, the reason I'm splitting hairs is that in my intuition, the whole rationale for 6979 is to eliminate the possibility of fully repeated nonces, for instance because your RNG is "unseeded". You can still end up with a biased nonce even if you're using a perfectly seeded random bit generator (as you can see here). But yeah I think we understand each other.

As you can probably see, I just love this bug class, is all.



> As you can probably see, I just love this bug class, is all.

I agree! DSA nonce issues are a great class of cryptographic bug in that they have sort of weirdly unexpected failure properties when you first hear about them.



And then you find out about special soundness and that this is not only expected behaviour, but crucial to the security definitions and you realise that signatures are absolutely cursed.


RFC6979 attempts to guarantee that the nonce is unbiased (under the assumption that HMAC's output is indistinguishable from random). It's definitely attempting to give a stronger property than simply preventing a repeated nonce.

See step (h) in Section 3.2. The nonce is selected by rejection sampling. Thus, under the above assumption about HMAC, the result is indistinguishable from uniformly random in the [1, q-1] range.
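A simplified Python sketch of that rejection-sampling loop (not the bit-exact RFC 6979 construction, which drives an HMAC_DRBG state machine; this just shows why the accepted value is unbiased):

  import hashlib, hmac

  def sample_nonce(seed: bytes, q: int) -> int:
      # Draw exactly q.bit_length() bits per attempt; reject values outside [1, q-1].
      qlen = q.bit_length()
      nbytes = (qlen + 7) // 8
      attempt = 0
      while True:
          stream, block = b"", 0
          while len(stream) < nbytes:
              msg = attempt.to_bytes(4, "big") + block.to_bytes(4, "big")
              stream += hmac.new(seed, msg, hashlib.sha512).digest()
              block += 1
          k = int.from_bytes(stream[:nbytes], "big") >> (nbytes * 8 - qlen)
          if 1 <= k <= q - 1:
              return k        # accepted: uniform over [1, q-1]
          attempt += 1        # rejected: derive fresh bits and try again

Each attempt is accepted with probability roughly q / 2^qlen > 1/2, so the loop terminates quickly and the accepted value carries no modulo bias.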



I don't think anybody consciously looked at 9 zero bits and thought this was fine; rather, it looks like an unfortunate effect of plugging old code into new algorithms without proper verification.


You could be right. If you look at the old code, dsa_gen_k(), that was removed during the commit (https://git.tartarus.org/?p=simon/putty.git;a=commitdiff;h=c...), it does basically no bounds checking, presumably because at the time it was written it was assumed that all modulus values would be many fewer bits than the size of a SHA-512 output.

So it would have been pretty easy to just reuse the function for a modulus value that was too big without encountering any errors. And the old code was written 15+ years before it was used for P-521, so it's entirely possible the developer forgot the limitations of the dsa_gen_k() function. So maybe there's another lesson here about bounds checking inputs and outputs even if they don't apply to anything you're currently doing.



I mean, bounds checking should really be caught by complete test coverage, shouldn't it? Or fuzzing? It doesn't address the more fundamental problem of cryptanalysis attacks, but it would definitely help mitigate the simple mistakes which can lead to exploitable implementations.


No, it’s actually far worse than that. This is like if you bought prestressed concrete rated for 100kg and you loaded it with 50kg. This is less than the limit, so it’s good, right? Nope, the way it works is that you have to give it exactly 100kg of load or else it’s weak to tension and your building falls over in the wind. The problem here is not that they needed 521 bits of entropy and 512 was too little, but that a 521-bit value of which 512 bits are legit and the top 9 bits are all zeroes breaks the algorithm completely and makes it not secure at all. In fact I think copying 9 bits from the other 512, while not great, would have probably made this basically not a problem. I am not a cryptographer though, so don’t quote me on that ;)


The article has a good writeup. Clear, actionable, concise.

If you have a bit of instinct for this, it feels obvious that 'reducing' a smaller number by a larger one is not going to obscure the smaller one in any meaningful way; instead it will leave it completely unchanged.

I don't think this is so much what you make it out to be as a poor understanding of basic discrete maths. (Also, I think you mean the 521-bit modulus leaves 9 zero bits; the modulus normally refers to the divisor, not the remainder.)

https://mathworld.wolfram.com/Modulus.html



It's not the difference between an n-bit random number and an n-bit modulus. It's the difference between a 512-bit random number and a 521-bit random number. It's very simple, but wording it as number vs. modulus is needlessly confusing, just adding to the problem you are bemoaning.


The issue with cryptography is that you have to be precise; that means the communication needs to involve far more detail, even if it can initially be confusing.

This is one of the major reasons that crypto is hard, and if you try to get around the "hard" bit your "shortcut" will probably come back to bite you. When it comes to crypto and accuracy (and hence security), more communication, and detailed communication, are probably the solution, not the problem.



The very existence of 521-bit ECDSA is a footgun just waiting to go off.

To any programmer who is accustomed to thinking in binary but hasn't heard the full story about why it ended up being such an odd number, 521 is virtually indistinguishable at a glance from the nice round number that is 512. Heck, when I first read about it, I thought it was a typo!



The size is unexpected, but I believe this would have been an issue even if it really was 512-bit ECDSA rather than 521. Taking a random 512-bit number (which is what the PuTTY nonce function produced) modulo another 512-bit number would also bias the output. Not as severely as having 9 bits that are always zero, but enough to be potentially exploitable anyway.

To avoid this issue, you either want your random value to be significantly larger than the modulus (which is what EdDSA does) or you want to generate random values of the right number of bits until one happens to be smaller than the modulus (which is what RFC 6979 does).
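A sketch of the first strategy (a hypothetical helper, not PuTTY's or any library's API): draw at least 64 bits more than the modulus needs, so the residual mod-q bias drops to around 2^-64, which is negligible:

  import secrets

  def oversampled_nonce(q: int, extra_bits: int = 64) -> int:
      # e.g. a 585-bit value reduced mod a 521-bit q is uniform to within ~2^-64
      r = secrets.randbits(q.bit_length() + extra_bits)
      return 1 + r % (q - 1)           # lands in [1, q-1]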



> Systems programming intuition tells you that a 512-bit random number is, to within mind-boggling tolerances, as unguessable as a 521-bit random number.

Sure, but the other half of systems programming intuition tells you "the end user is going to truncate this value to 8 bits and still expect it to be random".



Here it is the difference between a bit that is part of a cryptographically safe random number and just happens to be zero, but had an equal chance of being one, and a bit that is zero every time because of the way the numbers are being generated.


I don't have a substantive comment to offer, but good on Simon Tatham for the clear and forthcoming write-up. No damage-control lawyerly BS, no 'ego', just the facts about the issue.

It's reassuring to see a solid disclosure after a security issue, and we too often see half-truths and deceptive downplaying, e.g. LastPass.



Yes, Simon is a brilliant person (hi Simon!) and would be the last person on earth to do any spin. He also doesn't owe anyone anything, PuTTY was a gift from him to the world when there was no good alternative on Windows, a gift that has had an incalculably large benefit to so many people that no one should forget.


I had the pleasure of meeting him in person, and the guy is just so grounded and nice, interacting with you and helping you with stuff in a non-judgmental way.

Many people I know, with less than 1% of his contributions to OSS, have inflated egos and are just full of themselves, so it is refreshing to have people such as Simon in the OSS community.



I wish this announcement included the backstory of how someone discovered this vulnerability.

Public keys are enough of a pain in the ass with PuTTY / KiTTY that I stick with password auth for my windows SSH'ing needs.

KiTTY even lets you save the passwords so you don't have to type them in. A horrible security practice no doubt, but so convenient... Perhaps more secure than the PuTTYgen'd ECDSA P521 keys? A tad bit ironic.



We found it by investigating the security of SSH as part of a larger research program focussing on SSH, which also resulted in our publication of the Terrapin vulnerability.

This particular bug basically fell into our hands while staring at the source code during our investigation of the security of SSH client signatures.



This vulnerability has very little to do with P-521 per se. The issue is with ECDSA: any use of ECDSA with biased nonce generation, regardless of the elliptic curve it's implemented over, immediately causes secret key leakage.

(Rant: All these years later, we're all still doing penance for the fact that Schnorr signatures were patented and so everyone used ECDSA instead. It's an absolute garbage fire of a signature scheme and should be abandoned yesterday for many reasons, e.g., no real proof of security, terrible footguns like this.)



Schnorr wouldn't have helped in this specific case, since Schnorr is equally vulnerable to biased nonces (https://ecc2017.cs.ru.nl/slides/ecc2017-tibouchi.pdf).

EdDSA, which is essentially deterministic Schnorr, does solve the problem.

Also, the use of P-521 didn't specifically cause the vulnerability, but the bad interaction between SHA512 and P-521 did play a role. It is unfortunate that nature conspired against us to make 2^511 - 1 a composite number. The fact that you have to go up to 521 bits to get a Mersenne prime whereas the natural target length for a hash output is 512 bits is the fatal interaction here.
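Both Mersenne facts are easy to check (a quick sketch using the third-party sympy package):

  from sympy import isprime   # pip install sympy

  print(isprime(2**521 - 1))  # True:  521 is the first Mersenne exponent after 127
  print(isprime(2**511 - 1))  # False: 511 = 7 * 73, so 2**7 - 1 divides 2**511 - 1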



Excellent points all around, and thank you for the pointer to the ECC slides :)

(And indeed, nature could have been kinder to us and given us a Mersenne between 127 and 521...)



> Schnorr signatures

Never heard of them (which probably demonstrates that I know pretty much nothing about cryptography?), so seeing a name spelled like "Schn...r" in this context makes me, at least, think of an entirely different luminary in the area. Thought it was a typo at first.



You still need a source of entropy, which is easier for an OS. An app has to resort to the user moving the mouse or bashing keys, which is a worse UX, although I guess they did that for actual key generation (if PuTTY did that) but it would be annoying to do it every time you made a connection.


Assuming I'm reading it right, this is an absolutely classic vulnerability, something people who study cryptographic vulnerability research would instinctually check for, so what took so long is probably for anyone to bother evaluating the P-521 implementation in PuTTY.


Windows has a built-in ssh-agent included with OpenSSH; no need for Pageant anymore.

The ssh-agent will manage your SSH keys through the Windows registry and the Windows login process.

Also, if you use WSL, you can access your SSH keys in WSL from the Windows ssh-agent via npiperelay



100% agreed.

For me, the stakes are very low. It's my windows "gaming" machine, and has access to a few low-value hosts.

Otherwise I'd invest the time to learn wtf is pageant ;D



If the hosts are under your control, and you never connect to untrusted hosts, then you are OK. The user authentication is encrypted, so the signatures are not visible to a man in the middle.


Sometimes useful reminder: you may not need PuTTY today. On the one side Windows Terminal does a lot of the classic VT* terminal emulation that old ConHost did not. On the other side Windows ships "real" OpenSSH now as a feature that turns on automatically with Windows "Dev Mode". No built in GUI for the SSH agent, but at this point if you are familiar with SSH then using a CLI SSH agent shouldn't be scary. If you are "upgrading" from PuTTY you just need to export your keys to a different format, but that's about the only big change.

PuTTY was a great tool for many years and a lot of people have good reasons to not want to let it go. As with most software it accretes habits and processes built on top of it that are hard to leave. But also useful to sometimes remind about the new options because you never know who wants to be in the Lucky 10K to learn that Windows Terminal now has deeper, "true" terminal emulation or that Windows has ssh "built-in".



I am on a corporate desktop, so I cannot use the Microsoft variant of the ssh-agent:
  C:\Users\luser>ssh-agent
  unable to start ssh-agent service, error :1058
Obviously, getting that changed globally (or even for myself) is impossible.

PuTTY has a workaround, allowing PAGEANT.EXE to be used in place of the forbidden/inaccessible Microsoft agent:

https://tartarus.org/~simon/putty-snapshots/htmldoc/Chapter9...

So PuTTY remains quite relevant because of the mechanisms that Microsoft has chosen.



I'm sorry that you need to work around the inability to run a simple Windows service because of some mistakenly bad corporate policy trying to micro-manage which Windows services are allowed to run. I don't think the long term solution should be "shadow IT install an older app just because it pretends to be a GUI rather than a Windows service", but I'm glad it is working for you in the short term.

If you need ammunition to encourage your corporate IT to allow you to run the proper ssh-agent service to do your job instead of increasing your attack surface by installing PuTTY/Pageant, you could collect a list of vulnerabilities such as the one posted here (look at the huge count of affected versions on just this one!). There should be plenty of vulnerability maintenance evidence on the Microsoft-shipped version of an open source tool with a lot of eyeballs because it is "the standard" for almost all platforms over the "single developer" tool that took at least a decade off from active development (and it shows).



> If you need ammunition to encourage your corporate IT to allow you to run the proper ssh-agent service to do your job instead of increasing your attack surface by installing PuTTY/Pageant, you could collect a list of vulnerabilities such as the one posted here...

This made me laugh :-) Grandparent is probably happy to just fly under the radar. The suggested conversation would probably play out thusly:

> IT! You idiots! Your dumb policies are forcing me to use this insecure software! Look how many vulnerabilities it has had over the years!

>> Hold up. Rewind. What's this software that you've installed?

> It's called PuTTY. And if you just change this policy I could...

>> And how insecure is it?

> Just check out all these vulnerabilities! It's probably not worse than the average, but it's unnecessary extra attack surface area that...

>> I'm going to need you to uninstall that. Now. And I'll need confirmation via email that you have done so by EOB, with your boss and the CISO on CC.

> But if you just change this boneheaded policy...

>> Now, please. We have a security incident on our hands. We can discuss policy another time. Is there anything else installed on your laptop that I should be aware of?



Let me just explain my situation.

We were directed to use our new corporate SFTP instead of direct communication with our vendors and customers.

I tried direct ssh on the second account they gave us, got a shell, pulled /etc/passwd, and my manager mailed it to corporate security.

We had a long talk about configuring ssh. I don't know if it helped.



And it’s why shadow IT exists and shadow IT is why companies don’t fall apart.

It’s also why web apps are so popular and why the blackberry failed.



Really helpful. I found it challenging to get a Windows system (no admin) into a state where I can use it productively, and having a functional ssh-agent was one of the remaining pain points.


ALWAYS GOOGLE THE ERROR MESSAGE, with context about what you’re doing!

I encountered this, too, but the fix is quite simple.

That service is set to “manual” by default, (or maybe “disabled”) and setting it to “automatic” then starting it will get you running.

It is unlikely that this is a corporate lockdown measure.
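For example, from an elevated PowerShell (assuming the stock Windows OpenSSH agent, whose service name is ssh-agent):

  # Switch the agent service from Disabled/Manual to Automatic, then start it
  Set-Service -Name ssh-agent -StartupType Automatic
  Start-Service ssh-agent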



It’s possible to lock this down, don’t misunderstand me. I’m saying it’s unlikely for a security team to make this decision on a system where pageant.exe is allowed to run.


There are a few different options in Windows that are all measurably superior to PuTTY:

Install WSL2 - you get the Linux SSH of your choice.

As mentioned above, Windows now ships with OpenSSH and windows terminal is good.

My favourite, but now probably obsolete, solution was to install MobaXterm, which shipped with an SSH client. It's still great and there is a usable "free" version of it, but WSL2 does everything for me now when I'm forced to use Windows.



I may not need PuTTY, but I like me a nice GUI that I can point and click with

ssh command is absolutely fine, but I much prefer a list of saved presets versus ~/.ssh/config file fuckery



Too few nerds are willing to admit this. I use git all-day-long, but need to check stackoverflow to use the command line for anything more complicated than switching branches...and I'm ok with that. I save my brain space for more useful things.


So this signing method requires a 521 bit random number, and this flaw caused the top 9 bits of that number to be zero instead, and somehow after 60 signatures this leaks the private key?

Anyone care to explain how exactly? How is it any different to the top 9 bits being zero by chance (which happens in 1 out of ~500 attempts anyway)?



For the attack all 60 signatures need a nonce that is special in this way. If, for example, only one out of the 60 is short, the attack fails in the lattice reduction step. The reason is that in the attack, all 60 short nonces "collude" to make up a very special short vector in the lattice, which is much shorter than usual because it is short in all 60 dimensions, not just one out of 500 dimensions. The approximate shortest vector is then obtainable in polynomial time, and this happens to contain the secret key by construction. As an analogy: imagine you had a treasure map with 60 steps: "go left, go right, go up, go down, go down again" etc. If only one out of 60 instructions were correct, you wouldn't know where the treasure is. All of the instructions need to be correct to get there.
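For intuition, this is the textbook shape of the reduction (a sketch in standard notation rather than anything from the post: d is the private key, h_i the hash of the i-th message, (r_i, s_i) the i-th signature, q the 521-bit group order):

  s_i = k_i^{-1} (h_i + r_i d) \bmod q
  \quad\Longrightarrow\quad
  k_i \equiv \underbrace{s_i^{-1} h_i}_{a_i} + \underbrace{s_i^{-1} r_i}_{b_i}\, d \pmod{q},
  \qquad 0 \le k_i < 2^{512} \approx q / 2^9 .

Each signature yields a known pair (a_i, b_i) with a_i + b_i d mod q guaranteed to be unusually small. Stacking about 60 such relations into a lattice makes (k_1, ..., k_60) part of an abnormally short vector that lattice reduction can recover, and d can then be read off it.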


Doesn't make sense to me either: even when fully random, after 30,000 signatures you would get around 60 signatures where the nonce starts with nine zero bits.

I suspect there must be something else at play here.

EDIT: the nonce is PRIVATE, so the scenario I described would not work because we wouldn't know for which of the 30k signatures the nonce starts with 9 zero bits. Makes sense now.



If I'm understanding correctly, the difference is between knowing you have 60 and having to try 30000!/(60! * (30000 - 60)!) combinations and seeing if they worked, which is quite a few.
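That count is easy to put a size on (Python):

  import math

  print(math.comb(30000, 60))   # on the order of 10**186 candidate subsets, hopeless to enumerate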


I mean, the write-up indicates you'd need access to the server side, or to Pageant with the private key loaded, both of which seem like... umm... at that point don't we have bigger issues?


Not sure about the pageant part, but it's a major problem when connecting to a compromised server leaks the client's private key.

(For example, if an attacker has compromised server A and you connect to it, they can now use your key to connect to server B which you also use)



Your title says "PuTTY-Generated" but the OP article says "The problem is not with how the key was originally generated; it doesn't matter whether it came from PuTTYgen or somewhere else. What matters is whether it was ever used with PuTTY or Pageant".


Good catch! I wrote the title before I had dug into the matter and forgot to update it. Thanks for pointing that out.

Any k generation and subsequent signature generation are going to be impacted.



The answer being, per that post: the author was worried about low-quality randomness on Windows and ran it through a SHA-512 hash function, which outputs fewer than 521 bits, so the remaining ones will be left zero.


Is there any way to check how an SSH key was generated and with what type?
    ssh-keygen -l -f 
Can be used to show the key's bit-size and fingerprint, but I am not sure whether I used PuTTY or ssh-keygen on Ubuntu/Debian for some of my SSH keys. Also, it would be nice if I knew the command to list key types directly for keys unlocked in my ssh agent, not through a file (I use KeeAgent from KeePass on Windows, linked through npiperelay into WSL1/WSL2).


It says the key may have been compromised if it was ever used with PuTTY. If you have keys of this type and have ever used PuTTY, you should revoke them.


I use Pageant as my SSH agent and WSL to access it through ssh-agent. I used to generate keys only with PuTTY (PuTTYgen), but reverted to the standard Linux `ssh-keygen` in the last 2-3 years.

I am still wondering what the exact steps are to show the key type.



I believe ssh-keygen -t ecdsa -b 521 pub keys will have ecdsa-sha2-nistp521 in plaintext at the start. I don't know how to tell from the priv key.

And I think converted key pairs in Putty format (.ppk) will have PuTTY-User-Key-File-2: ecdsa-sha2-nistp521 in plaintext.

For Pageant you should be able to select view keys from the system tray icon context menu and it should show the key type in the list.

For ssh-agent I think ssh-add -L should list the public keys (with key type) in the same format as the authorized_keys file

I'm not an expert, so if anyone is please correct me where I'm wrong!



You can look in the key file. From the OP:

"has an id starting ecdsa-sha2-nistp521 in [...] the key file" He also mentions some other places the information shows up.



Ah, yes - there it is (in KeePass/KeeAgent, under `Advanced`, click on the private key file (*.ppk) and then on Open > Internal Viewer).

> PuTTY-User-Key-File-2: ssh-rsa

> Encryption: aes256-cbc

Indeed I seem to have used Puttygen in the past.

For keys from Linux ssh-keygen, the private key starts with:

> -----BEGIN OPENSSH PRIVATE KEY-----

and the public key starts with

> ssh-ed25519



A complete aside, but I just realised that putty is named after the old-fashioned adhesive clay used to cement window panes into the frames of windows.

PuTTY… Windows… gosh I feel dumb. I’ve used putty for almost 25 years but didn’t put two and two together until I just remembered how the pheasants in my garden would peck out the window putty to get at the ladybirds hibernating underneath.



If you haven't already, this is probably a good time to switch to EdDSA keys. EdDSA signatures don't require an RNG or modular math, unlike ECDSA signatures.


EdDSA signatures are specified to use deterministic nonce generation, so you're correct that they do not require randomness. But they certainly do require modular arithmetic in order to implement the elliptic curve operations!


This exposed client keys, not server keys. The client keys are at risk only in a handful of specific scenarios - e.g., if used to connect to rogue or compromised servers, or used for signing outside SSH.

This is not exploitable by simply passively watching traffic, so even for client keys, if you're certain that they were used in a constrained way, you should be fine. The difficulty is knowing that for sure, so it's still prudent to rotate.



> (The problem is not with how the key was originally generated; it doesn't matter whether it came from PuTTYgen or somewhere else. What matters is whether it was ever used with PuTTY or Pageant.)

Sounds like your server keys are safe.



There is no indication or mention that key exchange was compromised. SSH has forward secrecy, so compromising the authentication keys does not compromise the encryption keys.


To those wondering:
  (Incidentally, none of this affects Ed25519. The spec for that system includes its own idea of how you should do deterministic nonce generation - completely different again, naturally - and we did it that way rather than our way, so that we could use the existing test vectors.)


edit: ignore below, I misinterpreted what q meant in this context and thought it was the private key.

>The clever trick is to compute a secure hash whose input includes the message to be signed and also the private key [...]

> PuTTY's technique worked by making a SHA-512 hash, and then reducing it mod q, where q is the order of the group used in the DSA system. For integer DSA (for which PuTTY's technique was originally developed), q is about 160 bits; for elliptic-curve DSA (which came later) it has about the same number of bits as the curve modulus, so 256 or 384 or 521 bits for the NIST curves.

I know hindsight is 20/20, but why did PuTTY implement it this way to begin with? Given the description in the first paragraph, I'd naively have implemented it as

    SHA-512(message || private_key)[:number_of_bits_required]
or if I was being paranoid I would have done
    SHA-512(SHA-512(message) || SHA-512(private_key))[:number_of_bits_required]
Moduloing it by q makes no sense unless you're trying to save a few cycles.


SHA-512(...)[:521] would still produce the same vulnerability: there would be 9 unchanging bits (assuming the [:521] would pad the 512 bits to 521). Those 9 guessable bits are enough to recover the key from 60 signatures, as the post explained in detail.

A more interesting question (while we are on the 20/20 hindsight express) is why the dsa_gen_k() function did not include an assert(digest_len <= 512).



It appears that some of this was designed in the late 90s/early aughts, when Windows didn't have a cryptographically-secure random number generator.

PuTTY carries its own formats from this time.



This is the text from the original article.

"For this reason, since PuTTY was developed on Windows before it had any cryptographic random number generator at all, PuTTY has always generated its k using a deterministic method, avoiding the need for random numbers at all."



The problem is that 521 > 512, not that they used modulo. To get a sufficiently large nonce out of the hash, you need to think in terms of expanding the number of bits, not reducing it.


Ah okay, after re-reading the message it looks like using the SHA-512 result directly (without modulo) would still have the issue. I originally thought the problem was moduloing by the key, which was approximately 521 bits

> In all of those cases except P521, the bias introduced by reducing a 512-bit number mod q is negligible. But in the case of P521, where q has 521 bits (i.e. more than 512), reducing a 512-bit number mod q has no effect at all – you get a value of k whose top 9 bits are always zero.



Presumably because 512 bits (a) seemed "random enough" and (b) was the nicest block size that fit comfortably in 521 bits of modulus. This is a common mistake.


From TFA it seems more like they already had the SHA512-based implementation for DSA (where it was fine), and reused it when implementing ECDSA without realizing that it wasn't suitable in situations with moduli larger than 512 bits.


The nonce is taken modulo the order of the prime-order subgroup. For DSA that's generally a 256ish-bit prime (e.g.: choose a 1024-bit prime p such that a 256-bit prime q divides p-1; then there exists an order-q subgroup of Zp).

For P-521, the base field is 2^521 - 1, but the modulus used when computing the nonce is not that value, it's the order of the P-521 curve. By Hasse's theorem, that's roughly p +- sqrt(p), which is essentially p for such large numbers (the cofactor of P-521 is 1, so the order of the group is prime).

So: both are 521-bit numbers, but the group order is less than 2^521-1. Its hex representation is 0x01fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffa51868783bf2f966b7fcc0148f709a5d03bb5c9b8899c47aebb6fb71e91386409.
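Both claims check out numerically (a sketch; n is the hex value quoted above):

  from math import isqrt

  p = 2**521 - 1                       # P-521 base field prime
  n = int("01" + "f" * 65              # P-521 group order, as quoted above
          + "a51868783bf2f966b7fcc0148f709a5d03bb5c9b8899c47aebb6fb71e91386409", 16)
  assert n.bit_length() == 521         # a 521-bit number, like p
  assert n < p                         # but strictly below 2**521 - 1
  assert abs(p + 1 - n) <= 2 * isqrt(p) + 1   # Hasse: |p + 1 - n| <= 2*sqrt(p)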



Caution here. If your modulus is too close to the maximum truncated value, there can be a bias in the upper bits, too. For example, if you reduce a number between 0 and 15 by the modulus 13, the values 0, 1 and 2 will be twice as likely as the values 3-12. This means that the highest bit will be 0 in 11 out of 16 cases. Even such a small bias might be exploitable (for example, sub 1 bit bias up to 160-bit ECDSA here: Gao, Wang, Hu, He https://eprint.iacr.org/2024/296.pdf)
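The doubled residues are easy to see with a quick tally (Python):

  from collections import Counter

  # 13, 14, 15 wrap around to 0, 1, 2, so those residues get two preimages each
  print(Counter(x % 13 for x in range(16)))
  # Counter({0: 2, 1: 2, 2: 2, 3: 1, 4: 1, ..., 12: 1})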


I missed it was [0,15]

This doesn't make 13 a power of two. I'm aware of rejection sampling; my point was if you have an N-bit value X and want M bits, truncating X to M bits and X MOD 2^M is the same. Neither solves the problem where M > N, which is what TFA is about.



> This doesn't make 13 a power of two.

Where did I imply that it is?

> I'm aware of rejection sampling; my point was if you have an N-bit value X and want M bits, truncating X to M bits and X MOD 2^M is the same.

Sure.

> Neither solve the problem where M > N, which is what TFA is about.

If you observe my other comments, you'll see I'm well aware of what the article is about.



> Where did I imply that it is?

You used 13 as an example in a response to my comment that was:

Isn't modulo the same as truncation when dealing with powers of two?



> You used 13 as an example

I don't see the number 13 in any of my comments on this thread (except this one, or where I quoted you). Perhaps you are confusing me with someone else?



You'd want something like this:
  Truncate(
    SHA-512(0x01 || message || private_key)
      || SHA-512(0x02 || message || private_key),
    bitsNeeded
  )
Two separate hashes, with domain separation, that produce an output of at least n+64 bits (if it is to be reduced mod 2^n - k, for some small integer k). In this case, 1024 bits reduced mod 2^521-1 is safe.

Even better, though, is to just use RFC 6979 and not implement it yourself.



ECDSA strikes again. Bad nonce generation (here: generating a 521-bit nonce by calculating a 512-bit number and then reducing it modulo a 521-bit q, which means the top bits of the nonce are always zero) leading to private key compromise. Classic.


We recommend against ECDSA for internal accounts. I know several of our partners use it though. Hard to not be cynical about this even though this issue seems mostly harmless.


Cool, I never use keys generated by putty, so I'm good. I only use keys generated by and stored on hardware tokens like yubikeys and openpgp smartcards.

Edit: To clarify further, the key in that case is not even handled by putty. All crypto ops are done on the token and the private key never leaves it. It can't be exported even because that defeats the purpose of using a hardware token. So putty will just tell the token or smartcard what to sign and the token returns the output.

That's why it's safe against this attack. Putty never handles the private key material in this scenario. So I never imported the private key in putty or pageant and I couldn't even if I wanted to. The agent just declares the public keys on the token.

I see all the downvotes but I didn't explain it properly. I've been using smart cards so long that these things are kinda a given for me.

I can really recommend doing it this way or doing the more modern fido2 auth. Hardware authentication is amazing and it even works on Android over nfc these days.

The biggest vulnerability I see is the issue of malware connecting to the unlocked token via the SSH agent, but I'm only using tokens that have touch to sign for this reason. They require a touch on the token for every operation.



From the article:

> (The problem is not with how the key was originally generated; it doesn't matter whether it came from PuTTYgen or somewhere else. What matters is whether it was ever used with PuTTY or Pageant.)



Edit: beat to it, whoops! Never underestimate the Internet’s drive to post easy corrections >.>

——

Unfortunately, about that:

“(The problem is not with how the key was originally generated; it doesn't matter whether it came from PuTTYgen or somewhere else. What matters is whether it was ever used with PuTTY or Pageant.)”



Friendly reminder: you can entirely disable elliptic curve algorithms in your sshd_config and generate RSA keys larger than 4096 bits; 8192 or however large you like works just fine.

I have never trusted EC crypto because of all the magic involved with it, and a sufficient reason to move from RSA has never been presented with compelling evidence as far as I am concerned. I do not care that it is faster; I prefer slow and secure to fast and complicated. It's a lot easier to explain RSA and why it's secure than the mile-long justifications for curve crypto. The issue doesn't need to be in the algorithm: if the implementation is sufficiently difficult to get right, that works just as well as an intentionally misdesigned algorithm.
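A sketch of what that looks like (assuming a reasonably recent OpenSSH; older releases spell some of these options differently, so check man sshd_config):

  # /etc/ssh/sshd_config: restrict host keys, client keys and kex to non-EC choices
  HostKeyAlgorithms rsa-sha2-512,rsa-sha2-256
  PubkeyAcceptedAlgorithms rsa-sha2-512,rsa-sha2-256
  KexAlgorithms diffie-hellman-group16-sha512,diffie-hellman-group18-sha512

  # and generate a larger-than-default RSA key with:
  #   ssh-keygen -t rsa -b 8192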



The benefit of EC is not speed, it is much smaller key sizes.

Roughly speaking, an RSA key has to be 8 times as large as an EC key for the same security level.



Actually, with currently common key sizes, ECC up to 384 bits will fall to QC before RSA with 1024 bits, because fewer bits means fewer qubits needed.

The main disadvantage of RSA is the structure of finite fields, which allows specialized solutions to factoring (number field sieve). We do not know similar structures for elliptic curves, so for those we only have general attacks, thus allowing shorter key lengths.



One time PuTTY took down our prod RAC cluster on Xmas Eve, and I spent three days fixing it.

One of the on call engineers had copied some documentation to his clipboard which had like:

Dbfile 1 location > /u01/ora/whatever

And accidentally right clicked…

I absolutely blame that software for ruining my Christmas one year, and I can’t really forgive it.

Please use OpenSSH.
