Normal practice in deploying post-quantum cryptography is to deploy ECC+PQ. IETF's TLS working group is standardizing ECC+PQ. But IETF management is also non-consensually ramming a particular NSA-driven document through the IETF process, a "non-hybrid" document that adds just PQ as another TLS option.
Don't worry: we're standardizing cars with seatbelts. Also, recognizing generous funding from the National Morgue Association, we're going to standardize cars without seatbelts as another option, ignoring the safety objections. That's okay, right?
Last month I posted part 1 of this story. Part 2, earlier today, highlighted the corruption. This blog post, part 3, highlights the dodging in a particular posting at the beginning of this month by an IETF "security area director". Part 4 will give an example of how dissent on this topic has been censored.
Consensus means whatever the people in power want to do. Recall from my previous blog post that "adoption" of a document is a preliminary step before an IETF "working group" works on, and decides whether to standardize, the document. In April 2025, the chairs of the IETF TLS WG called for "adoption" of this NSA-driven document. During the call period, 20 people expressed unequivocal support for adoption, 2 people expressed conditional support for adoption, and 7 people expressed unequivocal opposition to adoption. (Details for verification.)
The chairs claimed that "we have consensus to adopt this draft". I promptly asked for explanation.
Before the chairs could even reply, an "area director" interrupted, claiming, inter alia, the following: "There is clearly consensus based on the 67 responses to the adoption call. ... The vast majority was in favour of adoption ... There were a few dissenting opinions".
After these lies by the "area director" were debunked, the chairs said that they had declared consensus "because there is clearly sufficient interest to work on this draft", specifically "enough people willing to review the draft".
I can understand not everybody being familiar with the specific definition of "consensus" that antitrust law requires standards-development organizations to follow. But it's astonishing to see chairs substituting a consensus-evaluation procedure that simply ignores objections.
Stonewalling. The chairs said I could escalate. IETF procedures say that an unresolved dispute can be brought "to the attention of the Area Director(s) for the area in which the Working Group is chartered", and then "The Area Director(s) shall attempt to resolve the dispute".
I filed a complaint with the "security area directors" in early June 2025. One of them never replied. The other, the same one who had claimed that there was "clearly consensus", sent a series of excuses for not handling the complaint. For example, one excuse was that the PDF format "discourages participation".
Do IETF procedures say "The Area Director(s) shall attempt to resolve the dispute unless the dispute is documented in a PDF"? No.
I sent email two days later systematically addressing the excuses. The "area director" never replied.
It isn't clear under IETF procedures whether a non-reply allows an appeal. It is, however, clear that an appeal can't be filed after two months. I escalated to the "Internet Engineering Steering Group" (IESG) in August 2025.
(These aren't even marginally independent groups. The "area directors" are the IESG members. IESG appoints the WG chairs.)
IESG didn't reply until October 2025. It rejected one of the "Area Director" excuses for having ignored my complaint, but endorsed another excuse. I promptly filed a revised complaint with the "area director", jumping through the hoops that IESG had set. There were then further runarounds.
The switch. Suddenly, on 1 November 2025, IESG publicly instructed the "area director" to address the following question: "Was rough consensus to adopt draft-connolly-tls-mlkem-key-agreement in the TLS Working Group appropriately called by the WG chairs?"
The "area director" posted his conclusion mere hours later: "I agree with the TLS WG Chairs that the Adoption Call result was that there was rough consensus to adopt the document".
Dodging procedural objections. Before looking at how the "area director" argued for this conclusion, I'd like to emphasize three things that the "area director" didn't do.
First, did the "area director" address my complaint about the chair action on this topic? No.
One reason this matters is that the law requires standards-development organizations to provide an "appeals process". Structurally, the "area director" isn't quoting and answering the points in my complaint; the "area director" puts the entire burden on the reader to try to figure out what's supposedly answering what, and to realize that many points remain unanswered.
Second, did the "area director" address the chairs claiming that "we have consensus to adopt this draft"? Or the previous claim from the "area director" that there was "clearly consensus"? No. Instead IESG and this "area director" quietly shifted from "consensus" to "rough consensus". (Did you notice this shift when I quoted IESG's "rough consensus" instruction?)
One reason this matters is that "consensus" is another of the legal requirements for standards-development organizations. The law doesn't allow "rough consensus". Also, IETF claims that "decision-making requires achieving broad consensus". "broad consensus" is even stronger than "consensus", since it's saying that there's consensus in a broad group.
Third, the way that my complaint had established the lack of consensus was, first, by reviewing the general definition of "consensus" (which I paraphrased from the definition in the law, omitting a citation only because the TLS chairs had threatened me with a list ban if I mentioned the law again), and then by applying the components of that definition to the situation at hand. Did the "area director" follow this structure: state the definition of "consensus", or of "rough consensus" if we're switching to that, and then apply that definition? No. Nobody reading this message from the "area director" can figure out what the "area director" believes these words mean.
Wow, look at that: "due process" is another of the legal requirements for standards-development organizations. Part of due process is simply making clear what procedures are being applied. Could it possibly be that the people writing the law were thinking through how standardization processes could be abused?
Numbers. Without further ado, let's look at what the "security area director" did write.
The IESG has requested that I evaluate the WG Adoption call results for ML-KEM Post-Quantum Key Agreement for TLS 1.3 (draft-connolly-tls-mlkem-key-agreement). Please see below.
As noted above, IESG had instructed the "area director" to answer the following question: "Was rough consensus to adopt draft-connolly-tls-mlkem-key-agreement in the TLS Working Group appropriately called by the WG chairs?"
Side note: Given that the "area director" posted all of the following on the same day that IESG instructed the "area director" to write this, presumably this was all written in advance and coordinated with the rest of IESG. I guess the real point of finally (on 1 November 2025) addressing the adoption decision (from 15 April 2025) was to try to provide cover for the "last call" a few days later (5 November 2025).
ExecSum
I agree with the TLS WG Chairs that the Adoption Call result was that there was rough consensus to adopt the document.
As noted above, the TLS WG chairs had claimed "consensus", and the "area director" had claimed that there was "clearly consensus". The "area director" is now quietly shifting to a weaker claim.
Timeline
April 1: Sean and Joe announce WG Adoption Call [ about 40 messages sent in the thread ]
"About 40"? What happened to the "area director" previously writing "There is clearly consensus based on the 67 responses to the adoption call"? And why is the number of messages supposed to matter in the first place?
April 15: Sean announces the Adoption Call passed. [ another 50 messages are sent in the thread ]
Messages after the specified adoption-call deadline can't justify the claim that "the Adoption Call result was that there was rough consensus to adopt the document". The adoption call failed to reach consensus.
April 18 to today: A chain of (attempted) Appeals by D. J. Bernstein to the AD(s), IESG and IAB, parts of which are still in process.
The fact that the ADs and IESG stonewalled in response to complaints doesn't mean that they were "attempted" complaints.
Outcome
30 people participated in the consensus call, 23 were in favour of adoption, 6 against and 1 ambivalent (names included at the bottom of this email).
These numbers are much closer to reality than the "area director" previously writing "There is clearly consensus based on the 67 responses to the adoption call. ... The vast majority was in favour of adoption ... There were a few dissenting opinions".
Also, given that the "area director" is continually making claims that aren't true (see examples below) and seems generally allergic to providing evidence (the text I'm quoting below has, amazingly, zero URLs), it's a relief to see the "area director" providing names to back up the claimed numbers here.
But somehow, even after being caught lying about the numbers before, the "area director" still can't resist shading the numbers a bit.
The actual numbers were 20 people unequivocally supporting adoption, 2 people conditionally supporting adoption, and 7 people unequivocally opposing adoption. Clearly 7 is close to 6, and 20+2 is close to 23, but, hmmm, not exactly. Let's check the details:
- How does the "area director" end up with 6 negative votes rather than 7? By falsely listing Thomas Bellebaum as "ambivalent" and falsely attributing a "prefer not, but okay if we do" position to Bellebaum. In fact, Bellebaum had written "I agree with Stephen on this one and would not support adoption of non-hybrids." (This was in reply to Stephen Farrell, who had written "I'm opposed to adoption, at this time.")
- How does the "area director" end up with 23 positive votes rather than 22? By falsely listing the document author (Deirdre Connolly) as having stated a pro-adoption position during the call. The "area director" seems generally clueless about conflict-of-interest issues and probably doesn't find it obvious that an author shouldn't vote, but the simple fact is that the author didn't vote. She sent three messages during the call period; all of those messages are merely commenting on specific points, not casting a vote on the adoption question.
The document author didn't object to the "area director" fudging the numbers. Bellebaum did politely object; the "area director" didn't argue, beyond trying to save face with comments such as "Thanks for the clarification".
More to the point, the "area director" has never explained whether or how the tallies of positive and negative votes are supposed to be relevant to the "rough consensus" claim. The "area director" also hasn't commented on IETF saying that IETF doesn't make decisions by voting.
Bogus arguments for the draft. I mentioned in my previous blog post that IETF claims that "IETF participants use their best engineering judgment to find the best solution for the whole Internet, not just the best solution for any particular network, technology, vendor, or user".
In favour argument summary
While there is a lack of substantiating why adoption is desired - which is typical
Okay, the "area director" seems to have some basic awareness that this document flunks the "engineering judgment" criterion. The "area director" tries to defend this by saying that other documents flunk too. So confidence-inspiring!
- the big use case seems to be to support those parties relying on NIST and FIPS for their security requirements.
Wrong. Anything+PQ, and in particular ECC+PQ, complies with NIST's standards when the PQ part does. See NIST SP 800-227: "This publication approves the use of the key combiner (14) for any t > 1 if at least one shared secret (i.e., S_j for some j) is generated from the key-establishment methods in SP 800-56A [1] or SP 800-56B [2] or an approved KEM." For example, if the PQ part is ML-KEM as per FIPS 203, then NIST allows ECC+PQ too.
What's next: claiming that using PQ in an Internet protocol would violate NIST standards unless NIST has standardized that particular Internet protocol?
This encompasses much more than just the US government as other certification bodies and other national governments have come to rely on the outcome of the NIST competition, which was the only public multi-year post-quantum cryptography effort to evaluate the security of proposed new post-quantum algorithms.
I won't bother addressing the errors here, since the bottom-line claim is orthogonal to the issue at hand. The TLS WG already has an ECC+PQ document using NIST-approved PQ; the question is whether to also have a document allowing the ECC seatbelt to be removed.
It was also argued pure PQ has less complexity.
You know what would be even less complicated? Encrypting with the null cipher!
There was a claim that PQ is less complex than ECC+PQ. There was no response to Andrey Jivsov objecting that having a PQ option makes the ecosystem more complicated. The basic error in the PQ-less-complex claim is that it ignores ECC+PQ already being there.
How the "area director" described the objections.
Opposed argument summary
Most of the arguments against adoption are focused on the fact that a failsafe is better than no failsafe, irrespective of which post-quantum algorithm is used,
This is the closest that the "area director" comes to acknowledging the central security argument for ECC+PQ. Of course, the "area director" spends as little time as possible on security. Compare this to my own objection to adoption, which started with SIKE as a concrete example of the dangers and continued with "SIKE is not an isolated example: https://cr.yp.to/papers.html#qrcsp shows that 48% of the 69 round-1 submissions to the NIST competition have been broken by now".
and that the practical costs for hybrids are negligible.
Hmmm. By listing this as part of an "opposed argument summary", is the "area director" suggesting that this was disputed? When and where was the dispute?
As noted above, I've seen unquantified NSA/GCHQ fearmongering about costs, but that was outside IETF. If NSA and GCHQ tried the same arguments on a public mailing list then they'd end up being faced with questions that they can't answer.
It was also argued that having an RFC gives too much promotion or sense of approval to a not recommended algorithm.
When I wrote my own summary of the objections, I provided a quote and link for each point. The "area director" doesn't do this. If the "area director" is accurately presenting an argument that was raised, why not provide a quote and a link? Is the "area director" misrepresenting the argument? Making up a strawman? The reader can't tell.
I have expanded some of the arguments and my interpretation of the weight of these below.
This comment about "weight" is revealing. What we'll see again and again is that the "area director" is expressing the weight that he places on each argument (within the arguments selected and phrased by the "area director"), i.e., the extent to which he is convinced or not convinced by those arguments.
Given that IESG has power under IETF rules to unilaterally block publications approved by WGs, it's unsurprising that the "area directors", in their roles as IESG members, will end up evaluating the merits of WG-approved documents. But that isn't what this "area director" was instructed to do here. There isn't a WG-approved document at this point. Instead the "area director" was instructed to evaluate whether the chairs "appropriately" called "rough consensus" to "adopt" the document. The "area director" is supposed to be evaluating procedurally what the WG decision-makers did. Instead the "area director" is putting his thumb on the scale in favor of the document.
Non-hybrid as "basic flaw"
The argument by some opponents that non-hybrids are a "basic flaw" seems to miscategorize what a "basic flaw" is. There is currently no known "basic flaw" against MLKEM.
I think that the "area director" is trying to make some sort of claim here about ML-KEM not having been attacked, but the wording is so unclear as to be unevaluatable. Why doesn't KyberSlash count? How about Clangover? How about the continuing advances in lattice attacks that have already reduced ML-KEM below its claimed security targets, the most recent news being from last month?
More importantly, claiming that ML-KEM isn't "known" to have problems is utterly failing to address the point of the ECC seatbelt. It's like saying "This car hasn't crashed, so the absence of seatbelts isn't a basic flaw".
As was raised, it is rather odd to be arguing we must immediately move to use post-quantum algorithms while at the same time argue these might contain fundamental basic flaws.
Here the "area director" is reasonably capturing a statement from one document proponent (original wording: "I find it to be cognitive dissonance to simultaneously argue that the quantum threat requires immediate work, and yet we are also somehow uncertain of if the algorithms are totally broken. Both cannot be true at the same time").
But I promptly followed up explaining the error: "Rolling out PQ is trying to reduce the damage from an attacker having a quantum computer within the security lifetime of the user data. Doing that as ECC+PQ instead of just PQ is trying to reduce the damage in case the PQ part is broken. These actions are compatible, so how exactly do you believe they're contradictory?"
There was, of course, no reply at the time. The "area director" now simply repeats the erroneous argument.
As TLS (or IETF) is not phasing out all non-hybrid classics,
"Non-hybrid classics" is weird terminology. Sometimes pre-quantum algorithms (ECC, RSA, etc.) are called "classical", so I guess the claim here is that using just ECC in TLS isn't being phased out. That's a bizarre claim. There are intensive efforts to roll out ECC+PQ in TLS to try to protect against quantum computers. Cloudflare reports the usage of post-quantum cryptography having risen to about 50% of all browsers that it sees (compared to 20% a year ago); within those connections, 95% use ECC+MLKEM768 and 5% use ECC+Kyber768.
The "area director" also gives no explanation of why the "not phasing out" claim is supposed to be relevant here.
I find this argument not strong enough
See how the "area director" is saying the weight that the "area director" places on each argument (within the arguments selected and phrased by the "area director"), rather than evaluating whether there was consensus to adopt the document?
to override the consensus of allowing non-hybrid standards from being defined
Circular argument. There wasn't consensus to adopt the document in the first place.
especially in light of the strong consensus for marking these as "not recommended".
I think many readers will be baffled by this comment. If something is "not recommended", wouldn't that be an argument against standardizing it, rather than an argument for standardizing it?
The answer is that "not recommended" doesn't mean what you think it means: the "area director" is resorting to confusing jargon. I don't think there's any point getting into the weeds on this.
Incompetent planning for the future.
Non-hybrids are a future end goal
Additionally, since if/when we do end up in an era with a CRQC, we are ultimately designing for a world where the classic components offer less to no value.
If someone is trying to argue for removing ECC, there's a big difference between the plausible scenario of ECC having "less" value and the extreme scenario of ECC having "no" value. It's wrong for the "area director" to be conflating these possibilities.
As I put it almost two years ago: "Concretely, think about a demo showing that spending a billion dollars on quantum computation can break a thousand X25519 keys. Yikes! We should be aiming for much higher security than that! We don't even want a billion-dollar attack to be able to break one key! Users who care about the security of their data will be happy that we deployed post-quantum cryptography. But are the users going to say 'Let's turn off X25519 and make each session a million dollars cheaper to attack'? I'm skeptical. I think users will need to see much cheaper attacks before agreeing that X25519 has negligible security value."
Furthermore, let's think for a moment about the idea that one will eventually want to transition to just ML-KEM, the specific proposal that the "area director" is portraying as the future. Here are three ways that this can easily be wrong:
- Maybe ML-KEM's implementation issues end up convincing the community to shift to a more robust option, analogously to what happened with ECC.
- Maybe the advances in public attacks continue to the point of breaking ML-KEM outright.
- Maybe the cliff stops crumbling and ML-KEM survives, but more efficient options also survive. At this point there are quite a few options more efficient than ML-KEM. (Random example: SMAUG. The current SMAUG software isn't as fast as the ML-KEM software, but this is outweighed by SMAUG using less network traffic than ML-KEM.) Probably some options will be broken, but ML-KEM would have to be remarkably lucky to end up as the most efficient remaining option.
Does this "area director" think that all of the more efficient options are going to be broken, while ML-KEM won't? Sounds absurdly overconfident. More likely is that the "area director" doesn't even realize that there are more efficient options. For anyone thinking "presumably those newer options have received less scrutiny than ML-KEM": we're talking about what to do long-term, remember?
Taking ML-KEM as the PQ component of ECC+PQ is working for getting something rolled out now. Hopefully ML-KEM will turn out to not be a security disaster (or a patent disaster). But, for guessing what will be best to do in 5 or 10 or 15 years, picking ML-KEM is premature.
When and where to exactly draw the line of still using a classic component safeguard is speculation at best.
Here the "area director" is clearly attacking a strawman.
Already supporting pure post quantum algorithms now to gain experience
How is rolling out PQ supposed to be gaining experience that isn't gained from the current rollout of ECC+PQ?
Also, I think it's important to call out the word "pure" here as incoherent, indefensible marketing. What we're actually talking about isn't modifying ML-KEM in any way; it's simply hashing the ML-KEM session key together with other inputs. Is ML-KEM no longer "pure" when it's plugged into TLS, which also hashes session keys? (The word "pure" also showed up in a few of the earlier quotes.)
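To make the point concrete, here is a minimal sketch (Python, placeholder values, not a TLS implementation) of what the concatenation approach in draft-ietf-tls-hybrid-design does with the ML-KEM output. The function names, the byte values, and the choice of SHA-256 here are mine for illustration; the real ordering, sizes, and hash are fixed by the named group and cipher suite. The point is simply that ML-KEM runs unmodified, and its session key is hashed together with other inputs, which the TLS key schedule does to every session key anyway.

```python
# Minimal sketch, not a TLS implementation: how an ECC+PQ "hybrid" named
# group feeds the TLS 1.3 key schedule under the concatenation approach of
# draft-ietf-tls-hybrid-design. Inputs are placeholders; the real ordering,
# sizes, and hash are fixed by the named group and cipher suite.
import hashlib, hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869), the hashing step that the TLS 1.3 key
    # schedule already applies to every (EC)DHE shared secret.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hybrid_shared_secret(ss_mlkem: bytes, ss_ecc: bytes) -> bytes:
    # The "hybrid" step: ML-KEM is used as-is, and its shared secret is
    # simply concatenated with the ECC shared secret.
    return ss_mlkem + ss_ecc

# Placeholder 32-byte secrets standing in for the ML-KEM and ECC outputs.
ss = hybrid_shared_secret(b"\x11" * 32, b"\x22" * 32)
handshake_secret = hkdf_extract(b"\x00" * 32, ss)
```

Removing the seatbelt means deleting the ss_ecc input; nothing about ML-KEM becomes more or less "pure" either way.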
while not recommending it at this time seems a valid strategy for the future, allowing people and organizations their own timeline of deciding when/if to go from hybrid to pure PQ.
Here we again see the "area director" making a decision to support the document, rather than evaluating whether there was consensus in the WG to adopt the document.
Again getting the complexity evaluation backwards.
Added complexity of hybrids
There was some discussion on whether or not hybrids add more complexity, and thus add risk, compared to non-hybrids. While arguments were made that proper classic algorithms add only a trivial amount of extra resources, it was also pointed out that there is a cost of implementation, deployment and maintenance.
Here the "area director" is again making the same mistake explained earlier: ignoring the fact that ECC+PQ is already there, and thus getting the complexity evaluation backwards.
The "thus add risk" logic is also wrong. Again, all of these options are more complex than the null cipher.
Additionally, the existence of draft-ietf-tls-hybrid-design and the extensive discussions around "chempat" vs "xwing" vs "kitchensink" shows that there is at least some complexity that is added by the hybrid solutions.
No, the details of how to combine ECC with PQ in TLS are already settled and deployed.
Looking beyond TLS: Chempat hashes the transcript (similarly to TLS), making it robust for a wide range of protocols. The other options add fragility by hashing less for the sake of minor cost savings. Each of these options is under 10 lines of code. The "area director" exaggerates the complexity by mentioning "extensive discussions", and spends much more effort hyping this complexity as a risk than acknowledging the risks of further PQ attacks.
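For a sense of scale, here is an illustrative sketch of the two shapes being argued about. The input ordering, labels, and hash choices below are made up for illustration; the actual Chempat and X-Wing documents pin those down. The structural difference is that a Chempat-style combiner hashes the shared secrets together with the full transcript of public keys and ciphertexts, while an X-Wing-style combiner omits the ML-KEM ciphertext and public key to save a little hashing, relying on properties of ML-KEM itself to justify the omission.

```python
# Illustrative shapes only; ordering, labels, and hashes are placeholders,
# not the exact constructions in the Chempat or X-Wing documents.
import hashlib

def chempat_style(ss_ecc, ss_pq, pk_ecc, pk_pq, ct_ecc, ct_pq):
    # Robust shape: bind the output to the shared secrets plus the full
    # transcript (both public keys, both ciphertexts).
    transcript = hashlib.sha3_256(pk_ecc + pk_pq + ct_ecc + ct_pq).digest()
    return hashlib.sha3_256(ss_ecc + ss_pq + transcript + b"chempat-style").digest()

def xwing_style(ss_ecc, ss_pq, pk_ecc, ct_ecc):
    # Leaner shape: skip the ML-KEM ciphertext and public key, leaning on
    # ML-KEM-specific properties to argue that this is still safe.
    return hashlib.sha3_256(ss_pq + ss_ecc + ct_ecc + pk_ecc + b"xwing-style").digest()

# Example with placeholder 32-byte values:
k = chempat_style(b"\x01"*32, b"\x02"*32, b"\x03"*32, b"\x04"*32, b"\x05"*32, b"\x06"*32)
```

Either way, each combiner really is a few lines of hashing; the "extensive discussions" are about which inputs can safely be dropped, not about any significant implementation burden.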
Anyway, it's not as if the presence of this document has eliminated the discussions of ECC+PQ details, nor is there any credible mechanism by which it could do so. Again, the actual choice at hand is whether to have PQ as an option alongside ECC+PQ. Adding that option adds complexity. The "area director" is getting the complexity comparison backwards by instead comparing (1) PQ in isolation to (2) ECC+PQ in isolation.
Botching the evaluation of human factors.
RFCs being interpreted as IETF recommendation
It seems there is disagreement about whether the existence of an RFC itself qualifies as the IETF defacto "recommending" this in the view of IETF outsiders/ implemeners whom do not take into account any IANA registry RECOMMENDED setting or the Mandatory-To-Implement (MTI) reommendations.
I would expect a purchasing manager to have instructions along the lines of "Buy only products complying with the standards", and to never see IETF's confusing jumble of further designations.
This is an area where we recently found out there is little consensus on an IETF wide crypto policy statement via an RFC. The decision on whether an RFC adds value to a Code Point should therefor be taken independently of any such notion of how outsiders might interpret the existence of an RFC.
From a security perspective, it's a big mistake to ignore the human factor, such as the impact of a purchasing manager saying "This is the most efficient standard so I'll pick that".
In this case, while Section 3 could be considered informative, I believe Section 4 and Section 5 are useful (normative) content that assists implementers.
Is this supposed to have something to do with the consensus question?
And people have proposed extending the Security Considerations to more clearly state that this algorithm is not recommended at this point in time. Without an RFC, these recommendations cannot be published by the IETF in a way that implementers would be known to consume.
Ah, yes, "known to consume"! There was, um, one of those, uh, studies showing the details of, um, how implementors use RFCs, which, uh, showed that 100% of the implementors diligently consumed the warnings in the RFCs. Yeah, that's the ticket. I'm sure the URL for this study is sitting around here somewhere.
Let's get back to the real world. Even if an implementor does see a "This document is a bad idea" warning, this simply doesn't matter when the implementors are chasing contracts issued by purchasing managers who simply care what's standardized and haven't seen the warning.
It's much smarter for the document to (1) eliminate making the proposal that it's warning about and (2) focus, starting in the title, on saying why such proposals are bad. This makes people more likely to see the warning, and at the same time it removes the core problem of the bad proposal being standardized.
Fictions regarding country actions.
Say no to Nation State algorithms
The history and birth of MLKEM from Kyber through a competition of the international Cryptographic Community, organized through US NIST can hardly be called or compared to unilateral dictated nation state algorithm selection.
NIST repeatedly refused to designate the "NIST Post-Quantum Cryptography Standardization Process" as a "competition". It even wrote that the process "should not be treated as a competition".
Certainly there were competition-like aspects to the process. I tend to refer to it as a competition. But in the end the selection of algorithms to standardize was made by NIST, with input behind the scenes from NSA.
There has been no other comparable public effort to gather cryptographers and publicly discuss post-quantum crypto candidates in a multi-years effort.
Nonsense. The premier multi-year effort by cryptographers to "publicly discuss post-quantum crypto candidates" is the cryptographic literature.
In fact, other nation states are heavily relying on the results produced by this competition.
Here's the objection from Stephen Farrell that the "area director" isn't quoting or linking to: "I don't see what criteria we might use in adopting this that wouldn't leave the WG open to accusations of favouritism if we don't adopt other pure PQ national standards that will certainly arise".
After reading this objection, you can see how the "area director" is sort of responding to it by suggesting that everybody is following NIST (i.e., that the "certainly arise" part is wrong).
But that's not true. NIST's selections are controversial. For example, ISO is considering not just ML-KEM but also
- Classic McEliece, where NIST has said it's waiting for ISO ("After the ISO standardization process has been completed, NIST may consider developing a standard for Classic McEliece based on the ISO standard"), and
- FrodoKEM, which NIST said "will not be considered further for standardization".
ISO is also now considering NTRU, where the advertisement includes "All patents related to NTRU have expired" (very different from the ML-KEM situation).
BSI, which sets cryptographic standards for Germany, recommends not just ML-KEM but also FrodoKEM (which it describes as "more conservative" than ML-KEM) and Classic McEliece ("conservative and very thoroughly analysed"). Meanwhile China has called for submissions of new post-quantum proposals for standardization.
I could keep going, but this is enough evidence to show that Farrell's prediction was correct; the "area director" is once again wrong.
The use of MLKEM in the IETF will not set a precedent for having to accept other nation state cryptography.
Notice how the "area director" is dodging Farrell's point. If NSA can pressure the TLS WG into standardizing non-hybrid ML-KEM, why can't China pressure the TLS WG into standardizing something China wants? What criteria will IETF use to answer this question without leaving the WG "open to accusations of favouritism"? If you want people to believe that it isn't about the money then you need a really convincing alternative story.
Not recommending pure PQ right now
There was a strong consensus that pure PQ should not be recommended at this time, which is reflected in the document. There was some discussion on RECOMMENDED N vs D, which is something that can be discussed in the WG during the document's lifecycle before WGLC. It was further argued that adopting and publishing this document gives the WG control over the accompanying warning text, such as Security Considerations, that can reflect the current consensus of not recommending pure MLKEM over hybrid at publication time.
This is just rehashing earlier text, even if the detailed wording is a bit different.
Conclusion
The pure MLKEM code points exist.
Irrelevant. The question is whether they're being standardized.
An international market segment that wants to use pure MLKEM exists
"International"? Like Swedish company Ericsson setting up its "Ericsson Federal Technologies Group" in 2024 to receive U.S. military contracts?
as can be seen by the consensus call outcome
Um, how?
along with existing implementations of the draft on mainstream devices and software.
Yes, NSA waving around money has convinced some corporations to provide software. How is this supposed to justify the claim that "there was rough consensus to adopt the document"?
There is a rough consensus to adopt the document
Repeating a claim doesn't make it true.
with a strong consensus for RECOMMENDED N and not MTI, which is reflected in the draft.
Irrelevant. What matters is whether the document is standardized.
The reasons to not publish MLKEM as an RFC seem more based on personal opinions of risk and trust not shared amongst all participants as facts.
This sort of dismissal might be more convincing if it were coming from someone providing more URLs and fewer easily debunked claims. But it's in any case not addressing the consensus question.
Based on the above, I believe the WG Chairs made the correct call that there was rough consensus for adopting draft-connolly-tls-mlkem-key-agreement
The chairs claimed that "we have consensus to adopt this draft" (based on claiming that "there were enough people willing to review the draft", never mind the number of objections). That claim is wrong. The call for adoption failed to reach consensus.
The "area director" claimed that "There is clearly consensus based on the 67 responses to the adoption call. ... The vast majority was in favour of adoption ... There were a few dissenting opinions". These statements still haven't been retracted; they were and are outright lies about what happened. Again, the actual tallies were 20 people unequivocally supporting adoption, 2 people conditionally supporting adoption, and 7 people unequivocally opposing adoption.
Without admitting error, the "area director" has retreated to a claim of "rough consensus". The mishmash of ad-hoc comments from the "area director" certainly doesn't demonstrate any coherent meaning of "rough consensus".
It's fascinating that IETF's advertising to the public claims that IETF's "decision-making requires achieving broad consensus", but IETF's WG procedures allow controversial documents to be pushed through on the basis of "rough consensus". To be clear, that's only if the "area director" approves of the documents, as you can see from the same "area director" issuing yet another mishmash of ad-hoc comments to overturn a separate chair decision in September 2025.
You would think that the WG procedures would define "rough consensus". They don't. All they say is that "51% of the working group does not qualify as 'rough consensus' and 99% is better than rough", not even making clear whether 51% of voters within a larger working group can qualify. This leaves a vast range of ambiguous intermediate cases up to the people in power.
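For a sense of how much room that leaves, here is the arithmetic on the adoption-call tallies quoted earlier in this post, counted both ways. Whether the conditional supporters, or the document's author, should be counted at all is exactly the sort of judgment the rules leave to the people in power.

```python
# Support fractions under the two tallies discussed in this post.
tallies = {
    "my count: 20 unequivocal + 2 conditional in favour, 7 opposed": (20 + 2, 7),
    "'area director' count: 23 in favour, 6 against, 1 ambivalent": (23, 6 + 1),
}
for label, (in_favour, other) in tallies.items():
    total = in_favour + other
    print(f"{label}: {in_favour}/{total} = {100 * in_favour / total:.0f}% in favour")
# Both land far above 51% and far below 99%, i.e., squarely inside the
# range that the WG procedures decline to define.
```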
Version: This is version 2025.11.23 of the 20251123-dodging.html web page.