Random Numbers from Hard Problems: LWE Based Toy RNG

Original link: https://blog.s20n.dev/posts/lwe-rng/

The goal of a pseudorandom number generator (PRNG) is to produce output that looks truly random. The Decision Learning With Errors (DLWE) problem has exactly this property: its output is computationally indistinguishable from random. DLWE relies on an equation involving a secret key (`s`), public random vectors (`a_i`), small noise (`e_i`), and a prime modulus (`q`) to produce public outputs (`b_i`). The core assumption is that knowing `a_i` and `q` does not let anyone determine `s` or predict `b_i` better than random chance. However, standard DLWE requires a truly random `a_i` for every sample, which conflicts with the deterministic nature of a PRNG: a PRNG *must* be reproducible from its seed. A PRNG built directly on DLWE therefore needs modifications to accommodate deterministic inputs.

A new random number generator (RNG) based on Learning With Errors (LWE) was recently shared on Hacker News and quickly drew criticism. While established RNGs such as Blum-Blum-Shub come with security proofs, this new approach lacks one and, based on preliminary testing, appears flawed. Commenters pointed out a problem with the core mathematical operation, squaring values, which restricts the possible outputs and greatly reduces randomness. Analysis suggested that the RNG falls into a repeating cycle quickly, far short of the number of unique values theoretically possible. One commenter demonstrated cycles appearing within a few hundred iterations, even with a relatively large prime used in the computation. In essence, the RNG does not effectively use the full potential of its parameters and was deemed "completely and utterly broken".

Original text

From the above definition of PRNG, we know that the goal of an RNG is to generate an output that is indistinguishable from truly random. The Decision LWE (DLWE) problem has exactly this property.

This is the famous LWE equation:

\[ b_i = a_i \cdot s + e_i \pmod q \]

Where:

$s$ (secret vector): a secret vector (the key) that no one knows.
$a_i$ (public vector): a public vector of random numbers.
$e_i$ (small noise): a small error value (+1 / 0 / -1) used to smudge the result.
$q$ (prime modulus): a large prime modulus.
$b_i$ (output): the final result, which is publicly known.
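
To make these ingredients concrete, here is a minimal Python sketch of producing a single LWE sample. The parameters `q` and `n` are arbitrary toy values chosen for illustration, not the ones used in the post.

```python
import secrets

# Toy parameters (assumed for illustration; far too small for real security).
q = 3329   # a prime modulus
n = 8      # dimension of the vectors

def sample_noise() -> int:
    """Small error e_i drawn from {-1, 0, +1}."""
    return secrets.randbelow(3) - 1

# Secret vector s: known only to the generator.
s = [secrets.randbelow(q) for _ in range(n)]

def lwe_sample() -> tuple[list[int], int]:
    """Return one LWE sample (a_i, b_i) with b_i = <a_i, s> + e_i mod q."""
    a = [secrets.randbelow(q) for _ in range(n)]  # public random vector a_i
    b = (sum(ai * si for ai, si in zip(a, s)) + sample_noise()) % q
    return a, b

if __name__ == "__main__":
    a, b = lwe_sample()
    print("a_i =", a)
    print("b_i =", b)
```

In a real construction the parameters would be much larger and many such samples would be produced, but the shape of the computation is the same.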

The DLWE assumption states that the output $b_i$ of the LWE problem is computationally indistinguishable from a uniform random number in $\mathbb{Z}_q$.

In other words, when $s$ and $e_i$ are secret and only $a_i$ and $q$ are known publicly, the output $b_i$ is computationally indistinguishable from a random number between $0$ and $q-1$.
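
One standard way to phrase this (my own wording, not taken from the post) is that every efficient distinguisher $\mathcal{D}$ has only negligible advantage in telling LWE samples apart from uniform pairs:

\[ \left| \Pr\left[ \mathcal{D}\big(\{(a_i, b_i)\}\big) = 1 \right] - \Pr\left[ \mathcal{D}\big(\{(a_i, u_i)\}\big) = 1 \right] \right| \leq \mathrm{negl}(\lambda) \]

where $b_i = a_i \cdot s + e_i \bmod q$ and each $u_i$ is drawn uniformly from $\mathbb{Z}_q$.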

Unfortunately, the LWE assumption requires us to draw $a_i$ from a uniform random distribution for each sample. Since a PRNG must be entirely deterministic and reproducible from its seed, we cannot generate a truly random $a_i$ at each step, so we need to deviate from the LWE problem a bit.
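
One common workaround, sketched below in Python as an assumption rather than a description of the post's actual construction, is to derive each $a_i$ deterministically from the seed with an extendable-output hash, so that the generator stays reproducible.

```python
import hashlib

# Toy parameters, assumed for illustration only.
q = 3329   # prime modulus
n = 8      # vector dimension

def derive_a(seed: bytes, counter: int) -> list[int]:
    """Deterministically expand (seed, counter) into a vector a_i over Z_q.

    Uses SHAKE-128 as an extendable-output function; the modular reduction
    introduces a small bias, acceptable for a sketch but not for real use.
    """
    stream = hashlib.shake_128(seed + counter.to_bytes(8, "big")).digest(2 * n)
    return [int.from_bytes(stream[2 * i:2 * i + 2], "big") % q for i in range(n)]

# The same seed and counter always yield the same a_i, which is what a
# deterministic PRNG needs, at the cost of no longer matching the standard
# LWE assumption, where a_i must be truly uniform.
print(derive_a(b"example-seed", 0))
```

Whether such pseudorandom $a_i$ values preserve the hardness guarantees of LWE is exactly the kind of question a security proof would have to address.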
