Loup Vaillant [Sun, 3 Mar 2019 21:56:29 +0000 (22:56 +0100)]
Added secure channel protocols (experimental)
At long last, the NaCl family of crypto libraries is gaining direct
support for secure channels.
Up until now, the choices were basically to invent our own protocol, or
to give up and use a TLS library, thus voiding the usability
improvements of NaCl libraries.
Now we have a solution. It's still a bit experimental and not yet
documented, but it's there. And soon, we will finally be able to shift
the cryptographic right answer for secure channels away from TLS, and
towards the NaCl family. Or perhaps just Monocypher, if for some reason
Libsodium doesn't follow suit. :-)
Loup Vaillant [Fri, 22 Feb 2019 20:14:06 +0000 (21:14 +0100)]
Added comment on speed tests
The way I measure timings is not perfectly portable. Users who
get weird results are encouraged to modify this bit of code to
have proper measurements.
Loup Vaillant [Sun, 17 Feb 2019 18:25:52 +0000 (19:25 +0100)]
Removed division by zero in speed benchmarks
If some library is so fast that it goes below the resolution of the
timer we're using to measure it, the measured duration may be zero,
which then triggers a division by zero when we convert it to a speed in
Hz.
This could possibly happen with a very fast library (Libsodium), on a
very fast machine, with a sufficiently low resolution timer.
This patch reworks and simplifies things a bit, and adds an explicit
check. We now print "too fast to be measured" instead of dividing by
zero.
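The check can be sketched like this (names and types are illustrative, not the benchmark's actual code):

```c
#include <stdint.h>
#include <stdio.h>

/* Convert an elapsed duration (in timer ticks) to a speed in Hz,
 * guarding against a zero measurement.  Prints a notice and returns
 * zero when the duration underflows the timer's resolution. */
static uint64_t speed_hz(uint64_t nb_iterations, uint64_t elapsed_ticks,
                         uint64_t ticks_per_second)
{
    if (elapsed_ticks == 0) {
        printf("too fast to be measured\n");
        return 0;
    }
    return nb_iterations * ticks_per_second / elapsed_ticks;
}
```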
Loup Vaillant [Sat, 26 Jan 2019 14:44:01 +0000 (15:44 +0100)]
Allow the test suite to customise its random seed
This will only affect the property based tests, not the test vectors
themselves. The idea is to let paranoid users run the test suite with
lots and lots of different streams of random numbers, just to be safe.
Test vector generation could undergo a similar transformation, though it
is less likely to be worth the trouble (we'd have to generate the test
vectors, then compile the test suite all over again).
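A minimal sketch of how such a seed override could look (the function name and the command line convention are hypothetical, not the test suite's actual interface):

```c
#include <stdlib.h>

/* Let the user override the PRNG seed of the property based tests
 * via an optional command line argument; the test vectors themselves
 * are unaffected. */
static unsigned long parse_seed(int argc, char **argv,
                                unsigned long default_seed)
{
    return argc > 1 ? strtoul(argv[1], 0, 10) : default_seed;
}
```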
Loup Vaillant [Fri, 25 Jan 2019 14:43:02 +0000 (15:43 +0100)]
Link SHA-512 code when using -DED25519_SHA512
When the $CFLAGS variable contains the -DED25519_SHA512 option (by
default it doesn't), the code from src/optional/sha512.c is
automatically linked to the final libraries (libmonocypher.a and
libmonocypher.so).
That way, users who need an Ed25519-compliant version of Monocypher can
build one simply by altering the compilation options with the $CFLAGS
variable.
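For instance, a build along these lines (the extra optimisation flags are illustrative) produces the Ed25519 variant:

```
make CFLAGS="-DED25519_SHA512 -O3"
```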
Loup Vaillant [Sun, 20 Jan 2019 21:42:38 +0000 (22:42 +0100)]
Made L an array of *signed* integers
It was unsigned previously, causing a bunch of implementation defined
conversions. Virtually every machine nowadays uses 2's complement, but
it's still cleaner that way.
Loup Vaillant [Thu, 6 Dec 2018 00:04:37 +0000 (01:04 +0100)]
Decoupled window widths, minimised stack usage
The width of the pre-computed window affects the program size. It has
been set to 5 (8 elements) so we can approach maximum performance
without bloating the program too much.
The width of the cached window affects the *stack* size. It has been set
to 3 (2 elements) to avoid blowing up the stack (this matters most on
embedded environments). The performance hit is measurable, yet very
reasonable.
Footgun wielders can adjust those widths as they see fit.
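The relationship between width and table size can be sketched as follows (the macro names are hypothetical, not Monocypher's; a sliding window of width w stores the odd multiples 1P, 3P, ..., (2^(w-1)-1)P, i.e. 2^(w-2) points):

```c
#define PRECOMP_WIDTH 5  /* affects code size:  2^(5-2) = 8 points */
#define CACHED_WIDTH  3  /* affects stack size: 2^(3-2) = 2 points */

/* Number of points a sliding window of the given width must store
 * (odd multiples only). */
static int window_table_size(int width)
{
    return 1 << (width - 2);
}
```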
Loup Vaillant [Wed, 5 Dec 2018 22:16:55 +0000 (23:16 +0100)]
Parameterise sliding window width with a macro
This is more general, perhaps even more readable this way. This also
lays the groundwork for using different window widths for the
pre-computed window and the cached one. (The cached window has to be
smaller to save stack space, while the pre-computed constant is allowed
to be bigger).
Loup Vaillant [Thu, 16 Aug 2018 19:29:13 +0000 (21:29 +0200)]
Added tests for HChacha20
Not that it needed any (the XChacha20 tests were enough), but it's
easier to show outsiders that HChacha20 is correct when we have explicit
test vectors.
Loup Vaillant [Wed, 15 Aug 2018 18:02:03 +0000 (20:02 +0200)]
Properly prevent S malleability
S malleability was mostly prevented in a previous commit, for reasons
that had nothing to do with S malleability. This misled users into
thinking Monocypher was not S malleable.
To avoid confusion, I properly verify that S is strictly lower than L
(the order of the curve). S malleability is no longer a thing.
We still have nonce malleability, but that one can't be helped.
Also added Wycheproof test vectors about malleability.
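The strict range check can be sketched like this (the byte values are the standard little-endian encoding of the curve order L; the function name is hypothetical):

```c
#include <stdint.h>

/* L = 2^252 + 27742317777372353535851937790883648493,
 * little-endian encoding. */
static const uint8_t L_bytes[32] = {
    0xed, 0xd3, 0xf5, 0x5c, 0x1a, 0x63, 0x12, 0x58,
    0xd6, 0x9c, 0xf7, 0xa2, 0xde, 0xf9, 0xde, 0x14,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10 };

/* Returns 1 if s < L, 0 otherwise.  Compares from the most
 * significant byte down.  This need not be constant time: S is part
 * of the (public) signature. */
static int s_below_l(const uint8_t s[32])
{
    for (int i = 31; i >= 0; i--) {
        if (s[i] < L_bytes[i]) { return 1; }
        if (s[i] > L_bytes[i]) { return 0; }
    }
    return 0; /* s == L */
}
```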
Loup Vaillant [Sat, 11 Aug 2018 18:05:28 +0000 (20:05 +0200)]
Signed sliding windows for EdDSA
Signed sliding windows are effectively one bit wider than their unsigned
counterparts, without doubling the size of the corresponding lookup
table. Going from 4-bit unsigned to 5-bit signed saved almost 17
additions on average.
This gain is less impressive than it sounds: the whole operation still
costs 254 doublings and 56 additions, and going signed made window
construction and look up a bit slower. Overall, we barely gained 2.5%.
We could gain a bit more speed still by precomputing the look up table
for the base point, but the gains would be similar, and the costs in
code size and complexity would be even bigger.
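The core idea can be illustrated with a standard width-w signed recoding (wNAF) on plain integers. This is the textbook formulation, not Monocypher's actual code: every nonzero digit is odd and fits in [-(2^(w-1)-1), 2^(w-1)-1], so a w-bit signed window covers as much ground as a (w+1)-bit unsigned one without a bigger table:

```c
#include <stdint.h>

/* Recode n into width-w signed digits (least significant first).
 * Returns the number of digits written. */
static int wnaf(uint32_t n, int w, int digits[40])
{
    int len = 0;
    while (n > 0) {
        int d = 0;
        if (n & 1) {
            d = (int)(n & ((1u << w) - 1));         /* low w bits      */
            if (d >= 1 << (w - 1)) { d -= 1 << w; } /* centre on zero  */
            n -= (uint32_t)d;  /* unsigned wrap-around handles d < 0 */
        }
        digits[len++] = d;
        n >>= 1;
    }
    return len;
}
```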
Loup Vaillant [Sat, 11 Aug 2018 16:19:35 +0000 (18:19 +0200)]
Reduced EdDSA malleability for sliding windows
Signed sliding windows can overflow the initial scalar by one bit. This
is not a problem when the scalar is reduced modulo L, which is smaller
than 2^253. The second half of the signature however is controlled by
the attacker, and can be any value.
Legitimate signatures, however, are always reduced modulo L. They don't
strictly have to be, but this helps with determinism, and enables test
vectors. So we can safely reject any signature whose second half
exceeds L.
This patch rejects anything above 2^253-1, thus guaranteeing that the
three most significant bits are cleared. This eliminates S-malleability
in most cases, but not all. Besides, there is still nonce malleability.
Users should still assume signatures are malleable.
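This cheap pre-check boils down to inspecting the top byte of the little-endian encoding (the function name is hypothetical):

```c
#include <stdint.h>

/* Returns 1 if s < 2^253, i.e. the three most significant bits of
 * its little-endian encoding are clear; 0 otherwise.  Any S with one
 * of those bits set is at least 2^253, hence above L. */
static int s_fits_253_bits(const uint8_t s[32])
{
    return (s[31] & 0xe0) == 0;
}
```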
Loup Vaillant [Sat, 11 Aug 2018 15:36:14 +0000 (17:36 +0200)]
EdDSA sliding windows now indicate the number
This is in preparation for signed sliding windows. Instead of using -1
for doing nothing and an index to point into the table, we write how
much we add directly (0 means do nothing), and divide that number by 2
to get the table index.
The double scalarmult routine doesn't handle negative values yet.
Loup Vaillant [Wed, 8 Aug 2018 21:24:25 +0000 (23:24 +0200)]
Signed comb with unsigned table
Or, bitwiseshiftleft saves the day. The current code is hacky as hell,
but it works, and it cleared up my confusion. Turns out a suitable
signed comb is quite different from an unsigned one: the table itself
should represent -1 and 1 bits, instead of 0 and 1 bits.
Right now the same effect is achieved with 2 additions (more precisely,
an addition and a subtraction). With the proper table, it can be one
operation.
Loup Vaillant [Sat, 4 Aug 2018 19:47:40 +0000 (21:47 +0200)]
Avoids the first doubling for EdDSA signatures
The overhead of this first multiplication is not much, but it's
measurable.
Note the use of a macro for the constant time lookup and addition. It
could have been a function, but the function call overhead eats up all
the gains (I guess there are too many arguments to push to and pop from
the stack).
Loup Vaillant [Sat, 4 Aug 2018 19:37:14 +0000 (21:37 +0200)]
Avoids the first few doublings in EdDSA verification
Legitimate scalars in EdDSA verification are at most 253 bits long.
That's 3 bits less than the full 256 bits. By starting the loop at the
highest bit set, we can save a couple doublings. It's not much, but
it's measurable.
Loup Vaillant [Sat, 4 Aug 2018 19:08:53 +0000 (21:08 +0200)]
Comb for EdDSA signatures in Niels coordinates
While it takes a bit more space to encode, this also avoids some initial
overhead, and significantly reduces stack size.
Note: we could do away with the T2 coordinate to reduce the overhead of
constant time lookup, but this would also require more work per point
addition. Experiments suggest the bigger table is a little faster.
Loup Vaillant [Sat, 4 Aug 2018 13:30:54 +0000 (15:30 +0200)]
All field element constants have the proper invariants
A number of pre-computed constants didn't follow the ideal invariants set
forth by the carry propagation logic. This increased the risk of limb
overflow.
Now all such constants are generated with fe_frombytes(), which
guarantees they can withstand the same number of additions and
subtractions before needing carry propagation. This reduces the risks,
and simplifies the analysis of code using field arithmetic.
Turns out this commit was a huge blunder. Carry propagation works by
minimising the absolute value of each limb. The reverted patch did not
do that, resulting in limbs that were basically twice as big as they
should be.
While it could still work, this would at least reduce the margin for
error. Better safe than sorry: keep the more versatile loading routine
we had before.
Likewise, constants should minimise the absolute value of their limbs.
Failing to do so caused what was described in issue #107.
Loup Vaillant [Fri, 3 Aug 2018 21:25:55 +0000 (23:25 +0200)]
Cleaner fe_frombytes() (loading field elements)
The old version of fe_frombytes() from the ref10 implementation was not
as clean as I wanted it to be: instead of loading exactly the right
bytes, it played fast and loose, then used a carry operation to
compensate.
It works, but there's a more direct, simpler, and I suspect faster
approach: put the right bits in the right place to begin with.
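The "right bits in the right place" idea can be illustrated with a 51-bit-limb layout (Monocypher itself uses ten 25/26-bit limbs, but the principle is identical; the function names are illustrative):

```c
#include <stdint.h>

/* Load 8 little-endian bytes into a 64-bit word. */
static uint64_t load64_le(const uint8_t s[8])
{
    return (uint64_t)s[0]
        | ((uint64_t)s[1] <<  8) | ((uint64_t)s[2] << 16)
        | ((uint64_t)s[3] << 24) | ((uint64_t)s[4] << 32)
        | ((uint64_t)s[5] << 40) | ((uint64_t)s[6] << 48)
        | ((uint64_t)s[7] << 56);
}

/* Load a field element into five 51-bit limbs by reading exactly the
 * right bytes and shifting each limb into place.  Bit 255 is
 * discarded, as usual for field element loading. */
static void fe_frombytes_51(uint64_t h[5], const uint8_t s[32])
{
    const uint64_t mask = (1ULL << 51) - 1;
    h[0] =  load64_le(s)             & mask; /* bits   0..50  */
    h[1] = (load64_le(s +  6) >>  3) & mask; /* bits  51..101 */
    h[2] = (load64_le(s + 12) >>  6) & mask; /* bits 102..152 */
    h[3] = (load64_le(s + 19) >>  1) & mask; /* bits 153..203 */
    h[4] = (load64_le(s + 24) >> 12) & mask; /* bits 204..254 */
}
```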
Loup Vaillant [Fri, 3 Aug 2018 17:28:31 +0000 (19:28 +0200)]
Specialised adding code for EdDSA signatures
- Saved one multiplication by assuming Z=1
- Hoisted wipes out of loops
- Removed wipes for variable time additions
This made both signatures and verification a bit faster. (Note: current
signature verification speed is only 23% slower than key exchange. I
didn't think it could be that fast.)
Loup Vaillant [Fri, 3 Aug 2018 16:47:15 +0000 (18:47 +0200)]
Full pre-computed table for EdDSA signatures
The main gain for now comes from reducing the amount of constant time
lookup. We could reduce the table's size even further, *or* save a few
multiplications.
I'm currently a little suspicious of the way I generated the table. If
it passes the tests, it shouldn't have any error, but it still requires
some checking.
Point addition used to use 8 intermediate variables. That's 6 more than
what was needed. Removing them made wiping faster, and shrank the stack
by 240 bytes. (Stack size may matter in embedded systems.)
This will require some tweaking yet. We could use signed combs to add 1
bit to the table (currently 4) without making it any bigger. We should
hoist buffer wipes out of the double and add operations. We could
consider other means of adding and doubling, to save multiplications and
to reduce table sizes (smaller tables have faster constant time lookup).
But we're faster than key exchange already. We're on the right track.
There are two reasons: the first is that there are 2 special cases where
the ladder or conversion gives the wrong results (the ladder is correct,
it was just hijacked for another purpose).
The second reason is that fixed-base scalar multiplication can be made
even faster in Twisted Edwards space, by using a comb algorithm. The
current patch uses a window, so it is slower. It will get faster again.
Reduces the number of additions in ge_double_scalarmult_vartime().
Verification is now 80% as fast as signing (in naive implementations,
it's only 50% as fast).
It could be even faster, but it's probably not worth the trouble:
- We could precompute the lookup table for the base point instead of
constructing a cache. This would save about 8 point additions total,
at the cost of 64 lines of code just to lay out the 320 precomputed
constants.
- We could use special, cheaper additions for the precomputed base
point, at the cost of an additional addition function.
- We could use *signed* sliding windows to further reduce the number of
additions, at the cost of an additional point subtraction function
(two if combined with special additions for the base point). Besides,
I don't understand how they work.
The low hanging fruits have been taken. Signature verification is
faster than ever before. This is good enough.
Fusing the two scalar multiplications together halves the number of
required doublings. The code is just as simple, and speeds up signature
verification quite a bit.
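This fusion is Shamir's trick. A minimal sketch with plain integers standing in for curve points (doubling is *2, addition is +): one shared run of doublings computes a*P + b*Q, instead of two separate ladders:

```c
#include <stdint.h>

/* Compute a*P + b*Q with a single pass over the bits of both
 * scalars: one doubling per bit position, plus at most two additions.
 * Integers stand in for points here; the real code does the same
 * walk with point doublings and additions. */
static uint64_t double_scalarmult(uint32_t a, uint64_t P,
                                  uint32_t b, uint64_t Q)
{
    uint64_t acc = 0;
    for (int i = 31; i >= 0; i--) {
        acc *= 2;                       /* one doubling serves both */
        if ((a >> i) & 1) { acc += P; }
        if ((b >> i) & 1) { acc += Q; }
    }
    return acc;
}
```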
General scalar multiplication for EdDSA was only used for checking, and
thus no longer needs to be constant time. Removing buffer wipes and
using a variable time ladder simplifies and speeds up checks quite a
bit.
The one that was removed for version 2.0.4, because a bogus special
case caused it to accept forged signatures (a big fat vulnerability).
To avoid the vulnerability, this optimisation is only used for signing
and public key generation. Those never multiply the base point by zero,
and as such should not hit any nasty special case. More specifically:
- Public key generation works the same as X25519: the scalar is trimmed
before it is multiplied by the base point.
- Signing multiplies the base point by a nonce, which by construction is
a random number between 1 and L-1 (L is the order of the curve).
External scrutiny will be needed to confirm this is safe.
Note: signing is now even faster than it was, because multiplying by a
known point (the base point) lets us avoid a conversion and a division.
Some users tend to rely on security properties that are not provided by
cryptographic signatures. This has led to serious problems in the
past, such as Bitcoin transaction malleability (a replay attack where
the recipient could repeat a previously existing transaction).
Mitigations for signature malleability are possible, but they're at best
easily misunderstood, and at worst incomplete. Better to warn users in
the manual than to encourage reliance on non-standard security
properties.
Those test vectors assume SHA-512, and thus are only activated with the
-DED25519_SHA512 compilation option.
Note the omission of malleability test vectors. Monocypher will
happily accept signatures even when S is not in canonical form. This
is contrary to RFC 8032, which requires implementations to check that S
is lower than L.
I believe RFC 8032 is wrong. Non-malleability means that someone who
only knows the public key, message, and signature, cannot produce
another valid signature. It does *not* mean there is only one valid
signature. In fact, when we know the private key, we can produce a
virtually unlimited number of different, valid, canonical signatures.
Like ECDSA, EdDSA uses a nonce. Unlike ECDSA, that nonce doesn't come
from a random source, it comes from a hash of the message itself. This
determinism prevents nonce reuse, among other problems. However,
nothing prevents someone from bypassing this rule and using a random
nonce instead. This will naturally produce a different, yet valid,
signature.
EdDSA signatures are not unique. The difference between this and
malleability is subtle enough that advertising non-malleability will
lead users to believe in uniqueness, and bake that faulty assumption in
their designs, which will then be insecure.
Loup Vaillant [Wed, 27 Jun 2018 09:16:56 +0000 (11:16 +0200)]
Easier tarball generation
- TARBALL_VERSION now defaults to the contents of VERSION.md
- TARBALL_DIR now defaults to the current directory
- tarball_ignore is now included in the tarball (one can use the tarball
to generate other tarballs).
Loup Vaillant [Sun, 24 Jun 2018 13:58:55 +0000 (15:58 +0200)]
Don't free() NULL pointers
The alloc() function in the test suite unconditionally succeeds when
trying to allocate zero bytes. It does so by returning NULL right away,
without exiting the program. This was for portability for platforms
that refuse to allocate zero bytes.
Unfortunately, this meant that the test suite later called free() on
those NULL pointers. While free(NULL) is a well defined no-op in ISO C,
some pre-standard platforms mishandle it and some analysers flag it.
Wrapping free() in a dealloc() function avoids the issue.
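A minimal sketch of the wrapper pair, assuming this alloc() behaviour (the exact code in the test suite may differ):

```c
#include <stdlib.h>

/* Allocation that always "succeeds": asking for zero bytes returns
 * NULL without aborting, for platforms that refuse zero-byte
 * allocations. */
static void *alloc(size_t size)
{
    if (size == 0) {
        return 0;
    }
    void *p = malloc(size);
    if (p == 0) {
        exit(1); /* out of memory */
    }
    return p;
}

/* Matching deallocation: tolerates the NULL pointers alloc(0)
 * returns. */
static void dealloc(void *p)
{
    if (p != 0) {
        free(p);
    }
}
```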
Loup Vaillant [Sat, 23 Jun 2018 18:34:48 +0000 (20:34 +0200)]
EdDSA no longer accepts all zero signatures
This fixes the critical vulnerability in commit e4cbf84384ffdce194895078c88680be0c341d76 (compute signatures in
Montgomery space (faster)), somewhere between versions 0.8 and 1.0, and
detected by the tests in the parent commit.
The fix basically reverts the optimisation, effectively halving the
performance of EdDSA.
It appears the conversion to Montgomery space and back didn't handle
every edge case correctly. That optimisation will be re-introduced once
the issue has been fully understood. This will probably require expert
advice.