Made the 3 options (from source, from lib, system-wide installation)
clearer, and stated explicitly that compilation flags can be changed.
(Those flags are all standard, but not everyone may know them.)
Loup Vaillant [Thu, 14 Mar 2019 22:45:44 +0000 (23:45 +0100)]
Clarified why some buffers are not wiped
It wasn't clear why ge_msub() and ge_double_scalarmult_vartime() don't
wipe their buffers. I have added warnings stating that they indeed don't,
and thus should not be used to process secrets.
This also makes clear to auditors that failing to wipe the buffers was
intentional.
Loup Vaillant [Wed, 13 Mar 2019 23:10:26 +0000 (00:10 +0100)]
Improved the key exchange API
crypto_kex_ctx is now differentiated into a client specific context, and
a server specific context. The distinction is entirely artificial (it's
the same thing under the hood), but it prevents some misuses at compile
time, making the API easier to use.
The names of the arguments have also been changed: "local" and "remote"
have been replaced by "client" and "server" wherever appropriate. The
previous names made the implementation easier, but their meaning was
context dependent, and thus confusing. The new names have stable
meanings, and are thus easier to document and use.
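The pattern at work can be sketched roughly like this (hypothetical
names and fields, not Monocypher's actual declarations): two distinct
wrapper types over the same underlying state make client/server mix-ups
a compile time error.

```c
#include <stddef.h>
#include <stdint.h>

/* Shared state: the client/server distinction is purely type-level. */
typedef struct {
    uint8_t transcript[128];
    size_t  transcript_size;
} kex_ctx;

typedef struct { kex_ctx ctx; } kex_client_ctx;
typedef struct { kex_ctx ctx; } kex_server_ctx;

/* Passing a kex_server_ctx here no longer compiles. */
void kex_client_init(kex_client_ctx *client)
{
    client->ctx.transcript_size = 0;
}

void kex_server_init(kex_server_ctx *server)
{
    server->ctx.transcript_size = 0;
}
```

The wrappers cost nothing at run time; the compiler erases them entirely.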
Fabio Scotoni [Tue, 12 Mar 2019 06:17:39 +0000 (07:17 +0100)]
man: fix whitespace and macro invocation issues
- There was some trailing whitespace on some of the lines of the new
pages that I hadn't noticed.
- There was a .PP instead of .Pp.
- There was a .Fa with no space after it.
Loup Vaillant [Mon, 4 Mar 2019 22:20:28 +0000 (23:20 +0100)]
Corrected undefined behaviour in kex tests
Calling those functions again on the same context not only makes no
sense, it can grow the transcript beyond its maximum size of 128 bytes,
which triggers a buffer overflow. We needed to save the context so we
could re-run the relevant function where we left off.
It's the second time the TIS interpreter finds a bug that the other
sanitisers didn't.
Loup Vaillant [Sun, 3 Mar 2019 21:56:29 +0000 (22:56 +0100)]
Added secure channel protocols (experimental)
At long last, the NaCl family of crypto libraries is gaining direct
support for secure channels.
Up until now, the choices were basically to invent our own protocol, or
to give up and use a TLS library, thus voiding the usability
improvements of NaCl libraries.
Now we have a solution. It's still a bit experimental and not yet
documented, but it's there. And soon, we will finally be able to shift
the cryptographic right answer for secure channels away from TLS, and
towards the NaCl family. Or perhaps just Monocypher, if for some reason
Libsodium doesn't follow suit. :-)
Loup Vaillant [Fri, 22 Feb 2019 20:14:06 +0000 (21:14 +0100)]
Added comment on speed tests
The way I measure timings is not perfectly portable. Users who
get weird results are encouraged to modify this bit of code to
have proper measurements.
Loup Vaillant [Sun, 17 Feb 2019 18:25:52 +0000 (19:25 +0100)]
Removed division by zero in speed benchmarks
If some library is so fast that it goes below the resolution of the
timer we're using to measure it, the measured duration may be zero,
which then triggers a division by zero when we convert it to a speed in Hz.
This could possibly happen with a very fast library (Libsodium), on a
very fast machine, with a sufficiently low resolution timer.
This patch reworks and simplifies things a bit, and adds an explicit
check. We now print "too fast to be measured" instead of dividing by
zero.
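A minimal sketch of the guard (illustrative names, not the actual
benchmark code): treat a zero duration as "too fast to be measured"
instead of dividing by it.

```c
#include <stdint.h>
#include <stdio.h>

/* Converts a measured duration to a speed in Hz.  Returns 0 to mean
 * "too fast to be measured": the duration fell below the timer's
 * resolution, so dividing by it would be meaningless. */
static uint64_t speed_hz(uint64_t nb_runs, uint64_t duration_ticks,
                         uint64_t ticks_per_second)
{
    if (duration_ticks == 0) {
        return 0;
    }
    return nb_runs * ticks_per_second / duration_ticks;
}

static void print_speed(uint64_t nb_runs, uint64_t duration_ticks,
                        uint64_t ticks_per_second)
{
    uint64_t hz = speed_hz(nb_runs, duration_ticks, ticks_per_second);
    if (hz == 0) { printf("too fast to be measured\n");         }
    else         { printf("%llu Hz\n", (unsigned long long)hz); }
}
```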
Loup Vaillant [Sat, 26 Jan 2019 14:44:01 +0000 (15:44 +0100)]
Allow the test suite to customise its random seed
This will only affect the property based tests, not the test vectors
themselves. The idea is to let paranoid users run the test suite with
lots and lots of different streams of random numbers, just to be safe.
Test vector generation could undergo a similar transformation, though it
is less likely to be worth the trouble (we'd have to generate the test
vectors, then compile the test suite all over again).
Loup Vaillant [Fri, 25 Jan 2019 14:43:02 +0000 (15:43 +0100)]
Link SHA-512 code when using -DED25519_SHA512
When the $CFLAGS variable contains the -DED25519_SHA512 option (by
default it doesn't), the code from src/optional/sha512.c is
automatically linked to the final libraries (libmonocypher.a and
libmonocypher.so).
That way, users who need to install an Ed25519-compliant version of
Monocypher can do so simply by altering the compilation options with the
$CFLAGS variable.
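Assuming the Makefile picks up $CFLAGS in the usual way, the two builds
look like this (invocation shown as an illustration; check the README
for the exact recipe):

```shell
# Default build: no SHA-512 code is linked in.
make

# Ed25519-compliant build: src/optional/sha512.c is linked into
# libmonocypher.a and libmonocypher.so automatically.
make CFLAGS="-DED25519_SHA512"
```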
Loup Vaillant [Sun, 20 Jan 2019 21:42:38 +0000 (22:42 +0100)]
Made L an array of *signed* integers
It was unsigned previously, causing a bunch of implementation-defined
conversions. Virtually all machines nowadays use 2's complement, but
it's still cleaner this way.
Loup Vaillant [Thu, 6 Dec 2018 00:04:37 +0000 (01:04 +0100)]
Decoupled window widths, minimised stack usage
The width of the pre-computed window affects the program size. It has
been set to 5 (8 elements) so we can approach maximum performance
without bloating the program too much.
The width of the cached window affects the *stack* size. It has been set
to 3 (2 elements) to avoid blowing up the stack (this matters most on
embedded environments). The performance hit is measurable, yet very
reasonable.
Footgun wielders can adjust those widths as they see fit.
Loup Vaillant [Wed, 5 Dec 2018 22:16:55 +0000 (23:16 +0100)]
Parameterise sliding window width with a macro
This is more general, perhaps even more readable this way. This also
lays the groundwork for using different window widths for the
pre-computed window and the cached one. (The cached window has to be
smaller to save stack space, while the pre-computed constant is allowed
to be bigger).
Loup Vaillant [Thu, 16 Aug 2018 19:29:13 +0000 (21:29 +0200)]
Added tests for HChacha20
Not that it needed any (the XChacha20 tests were enough), but it's
easier to communicate to outsiders that HChacha20 is correct when we
have explicit test vectors.
Loup Vaillant [Wed, 15 Aug 2018 18:02:03 +0000 (20:02 +0200)]
Properly prevent S malleability
S malleability was mostly prevented in a previous commit, for reasons
that had nothing to do with S malleability. This misled users into
thinking Monocypher was not S malleable.
To avoid confusion, I properly verify that S is strictly lower than L
(the order of the curve). S malleability is no longer a thing.
We still have nonce malleability, but that one can't be helped.
Also added Wycheproof test vectors about malleability.
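The check described above amounts to a byte-wise comparison of S
against L, both little-endian. A sketch (hypothetical function name,
not the actual code; variable time is fine here because signatures are
public):

```c
#include <stdint.h>

/* L = 2^252 + 27742317777372353535851937790883648493,
 * the order of the prime subgroup, little-endian. */
static const uint8_t L_bytes[32] = {
    0xed, 0xd3, 0xf5, 0x5c, 0x1a, 0x63, 0x12, 0x58,
    0xd6, 0x9c, 0xf7, 0xa2, 0xde, 0xf9, 0xde, 0x14,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10,
};

/* Returns 1 if S < L (signature may be valid), 0 otherwise. */
static int s_below_l(const uint8_t s[32])
{
    for (int i = 31; i >= 0; i--) {
        if (s[i] < L_bytes[i]) { return 1; }
        if (s[i] > L_bytes[i]) { return 0; }
    }
    return 0; /* S == L: not strictly lower */
}
```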
Loup Vaillant [Sat, 11 Aug 2018 18:05:28 +0000 (20:05 +0200)]
Signed sliding windows for EdDSA
Signed sliding windows are effectively one bit wider than their
unsigned counterparts, without doubling the size of the corresponding
lookup table. Going from 4-bit unsigned to 5-bit signed allowed us to
save almost 17 additions on average.
This gain is less impressive than it sounds: the whole operation still
costs 254 doublings and 56 additions, and going signed made window
construction and lookup a bit slower. Overall, we barely gained 2.5%.
We could gain a bit more speed still by precomputing the lookup table
for the base point, but the gains would be similar, and the costs in
code size and complexity would be even bigger.
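The recoding idea can be sketched as follows (simplified to fixed 5-bit
windows over a 64-bit scalar; the real code uses sliding windows over a
256-bit scalar, but the borrow logic is the same):

```c
#include <stdint.h>

/* Recodes a 64-bit scalar into 13 signed base-32 digits in [-16, 15].
 * An unsigned width-5 digit lies in [0, 31]; borrowing 32 from the
 * next digit whenever a digit is >= 16 makes it signed.  This is what
 * lets a signed table cover one more bit than an unsigned table of
 * the same size. */
static int recode_w5(int8_t digits[13], uint64_t scalar)
{
    int carry = 0;
    for (int i = 0; i < 13; i++) {
        int d = (int)((scalar >> (5 * i)) & 31) + carry;
        carry     = d >= 16;                    /* borrow from above */
        digits[i] = (int8_t)(d - (carry << 5)); /* now in [-16, 15]  */
    }
    return carry; /* a final carry overflows by one bit */
}
```

Note the return value: the recoding can overflow the scalar by one bit,
which is exactly the issue the "Reduced EdDSA malleability" commit below
guards against.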
Loup Vaillant [Sat, 11 Aug 2018 16:19:35 +0000 (18:19 +0200)]
Reduced EdDSA malleability for sliding windows
Signed sliding windows can overflow the initial scalar by one bit. This
is not a problem when the scalar is reduced modulo L, which is smaller
than 2^253. The second half of the signature however is controlled by
the attacker, and can be any value.
Legitimate signatures however always reduce modulo L. They don't really
have to, but this helps with determinism, and enables test vectors. So
we can safely reject any signature whose second half exceeds L.
This patch rejects anything above 2^253-1, thus guaranteeing that the
three most significant bits are cleared. This eliminates S malleability
in most cases, but not all. Besides, there is still nonce malleability.
Users should still assume signatures are malleable.
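Since S is encoded little-endian in the last 32 bytes of the 64-byte
signature, rejecting anything at or above 2^253 only requires looking
at the top byte. A sketch (hypothetical function name):

```c
#include <stdint.h>

/* Returns 1 if the second half of the signature (S, little-endian,
 * bytes 32..63) fits in 253 bits, i.e. its 3 most significant bits
 * are clear.  Signatures failing this check are rejected outright. */
static int s_fits_253_bits(const uint8_t sig[64])
{
    return (sig[63] & 0xe0) == 0;
}
```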
Loup Vaillant [Sat, 11 Aug 2018 15:36:14 +0000 (17:36 +0200)]
EdDSA sliding windows now indicate the number
This is in preparation for signed sliding windows. Instead of using -1
to mean "do nothing" and an index into the table otherwise, we write
the amount to add directly (0 means do nothing). Dividing that number
by 2 gives the table index.
The double scalarmult routine doesn't handle negative values yet.
Loup Vaillant [Wed, 8 Aug 2018 21:24:25 +0000 (23:24 +0200)]
Signed comb with unsigned table
Or, bitwiseshiftleft saves the day. The current code is hacky as hell,
but it works, and it cleared up my confusion. Turns out a suitable
signed comb is quite different from an unsigned one: the table itself
should represent -1 and 1 bits, instead of 0 and 1 bits.
Right now the same effect is achieved with 2 additions (more precisely,
an addition and a subtraction). With the proper table, it could be a
single operation.
Loup Vaillant [Sat, 4 Aug 2018 19:47:40 +0000 (21:47 +0200)]
Avoids the first doubling for EdDSA signatures
The overhead of this first multiplication is not much, but it's
measurable.
Note the use of a macro for the constant time lookup and addition. It
could have been a function, but the function call overhead eats up all
the gains (I guess there are too many arguments to push to and pop from
the stack).
Loup Vaillant [Sat, 4 Aug 2018 19:37:14 +0000 (21:37 +0200)]
Avoids the first few doublings in EdDSA verification
Legitimate scalars in EdDSA verification are at most 253 bits long.
That's 3 bits less than the full 256 bits. By starting the loop at the
highest bit set, we can save a couple doublings. It's not much, but
it's measurable.
Loup Vaillant [Sat, 4 Aug 2018 19:08:53 +0000 (21:08 +0200)]
Comb for EdDSA signatures in Niels coordinates
While it takes a bit more space to encode, this also avoids some initial
overhead, and significantly reduces stack size.
Note: we could do away with the T2 coordinate to reduce the overhead of
constant time lookup, but this would also require more work per point
addition. Experiments suggest the bigger table is a little faster.
Loup Vaillant [Sat, 4 Aug 2018 13:30:54 +0000 (15:30 +0200)]
All field element constants have the proper invariants
A number of pre-computed constants didn't follow the ideal invariants
set forth by the carry propagation logic. This increased the risk of
limb overflow.
Now all such constants are generated with fe_frombytes(), which
guarantees they can withstand the same number of additions and
subtractions before needing carry propagation. This reduces the risks,
and simplifies the analysis of code using field arithmetic.
Turns out this commit was a huge blunder. Carry propagation works by
minimising the absolute value of each limb. The reverted patch did not
do that, resulting in limbs that were basically twice as big as they
should have been.
While it could still work, this would at least reduce the margin for
error. Better safe than sorry, and keep the more versatile loading
routine we had before.
Likewise, constants should minimise the absolute value of their limbs.
Failing to do so caused what was described in issue #107.
Loup Vaillant [Fri, 3 Aug 2018 21:25:55 +0000 (23:25 +0200)]
Cleaner fe_frombytes() (loading field elements)
The old version of fe_frombytes() from the ref10 implementation was not
as clean as I wanted it to be: instead of loading exactly the right
bytes, it played fast and loose, then used a carry operation to
compensate.
It works, but there's a more direct, simpler, and I suspect faster
approach: put the right bits in the right place to begin with.
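The idea, illustrated with the simpler 5 x 51-bit limb layout
(Monocypher actually uses 10 x 25.5-bit limbs, but the principle is
identical): read 64 bits starting at a byte boundary at or just below
each limb's first bit, shift it into place, and mask. No fix-up carry
pass needed.

```c
#include <stdint.h>

static uint64_t load64_le(const uint8_t s[8])
{
    uint64_t r = 0;
    for (int i = 7; i >= 0; i--) {
        r = (r << 8) | s[i];
    }
    return r;
}

/* Loads a 255-bit little-endian number into 5 limbs of 51 bits each:
 * limb i holds bits [51*i, 51*i + 50]. */
static void fe_frombytes_sketch(uint64_t fe[5], const uint8_t s[32])
{
    const uint64_t mask = (UINT64_C(1) << 51) - 1;
    fe[0] =  load64_le(s)             & mask; /* bits   0..50  */
    fe[1] = (load64_le(s +  6) >>  3) & mask; /* bits  51..101 */
    fe[2] = (load64_le(s + 12) >>  6) & mask; /* bits 102..152 */
    fe[3] = (load64_le(s + 19) >>  1) & mask; /* bits 153..203 */
    fe[4] = (load64_le(s + 24) >> 12) & mask; /* bits 204..254 */
}
```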
Loup Vaillant [Fri, 3 Aug 2018 17:28:31 +0000 (19:28 +0200)]
Specialised adding code for EdDSA signatures
- Saved one multiplication by assuming Z=1
- Hoisted wipes out of loops
- Removed wipes for variable time additions
This made both signatures and verification a bit faster. (Note: current
signature verification speed is only 23% slower than key exchange. I
didn't think it could be that fast.)
Loup Vaillant [Fri, 3 Aug 2018 16:47:15 +0000 (18:47 +0200)]
Full pre-computed table for EdDSA signatures
The main gain for now comes from reducing the amount of constant time
lookup. We could reduce the table's size even further, *or* save a few
multiplications.
I'm currently a little suspicious of the way I generated the table. If
it passes the tests, it shouldn't have any error, but it still requires
some checking.
Point addition used to use 8 intermediate variables. That's 6 more than
what was needed. Removing them made wiping faster, and shrank the stack
by 240 bytes. (Stack size may matter in embedded systems.)