Fabio Scotoni [Mon, 2 Mar 2020 07:36:52 +0000 (08:36 +0100)]
extract_examples.sh: warning cleanup
1. Remove now-unused random_bytes().
2. "warning: empty struct has size 0 in C, size 1 in C++ [-Wc++-compat]"
"warning: empty struct is a GNU extension [-Wgnu-empty-struct]"
clang -Weverything
Fabio Scotoni [Mon, 2 Mar 2020 06:57:22 +0000 (07:57 +0100)]
crypto_chacha20 example overhaul
1. Randomize keys and nonces.
2. Minor alignment fix in second example.
3. Make i unsigned to avoid a clang warning about 500-(i-64) changing
signedness under -Weverything.
4. Initialize ctr to 0.
4. Fix obviously wrong encryption when jumping around in the example
(repeating ctr issue [!], wrong function used in the example).
Fabio Scotoni [Mon, 2 Mar 2020 06:34:14 +0000 (07:34 +0100)]
crypto_argon2i example overhaul
1. The common type for a password is char*; use a cast instead.
C11, para. 6.5(7) suggests this will be largely okay.
2. Wipe the password on failure.
3. Initialize the password size while there.
Does not use strlen(3) to avoid extra stdlib functions.
4. Branch on allocation failure.
Loup Vaillant [Wed, 26 Feb 2020 21:14:57 +0000 (22:14 +0100)]
Replaced fast mappings by even better ones
Turned out there were much simpler ways to compute the mapping, thanks
to the fact that when the prime p is congruent to 5 modulo 8, we have
this nice equality:
x^((p-5)/8) = sqrt(1/x)         if x is square,
x^((p-5)/8) = sqrt(sqrt(-1)/x)  otherwise.
The code was kindly given by Andrew Moon, who got the original trick
from Mike Hamburg.
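A minimal Python sketch of the trick (assumptions: p = 2^255 - 19, the
Curve25519 prime, and sqrt(-1) taken as 2^((p-1)/4), which squares to -1
because 2 is a non-residue mod p; the sign of the candidate root may
need fixing):

```python
# Single-exponentiation inverse square root, valid because p % 8 == 5.
p = 2**255 - 19
sqrt_m1 = pow(2, (p - 1) // 4, p)       # a square root of -1 mod p
assert (sqrt_m1 * sqrt_m1) % p == p - 1

def inv_sqrt(x):
    """Return (c, is_square) with c^2 * x == 1 when x is a square."""
    c = pow(x, (p - 5) // 8, p)         # the one exponentiation
    t = (c * c * x) % p                 # +-1 if square, +-sqrt(-1) otherwise
    if t == p - 1:                      # wrong sign: multiply by sqrt(-1)
        c = (c * sqrt_m1) % p
        t = 1
    return c, t == 1

c, ok = inv_sqrt(9)                     # 9 = 3^2 is a square
assert ok and (c * c * 9) % p == 1      # c really is sqrt(1/9) mod p
```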
Fabio Scotoni [Tue, 25 Feb 2020 08:53:24 +0000 (09:53 +0100)]
Examples: const correctness
It's unfortunate that we can't both tell users to wipe keys and
illustrate which arguments are inputs and which ones are outputs
at the same time, but that's just how it is.
Loup Vaillant [Fri, 21 Feb 2020 22:17:29 +0000 (23:17 +0100)]
Elligator script: use x25519_pk test vectors
We're now reading the `x25519_pk.all.vec` generated by Libsodium in
`x25519.c`, to make sure scalarmult is correctly implemented in the
Python script.
While we're at it, we also use them to generate Elligator 2 vectors.
Any addition to the X25519 public key generation will automatically
benefit Elligator 2 as well.
TODO: update the makefile to make sure the vectors are generated before
we run `elligator.py`
Loup Vaillant [Thu, 20 Feb 2020 23:00:44 +0000 (00:00 +0100)]
Elligator script: removed erroneous .abs()
Changing the sign of the v coordinate had an effect on the final hash
value, but wasn't detected because my initial tests only compared the
u coordinate, which appears to be correct.
This doesn't affect the success or failure of the Elligator mapping,
which only looks at the u coordinate. Yet another example of incorrect
crypto that looks like it works...
Loup Vaillant [Wed, 19 Feb 2020 22:51:12 +0000 (23:51 +0100)]
Elligator 2 script: fast scalarmult, explicit hash_to_curve
The fast scalar multiplication will let us explore the merging of the
various exponentiations required to perform the conversion to Montgomery
then curve_to_hash.
The explicit hash_to_curve() serves as an implementation guide. Note
the omission of the v coordinate, not needed for X25519. I am not
aware of a compelling use case to convert to Edwards (not all PAKEs need
point addition).
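For reference, the core of hash_to_curve() can be sketched as below
(u coordinate only, as noted above). This is an illustration, not the
script's actual code: A = 486662 and the non-square 2 are the usual
Curve25519 choices, and the 1 + 2r^2 = 0 edge case is ignored.

```python
# Elligator 2 map to the Montgomery u coordinate of Curve25519 (sketch).
p = 2**255 - 19
A = 486662                    # Curve25519: v^2 = u^3 + A*u^2 + u

def is_square(x):
    return x % p == 0 or pow(x, (p - 1) // 2, p) == 1

def hash_to_curve(r):
    w = (-A * pow(1 + 2 * r * r, p - 2, p)) % p    # w = -A / (1 + 2r^2)
    rhs = (w * w * w + A * w * w + w) % p
    return w if is_square(rhs) else (-A - w) % p   # whichever is on the curve

u = hash_to_curve(5)
assert is_square((u * u * u + A * u * u + u) % p)  # v exists: u is on curve
```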
Loup Vaillant [Wed, 19 Feb 2020 20:30:47 +0000 (21:30 +0100)]
Added the fe (field element) type for readability
Having to write those modulo operators everywhere was tiresome. An
explicit field element type makes the code more direct. It also
helps Python throw type errors if we misuse anything.
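A minimal sketch of what such a type can look like (hypothetical names,
not the script's actual code):

```python
class fe:
    """Field element mod p = 2**255 - 19 (minimal sketch)."""
    p = 2**255 - 19
    def __init__(self, n):     self.n = n % fe.p
    def __add__(self, other):  return fe(self.n + other.n)
    def __sub__(self, other):  return fe(self.n - other.n)
    def __mul__(self, other):  return fe(self.n * other.n)
    def __pow__(self, k):      return fe(pow(self.n, k, fe.p))
    def __eq__(self, other):   return self.n == other.n
    def invert(self):          return fe(pow(self.n, fe.p - 2, fe.p))

# The modulo is now implicit in every operation:
assert fe(2) * fe(2).invert() == fe(1)
assert fe(3) ** 2 == fe(9)
```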
Loup Vaillant [Mon, 17 Feb 2020 23:03:29 +0000 (00:03 +0100)]
Ported SAGE script to Python3
Turned out SAGE wasn't really needed. Python already has all we need,
and the differences are minimal. Cryptographers will be able to read
the Python code as easily as SAGE.
The disadvantage is that Python3's arithmetic is twice as slow.
The advantage is that Python3 is ubiquitous, and can reasonably be
required to generate test vectors.
Loup Vaillant [Sun, 16 Feb 2020 23:25:24 +0000 (00:25 +0100)]
Added Elligator2 SAGE script
That script prints test vectors to the standard output, in the following
order:
- private key
- public key (X coordinate)
- public key (Y coordinate, never exposed by Monocypher)
- Boolean (0 if we can't convert, 1 if we can)
- hash of the public key (or zero if we couldn't convert)
I could use that script to generate the test vectors automatically, but
I hesitate to introduce a hard dependency on SAGE.
The alternative is to put the test vectors themselves under version
control. We could add a target to the makefile that checks whether the
test vectors and the script are in sync, but that would break if we end
up adding vectors manually (which typically happens whenever project
Wycheproof publishes new vectors).
Loup Vaillant [Fri, 14 Feb 2020 23:27:25 +0000 (00:27 +0100)]
Removed modulo operation in SHA-512
While I expect almost all compilers to optimise those down to a bit
mask in practice, removing the modulo can help naive compilers generate
better code. The rest of Monocypher already takes this approach; I just
forgot about this one.
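The optimisation in question: a reduction by a power of two (here the
128-byte SHA-512 block size) is equivalent to a bit mask, illustrated
in Python:

```python
# i % 128 equals i & 127 for non-negative i, because 128 is a power of two.
BLOCK_SIZE = 128
for i in range(0, 1024, 7):
    assert i % BLOCK_SIZE == i & (BLOCK_SIZE - 1)
```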
Loup Vaillant [Fri, 14 Feb 2020 23:16:45 +0000 (00:16 +0100)]
Removed 64-bit modulo operation in Argon2i
Fixes #156
This modulo operation is implemented in software on many 32-bit
processors, such as the Cortex-M3. This causes the generated binary to
depend on a standard library routine that is often not present on such
small machines. This hurts portability and convenience.
Thankfully, this particular modulo is not needed, and can be replaced by
a simple test and subtraction. This is not constant time, but we don't
care: the index we are computing does not depend on any secret, so a
variable timing won't expose anything.
Performance seems to increase very slightly on x86-64. 32-bit machines
may benefit more.
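The replacement can be sketched like this (hypothetical names; it
assumes, as the change relies on, that the value being reduced is
already less than twice the divisor, so one conditional subtraction
suffices):

```python
# x % bound via one test and subtraction, valid when 0 <= x < 2 * bound.
def reduce_once(x, bound):
    return x - bound if x >= bound else x

for bound in (7, 100, 2**32):
    for x in (0, 1, bound - 1, bound, 2 * bound - 1):
        assert reduce_once(x, bound) == x % bound
```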
Loup Vaillant [Sat, 8 Feb 2020 17:23:51 +0000 (18:23 +0100)]
Added warning in the Git version of the README
I noticed that some careless, hurried people tend to use Monocypher from
the git repository directly. They don't even grab the releases from
GitHub. That's not ideal for two reasons:
1. The master branch isn't always stable.
2. The Git repository misses some automatically generated files.
This patch attempts to get end users away from the Git repository,
towards well-tested official releases. Also, for users who think the
tarballs are binary releases (they're source releases), or just want to
be done as quickly as possible, I also gave direct links to the main
source and header files.
Loup Vaillant [Fri, 24 Jan 2020 21:19:32 +0000 (22:19 +0100)]
Improved readability of EdDSA verification
Basically, we separated the computation of R_check from the verification
that it is equal to R. The computation of R_check takes s, h_ram, and
the public key as parameters, and outputs R_check.
The primary advantage is a better separation of concerns, which makes
the code more readable in my opinion.
A secondary advantage is that we could now test ge_r_check() separately,
with arbitrary values of s and h_ram. This lets us test
difficult-to-trigger edge cases, like having s or h_ram exceed 2^252,
and is just plain more general than only testing valid and invalid
signatures.
I very much like this secondary advantage, because EdDSA is to this day
the part of Monocypher that makes me the most nervous.
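The structure of the change can be illustrated with a toy Schnorr-style
scheme over a small multiplicative group. This is NOT Ed25519 and not
Monocypher's code; all names and numbers are made up, purely to show
the separation between computing R_check and comparing it to R.

```python
# Toy group: subgroup of prime order q = 11 mod p = 23, generated by g = 2.
p, q, g = 23, 11, 2

def compute_r_check(s, h_ram, A):
    # R_check = g^s * A^(-h_ram), computed on its own...
    return (pow(g, s, p) * pow(A, (-h_ram) % q, p)) % p

def verify(R, s, h_ram, A):
    # ...then compared to R in a separate step.
    return compute_r_check(s, h_ram, A) == R

a = 3                       # secret key
A = pow(g, a, p)            # public key
r = 5; R = pow(g, r, p)     # commitment
h_ram = 4                   # stand-in for the challenge hash
s = (r + h_ram * a) % q
assert verify(R, s, h_ram, A)
assert not verify(R, (s + 1) % q, h_ram, A)
```

Because compute_r_check() stands alone, it can be fed arbitrary s and
h_ram values directly, which is exactly what makes the edge-case
testing above possible.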
Loup Vaillant [Thu, 23 Jan 2020 22:27:47 +0000 (23:27 +0100)]
Easier to use ge_madd() and ge_msub()
There are two main changes:
1. ge_madd() and ge_msub() now take a ge_precomp as second argument.
2. ge_msub() can now process secrets.
The pre-computed tables have been adjusted accordingly. They're now
arrays of ge_precomp instead of multiple arrays of fe.
We can also expect a (mostly negligible) performance increase:
- The new tables have slightly better locality.
- ge_msub() is the mirror of ge_madd() instead of using it.
- Using ge_msub() for signatures is now slightly more direct.
Fabio Scotoni [Wed, 15 Jan 2020 12:50:09 +0000 (13:50 +0100)]
Add some comments about what the EC functions do
This hopefully helps both auditing the code (by giving a clear
indication of what a function is supposed to be doing),
and trying to work with it (by minimizing the amount of time people need
to determine if a function is relevant to their interests).