I added all the OpenSSL functions that we call in `tests.c` and in `bench_verify.c` to the m4 check, so that if any of them are missing, OpenSSL support won't be enabled.
I also modified the check slightly to prevent a segmentation fault when running the test program (not that it really matters for autotools).
Tim Ruffing [Mon, 26 Oct 2020 13:38:30 +0000 (14:38 +0100)]
Return NULL early in context_preallocated_create if flags invalid
If the user passes invalid flags to _context_create, and the default
illegal callback does not abort the program (which is possible), then we
work with the result of malloc(0), which may be undefined behavior. This
violates the promise that a library function won't crash after the
illegal callback has been called.
This commit fixes this issue by returning NULL early in _context_create
in that case.
Fabien [Tue, 27 Oct 2020 07:43:10 +0000 (08:43 +0100)]
Run the undefined behaviour sanitizer on Travis
Run UBSAN with both GCC and Clang, on Linux and macOS.
The `halt_on_error=1` option is required to make the build fail if the
sanitizer finds an issue.
Fabien [Mon, 26 Oct 2020 11:29:00 +0000 (12:29 +0100)]
Prevent arithmetic on NULL pointer if the scratch space is too small
If the scratch space is too small when calling
`secp256k1_ecmult_strauss_batch()`, the `state.pre_a` allocation will
fail and the pointer will be `NULL`. This causes `state.pre_a_lam` to be
computed from the `NULL` pointer.
It is also possible that the first allocation to fail is for `state.ps`,
which will cause the failure to occur when in
`secp256k1_ecmult_strauss_wnaf()`.
The issue has been detected by UBSAN using Clang 10:
```
CC=clang \
CFLAGS="-fsanitize=undefined -fno-omit-frame-pointer" \
LDFLAGS="-fsanitize=undefined -fno-omit-frame-pointer" \
../configure
UBSAN_OPTIONS=print_stacktrace=1:halt_on_error=1 make check
```
Elichai Turkel [Wed, 21 Oct 2020 11:39:52 +0000 (14:39 +0300)]
Modify bitcoin_secp.m4's openssl check to call all the functions that we use in the tests/benchmarks.
That way, linking will fail if any of those symbols are missing.
This is a rebased/combined version of the following pull requests/commits with minor changes:
* #825 Switch to our own memcmp function
* Modification: `secp256k1_memcmp_var` is marked static inline
* Modification: also replace `memcmp` with `secp256k1_memcmp_var` in exhaustive tests
* Modification: add reference to GCC bug 95189
* #822 Increase precision of g1 and g2
* Modification: use the new `secp256k1_memcmp_var` function instead of `memcmp` (see https://github.com/bitcoin-core/secp256k1/pull/822#issuecomment-706610361)
* Modification: drop the "Allow secp256k1_split_lambda_verify to pass even in the presence of GCC bug https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95189." commit, as the issue is dealt with using `secp256k1_memcmp_var`.
* Modification: rename secp256k1_gej_mul_lambda -> secp256k1_ge_mul_lambda
* A new commit that moves the `lambda` constant out of `secp256k1_scalar_split_lambda` (and `_verify`).
* The test commit suggested here: https://github.com/bitcoin-core/secp256k1/pull/822#issuecomment-706610276
* Modification: use the new accessible `secp256k1_const_lambda` instead of duplicating it.
* #826 Rip out non-endomorphism code
* A new commit that reduces the size of the WNAF output to 129, as we now have proof that the split output is always 128 bits or less.
* A new commit to name the variables more consistently: input `k`, integer outputs `k1`, `k2`, modulo-n outputs `r1`, `r2`.
Gregory Maxwell [Sat, 10 Oct 2020 20:46:36 +0000 (20:46 +0000)]
Check correctness of lambda split without -DVERIFY
The VERIFY macro turns on various paranoid consistency checks, but
the complete functionality should still be tested without it.
This also adds a couple of static test points for extremely small
split inputs/outputs. The existing bounds vectors already check
extremely large outputs.
Russell O'Connor [Mon, 21 Sep 2020 15:40:01 +0000 (11:40 -0400)]
Increase precision of g1 and g2
This allows us to shift by 256+128 = 384 bits, which is a multiple of the limb size of
the scalar representation. This also happens to be the most precision possible for g2
that still fits into a 256-bit value.
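In equations, the rounding-based split this precision serves can be sketched as follows. The notation is a hedged reconstruction of the standard GLV-style decomposition, not copied from the code: `g1` and `g2` are precomputed so that the division by the group order `n` becomes a multiply followed by a 384-bit shift, which is why 384 being a multiple of the limb size matters.

```
c_1 = \lfloor k \cdot g_1 / 2^{384} \rceil , \qquad
c_2 = \lfloor k \cdot g_2 / 2^{384} \rceil

k_2 = -(c_1 b_1 + c_2 b_2) \bmod n , \qquad
k_1 = (k - k_2 \lambda) \bmod n
```

Here `k` is the input scalar, `b1`/`b2` are the short lattice-basis constants, and the output satisfies `k = k1 + k2*lambda (mod n)` with both halves small.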
A few miscellaneous improvements:
* Just use EXHAUSTIVE_TEST_ORDER as order everywhere, rather than a variable
* Move exhaustive tests for recovery module to the recovery module directory
* Make `secp256k1_scalar_set_b32` detect overflow correctly for scalar_low (a comment in the recovery exhaustive test indicated why this was the case, but that comment looks incorrect).
* Change the small test groups so that they include a point with X coordinate 1.
* Initialize the RNG seed, allow configuring it from the cmdline, and report it.
* Permit changing the number of iterations (re-randomizing for each).
* Support splitting the work across cores from the cmdline.
And a big one:
* Add exhaustive tests for schnorrsig module (and limited ones for extrakeys).
Jonas Nick [Tue, 15 Sep 2020 17:48:06 +0000 (17:48 +0000)]
Merge #782: Check if variable=yes instead of if var is set in travis.sh
34debf7a6d36bbd9a52e68e079ddfc446faf5bef Modify .travis.yml to explicitly pass "no" in env vars instead of setting them to nothing (Elichai Turkel)
ef37761feed0172baa03dd94c842f1547bdf3016 Change travis.sh to check if variables are equal to "yes" instead of non-empty. Before this, setting `VALGRIND=wat` was considered true, and to make it evaluate as false you had to unset the variable with `VALGRIND=`; now it checks for `VALGRIND=yes`, and anything other than `yes` evaluates to false (Elichai Turkel)
This PR implements signing, verification and batch verification as described in [BIP-340](https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki) in an experimental module named `schnorrsig`. It includes the test vectors and a benchmarking tool.
This PR also adds a module `extrakeys` that allows [BIP-341](https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki)-style key tweaking.
(Adding ChaCha20 as a CSPRNG and batch verification was moved to PR #760).
In order to enable the module run `./configure` with `--enable-experimental --enable-module-schnorrsig`.
Currently, if `secp256k1_gej_add_var` / `secp256k1_gej_add_ge_var` / `secp256k1_gej_add_zinv_var` receives `P + (-P)`, it will set `gej->infinity = 1` but doesn't initialize the field elements.
Notice that this is the only branch in the function that results in an uninitialized output.
By using `secp256k1_gej_set_infinity()` it will set the field elements to zero while also setting the infinity flag.
I also added a test that fails with valgrind on current master but passes with the fix.
EDIT: This isn't a bug or something necessary, I just personally found this helpful.
These are all the architecture macros I could find with known endianness. Use those as a fallback when __BYTE_ORDER__ isn't available.
See https://github.com/bitcoin-core/secp256k1/pull/787#issuecomment-673764652
It also adds a SHA256 selftest, so that improperly overriding the endianness detection will be detected at runtime.
ACKs for top commit:
real-or-random:
ACK 8bc6aeffa9a191e677cb9e3a22fff130f16990f3 I read the diff, and tested that the self-test passes/fails with/without the correct endianness setting
gmaxwell:
ACK 8bc6aeffa9a191e677cb9e3a22fff130f16990f3 looks good and I also ran the tests on MIPS-BE and verified that forcing it to LE makes the runtime test fail.
This PR increases the general robustness of scratch spaces. It does not fix an existing vulnerability because scratch spaces aren't used anywhere in master. Additionally, it must be prevented anyway that an attacker has (indirect) control over the arguments touched in this PR.
Gregory Maxwell [Mon, 31 Aug 2020 23:11:41 +0000 (23:11 +0000)]
Remove the extremely outdated TODO file.
This had two things in it-- tests for the scalar/field code and
constant time signing and keygen.
The signing and keygen have been thoroughly constant time for years
and there are now powerful tests to verify it... no further work
on constant-time is needed at least on ordinary platforms (other
sidechannels-- sure).
The scalar and field code have extensive tests. They could use
better static test vectors but they're well tested.
TODOs for the project are currently better documented on github
right now. This file could return in the future with current
info, if needed.
A compile-time check is implemented in a new `src/assumptions.h` which verifies several aspects that are implementation-defined in C:
* size of bytes
* conversion between unsigned and (negative) signed types
* right-shifts of negative signed types.
This PR does two things:
* It removes the ability to select the 5x52 field with an 8x32 scalar, or the 10x26 field with a 4x64 scalar. Either both are the 128-bit wide versions, or neither is.
* The choice is made automatically by the C code, unless overridden by a USE_FORCE_WIDEMUL_INT{64,128} define (which is available through `configure` with a hidden option --with-test-override-wide-multiply={auto,int64,int128}).
This reduces the reliance on autoconf for this performance-critical configuration option, and also reduces the number of different combinations to test.
This removes one theoretically useful combination: if you had x86_64 asm but no __int128 support in your compiler, it was previously possible to use the 64-bit field with the 32-bit scalar. I think this doesn't matter, as all compilers/systems that support (our) x86_64 asm also support __int128. Furthermore, #767 will break this.
As an unexpected side effect, this also means the `gen_context` static precomputation tool will now use __int128 based implementations when available (which required an addition to the 5x52 field; see first commit).
Tim Ruffing [Tue, 21 Jul 2020 12:05:56 +0000 (14:05 +0200)]
Use preprocessor macros instead of autoconf to detect endianness
This does not fix any particular issue but it's preferable to not
rely on autoconf. This avoids endianness mess for users on BE hosts
if they use their build without autoconf.
The macros are carefully written to err on the side of the caution,
e.g., we #error if the user manually configures a different endianness
than what we detect.
Pieter Wuille [Sun, 9 Aug 2020 17:58:40 +0000 (10:58 -0700)]
Autodetect __int128 availability on the C side
Instead of supporting configuration of the field and scalar size independently,
both are now controlled by the availability of a 64x64->128 bit multiplication
(currently only through __int128). This is autodetected from the C code through
__SIZEOF_INT128__, but can be overridden using configure's
--with-test-override-wide-multiply, or by defining
USE_FORCE_WIDEMUL_{INT64,INT128} manually.