From stefbon at gmail.com Tue Jan 5 06:55:12 2021 From: stefbon at gmail.com (Stef Bon) Date: Tue, 5 Jan 2021 06:55:12 +0100 Subject: segfault calling gcry_mpi_powm Message-ID: Hi, I'm dealing with a serious problem. My program is running into a segfault, and I cannot solve that. I've been looking at it for more than a week, and do not know why it segfaults. I'm using gcry_mpi_powm to calculate the "e", "f" and shared key in Diffie-Hellman key exchange. a. the values p, g, x, e and f (all type gcry_mpi_t) are initialized with gcry_mpi_new(0). b. p and g are set to fixed values, read from hardcoded values using gcry_mpi_scan with format GCRYMPI_FMT_USG. c. x is set using gcry_mpi_randomize. d. e is calculated like : gcry_mpi_powm(e, g, x, p) now the journal entries look like: Jan 05 05:30:36 ws-001.bononline.nl kernel: traps: sonssc[6198] general protection fault ip:7fa60c1e4359 sp:7fa60afbaa10 error:0 in libc-2.32.so[7fa60c183000+148000] Jan 05 05:30:36 ws-001.bononline.nl systemd[1]: Created slice system-systemd\x2dcoredump.slice. Jan 05 05:30:36 ws-001.bononline.nl systemd[1]: Started Process Core Dump (PID 6212/UID 0). Jan 05 05:30:36 ws-001.bononline.nl systemd-coredump[6213]: [?] Process 6196 (sonssc) of user 0 dumped core. Stack trace of thread 6198: #0 0x00007fa60c1e4359 n/a (libc.so.6 + 0x83359) #1 0x00007fa60c6a7395 n/a (libgcrypt.so.20 + 0x10395) #2 0x00007fa60c76910b n/a (libgcrypt.so.20 + 0xd210b) #3 0x00005587f4b22ad4 n/a (/home/sbon/Projects/fuse/fs-workspace/src/sonssc + 0x42ad4) #4 0x87fc013cf9521000 n/a (n/a + 0x0) and gdb backtrace looks like: Program terminated with signal SIGSEGV, Segmentation fault. #0 0x00007f20b7e5f359 in ?? () from /lib64/libc.so.6 [Current thread is 1 (Thread 0x7f20b6c37640 (LWP 15027))] (gdb) bt #0 0x00007f20b7e5f359 in () at /lib64/libc.so.6 #1 0x00007f20b8322395 in () at /usr/lib64/libgcrypt.so.20 #2 0x00007f20b83e410b in () at /usr/lib64/libgcrypt.so.20 #3 0x000055e95c8bbad4 in dh_create_local_key (k=0x7f20b6c36730) at ssh/keyexchange/dh.c:350 #4 0x000055e95c8bc939 in start_diffiehellman_client (connection=0x7f20a40021c0, k=0x7f20b6c36730, H=0x7f20b6c36100) at ssh/keyexchange/key-exchange.c:389 I'm stuck here. Can somebody help me here? Thanks in advance, Stfe Bon the Netherlands From stefbon at gmail.com Mon Jan 11 05:05:03 2021 From: stefbon at gmail.com (Stef Bon) Date: Mon, 11 Jan 2021 05:05:03 +0100 Subject: segfault calling gcry_mpi_powm In-Reply-To: References: Message-ID: Hi, I'm still busy tracking this segfault. I've compiled the latest git version of libgcrypt, installed in /home/sbon/usr, add some debug flags, and made sonssc link against it, and again the same segfault, but now with more information: coredump gdb gives oa: (gdb) bt #0 0x00007f6f1245d489 in () at /lib64/libc.so.6 #1 0x00007f6f1294a9d5 in _gcry_free (p=0x7f6f0c001458) at global.c:1035 #2 0x00007f6f12a138bf in _gcry_mpi_free_limb_space (a=, nlimbs=) at mpiutil.c:158 #3 0x00007f6f12a0feeb in _gcry_mpi_powm (res=0x7f6f0c00c5c8, base=, expo=, mod=) at mpi-pow.c:744 #4 0x00007f6f12946db5 in gcry_mpi_powm (w=, b=, e=, m=) at visibility.c:460 #5 0x00005613647db46b in dh_create_local_key (k=0x7f6f11a5c6f0) at ssh/keyexchange/dh.c:350 #6 0x00005613647dc2b5 in start_diffiehellman_client (connection=connection at entry=0x7f6f0c002340, k=k at entry=0x7f6f11a5c6f0, H=H at entry=0x7f6f11a5c130) at ssh/keyexchange/key-exchange.c:390 Now something is getting more clear. Is it possible that the _gcry_free function assumes it is dealing with secure memory? 
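A minimal standalone test exercising only steps a. to d. would look roughly like the sketch below. It is a sketch only: the placeholder p/g bytes, the 256-bit size for x and the file/build names are made up for illustration and are not the values from dh.c; secure memory is disabled so everything stays in plain heap memory; the gcry_log_debugmpi calls just dump the inputs and the result.

----
/* powm-test.c - hypothetical reproduction sketch, not the real dh.c.
   Build: gcc powm-test.c $(libgcrypt-config --cflags --libs) */
#include <stdio.h>
#include <gcrypt.h>

int
main (void)
{
  gcry_mpi_t p = NULL, g = NULL, x, e;
  /* Placeholder group (p=23, g=5); substitute the real hardcoded values. */
  static const unsigned char p_raw[] = { 0x17 };
  static const unsigned char g_raw[] = { 0x05 };
  gcry_error_t err;

  if (!gcry_check_version (GCRYPT_VERSION))
    return 1;
  gcry_control (GCRYCTL_DISABLE_SECMEM, 0);        /* plain memory only */
  gcry_control (GCRYCTL_INITIALIZATION_FINISHED, 0);

  /* gcry_mpi_scan allocates the MPI itself; calling gcry_mpi_new first
     and then scanning into the same variable would only leak the first
     allocation. */
  err = gcry_mpi_scan (&p, GCRYMPI_FMT_USG, p_raw, sizeof p_raw, NULL);
  if (!err)
    err = gcry_mpi_scan (&g, GCRYMPI_FMT_USG, g_raw, sizeof g_raw, NULL);
  if (err)
    {
      fprintf (stderr, "scan failed: %s\n", gcry_strerror (err));
      return 1;
    }

  x = gcry_mpi_new (0);
  e = gcry_mpi_new (0);
  gcry_mpi_randomize (x, 256, GCRY_STRONG_RANDOM);

  gcry_log_debugmpi ("p", p);
  gcry_log_debugmpi ("g", g);
  gcry_log_debugmpi ("x", x);
  gcry_mpi_powm (e, g, x, p);      /* e = g^x mod p */
  gcry_log_debugmpi ("e", e);

  gcry_mpi_release (p);
  gcry_mpi_release (g);
  gcry_mpi_release (x);
  gcry_mpi_release (e);
  return 0;
}
----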
Stef From wk at gnupg.org Fri Jan 15 15:28:32 2021 From: wk at gnupg.org (Werner Koch) Date: Fri, 15 Jan 2021 15:28:32 +0100 Subject: segfault calling gcry_mpi_powm In-Reply-To: (Stef Bon via Gcrypt-devel's message of "Mon, 11 Jan 2021 05:05:03 +0100") References: Message-ID: <87czy6gtpr.fsf@wheatstone.g10code.de> On Mon, 11 Jan 2021 05:05, Stef Bon said: > #3 0x00007f6f12a0feeb in _gcry_mpi_powm (res=0x7f6f0c00c5c8, > base=, expo=, mod=) at > mpi-pow.c:744 This is for (i = 0; i < (1 << (W - 1)); i++) _gcry_mpi_free_limb_space( precomp[i], esec ? precomp_size[i] : 0 ); _gcry_mpi_free_limb_space (base_u, esec ? max_u_size : 0); and not easy to decide what's going wrong with this internally allocated memory. We need to replicate the problem, for example by printing the inpurt values to mpi_powm as called here > #5 0x00005613647db46b in dh_create_local_key (k=0x7f6f11a5c6f0) at > ssh/keyexchange/dh.c:350 and writing a simple test program. Use gcry_log_debugmpi ("Some text", MPI). But what I would do first is to run valgrind on your program. Usually if quickly pinpoints the faulty code. > Now something is getting more clear. Is it possible that the > _gcry_free function assumes it is dealing with secure memory? Can't tell Shalom-Salam, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From stefbon at gmail.com Fri Jan 15 20:31:47 2021 From: stefbon at gmail.com (Stef Bon) Date: Fri, 15 Jan 2021 20:31:47 +0100 Subject: segfault calling gcry_mpi_powm In-Reply-To: <87czy6gtpr.fsf@wheatstone.g10code.de> References: <87czy6gtpr.fsf@wheatstone.g10code.de> Message-ID: Hi, the program is doing something what is hard to trace with gdb. GDB shows the trace, but it does not happen there. It has to do with memory allocated in a way causing this, and I cannot find it. In the meantime I've solved this issue when it looks as if it crashes when calling gcry_mpi_powm, but stuck now somewhere else. This gives me hope, I can solve something, but still not able to make it run and stay that way. I will try valgrind. thanks, Stef Bon the Netherlands From stefbon at gmail.com Mon Jan 18 14:03:14 2021 From: stefbon at gmail.com (Stef Bon) Date: Mon, 18 Jan 2021 14:03:14 +0100 Subject: segfault calling gcry_mpi_powm In-Reply-To: References: <87czy6gtpr.fsf@wheatstone.g10code.de> Message-ID: Hi, I've solved the issue. It was indeed something I suspected: something else was not allocated the right way, and later somewhere in the process this will cause errors. The place it segfaults is not related to the bug. Anyway, it's running again. Thanks for your time and effort, Stef Bon The Netherlands From wk at gnupg.org Tue Jan 19 17:59:02 2021 From: wk at gnupg.org (Werner Koch) Date: Tue, 19 Jan 2021 17:59:02 +0100 Subject: [Announce] Libgcrypt 1.9.0 relased Message-ID: <877do8dfs9.fsf@wheatstone.g10code.de> Hello! We are pleased to announce the availability of Libgcrypt version 1.9.0. This release starts a new stable branch of Libgcrypt with full API and ABI compatibility to the 1.8 series. Over the last 3 or 4 years Jussi Kivilinna put a lot of work into speeding up the algorithms for the most commonly used CPUs. See below for a list of improvements and new features in 1.9. Libgcrypt is a general purpose library of cryptographic building blocks. It is originally based on code used by GnuPG. 
It does not provide any implementation of OpenPGP or other protocols. Thorough understanding of applied cryptography is required to use Libgcrypt. Noteworthy changes in Libgcrypt 1.9.0 ------------------------------------- * New and extended interfaces: - New curves Ed448, X448, and SM2. - New cipher mode EAX. - New cipher algo SM4. - New hash algo SM3. - New hash algo variants SHA512/224 and SHA512/256. - New MAC algos for Blake-2 algorithms, the new SHA512 variants, SM3, SM4 and for a GOST variant. - New convenience function gcry_mpi_get_ui. - gcry_sexp_extract_param understands new format specifiers to directly store to integers and strings. - New function gcry_ecc_mul_point and curve constants for Curve448 and Curve25519. [#4293] - New function gcry_ecc_get_algo_keylen. - New control code GCRYCTL_AUTO_EXPAND_SECMEM to allow growing the secure memory area. Also in 1.8.2 as an undocumented feature. * Performance: - Optimized implementations for Aarch64. - Faster implementations for Poly1305 and ChaCha. Also for PowerPC. [b9a471ccf5,172ad09cbe,#4460] - Optimized implementations of AES and SHA-256 on PowerPC. [#4529,#4530] - Improved use of AES-NI to speed up AES-XTS (6 times faster). [a00c5b2988] - Improved use of AES-NI for OCB. [eacbd59b13,e924ce456d] - Speedup AES-XTS on ARMv8/CE (2.5 times faster). [93503c127a] - New AVX and AVX2 implementations for Blake-2 (1.3/1.4 times faster). [af7fc732f9, da58a62ac1] - Use Intel SHA extension for SHA-1 and SHA-256 (4.0/3.7 times faster). [d02958bd30, 0b3ec359e2] - Use ARMv7/NEON accelerated GCM implementation (3 times faster). [2445cf7431] - Use of i386/SSSE3 for SHA-512 (4.5 times faster on Ryzen 7). [b52dde8609] - Use 64 bit ARMv8/CE PMULL for CRC (7 times faster). [14c8a593ed] - Improve CAST5 (40% to 70% faster). [4ec566b368] - Improve Blowfish (60% to 80% faster). [ced7508c85] * Bug fixes: - Fix infinite loop due to applications using fork the wrong way. [#3491][also in 1.8.4] - Fix possible leak of a few bits of secret primes to pageable memory. [#3848][also in 1.8.4] - Fix possible hang in the RNG (1.8.3 only). [#4034][also in 1.8.4] - Several minor fixes. [#4102,#4208,#4209,#4210,#4211,#4212] [also in 1.8.4] - On Linux always make use of getrandom if possible and then use its /dev/urandom behaviour. [#3894][also in 1.8.4] - Use blinding for ECDSA signing to mitigate a novel side-channel attack. [#4011,CVE-2018-0495] [also in 1.8.3, 1.7.10] - Fix incorrect counter overflow handling for GCM when using an IV size other than 96 bit. [#3764] [also in 1.8.3, 1.7.10] - Fix incorrect output of AES-keywrap mode for in-place encryption on some platforms. [also in 1.8.3, 1.7.10] - Fix the gcry_mpi_ec_curve_point point validation function. [also in 1.8.3, 1.7.10] - Fix rare assertion failure in gcry_prime_check. [also in 1.8.3] - Do not use /dev/srandom on OpenBSD. [also in 1.8.2] - Fix test suite failure on systems with large pages. [#3351] [also in 1.8.2] - Fix test suite to not use mmap on Windows. [also in 1.8.2] - Fix fatal out of secure memory status in the s-expression parser on heavy loaded systems. [also in 1.8.2] - Fix build problems on OpenIndiana et al. [#4818, also in 1.8.6] - Fix GCM bug on arm64 which troubles for example OMEMO. [#4986, also in 1.8.6] - Detect a div-by-zero in a debug helper tool. [#4868, also in 1.8.6] - Use a constant time mpi_inv and related changes. [#4869, partly also in 1.8.6] - Fix mpi_copy to correctly handle flags of opaque MPIs. [also in 1.8.6] - Fix mpi_cmp to consider +0 and -0 the same. 
[also in 1.8.6] - Fix extra entropy collection via clock_gettime. Note that this fallback code path is not used on any decent hardware. [#4966, also in 1.8.7] - Support opaque MPI with gcry_mpi_print. [#4872, also in 1.8.7] - Allow for a Unicode random seed file on Windows. [#5098, also in 1.8.7] * Other features: - Add OIDs from RFC-8410 as aliases for Ed25519 and Curve25519. [also in 1.8.6] - Add mitigation against ECC timing attack CVE-2019-13626. [#4626] - Internal cleanup of the ECC implementation. - Support reading EC point in compressed format for some curves. [#4951] For a list of interface changes and links to commits and bug numbers see the release info at https://dev.gnupg.org/T4294 Download ======== Source code is hosted at the GnuPG FTP server and its mirrors as listed at https://gnupg.org/download/mirrors.html. On the primary server the source tarball and its digital signature are: https://gnupg.org/ftp/gcrypt/libgcrypt/libgcrypt-1.9.0.tar.bz2 https://gnupg.org/ftp/gcrypt/libgcrypt/libgcrypt-1.9.0.tar.bz2.sig or gzip compressed: https://gnupg.org/ftp/gcrypt/libgcrypt/libgcrypt-1.9.0.tar.gz https://gnupg.org/ftp/gcrypt/libgcrypt/libgcrypt-1.9.0.tar.gz.sig In order to check that the version of Libgcrypt you downloaded is an original and unmodified file please follow the instructions found at https://gnupg.org/download/integrity_check.html. In short, you may use one of the following methods: - Check the supplied OpenPGP signature. For example to check the signature of the file libgcrypt-1.9.0.tar.bz2 you would use this command: gpg --verify libgcrypt-1.9.0.tar.bz2.sig libgcrypt-1.9.0.tar.bz2 This checks whether the signature file matches the source file. You should see a message indicating that the signature is good and made by one or more of the release signing keys. Make sure that this is a valid key, either by matching the shown fingerprint against a trustworthy list of valid release signing keys or by checking that the key has been signed by trustworthy other keys. See the end of this mail for information on the signing keys. - If you are not able to use an existing version of GnuPG, you have to verify the SHA-1 checksum. On Unix systems the command to do this is either "sha1sum" or "shasum". Assuming you downloaded the file libgcrypt-1.9.0.tar.bz2, you run the command like this: sha1sum libgcrypt-1.9.0.tar.bz2 and check that the output matches the first line from the this list: 459383a8b6200673cfc31f7b265c4961c0850031 libgcrypt-1.9.0.tar.bz2 25b36d1e3c32ef76be5098da721dd68933798a3d libgcrypt-1.9.0.tar.gz You should also verify that the checksums above are authentic by matching them with copies of this announcement. Those copies can be found at other mailing lists, web sites, and search engines. Copying ======= Libgcrypt is distributed under the terms of the GNU Lesser General Public License (LGPLv2.1+). The helper programs as well as the documentation are distributed under the terms of the GNU General Public License (GPLv2+). The file LICENSES has notices about contributions that require that these additional notices are distributed. Support ======= For help on developing with Libgcrypt you should read the included manual and if needed ask on the gcrypt-devel mailing list. In case of problems specific to this release please first check https://dev.gnupg.org/T4294 for updated information. Please also consult the archive of the gcrypt-devel mailing list before reporting a bug: https://gnupg.org/documentation/mailing-lists.html . 
We suggest to send bug reports for a new release to this list in favor of filing a bug at https://bugs.gnupg.org. If you need commercial support go to https://gnupg.com or https://gnupg.org/service.html . If you are a developer and you need a certain feature for your project, please do not hesitate to bring it to the gcrypt-devel mailing list for discussion. Thanks ====== Since 2001 maintenance and development of GnuPG is done by g10 Code GmbH and still mostly financed by donations. Three full-time employed developers as well as two contractors exclusively work on GnuPG and closely related software like Libgcrypt, GPGME, and Gpg4win. We like to thank all the nice people who are helping Libgcrypt, be it testing, coding, suggesting, auditing, administering the servers, spreading the word, or answering questions on the mailing lists. Many thanks to our numerous financial supporters, both corporate and individuals. Without you it would not be possible to keep GnuPG and Libgcrypt in a good and secure shape and to address all the small and larger requests made by our users. Thanks. Happy hacking, Your Libgcrypt hackers p.s. This is an announcement only mailing list. Please send replies only to the gcrypt-devel'at'gnupg.org mailing list. p.p.s List of Release Signing Keys: To guarantee that a downloaded GnuPG version has not been tampered by malicious entities we provide signature files for all tarballs and binary versions. The keys are also signed by the long term keys of their respective owners. Current releases are signed by one or more of these four keys: ed25519 2020-08-24 [expires: 2030-06-30] Key fingerprint = 6DAA 6E64 A76D 2840 571B 4902 5288 97B8 2640 3ADA Werner Koch (dist signing 2020) rsa2048 2014-10-29 [expires: 2020-10-30] Key fingerprint = 031E C253 6E58 0D8E A286 A9F2 2071 B08A 33BD 3F06 NIIBE Yutaka (GnuPG Release Key) rsa3072 2017-03-17 [expires: 2027-03-15] Key fingerprint = 5B80 C575 4298 F0CB 55D8 ED6A BCEF 7E29 4B09 2E28 Andre Heinecke (Release Signing Key) rsa2048 2011-01-12 [expires: 2021-12-31] Key fingerprint = D869 2123 C406 5DEA 5E0F 3AB5 249B 39D2 4F25 E3B6 Werner Koch (dist sig) The keys are available at https://gnupg.org/signature_key.html and in any recently released GnuPG tarball in the file g10/distsigkey.gpg . Note that this mail has been signed by a different key. -- "If privacy is outlawed, only outlaws will have privacy." - PRZ 1991 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From jussi.kivilinna at iki.fi Tue Jan 19 19:13:33 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Tue, 19 Jan 2021 20:13:33 +0200 Subject: [PATCH 2/2] kdf: make self-test test-vector array read-only In-Reply-To: <20210119181333.138826-1-jussi.kivilinna@iki.fi> References: <20210119181333.138826-1-jussi.kivilinna@iki.fi> Message-ID: <20210119181333.138826-2-jussi.kivilinna@iki.fi> * cipher/kdf.c (selftest_pbkdf2): Make 'tv[]' constant. -- Signed-off-by: Jussi Kivilinna --- cipher/kdf.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cipher/kdf.c b/cipher/kdf.c index b916a3f8..93c2c9f6 100644 --- a/cipher/kdf.c +++ b/cipher/kdf.c @@ -342,7 +342,7 @@ check_one (int algo, int hash_algo, static gpg_err_code_t selftest_pbkdf2 (int extended, selftest_report_func_t report) { - static struct { + static const struct { const char *desc; const char *p; /* Passphrase. */ size_t plen; /* Length of P. 
*/ -- 2.27.0 From jussi.kivilinna at iki.fi Tue Jan 19 19:13:32 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Tue, 19 Jan 2021 20:13:32 +0200 Subject: [PATCH 1/2] kdf: add missing null-terminator for self-test test-vector array Message-ID: <20210119181333.138826-1-jussi.kivilinna@iki.fi> * cipher/kdf.c (selftest_pbkdf2): Add null-terminator to TV array. -- This was causing kdf sefl-test to fail on s390x builds. Signed-off-by: Jussi Kivilinna --- cipher/kdf.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/cipher/kdf.c b/cipher/kdf.c index 3d707bd0..b916a3f8 100644 --- a/cipher/kdf.c +++ b/cipher/kdf.c @@ -452,7 +452,8 @@ selftest_pbkdf2 (int extended, selftest_report_func_t report) "\x34\x8c\x89\xdb\xcb\xd3\x2b\x2f\x32\xd8\x14\xb8\x11\x6e\x84\xcf" "\x2b\x17\x34\x7e\xbc\x18\x00\x18\x1c\x4e\x2a\x1f\xb8\xdd\x53\xe1" "\xc6\x35\x51\x8c\x7d\xac\x47\xe9" - } + }, + { NULL } }; const char *what; const char *errtxt; -- 2.27.0 From jussi.kivilinna at iki.fi Tue Jan 19 19:14:00 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Tue, 19 Jan 2021 20:14:00 +0200 Subject: [PATCH 1/2] mpi/longlong: make use of compiler provided __builtin_ctz/__builtin_clz Message-ID: <20210119181401.139024-1-jussi.kivilinna@iki.fi> * configure.ac (gcry_cv_have_builtin_ctzl, gcry_cv_have_builtin_clz) (gcry_cv_have_builtin_clzl): New checks. * mpi/longlong.h (count_leading_zeros, count_trailing_zeros): Use __buildin_clz[l]/__builtin_ctz[l] if available and bit counting macros not yet provided by inline assembly. -- Signed-off-by: Jussi Kivilinna --- configure.ac | 45 +++++++++++++++++++++++++++++++++++++++++++++ mpi/longlong.h | 20 ++++++++++++++++++++ 2 files changed, 65 insertions(+) diff --git a/configure.ac b/configure.ac index fda74056..e0d52d8c 100644 --- a/configure.ac +++ b/configure.ac @@ -903,6 +903,51 @@ if test "$gcry_cv_have_builtin_ctz" = "yes" ; then fi +# +# Check for __builtin_ctzl intrinsic. +# +AC_CACHE_CHECK(for __builtin_ctzl, + [gcry_cv_have_builtin_ctzl], + [gcry_cv_have_builtin_ctzl=no + AC_LINK_IFELSE([AC_LANG_PROGRAM([], + [unsigned long x = 0; long y = __builtin_ctzl(x); return y;])], + [gcry_cv_have_builtin_ctzl=yes])]) +if test "$gcry_cv_have_builtin_ctzl" = "yes" ; then + AC_DEFINE(HAVE_BUILTIN_CTZL, 1, + [Defined if compiler has '__builtin_ctzl' intrinsic]) +fi + + +# +# Check for __builtin_clz intrinsic. +# +AC_CACHE_CHECK(for __builtin_clz, + [gcry_cv_have_builtin_clz], + [gcry_cv_have_builtin_clz=no + AC_LINK_IFELSE([AC_LANG_PROGRAM([], + [unsigned int x = 0; int y = __builtin_clz(x); return y;])], + [gcry_cv_have_builtin_clz=yes])]) +if test "$gcry_cv_have_builtin_clz" = "yes" ; then + AC_DEFINE(HAVE_BUILTIN_CLZ, 1, + [Defined if compiler has '__builtin_clz' intrinsic]) +fi + + +# +# Check for __builtin_clzl intrinsic. +# +AC_CACHE_CHECK(for __builtin_clzl, + [gcry_cv_have_builtin_clzl], + [gcry_cv_have_builtin_clzl=no + AC_LINK_IFELSE([AC_LANG_PROGRAM([], + [unsigned long x = 0; long y = __builtin_clzl(x); return y;])], + [gcry_cv_have_builtin_clzl=yes])]) +if test "$gcry_cv_have_builtin_clzl" = "yes" ; then + AC_DEFINE(HAVE_BUILTIN_CLZL, 1, + [Defined if compiler has '__builtin_clzl' intrinsic]) +fi + + # # Check for __sync_synchronize intrinsic. 
# diff --git a/mpi/longlong.h b/mpi/longlong.h index 6573c984..6993d6eb 100644 --- a/mpi/longlong.h +++ b/mpi/longlong.h @@ -1681,6 +1681,26 @@ extern USItype __udiv_qrnnd (); # define udiv_qrnnd __udiv_qrnnd_c #endif +#if !defined (count_leading_zeros) +# if defined (HAVE_BUILTIN_CLZL) && SIZEOF_UNSIGNED_LONG * 8 == W_TYPE_SIZE +# define count_leading_zeros(count, x) (count = __builtin_clzl(x)) +# undef COUNT_LEADING_ZEROS_0 /* Input X=0 is undefined for the builtin. */ +# elif defined (HAVE_BUILTIN_CLZ) && SIZEOF_UNSIGNED_INT * 8 == W_TYPE_SIZE +# define count_leading_zeros(count, x) (count = __builtin_clz(x)) +# undef COUNT_LEADING_ZEROS_0 /* Input X=0 is undefined for the builtin. */ +# endif +#endif + +#if !defined (count_trailing_zeros) +# if defined (HAVE_BUILTIN_CTZL) && SIZEOF_UNSIGNED_LONG * 8 == W_TYPE_SIZE +# define count_trailing_zeros(count, x) (count = __builtin_ctzl(x)) +# undef COUNT_LEADING_ZEROS_0 /* Input X=0 is undefined for the builtin. */ +# elif defined (HAVE_BUILTIN_CTZ) && SIZEOF_UNSIGNED_INT * 8 == W_TYPE_SIZE +# define count_trailing_zeros(count, x) (count = __builtin_ctz(x)) +# undef COUNT_LEADING_ZEROS_0 /* Input X=0 is undefined for the builtin. */ +# endif +#endif + #if !defined (count_leading_zeros) extern # ifdef __STDC__ -- 2.27.0 From jussi.kivilinna at iki.fi Tue Jan 19 19:14:01 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Tue, 19 Jan 2021 20:14:01 +0200 Subject: [PATCH 2/2] cipher/bithelp: use __builtin_ctzl when available In-Reply-To: <20210119181401.139024-1-jussi.kivilinna@iki.fi> References: <20210119181401.139024-1-jussi.kivilinna@iki.fi> Message-ID: <20210119181401.139024-2-jussi.kivilinna@iki.fi> * cipher/bithelp.h (_gcry_ctz64): Use __builtin_ctzl is available. -- Signed-off-by: Jussi Kivilinna --- cipher/bithelp.h | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/cipher/bithelp.h b/cipher/bithelp.h index 26ef7c35..7793ce7c 100644 --- a/cipher/bithelp.h +++ b/cipher/bithelp.h @@ -83,7 +83,7 @@ static inline int _gcry_ctz (unsigned int x) { #if defined (HAVE_BUILTIN_CTZ) - return x? __builtin_ctz (x) : 8 * sizeof (x); + return x ? __builtin_ctz (x) : 8 * sizeof (x); #else /* See * http://graphics.stanford.edu/~seander/bithacks.html#ZerosOnRightModLookup @@ -106,9 +106,11 @@ _gcry_ctz (unsigned int x) static inline int _gcry_ctz64(u64 x) { -#if defined (HAVE_BUILTIN_CTZ) && SIZEOF_UNSIGNED_INT >= 8 +#if defined (HAVE_BUILTIN_CTZL) && SIZEOF_UNSIGNED_LONG >= 8 + return x ? __builtin_ctzl (x) : 8 * sizeof (x); +#elif defined (HAVE_BUILTIN_CTZ) && SIZEOF_UNSIGNED_INT >= 8 #warning hello - return x? __builtin_ctz (x) : 8 * sizeof (x); + return x ? __builtin_ctz (x) : 8 * sizeof (x); #else if ((x & 0xffffffff)) return _gcry_ctz (x); -- 2.27.0 From jussi.kivilinna at iki.fi Tue Jan 19 19:14:17 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Tue, 19 Jan 2021 20:14:17 +0200 Subject: [PATCH] Silence 'may be used uninitialized in this function' warnings Message-ID: <20210119181417.139116-1-jussi.kivilinna@iki.fi> * cipher/arcfour.c (selftest): Initialize 'ctx'. * cipher/ecc-eddsa.c (_gcry_ecc_eddsa_ensure_compact): Initialize 'enc' and 'enclen'. (_gcry_ecc_eddsa_sign, _gcry_ecc_eddsa_verify): Initialize 'encpklen'. * mpi/mpi-pow.c (_gcry_mpi_powm): Initialize 'xsize'. -- Warnings were seen on gcc-s390x build with optimization level -O3. 
Signed-off-by: Jussi Kivilinna --- cipher/arcfour.c | 2 +- cipher/ecc-eddsa.c | 8 ++++---- mpi/mpi-pow.c | 2 +- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/cipher/arcfour.c b/cipher/arcfour.c index 9e71857c..909e45b2 100644 --- a/cipher/arcfour.c +++ b/cipher/arcfour.c @@ -183,7 +183,7 @@ arcfour_setkey ( void *context, const byte *key, unsigned int keylen, static const char* selftest(void) { - ARCFOUR_context ctx; + ARCFOUR_context ctx = { { 0, }, }; byte scratch[16]; /* Test vector from Cryptlib labeled there: "from the diff --git a/cipher/ecc-eddsa.c b/cipher/ecc-eddsa.c index 2a1a8907..63c0ef3f 100644 --- a/cipher/ecc-eddsa.c +++ b/cipher/ecc-eddsa.c @@ -154,8 +154,8 @@ _gcry_ecc_eddsa_ensure_compact (gcry_mpi_t value, unsigned int nbits) const unsigned char *buf; unsigned int rawmpilen; gcry_mpi_t x, y; - unsigned char *enc; - unsigned int enclen; + unsigned char *enc = NULL; + unsigned int enclen = 0; if (!mpi_is_opaque (value)) return GPG_ERR_INV_OBJ; @@ -699,7 +699,7 @@ _gcry_ecc_eddsa_sign (gcry_mpi_t input, mpi_ec_t ec, unsigned char *rawmpi = NULL; unsigned int rawmpilen; unsigned char *encpk = NULL; /* Encoded public key. */ - unsigned int encpklen; + unsigned int encpklen = 0; mpi_point_struct I; /* Intermediate value. */ gcry_mpi_t a, x, y, r; int b; @@ -977,7 +977,7 @@ _gcry_ecc_eddsa_verify (gcry_mpi_t input, mpi_ec_t ec, int b; unsigned int tmp; unsigned char *encpk = NULL; /* Encoded public key. */ - unsigned int encpklen; + unsigned int encpklen = 0; const void *mbuf, *rbuf; unsigned char *tbuf = NULL; size_t mlen, rlen; diff --git a/mpi/mpi-pow.c b/mpi/mpi-pow.c index 62b4a808..defd675e 100644 --- a/mpi/mpi-pow.c +++ b/mpi/mpi-pow.c @@ -545,7 +545,7 @@ _gcry_mpi_powm (gcry_mpi_t res, { mpi_size_t i, j, k; mpi_ptr_t xp; - mpi_size_t xsize; + mpi_size_t xsize = 0; int c; mpi_limb_t e; mpi_limb_t carry_limb; -- 2.27.0 From jussi.kivilinna at iki.fi Tue Jan 19 20:13:49 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Tue, 19 Jan 2021 21:13:49 +0200 Subject: [PATCH] tests/basic: fix build on ARM32 when NEON disabled Message-ID: <20210119191349.232760-1-jussi.kivilinna@iki.fi> * tests/basic.c (CLUTTER_VECTOR_REGISTER_NEON) (CLUTTER_VECTOR_REGISTER_AARCH64): Remove check for __ARM_FEATURE_SIMD32. -- Cluttering of NEON vector registers was enabled even if NEON was not active for current compiler target. Issue was caused by enabling NEON cluttering by wrong feature macro __ARM_FEATURE_SIMD32. 
GnuPG-bug-id: 5251 Signed-off-by: Jussi Kivilinna --- tests/basic.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tests/basic.c b/tests/basic.c index 46e4c0f8..8b333bae 100644 --- a/tests/basic.c +++ b/tests/basic.c @@ -223,12 +223,12 @@ progress_handler (void *cb_data, const char *what, int printchar, # define CLUTTER_VECTOR_REGISTER_COUNT 8 #elif defined(HAVE_COMPATIBLE_GCC_AARCH64_PLATFORM_AS) && \ defined(HAVE_GCC_INLINE_ASM_AARCH64_NEON) && \ - (defined(__ARM_FEATURE_SIMD32) || defined(__ARM_NEON)) + defined(__ARM_NEON) # define CLUTTER_VECTOR_REGISTER_AARCH64 1 # define CLUTTER_VECTOR_REGISTER_COUNT 32 #elif defined(HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS) && \ defined(HAVE_GCC_INLINE_ASM_NEON) && \ - (defined(__ARM_FEATURE_SIMD32) || defined(__ARM_NEON)) + defined(__ARM_NEON) # define CLUTTER_VECTOR_REGISTER_NEON 1 # define CLUTTER_VECTOR_REGISTER_COUNT 16 #endif -- 2.27.0 From wk at gnupg.org Wed Jan 20 13:59:48 2021 From: wk at gnupg.org (Werner Koch) Date: Wed, 20 Jan 2021 13:59:48 +0100 Subject: [PATCH] Silence 'may be used uninitialized in this function' warnings In-Reply-To: <20210119181417.139116-1-jussi.kivilinna@iki.fi> (Jussi Kivilinna's message of "Tue, 19 Jan 2021 20:14:17 +0200") References: <20210119181417.139116-1-jussi.kivilinna@iki.fi> Message-ID: <87v9brahmj.fsf@wheatstone.g10code.de> On Tue, 19 Jan 2021 20:14, Jussi Kivilinna said: > Warnings were seen on gcc-s390x build with optimization level -O3. In general I don't like to silence such warning because later compiler versions are often fixed to detect such wrong warnings. The initialization may in some cases even inhibit the compiler to detect other errors. > - ARCFOUR_context ctx; > + ARCFOUR_context ctx = { { 0, }, }; The context is initialized in do_arcfour_setkey. Trailing commas are not needed and HP compilers may bail out here. I suggest not to apply this patch. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From jussi.kivilinna at iki.fi Wed Jan 20 16:17:02 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Wed, 20 Jan 2021 17:17:02 +0200 Subject: [PATCH] Split inline assembly blocks with many memory operands Message-ID: <20210120151702.2588524-1-jussi.kivilinna@iki.fi> * cipher/rijndael-aesni.c (aesni_ocb_checksum, aesni_ocb_enc) (aesni_ocb_dec, _gcry_aes_aesni_ocb_auth): Split assembly blocks with more than 4 memory operands to smaller blocks. * cipher/sha512-ssse3-i386.c (W2): Split big assembly block to three smaller blocks. -- On i386, with -O0, assembly blocks with many memory operands cause compiler error such as: rijndael-aesni.c:2815:7: error: 'asm' operand has impossible constraints Fix is to split assembly blocks so that number of operands per block is reduced. 
GnuPG-bug-id: 5257 Signed-off-by: Jussi Kivilinna --- cipher/rijndael-aesni.c | 137 +++++++++++++++++++++---------------- cipher/sha512-ssse3-i386.c | 18 +++-- 2 files changed, 90 insertions(+), 65 deletions(-) diff --git a/cipher/rijndael-aesni.c b/cipher/rijndael-aesni.c index 747ef662..95ec4c2b 100644 --- a/cipher/rijndael-aesni.c +++ b/cipher/rijndael-aesni.c @@ -2271,16 +2271,18 @@ aesni_ocb_checksum (gcry_cipher_hd_t c, const unsigned char *plaintext, "vpxor %[ptr1], %%ymm1, %%ymm1\n\t" "vpxor %[ptr2], %%ymm2, %%ymm2\n\t" "vpxor %[ptr3], %%ymm3, %%ymm3\n\t" - "vpxor %[ptr4], %%ymm0, %%ymm0\n\t" - "vpxor %[ptr5], %%ymm4, %%ymm4\n\t" - "vpxor %[ptr6], %%ymm5, %%ymm5\n\t" - "vpxor %[ptr7], %%ymm7, %%ymm7\n\t" : : [ptr0] "m" (*(plaintext + 0 * BLOCKSIZE * 2)), [ptr1] "m" (*(plaintext + 1 * BLOCKSIZE * 2)), [ptr2] "m" (*(plaintext + 2 * BLOCKSIZE * 2)), - [ptr3] "m" (*(plaintext + 3 * BLOCKSIZE * 2)), - [ptr4] "m" (*(plaintext + 4 * BLOCKSIZE * 2)), + [ptr3] "m" (*(plaintext + 3 * BLOCKSIZE * 2)) + : "memory" ); + asm volatile ("vpxor %[ptr4], %%ymm0, %%ymm0\n\t" + "vpxor %[ptr5], %%ymm4, %%ymm4\n\t" + "vpxor %[ptr6], %%ymm5, %%ymm5\n\t" + "vpxor %[ptr7], %%ymm7, %%ymm7\n\t" + : + : [ptr4] "m" (*(plaintext + 4 * BLOCKSIZE * 2)), [ptr5] "m" (*(plaintext + 5 * BLOCKSIZE * 2)), [ptr6] "m" (*(plaintext + 6 * BLOCKSIZE * 2)), [ptr7] "m" (*(plaintext + 7 * BLOCKSIZE * 2)) @@ -2325,16 +2327,18 @@ aesni_ocb_checksum (gcry_cipher_hd_t c, const unsigned char *plaintext, "vxorpd %[ptr1], %%ymm1, %%ymm1\n\t" "vxorpd %[ptr2], %%ymm2, %%ymm2\n\t" "vxorpd %[ptr3], %%ymm3, %%ymm3\n\t" - "vxorpd %[ptr4], %%ymm0, %%ymm0\n\t" - "vxorpd %[ptr5], %%ymm4, %%ymm4\n\t" - "vxorpd %[ptr6], %%ymm5, %%ymm5\n\t" - "vxorpd %[ptr7], %%ymm7, %%ymm7\n\t" : : [ptr0] "m" (*(plaintext + 0 * BLOCKSIZE * 2)), [ptr1] "m" (*(plaintext + 1 * BLOCKSIZE * 2)), [ptr2] "m" (*(plaintext + 2 * BLOCKSIZE * 2)), - [ptr3] "m" (*(plaintext + 3 * BLOCKSIZE * 2)), - [ptr4] "m" (*(plaintext + 4 * BLOCKSIZE * 2)), + [ptr3] "m" (*(plaintext + 3 * BLOCKSIZE * 2)) + : "memory" ); + asm volatile ("vxorpd %[ptr4], %%ymm0, %%ymm0\n\t" + "vxorpd %[ptr5], %%ymm4, %%ymm4\n\t" + "vxorpd %[ptr6], %%ymm5, %%ymm5\n\t" + "vxorpd %[ptr7], %%ymm7, %%ymm7\n\t" + : + : [ptr4] "m" (*(plaintext + 4 * BLOCKSIZE * 2)), [ptr5] "m" (*(plaintext + 5 * BLOCKSIZE * 2)), [ptr6] "m" (*(plaintext + 6 * BLOCKSIZE * 2)), [ptr7] "m" (*(plaintext + 7 * BLOCKSIZE * 2)) @@ -2718,28 +2722,35 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, "aesenclast %[tmpbuf0],%%xmm8\n\t" "aesenclast %[tmpbuf1],%%xmm9\n\t" "aesenclast %[tmpbuf2],%%xmm10\n\t" - "aesenclast %%xmm5, %%xmm11\n\t" + : + : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), + [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)), + [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)), + [lxfkey] "m" (*lxf_key) + : "memory" ); + asm volatile ("aesenclast %%xmm5, %%xmm11\n\t" "pxor %[lxfkey], %%xmm11\n\t" "movdqu %%xmm1, %[outbuf0]\n\t" "movdqu %%xmm2, %[outbuf1]\n\t" - "movdqu %%xmm3, %[outbuf2]\n\t" + : [outbuf0] "=m" (*(outbuf + 0 * BLOCKSIZE)), + [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)) + : [lxfkey] "m" (*lxf_key) + : "memory" ); + asm volatile ("movdqu %%xmm3, %[outbuf2]\n\t" "movdqu %%xmm4, %[outbuf3]\n\t" "movdqu %%xmm8, %[outbuf4]\n\t" - "movdqu %%xmm9, %[outbuf5]\n\t" + : [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)), + [outbuf3] "=m" (*(outbuf + 3 * BLOCKSIZE)), + [outbuf4] "=m" (*(outbuf + 4 * BLOCKSIZE)) + : + : "memory" ); + asm volatile ("movdqu %%xmm9, %[outbuf5]\n\t" "movdqu %%xmm10, %[outbuf6]\n\t" "movdqu %%xmm11, 
%[outbuf7]\n\t" - : [outbuf0] "=m" (*(outbuf + 0 * BLOCKSIZE)), - [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)), - [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)), - [outbuf3] "=m" (*(outbuf + 3 * BLOCKSIZE)), - [outbuf4] "=m" (*(outbuf + 4 * BLOCKSIZE)), - [outbuf5] "=m" (*(outbuf + 5 * BLOCKSIZE)), + : [outbuf5] "=m" (*(outbuf + 5 * BLOCKSIZE)), [outbuf6] "=m" (*(outbuf + 6 * BLOCKSIZE)), [outbuf7] "=m" (*(outbuf + 7 * BLOCKSIZE)) - : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), - [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)), - [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)), - [lxfkey] "m" (*lxf_key) + : : "memory" ); outbuf += 8*BLOCKSIZE; @@ -2816,17 +2827,18 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, "movdqu %%xmm1, %[outbuf0]\n\t" "pxor %[tmpbuf1],%%xmm2\n\t" "movdqu %%xmm2, %[outbuf1]\n\t" - "pxor %[tmpbuf2],%%xmm3\n\t" + : [outbuf0] "=m" (*(outbuf + 0 * BLOCKSIZE)), + [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)) + : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), + [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)) + : "memory" ); + asm volatile ("pxor %[tmpbuf2],%%xmm3\n\t" "movdqu %%xmm3, %[outbuf2]\n\t" "pxor %%xmm5, %%xmm4\n\t" "movdqu %%xmm4, %[outbuf3]\n\t" - : [outbuf0] "=m" (*(outbuf + 0 * BLOCKSIZE)), - [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)), - [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)), + : [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)), [outbuf3] "=m" (*(outbuf + 3 * BLOCKSIZE)) - : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), - [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)), - [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)) + : [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)) : "memory" ); outbuf += 4*BLOCKSIZE; @@ -3199,28 +3211,34 @@ aesni_ocb_dec (gcry_cipher_hd_t c, void *outbuf_arg, "aesdeclast %[tmpbuf0],%%xmm8\n\t" "aesdeclast %[tmpbuf1],%%xmm9\n\t" "aesdeclast %[tmpbuf2],%%xmm10\n\t" - "aesdeclast %%xmm5, %%xmm11\n\t" + : + : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), + [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)), + [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)) + : "memory" ); + asm volatile ("aesdeclast %%xmm5, %%xmm11\n\t" "pxor %[lxfkey], %%xmm11\n\t" "movdqu %%xmm1, %[outbuf0]\n\t" "movdqu %%xmm2, %[outbuf1]\n\t" - "movdqu %%xmm3, %[outbuf2]\n\t" + : [outbuf0] "=m" (*(outbuf + 0 * BLOCKSIZE)), + [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)) + : [lxfkey] "m" (*lxf_key) + : "memory" ); + asm volatile ("movdqu %%xmm3, %[outbuf2]\n\t" "movdqu %%xmm4, %[outbuf3]\n\t" "movdqu %%xmm8, %[outbuf4]\n\t" - "movdqu %%xmm9, %[outbuf5]\n\t" + : [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)), + [outbuf3] "=m" (*(outbuf + 3 * BLOCKSIZE)), + [outbuf4] "=m" (*(outbuf + 4 * BLOCKSIZE)) + : + : "memory" ); + asm volatile ("movdqu %%xmm9, %[outbuf5]\n\t" "movdqu %%xmm10, %[outbuf6]\n\t" "movdqu %%xmm11, %[outbuf7]\n\t" - : [outbuf0] "=m" (*(outbuf + 0 * BLOCKSIZE)), - [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)), - [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)), - [outbuf3] "=m" (*(outbuf + 3 * BLOCKSIZE)), - [outbuf4] "=m" (*(outbuf + 4 * BLOCKSIZE)), - [outbuf5] "=m" (*(outbuf + 5 * BLOCKSIZE)), + : [outbuf5] "=m" (*(outbuf + 5 * BLOCKSIZE)), [outbuf6] "=m" (*(outbuf + 6 * BLOCKSIZE)), [outbuf7] "=m" (*(outbuf + 7 * BLOCKSIZE)) - : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), - [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)), - [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)), - [lxfkey] "m" (*lxf_key) + : : "memory" ); outbuf += 8*BLOCKSIZE; @@ -3292,17 +3310,18 @@ aesni_ocb_dec (gcry_cipher_hd_t c, void *outbuf_arg, "movdqu %%xmm1, %[outbuf0]\n\t" "pxor %[tmpbuf1],%%xmm2\n\t" "movdqu %%xmm2, %[outbuf1]\n\t" - "pxor %[tmpbuf2],%%xmm3\n\t" + : 
[outbuf0] "=m" (*(outbuf + 0 * BLOCKSIZE)), + [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)) + : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), + [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)) + : "memory" ); + asm volatile ("pxor %[tmpbuf2],%%xmm3\n\t" "movdqu %%xmm3, %[outbuf2]\n\t" "pxor %%xmm5, %%xmm4\n\t" "movdqu %%xmm4, %[outbuf3]\n\t" - : [outbuf0] "=m" (*(outbuf + 0 * BLOCKSIZE)), - [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)), - [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)), + : [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)), [outbuf3] "=m" (*(outbuf + 3 * BLOCKSIZE)) - : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), - [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)), - [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)) + : [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)) : "memory" ); outbuf += 4*BLOCKSIZE; @@ -3461,16 +3480,18 @@ _gcry_aes_aesni_ocb_auth (gcry_cipher_hd_t c, const void *abuf_arg, "movdqu %[abuf1], %%xmm2\n\t" "movdqu %[abuf2], %%xmm3\n\t" "movdqu %[abuf3], %%xmm4\n\t" - "movdqu %[abuf4], %%xmm8\n\t" - "movdqu %[abuf5], %%xmm9\n\t" - "movdqu %[abuf6], %%xmm10\n\t" - "movdqu %[abuf7], %%xmm11\n\t" : : [abuf0] "m" (*(abuf + 0 * BLOCKSIZE)), [abuf1] "m" (*(abuf + 1 * BLOCKSIZE)), [abuf2] "m" (*(abuf + 2 * BLOCKSIZE)), - [abuf3] "m" (*(abuf + 3 * BLOCKSIZE)), - [abuf4] "m" (*(abuf + 4 * BLOCKSIZE)), + [abuf3] "m" (*(abuf + 3 * BLOCKSIZE)) + : "memory" ); + asm volatile ("movdqu %[abuf4], %%xmm8\n\t" + "movdqu %[abuf5], %%xmm9\n\t" + "movdqu %[abuf6], %%xmm10\n\t" + "movdqu %[abuf7], %%xmm11\n\t" + : + : [abuf4] "m" (*(abuf + 4 * BLOCKSIZE)), [abuf5] "m" (*(abuf + 5 * BLOCKSIZE)), [abuf6] "m" (*(abuf + 6 * BLOCKSIZE)), [abuf7] "m" (*(abuf + 7 * BLOCKSIZE)) diff --git a/cipher/sha512-ssse3-i386.c b/cipher/sha512-ssse3-i386.c index 4b12cee4..0fc98d8e 100644 --- a/cipher/sha512-ssse3-i386.c +++ b/cipher/sha512-ssse3-i386.c @@ -228,7 +228,11 @@ static const unsigned char bshuf_mask[16] __attribute__ ((aligned (16))) = asm volatile ("movdqu %[w_t_m_2], %%xmm2;\n\t" \ "movdqa %%xmm2, %%xmm0;\n\t" \ "movdqu %[w_t_m_15], %%xmm5;\n\t" \ - "movdqa %%xmm5, %%xmm3;\n\t" \ + : \ + : [w_t_m_2] "m" (w[(i)-2]), \ + [w_t_m_15] "m" (w[(i)-15]) \ + : "memory" ); \ + asm volatile ("movdqa %%xmm5, %%xmm3;\n\t" \ "psrlq $(61-19), %%xmm0;\n\t" \ "psrlq $(8-7), %%xmm3;\n\t" \ "pxor %%xmm2, %%xmm0;\n\t" \ @@ -251,17 +255,17 @@ static const unsigned char bshuf_mask[16] __attribute__ ((aligned (16))) = "movdqu %[w_t_m_16], %%xmm2;\n\t" \ "pxor %%xmm4, %%xmm3;\n\t" \ "movdqu %[w_t_m_7], %%xmm1;\n\t" \ - "paddq %%xmm3, %%xmm0;\n\t" \ + : \ + : [w_t_m_7] "m" (w[(i)-7]), \ + [w_t_m_16] "m" (w[(i)-16]) \ + : "memory" ); \ + asm volatile ("paddq %%xmm3, %%xmm0;\n\t" \ "paddq %%xmm2, %%xmm0;\n\t" \ "paddq %%xmm1, %%xmm0;\n\t" \ "movdqu %%xmm0, %[w_t_m_0];\n\t" \ "paddq %[k], %%xmm0;\n\t" \ : [w_t_m_0] "=m" (w[(i)-0]) \ - : [k] "m" (K[i]), \ - [w_t_m_2] "m" (w[(i)-2]), \ - [w_t_m_7] "m" (w[(i)-7]), \ - [w_t_m_15] "m" (w[(i)-15]), \ - [w_t_m_16] "m" (w[(i)-16]) \ + : [k] "m" (K[i]) \ : "memory" ) unsigned int ASM_FUNC_ATTR -- 2.27.0 From kasumi at rollingapple.net Wed Jan 20 16:19:11 2021 From: kasumi at rollingapple.net (Kasumi Fukuda) Date: Thu, 21 Jan 2021 00:19:11 +0900 Subject: libgcrypt1.9.0: Failure on linking test executables Message-ID: Dear gcrypt developers, I'm trying to build libgcrypt 1.9.0 on amazonlinux2, with its dependency library (libgpg-error) built from source and installed in a non-default prefixed location (and libgcrypt-config is on the $PATH when configure). 
The test executables such as tests/t-secmem and tests/t-mpi-bit fail to link in my environment with the following error: ---- /bin/sh ../libtool --tag=CC --mode=link gcc -I/opt/x86_64-redhat-linux/libgpg-error/1.41/include -g -O2 -fvisibility=hidden -fno-delete-null-pointer-checks -Wall -no-install -o t-secmem t-secmem.o ../src/libgcrypt.la ../compat/libcompat.la libtool: link: gcc -I/opt/x86_64-redhat-linux/libgpg-error/1.41/include -g -O2 -fvisibility=hidden -fno-delete-null-pointer-checks -Wall -o t-secmem t-secmem.o ../src/.libs/libgcrypt.so ../compat/.libs/libcompat.a -Wl,-rpath -Wl,/tmp/src/tmp/x86_64-redhat-linux/ports/libgcrypt/1.9.0/libgcrypt-1.9.0/src/.libs -Wl,-rpath -Wl,/opt/x86_64-redhat-linux/libgcrypt/1.9.0/lib ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_fprintf at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpg_err_code_from_syserror at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_get_syscall_clamp at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_lock_init at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpg_err_code_from_errno at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_lock_unlock at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_b64dec_start at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_rewind at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_b64dec_finish at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_lock_lock at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpg_err_set_errno at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpg_strsource at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_fclose at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_b64dec_proc at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_fopenmem at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpg_strerror at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_fclose_snatch at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_ferror at GPG_ERROR_1.0' ../src/.libs/libgcrypt.so: undefined reference to `gpgrt_lock_destroy at GPG_ERROR_1.0' collect2: error: ld returned 1 exit status make[2]: *** [t-secmem] Error 1 ---- Seeing the log, I think we need to specify the location of libgpg-error for the linker to correctly link these executables. Commenting out the following lines in tests/Makefile.am seems to fix the problem (these lines overrides LDADD to drop -lgcrypt-error): ---- pkbench_LDADD = $(standard_ldadd) prime_LDADD = $(standard_ldadd) t_mpi_bit_LDADD = $(standard_ldadd) t_secmem_LDADD = $(standard_ldadd) testapi_LDADD = $(standard_ldadd) ---- I will provide the full configure/make logs if necessary. From jussi.kivilinna at iki.fi Wed Jan 20 15:24:47 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Wed, 20 Jan 2021 16:24:47 +0200 Subject: [PATCH] Silence 'may be used uninitialized in this function' warnings In-Reply-To: <87v9brahmj.fsf@wheatstone.g10code.de> References: <20210119181417.139116-1-jussi.kivilinna@iki.fi> <87v9brahmj.fsf@wheatstone.g10code.de> Message-ID: <43ba702e-0a75-10e9-022f-3c453baba8ec@iki.fi> On 20.1.2021 14.59, Werner Koch wrote: > On Tue, 19 Jan 2021 20:14, Jussi Kivilinna said: > >> Warnings were seen on gcc-s390x build with optimization level -O3. 
> > In general I don't like to silence such warning because later compiler > versions are often fixed to detect such wrong warnings. The > initialization may in some cases even inhibit the compiler to detect > other errors. > >> - ARCFOUR_context ctx; >> + ARCFOUR_context ctx = { { 0, }, }; > > The context is initialized in do_arcfour_setkey. Trailing commas are > not needed and HP compilers may bail out here. > > I suggest not to apply this patch. > Ok. I'll leave this one out. -Jussi > > Salam-Shalom, > > Werner > From gniibe at fsij.org Thu Jan 21 06:23:20 2021 From: gniibe at fsij.org (NIIBE Yutaka) Date: Thu, 21 Jan 2021 14:23:20 +0900 Subject: libgcrypt1.9.0: Failure on linking test executables In-Reply-To: References: Message-ID: <8735yuualz.fsf@iwagami.gniibe.org> Kasumi Fukuda via Gcrypt-devel writes: > I'm trying to build libgcrypt 1.9.0 on amazonlinux2, with its > dependency library (libgpg-error) built from source and installed in a > non-default prefixed location (and libgcrypt-config is on the $PATH > when configure). > The test executables such as tests/t-secmem and tests/t-mpi-bit fail > to link in my environment with the following error: Thank you for the report. What's the version of your GCC and linker (from binutils)? I don't see the reason why your linker emits error when linking t-secmem with libgcrypt.so. The t-secmem.o object has no use of any libgpg-error sympols. It seems that your linker tries to resolve all symbols in linked library, which is not needed at all. I tried with my environment (Debian GNU/Linux): Clang-3.8, GCC 6, GCC 8, GCC 9,... with ld.gold or ld.bfd. I was unable to replicate the linkage error. > Seeing the log, I think we need to specify the location of > libgpg-error for the linker to correctly link these executables. FWIW, it is me who droped -lgpg-error for these programs. It is intentional change to minimize library dependency, assuming modern toolchain. However, if it results build error(s) on some systems, it should be reverted. -- From kasumi at rollingapple.net Thu Jan 21 08:29:24 2021 From: kasumi at rollingapple.net (Kasumi Fukuda) Date: Thu, 21 Jan 2021 16:29:24 +0900 Subject: libgcrypt1.9.0: Failure on linking test executables In-Reply-To: <8735yuualz.fsf@iwagami.gniibe.org> References: <8735yuualz.fsf@iwagami.gniibe.org> Message-ID: Thank you for your clarification, Niibe san On Thu, Jan 21, 2021 at 2:23 PM NIIBE Yutaka wrote: > What's the version of your GCC and linker (from binutils)? They are from the official Amazon Linux 2 repository. ---- bash-4.2# gcc -v Using built-in specs. 
COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/7/lto-wrapper Target: x86_64-redhat-linux Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,objc,obj-c++,fortran,ada,go,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl --enable-libmpx --enable-libsanitizer --enable-gnu-indirect-function --enable-libcilkrts --enable-libatomic --enable-libquadmath --enable-libitm --with-tune=generic --with-arch_32=x86-64 --build=x86_64-redhat-linux Thread model: posix gcc version 7.3.1 20180712 (Red Hat 7.3.1-9) (GCC) bash-4.2# ld -v GNU ld version 2.29.1-30.amzn2 bash-4.2# ld.gold -v GNU gold (version 2.29.1-30.amzn2) 1.14 ---- > I don't see the reason why your linker emits error when linking t-secmem > with libgcrypt.so. The t-secmem.o object has no use of any libgpg-error > sympols. It seems that your linker tries to resolve all symbols in > linked library, which is not needed at all. I think you are right. And in fact, it has no dependency on libgpg-error symbols when I do `nm t-secmem.o` or `readelf -a t-secmem`. > I tried with my environment (Debian GNU/Linux): Clang-3.8, GCC 6, GCC 8, > GCC 9,... with ld.gold or ld.bfd. I was unable to replicate the linkage > error. Switching the linker to gold (by symlinking ld -> ld.gold) resolved the issue in my case. So something wrong with my ld.bfd? -- From kasumi at rollingapple.net Thu Jan 21 09:07:02 2021 From: kasumi at rollingapple.net (Kasumi Fukuda) Date: Thu, 21 Jan 2021 17:07:02 +0900 Subject: libgcrypt1.9.0: Failure on linking test executables In-Reply-To: References: <8735yuualz.fsf@iwagami.gniibe.org> Message-ID: Let me share a repro step using docker: ---- #!/bin/bash docker run -i amazon/aws-sam-cli-build-image-provided.al2 /bin/bash <<'EOF' set -ex cd /tmp export MAKEFLAGS=-j$(nproc) M=https://gnupg.org/ftp/gcrypt curl $M/libgpg-error/libgpg-error-1.41.tar.bz2 | tar jx curl $M/libgcrypt/libgcrypt-1.9.0.tar.bz2 | tar jx cd /tmp/libgpg-error-1.41 ./configure --prefix=/opt/libgpg-error --disable-doc make && make install export PATH=/opt/libgpg-error/bin:$PATH cd /tmp/libgcrypt-1.9.0 ./configure --prefix=/opt/libgcrypt --disable-doc make && make install EOF ---- On Thu, Jan 21, 2021 at 4:29 PM Kasumi Fukuda wrote: > > Thank you for your clarification, Niibe san > > On Thu, Jan 21, 2021 at 2:23 PM NIIBE Yutaka wrote: > > What's the version of your GCC and linker (from binutils)? > > They are from the official Amazon Linux 2 repository. > > ---- > bash-4.2# gcc -v > Using built-in specs. 
> COLLECT_GCC=gcc > COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/7/lto-wrapper > Target: x86_64-redhat-linux > Configured with: ../configure --enable-bootstrap > --enable-languages=c,c++,objc,obj-c++,fortran,ada,go,lto --prefix=/usr > --mandir=/usr/share/man --infodir=/usr/share/info > --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared > --enable-threads=posix --enable-checking=release --enable-multilib > --with-system-zlib --enable-__cxa_atexit > --disable-libunwind-exceptions --enable-gnu-unique-object > --enable-linker-build-id --with-gcc-major-version-only > --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array > --with-isl --enable-libmpx --enable-libsanitizer > --enable-gnu-indirect-function --enable-libcilkrts --enable-libatomic > --enable-libquadmath --enable-libitm --with-tune=generic > --with-arch_32=x86-64 --build=x86_64-redhat-linux > Thread model: posix > gcc version 7.3.1 20180712 (Red Hat 7.3.1-9) (GCC) > > bash-4.2# ld -v > GNU ld version 2.29.1-30.amzn2 > > bash-4.2# ld.gold -v > GNU gold (version 2.29.1-30.amzn2) 1.14 > ---- > > > I don't see the reason why your linker emits error when linking t-secmem > > with libgcrypt.so. The t-secmem.o object has no use of any libgpg-error > > sympols. It seems that your linker tries to resolve all symbols in > > linked library, which is not needed at all. > > I think you are right. And in fact, it has no dependency on > libgpg-error symbols when I do `nm t-secmem.o` or `readelf -a > t-secmem`. > > > I tried with my environment (Debian GNU/Linux): Clang-3.8, GCC 6, GCC 8, > > GCC 9,... with ld.gold or ld.bfd. I was unable to replicate the linkage > > error. > > Switching the linker to gold (by symlinking ld -> ld.gold) resolved > the issue in my case. > So something wrong with my ld.bfd? > > -- From jussi.kivilinna at iki.fi Thu Jan 21 21:30:14 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Thu, 21 Jan 2021 22:30:14 +0200 Subject: [PATCH] Add configure option to force enable 'soft' HW feature bits Message-ID: <20210121203016.4051126-1-jussi.kivilinna@iki.fi> * configure.ac (force_soft_hwfeatures) (ENABLE_FORCE_SOFT_HWFEATURES): New. * src/hwf-x86.c (detect_x86_gnuc): Enable HWF_INTEL_FAST_SHLD and HWF_INTEL_FAST_VPGATHER if ENABLE_FORCE_SOFT_HWFEATURES enabled. -- Patch allows enabling HW features, that are fast only select CPU models, on all CPUs. For example, SHLD instruction is fast on only select Intel processors and should not be used on others. This configuration option allows enabling these 'soft' HW features for testing purposes on all CPUs. 
Current 'soft' HW features are: - "intel-fast-shld": supported by all x86 (but very slow on most) - "intel-fast-vpgather": supported by all x86 with AVX2 (but slow on most) Signed-off-by: Jussi Kivilinna --- configure.ac | 14 ++++++++++++++ src/hwf-x86.c | 23 +++++++++++++++++++++++ 2 files changed, 37 insertions(+) diff --git a/configure.ac b/configure.ac index ce17d9f4..8aba8ece 100644 --- a/configure.ac +++ b/configure.ac @@ -566,6 +566,15 @@ AC_ARG_ENABLE(large-data-tests, AC_MSG_RESULT($large_data_tests) AC_SUBST(RUN_LARGE_DATA_TESTS, $large_data_tests) +# Implementation of --enable-force-soft-hwfeatures +AC_MSG_CHECKING([whether 'soft' HW feature bits are forced on]) +AC_ARG_ENABLE([force-soft-hwfeatures], + AS_HELP_STRING([--enable-force-soft-hwfeatures], + [Enable forcing 'soft' HW feature bits on]), + [force_soft_hwfeatures=$enableval], + [force_soft_hwfeatures=no]) +AC_MSG_RESULT($force_soft_hwfeatures) + # Implementation of the --with-capabilities switch. # Check whether we want to use Linux capabilities @@ -2434,6 +2443,11 @@ if test x"$drngsupport" = xyes ; then fi +if test x"$force_soft_hwfeatures" = xyes ; then + AC_DEFINE(ENABLE_FORCE_SOFT_HWFEATURES, 1, + [Enable forcing 'soft' HW feature bits on (for testing).]) +fi + # Define conditional sources and config.h symbols depending on the # selected ciphers, pubkey-ciphers, digests, kdfs, and random modules. diff --git a/src/hwf-x86.c b/src/hwf-x86.c index 796e874f..9a9ed6d3 100644 --- a/src/hwf-x86.c +++ b/src/hwf-x86.c @@ -301,6 +301,29 @@ detect_x86_gnuc (void) avoid_vpgather |= 1; } +#ifdef ENABLE_FORCE_SOFT_HWFEATURES + /* Soft HW features mark functionality that is available on all systems + * but not feasible to use because of slow HW implementation. */ + + /* SHLD is faster at rotating register than actual ROR/ROL instructions + * on older Intel systems (~sandy-bridge era). However, SHLD is very + * slow on almost anything else and later Intel processors have faster + * ROR/ROL. Therefore in regular build HWF_INTEL_FAST_SHLD is enabled + * only for those Intel processors that benefit from the SHLD + * instruction. Enabled here unconditionally as requested. */ + result |= HWF_INTEL_FAST_SHLD; + + /* VPGATHER instructions are used for look-up table based + * implementations which require VPGATHER to be fast enough to beat + * regular parallelized look-up table implementations (see Twofish). + * So far, only Intel processors beginning with skylake have had + * VPGATHER fast enough to be enabled. AMD Zen3 comes close to + * being feasible, but not quite (where twofish-avx2 is few percent + * slower than twofish-3way). Enable VPGATHER here unconditionally + * as requested. */ + avoid_vpgather = 0; +#endif + #ifdef ENABLE_PCLMUL_SUPPORT /* Test bit 1 for PCLMUL. */ if (features & 0x00000002) -- 2.27.0 From jussi.kivilinna at iki.fi Thu Jan 21 21:30:15 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Thu, 21 Jan 2021 22:30:15 +0200 Subject: [PATCH] Define HW-feature flags per architecture In-Reply-To: <20210121203016.4051126-1-jussi.kivilinna@iki.fi> References: <20210121203016.4051126-1-jussi.kivilinna@iki.fi> Message-ID: <20210121203016.4051126-2-jussi.kivilinna@iki.fi> * random/rand-internal.h (_gcry_rndhw_poll_slow): Add requested length parameter. * random/rndhw.c (_gcry_rndhw_poll_slow): Limit accounted bytes to 50% (or 25% for RDRAND) - this code is moved from caller side. * random/rndlinux.c (_gcry_rndlinux_gather_random): Move HWF_INTEL_RDRAND check to _gcry_rndhw_poll_slow. 
* src/g10lib.h (HWF_PADLOCK_*, HWF_INTEL_*): Define only if HAVE_CPU_ARCH_X86. (HWF_ARM_*): Define only if HAVE_CPU_ARCH_ARM. (HWF_PPC_*): Define only if HAVE_CPU_ARCH_PPC. (HWF_S390X_*): Define only if HAVE_CPU_ARCH_S390X. -- Signed-off-by: Jussi Kivilinna --- random/rand-internal.h | 2 +- random/rndhw.c | 15 ++++++++++++--- random/rndlinux.c | 17 ++++------------- src/g10lib.h | 34 ++++++++++++++++++++++------------ 4 files changed, 39 insertions(+), 29 deletions(-) diff --git a/random/rand-internal.h b/random/rand-internal.h index d99c6671..34221569 100644 --- a/random/rand-internal.h +++ b/random/rand-internal.h @@ -141,7 +141,7 @@ void _gcry_rndhw_poll_fast (void (*add)(const void*, size_t, enum random_origins origin); size_t _gcry_rndhw_poll_slow (void (*add)(const void*, size_t, enum random_origins), - enum random_origins origin); + enum random_origins origin, size_t req_length); diff --git a/random/rndhw.c b/random/rndhw.c index 2829382c..3cf9acc3 100644 --- a/random/rndhw.c +++ b/random/rndhw.c @@ -198,24 +198,33 @@ _gcry_rndhw_poll_fast (void (*add)(const void*, size_t, enum random_origins), /* Read 64 bytes from a hardware RNG and return the number of bytes - actually read. */ + actually read. However hardware source is let account only + for up to 50% (or 25% for RDRAND) of the requested bytes. */ size_t _gcry_rndhw_poll_slow (void (*add)(const void*, size_t, enum random_origins), - enum random_origins origin) + enum random_origins origin, size_t req_length) { size_t nbytes = 0; (void)add; (void)origin; + req_length /= 2; /* Up to 50%. */ + #ifdef USE_DRNG if ((_gcry_get_hw_features () & HWF_INTEL_RDRAND)) - nbytes += poll_drng (add, origin, 0); + { + req_length /= 2; /* Up to 25%. */ + nbytes += poll_drng (add, origin, 0); + } #endif #ifdef USE_PADLOCK if ((_gcry_get_hw_features () & HWF_PADLOCK_RNG)) nbytes += poll_padlock (add, origin, 0); #endif + if (nbytes > req_length) + nbytes = req_length; + return nbytes; } diff --git a/random/rndlinux.c b/random/rndlinux.c index 04e2a464..7cbf6ac2 100644 --- a/random/rndlinux.c +++ b/random/rndlinux.c @@ -186,19 +186,10 @@ _gcry_rndlinux_gather_random (void (*add)(const void*, size_t, } - /* First read from a hardware source. However let it account only - for up to 50% (or 25% for RDRAND) of the requested bytes. */ - n_hw = _gcry_rndhw_poll_slow (add, origin); - if ((_gcry_get_hw_features () & HWF_INTEL_RDRAND)) - { - if (n_hw > length/4) - n_hw = length/4; - } - else - { - if (n_hw > length/2) - n_hw = length/2; - } + /* First read from a hardware source. Note that _gcry_rndhw_poll_slow lets + it account only for up to 50% (or 25% for RDRAND) of the requested + bytes. 
*/ + n_hw = _gcry_rndhw_poll_slow (add, origin, length); if (length > 1) length -= n_hw; diff --git a/src/g10lib.h b/src/g10lib.h index cba2e237..243997eb 100644 --- a/src/g10lib.h +++ b/src/g10lib.h @@ -217,6 +217,8 @@ char **_gcry_strtokenize (const char *string, const char *delim); /*-- src/hwfeatures.c --*/ +#if defined(HAVE_CPU_ARCH_X86) + #define HWF_PADLOCK_RNG (1 << 0) #define HWF_PADLOCK_AES (1 << 1) #define HWF_PADLOCK_SHA (1 << 2) @@ -236,20 +238,28 @@ char **_gcry_strtokenize (const char *string, const char *delim); #define HWF_INTEL_RDTSC (1 << 15) #define HWF_INTEL_SHAEXT (1 << 16) -#define HWF_ARM_NEON (1 << 17) -#define HWF_ARM_AES (1 << 18) -#define HWF_ARM_SHA1 (1 << 19) -#define HWF_ARM_SHA2 (1 << 20) -#define HWF_ARM_PMULL (1 << 21) +#elif defined(HAVE_CPU_ARCH_ARM) + +#define HWF_ARM_NEON (1 << 0) +#define HWF_ARM_AES (1 << 1) +#define HWF_ARM_SHA1 (1 << 2) +#define HWF_ARM_SHA2 (1 << 3) +#define HWF_ARM_PMULL (1 << 4) + +#elif defined(HAVE_CPU_ARCH_PPC) -#define HWF_PPC_VCRYPTO (1 << 22) -#define HWF_PPC_ARCH_3_00 (1 << 23) -#define HWF_PPC_ARCH_2_07 (1 << 24) +#define HWF_PPC_VCRYPTO (1 << 0) +#define HWF_PPC_ARCH_3_00 (1 << 1) +#define HWF_PPC_ARCH_2_07 (1 << 2) -#define HWF_S390X_MSA (1 << 25) -#define HWF_S390X_MSA_4 (1 << 26) -#define HWF_S390X_MSA_8 (1 << 27) -#define HWF_S390X_VX (1 << 28) +#elif defined(HAVE_CPU_ARCH_S390X) + +#define HWF_S390X_MSA (1 << 0) +#define HWF_S390X_MSA_4 (1 << 1) +#define HWF_S390X_MSA_8 (1 << 2) +#define HWF_S390X_VX (1 << 3) + +#endif gpg_err_code_t _gcry_disable_hw_feature (const char *name); void _gcry_detect_hw_features (void); -- 2.27.0 From jussi.kivilinna at iki.fi Thu Jan 21 21:30:16 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Thu, 21 Jan 2021 22:30:16 +0200 Subject: [PATCH] rijndael: remove unused use_xxx flags In-Reply-To: <20210121203016.4051126-1-jussi.kivilinna@iki.fi> References: <20210121203016.4051126-1-jussi.kivilinna@iki.fi> Message-ID: <20210121203016.4051126-3-jussi.kivilinna@iki.fi> * cipher/rijndael-internal.h (RIJNDAEL_context_s): Remove unused 'use_padlock', 'use_aesni', 'use_ssse3', 'use_arm_ce', 'use_ppc_crypto' and 'use_ppc9le_crypto'. * cipher/rijndael.c (do_setkey): Do not setup 'use_padlock', 'use_aesni', 'use_ssse3', 'use_arm_ce', 'use_ppc_crypto' and 'use_ppc9le_crypto'. -- Signed-off-by: Jussi Kivilinna --- cipher/rijndael-internal.h | 20 ++------------------ cipher/rijndael.c | 24 ------------------------ 2 files changed, 2 insertions(+), 42 deletions(-) diff --git a/cipher/rijndael-internal.h b/cipher/rijndael-internal.h index 447a773a..7e01f6b0 100644 --- a/cipher/rijndael-internal.h +++ b/cipher/rijndael-internal.h @@ -164,26 +164,10 @@ typedef struct RIJNDAEL_context_s } u2; int rounds; /* Key-length-dependent number of rounds. */ unsigned int decryption_prepared:1; /* The decryption key schedule is available. */ -#ifdef USE_PADLOCK - unsigned int use_padlock:1; /* Padlock shall be used. */ -#endif /*USE_PADLOCK*/ #ifdef USE_AESNI - unsigned int use_aesni:1; /* AES-NI shall be used. */ - unsigned int use_avx:1; /* AVX shall be used. */ - unsigned int use_avx2:1; /* AVX2 shall be used. */ + unsigned int use_avx:1; /* AVX shall be used by AES-NI implementation. */ + unsigned int use_avx2:1; /* AVX2 shall be used by AES-NI implementation. */ #endif /*USE_AESNI*/ -#ifdef USE_SSSE3 - unsigned int use_ssse3:1; /* SSSE3 shall be used. */ -#endif /*USE_SSSE3*/ -#ifdef USE_ARM_CE - unsigned int use_arm_ce:1; /* ARMv8 CE shall be used. 
*/ -#endif /*USE_ARM_CE*/ -#ifdef USE_PPC_CRYPTO - unsigned int use_ppc_crypto:1; /* PowerPC crypto shall be used. */ -#endif /*USE_PPC_CRYPTO*/ -#ifdef USE_PPC_CRYPTO_WITH_PPC9LE - unsigned int use_ppc9le_crypto:1; /* POWER9 LE crypto shall be used. */ -#endif #ifdef USE_S390X_CRYPTO byte km_func; byte km_func_xts; diff --git a/cipher/rijndael.c b/cipher/rijndael.c index 2b1aa5e5..6ab6d542 100644 --- a/cipher/rijndael.c +++ b/cipher/rijndael.c @@ -441,24 +441,6 @@ do_setkey (RIJNDAEL_context *ctx, const byte *key, const unsigned keylen, hwfeatures = _gcry_get_hw_features (); ctx->decryption_prepared = 0; -#ifdef USE_PADLOCK - ctx->use_padlock = 0; -#endif -#ifdef USE_AESNI - ctx->use_aesni = 0; -#endif -#ifdef USE_SSSE3 - ctx->use_ssse3 = 0; -#endif -#ifdef USE_ARM_CE - ctx->use_arm_ce = 0; -#endif -#ifdef USE_PPC_CRYPTO - ctx->use_ppc_crypto = 0; -#endif -#ifdef USE_PPC_CRYPTO_WITH_PPC9LE - ctx->use_ppc9le_crypto = 0; -#endif /* Setup default bulk encryption routines. */ memset (bulk_ops, 0, sizeof(*bulk_ops)); @@ -486,7 +468,6 @@ do_setkey (RIJNDAEL_context *ctx, const byte *key, const unsigned keylen, ctx->prefetch_enc_fn = NULL; ctx->prefetch_dec_fn = NULL; ctx->prepare_decryption = _gcry_aes_aesni_prepare_decryption; - ctx->use_aesni = 1; ctx->use_avx = !!(hwfeatures & HWF_INTEL_AVX); ctx->use_avx2 = !!(hwfeatures & HWF_INTEL_AVX2); @@ -509,7 +490,6 @@ do_setkey (RIJNDAEL_context *ctx, const byte *key, const unsigned keylen, ctx->prefetch_enc_fn = NULL; ctx->prefetch_dec_fn = NULL; ctx->prepare_decryption = _gcry_aes_padlock_prepare_decryption; - ctx->use_padlock = 1; memcpy (ctx->padlockkey, key, keylen); } #endif @@ -522,7 +502,6 @@ do_setkey (RIJNDAEL_context *ctx, const byte *key, const unsigned keylen, ctx->prefetch_enc_fn = NULL; ctx->prefetch_dec_fn = NULL; ctx->prepare_decryption = _gcry_aes_ssse3_prepare_decryption; - ctx->use_ssse3 = 1; /* Setup SSSE3 bulk encryption routines. */ bulk_ops->cfb_enc = _gcry_aes_ssse3_cfb_enc; @@ -543,7 +522,6 @@ do_setkey (RIJNDAEL_context *ctx, const byte *key, const unsigned keylen, ctx->prefetch_enc_fn = NULL; ctx->prefetch_dec_fn = NULL; ctx->prepare_decryption = _gcry_aes_armv8_ce_prepare_decryption; - ctx->use_arm_ce = 1; /* Setup ARM-CE bulk encryption routines. */ bulk_ops->cfb_enc = _gcry_aes_armv8_ce_cfb_enc; @@ -565,7 +543,6 @@ do_setkey (RIJNDAEL_context *ctx, const byte *key, const unsigned keylen, ctx->prefetch_enc_fn = NULL; ctx->prefetch_dec_fn = NULL; ctx->prepare_decryption = _gcry_aes_ppc8_prepare_decryption; - ctx->use_ppc_crypto = 1; /* same key-setup as USE_PPC_CRYPTO */ ctx->use_ppc9le_crypto = 1; /* Setup PPC9LE bulk encryption routines. */ @@ -588,7 +565,6 @@ do_setkey (RIJNDAEL_context *ctx, const byte *key, const unsigned keylen, ctx->prefetch_enc_fn = NULL; ctx->prefetch_dec_fn = NULL; ctx->prepare_decryption = _gcry_aes_ppc8_prepare_decryption; - ctx->use_ppc_crypto = 1; /* Setup PPC8 bulk encryption routines. */ bulk_ops->cfb_enc = _gcry_aes_ppc8_cfb_enc; -- 2.27.0 From gniibe at fsij.org Fri Jan 22 04:59:05 2021 From: gniibe at fsij.org (NIIBE Yutaka) Date: Fri, 22 Jan 2021 12:59:05 +0900 Subject: libgcrypt1.9.0: Failure on linking test executables In-Reply-To: References: <8735yuualz.fsf@iwagami.gniibe.org> Message-ID: <87czxx8vw6.fsf@iwagami.gniibe.org> Kasumi Fukuda writes: > Let me share a repro step using docker: Thanks a lot. I think that it's better practice to do "make check" before installation. 
And to do that, we need to specify LD_LIBRARY_PATH too (as well as PATH), so that testing programs in the build procedure run correctly. Like this: --- libgcrypt-test-install-orig.sh 2021-01-22 11:06:34.329216354 +0900 +++ libgcrypt-test-install-fixed.sh 2021-01-22 12:29:42.725628836 +0900 @@ -11,12 +11,13 @@ cd /tmp/libgpg-error-1.41 ./configure --prefix=/opt/libgpg-error --disable-doc -make && make install +make && make check && make install export PATH=/opt/libgpg-error/bin:$PATH +export LD_LIBRARY_PATH=/opt/libgpg-error/lib cd /tmp/libgcrypt-1.9.0 ./configure --prefix=/opt/libgcrypt --disable-doc -make && make install +make && make check && make install EOF And... this works, with no linking errors. The problem is somehow complicated. The linker checks runtime libraries under LD_LIBRARY_PATH. In your case, even if you don't need 'make check', please specify LD_LIBRARY_PATH. -- From lomov.vl at yandex.ru Fri Jan 22 04:38:52 2021 From: lomov.vl at yandex.ru (Vladimir Lomov) Date: Fri, 22 Jan 2021 11:38:52 +0800 Subject: gpg-agent 'crashes' with libgcrypt 1.9.0 Message-ID: Hello, I'm using Archlinux x86_64 and gnupg 2.2.27 and libgcrypt 1.9.0. Before I updated my boxes to new libgcrypt 1.9.0 (previous version was 1.8.7) gpg worked fine, gpg-agent asked password once to sign, encrypt and decrypt and subsequent signing, encrypting and decrypting went without password asking (intended behaviour). After update I noticed that I could decrypt files but cannot sign or encrypt anything. I opened bug ticket in my distribution: https://bugs.archlinux.org/task/69389 where I showed output from systemd and my attempts to debug gpg-agent. The gpg-agent is started by systemd (socket activation). With libgcrypt 1.9.0 when I want to sign or encrypt a file I successfully enter password but after that I see (a bit cryptic) message from gpg: gpg: signing failed: End of file The systemctl shows that gpg-agent was terminated (I'm not sure how exactly, gpg-agent doesn't produce any debug information) with message: ... Jan 21 10:13:33 smoon4.bkoty.ru gpg-agent[25312]: free(): invalid pointer ... This is my gpg-agent.conf: ----------------------------------- 8< -------------------------------------- pinentry-program /usr/bin/pinentry-curses pinentry-timeout 60 # no-grab allow-loopback-pinentry allow-emacs-pinentry default-cache-ttl 5400 default-cache-ttl-ssh 5400 max-cache-ttl 10800 max-cache-ttl-ssh 10800 enable-ssh-support ssh-fingerprint-digest SHA256 ----------------------------------- 8< -------------------------------------- I wasn't able to run gpg-agent in strace or gdb to figure out what is wrong, so I follow advice of Andreas Radke to ask help here. Would be glad to help to resolve my issue because if libgcrypt 1.9.0 would be in "stable" area then I can't sign or encrypt files (it is interesting enough that I could decrypt files). --- WBR, Vladimir Lomov -- "Don't worry about people stealing your ideas. If your ideas are any good, you'll have to ram them down people's throats." -- Howard Aiken -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 228 bytes Desc: not available URL: From kasumi at rollingapple.net Fri Jan 22 06:09:39 2021 From: kasumi at rollingapple.net (Kasumi Fukuda) Date: Fri, 22 Jan 2021 14:09:39 +0900 Subject: libgcrypt1.9.0: Failure on linking test executables In-Reply-To: <87czxx8vw6.fsf@iwagami.gniibe.org> References: <8735yuualz.fsf@iwagami.gniibe.org> <87czxx8vw6.fsf@iwagami.gniibe.org> Message-ID: On Fri, Jan 22, 2021 at 12:59 PM NIIBE Yutaka wrote: > > Kasumi Fukuda writes: > > Let me share a repro step using docker: > > Thanks a lot. > > I think that it's better practice to do "make check" before > installation. Thank you for the advice. It ensures the linked executables are actually executable. > And to do that, we need to specify LD_LIBRARY_PATH too (as well as > PATH), so that testing programs in the build procedure run correctly. My understanding is that libgcypt and libgpg-error use libtool, so RPATH (or RUNPATH) for the dependent libraries can be automatically injected into the executable when linked. We usually don't need to set LD_LIBRARY_PATH to run such an executable. If I comment out the t_secmem_LDADD, the link command for t-secmem has the necessary -rpath for libgpg-error as follows: ---- libtool: link: gcc -I/opt/libgpg-error/include -g -O2 -fvisibility=hidden -fno-delete-null-pointer-checks -Wall -o t-secmem t-secmem.o -Wl,--disable-new-dtags ../src/.libs/libgcrypt.so ../compat/.libs/libcompat.a -L/opt/libgpg-error/lib / opt/libgpg-error/lib/libgpg-error.so -Wl,-rpath -Wl,/tmp/libgcrypt-1.9.0/src/.libs -Wl,-rpath -Wl,/opt/libgpg-error/lib -Wl,-rpath -Wl,/opt/libgcrypt/lib -Wl,-rpath -Wl,/opt/libgpg-error/lib ---- instead of: ---- libtool: link: gcc -I/opt/libgpg-error/include -g -O2 -fvisibility=hidden -fno-delete-null-pointer-checks -Wall -o t-secmem t-secmem.o ../src/.libs/libgcrypt.so ../compat/.libs/libcompat.a -Wl,-rpath -Wl,/tmp/libgcrypt-1.9.0/src/.libs -Wl,-rpath -Wl,/opt/libgcrypt/lib ---- and `make check` works without LD_LIBRARY_PATH. This can be virtually tested by: sed -i.bak 's/^\([a-z_]\+_LDADD = \)\$(standard_ldadd)$/\1$(LDADD)/' tests/Makefile.in before ./configure for libgcrypt. I'm not sure I understand RPATH/RUNPATH things correctly. But this way of building GnuPG suite has been working for older versions and it would be inconvenient if LD_LIBRARY_PATH were required to be set. > Like this: > > --- libgcrypt-test-install-orig.sh 2021-01-22 11:06:34.329216354 +0900 > +++ libgcrypt-test-install-fixed.sh 2021-01-22 12:29:42.725628836 +0900 > @@ -11,12 +11,13 @@ > > cd /tmp/libgpg-error-1.41 > ./configure --prefix=/opt/libgpg-error --disable-doc > -make && make install > +make && make check && make install > > export PATH=/opt/libgpg-error/bin:$PATH > +export LD_LIBRARY_PATH=/opt/libgpg-error/lib > > cd /tmp/libgcrypt-1.9.0 > ./configure --prefix=/opt/libgcrypt --disable-doc > -make && make install > +make && make check && make install > > EOF > > > And... this works, with no linking errors. > > The problem is somehow complicated. The linker checks runtime libraries > under LD_LIBRARY_PATH. > > > In your case, even if you don't need 'make check', please specify > LD_LIBRARY_PATH. 
> --

From wk at gnupg.org Fri Jan 22 12:18:32 2021
From: wk at gnupg.org (Werner Koch)
Date: Fri, 22 Jan 2021 12:18:32 +0100
Subject: [PATCH] cipher/sha512: Fix non-NEON ARM assembly implementation
In-Reply-To: <87o8hidsa5.fsf@gmail.com> (David Michael via Gnupg-devel's message of "Thu, 21 Jan 2021 14:05:54 -0500")
References: <87o8hidsa5.fsf@gmail.com>
Message-ID: <87o8hh5iev.fsf@wheatstone.g10code.de>

Hi!

Thanks David for the reports.  I accidentally mentioned this mailing
list in the announcement, but it really should go to gcrypt-devel.
Libgcrypt hackers are not necessarily reading this list.  I'll repost
your mails to gcrypt-devel.

We have a couple of fixes already in the repo; see also
https://dev.gnupg.org/T5251 for the NEON thing.  I am not sure whether
the Poly1305 bug has already been reported.

Shalom-Salam,

   Werner

--
* Free Assange and protect free journalism!
* Germany: Sign the Treaty on the Prohibition of Nuclear Weapons!
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 227 bytes
Desc: not available
URL: 

From wk at gnupg.org Fri Jan 22 12:19:20 2021
From: wk at gnupg.org (Werner Koch)
Date: Fri, 22 Jan 2021 12:19:20 +0100
Subject: [David Michael] [PATCH] cipher/sha512: Fix non-NEON ARM assembly implementation
Message-ID: <87eeid5idj.fsf@wheatstone.g10code.de>

An embedded message was scrubbed...
From: David Michael
Subject: [PATCH] cipher/sha512: Fix non-NEON ARM assembly implementation
Date: Thu, 21 Jan 2021 14:05:54 -0500
Size: 3552
URL: 
-------------- next part --------------
--
* Free Assange and protect free journalism!
* Germany: Sign the Treaty on the Prohibition of Nuclear Weapons!
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 227 bytes
Desc: not available
URL: 

From wk at gnupg.org Fri Jan 22 12:19:42 2021
From: wk at gnupg.org (Werner Koch)
Date: Fri, 22 Jan 2021 12:19:42 +0100
Subject: [David Michael via Gnupg-devel] [PATCH libgcrypt 1/2] cipher/sha512: Fix non-NEON ARM assembly implementation
Message-ID: <87a6t15icx.fsf@wheatstone.g10code.de>

An embedded message was scrubbed...
From: David Michael via Gnupg-devel
Subject: [PATCH libgcrypt 1/2] cipher/sha512: Fix non-NEON ARM assembly implementation
Date: Thu, 21 Jan 2021 14:58:18 -0500
Size: 5340
URL: 
-------------- next part --------------
--
* Free Assange and protect free journalism!
* Germany: Sign the Treaty on the Prohibition of Nuclear Weapons!
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 227 bytes
Desc: not available
URL: 

From wk at gnupg.org Fri Jan 22 12:21:40 2021
From: wk at gnupg.org (Werner Koch)
Date: Fri, 22 Jan 2021 12:21:40 +0100
Subject: [David Michael] [PATCH libgcypt 2/2] cipher/poly1305: Fix 32-bit x86 compilation
Message-ID: <8735yt5i9n.fsf@wheatstone.g10code.de>

An embedded message was scrubbed...
From: David Michael
Subject: [PATCH libgcypt 2/2] cipher/poly1305: Fix 32-bit x86 compilation
Date: Thu, 21 Jan 2021 14:58:26 -0500
Size: 4101
URL: 
-------------- next part --------------
--
* Free Assange and protect free journalism!
* Germany: Sign the Treaty on the Prohibition of Nuclear Weapons!
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 227 bytes
Desc: not available
URL: 

From gniibe at fsij.org Fri Jan 22 13:02:50 2021
From: gniibe at fsij.org (NIIBE Yutaka)
Date: Fri, 22 Jan 2021 21:02:50 +0900
Subject: libgcrypt1.9.0: Failure on linking test executables
In-Reply-To: 
References: <8735yuualz.fsf@iwagami.gniibe.org> <87czxx8vw6.fsf@iwagami.gniibe.org>
Message-ID: <87lfclch79.fsf@jumper.gniibe.org>

Kasumi Fukuda wrote:
> My understanding is that libgcypt and libgpg-error use libtool, so
> RPATH (or RUNPATH) for the dependent libraries can be automatically
> injected into the executable when linked.
> We usually don't need to set LD_LIBRARY_PATH to run such an executable.

Yes and No.

For a program (like the ones under tests) which is built with libtool's
-no-install option, RPATH/RUNPATH is embedded.  That's true.  This is
because such a program can be executed correctly with the particular
library currently being built (not with the one already installed on
the system).

With embedded RPATH/RUNPATH, such a program can be executed with no
setting of LD_LIBRARY_PATH.  But, AFAIU, it is not meant as a way to
avoid setting LD_LIBRARY_PATH.

Please note that LD_LIBRARY_PATH is usually set to run a program with
a library under a non-standard prefix.  We can't ignore this major use
case.

> But this way of building GnuPG suite has been working for older
> versions and it would be inconvenient if LD_LIBRARY_PATH were required
> to be set.

People would say it's a regression, I'm afraid.  I think that this is a
corner case.  My change for minimizing library dependencies is intended
to keep up with the evolution of the toolchain.  Ideally, libtool itself
should have been evolved, while the toolchain has evolved gradually.
--

From kasumi at rollingapple.net Fri Jan 22 14:21:16 2021
From: kasumi at rollingapple.net (Kasumi Fukuda)
Date: Fri, 22 Jan 2021 22:21:16 +0900
Subject: libgcrypt1.9.0: Failure on linking test executables
In-Reply-To: <87lfclch79.fsf@jumper.gniibe.org>
References: <8735yuualz.fsf@iwagami.gniibe.org> <87czxx8vw6.fsf@iwagami.gniibe.org> <87lfclch79.fsf@jumper.gniibe.org>
Message-ID: 

On Fri, Jan 22, 2021 at 9:02 PM NIIBE Yutaka wrote:
>
> Kasumi Fukuda wrote:
> > My understanding is that libgcypt and libgpg-error use libtool, so
> > RPATH (or RUNPATH) for the dependent libraries can be automatically
> > injected into the executable when linked.
> > We usually don't need to set LD_LIBRARY_PATH to run such an executable.
>
> Yes and No.
>
> For a program (like the ones under tests) which is built with libtool's
> -no-install option, RPATH/RUNPATH is embedded.  That's true.  This is
> because such a program can be executed correctly with the particular
> library currently being built (not with the one already installed on
> the system).
>
> With embedded RPATH/RUNPATH, such a program can be executed with no
> setting of LD_LIBRARY_PATH.  But, AFAIU, it is not meant as a way to
> avoid setting LD_LIBRARY_PATH.

I didn't know that.

> Please note that LD_LIBRARY_PATH is usually set to run a program with
> a library under a non-standard prefix.  We can't ignore this major use
> case.
>
> > But this way of building GnuPG suite has been working for older
> > versions and it would be inconvenient if LD_LIBRARY_PATH were required
> > to be set.
>
> People would say it's a regression, I'm afraid.  I think that this is a
> corner case.

I agree that this is just a corner case with quite an old linker, and
respect your decision not to support it.

Thank you again for your clarification.
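
For readers following this thread in the archive, a quick way to see
which of the two cases applies to a given build is to inspect the
dynamic section of one of the freshly linked test programs.  This is
only a sketch, not part of the original exchange: it assumes an in-tree
build with libgpg-error installed under /opt/libgpg-error as in the
logs quoted above, and the binary may end up as tests/t-secmem or
tests/.libs/t-secmem depending on how libtool wrapped it.

----
# Show the run-time search path and library dependencies recorded by
# the libtool link step for one of the test programs.
readelf -d tests/t-secmem | grep -E 'RUNPATH|RPATH|NEEDED'

# If no RUNPATH/RPATH entry points at the libgpg-error prefix, tell the
# dynamic linker about the non-standard prefix before running the tests.
export LD_LIBRARY_PATH=/opt/libgpg-error/lib
make check
----

If a RUNPATH entry for /opt/libgpg-error/lib shows up, the loader
resolves libgpg-error on its own; if it is missing, setting
LD_LIBRARY_PATH as suggested above is the straightforward workaround.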
From jussi.kivilinna at iki.fi Fri Jan 22 16:32:19 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Fri, 22 Jan 2021 17:32:19 +0200 Subject: [David Michael] [PATCH libgcypt 2/2] cipher/poly1305: Fix 32-bit x86 compilation In-Reply-To: <8735yt5i9n.fsf@wheatstone.g10code.de> References: <8735yt5i9n.fsf@wheatstone.g10code.de> Message-ID: <9e99f475-a650-5d0f-5119-a57be2fe3963@iki.fi> Hello, On 22.1.2021 13.21, Werner Koch via Gcrypt-devel wrote: > ForwardedMessage.eml / David Michael : > > * cipher/poly1305.c [HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS]: Also > conditionalize on whether __arm__ is defined. > > -- > > When building for i686, configure detects that the assembler can > use the different architectures, so it defined everything under the > HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS conditional block. Since that > block is first and the following x86 block only defines UMUL_ADD_32 > if it's not already defined, the lto-wrapper failed during linking > with a pile of "no such instruction: umlal ..." errors. Gating on > __arm__ prevents that initial defintion and fixes the errors. > HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS should not be defined on x86/i686 in first place. Problem must be in 'configure.ac' in the HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS check. Does this issue happen with "clang -flto"? There was another issue reported regarding clang LTO on x86_64 (https://dev.gnupg.org/T5255) and there problem was caused partly by another 'configure.ac' check not working with clang when LTO enabled. Fix there was to change assembly check to be done with compile+link instead of just compile. -Jussi From jussi.kivilinna at iki.fi Fri Jan 22 16:35:12 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Fri, 22 Jan 2021 17:35:12 +0200 Subject: [David Michael via Gnupg-devel] [PATCH libgcrypt 1/2] cipher/sha512: Fix non-NEON ARM assembly implementation In-Reply-To: <87a6t15icx.fsf@wheatstone.g10code.de> References: <87a6t15icx.fsf@wheatstone.g10code.de> Message-ID: <921aff8a-7392-ce33-259e-3d25433ff8da@iki.fi> Hello, On 22.1.2021 13.19, Werner Koch via Gcrypt-devel wrote: > ForwardedMessage.eml / David Michael : > > * cipher/sha512.c (do_transform_generic) > [USE_ARM_ASM]: Switch to the non-NEON assembly implementation. > > -- > > When building for ARM CPUs that don't support NEON, linking fails > with an "undefined reference to _gcry_sha512_transform_armv7_neon" > error. Switching to the non-NEON assembly function corrects this. > --- > (Resending this in case it wasn't delivered due to not being subscribed.) > Looks correct. Thanks. -Jussi From wk at gnupg.org Fri Jan 22 17:53:17 2021 From: wk at gnupg.org (Werner Koch) Date: Fri, 22 Jan 2021 17:53:17 +0100 Subject: gpg-agent 'crashes' with libgcrypt 1.9.0 In-Reply-To: (Vladimir Lomov via Gcrypt-devel's message of "Fri, 22 Jan 2021 11:38:52 +0800") References: Message-ID: <87v9bo3oci.fsf@wheatstone.g10code.de> On Fri, 22 Jan 2021 11:38, Vladimir Lomov said: > Jan 21 10:13:33 smoon4.bkoty.ru gpg-agent[25312]: free(): invalid pointer https://dev.gnupg.org/T5254 has a fix. I am going to release 1.9.1 next week. Shalom-Salam, Werner -- * Free Assange and protect free journalism! * Germany: Sign the Treaty on the Prohibition of Nuclear Weapons! -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From wk at gnupg.org Fri Jan 22 17:58:09 2021 From: wk at gnupg.org (Werner Koch) Date: Fri, 22 Jan 2021 17:58:09 +0100 Subject: Libgcrypt 1.9.0 bugs Message-ID: <87r1mc3o4e.fsf@wheatstone.g10code.de> Hi! if you notice bugs related to libgcrypt 1.9.0 please visit https://dev.gnupg.org/T4294 where you can find a comment with known bugs and fixes. There will be a 1.9.1 next week. For all our releases we put a bug id and a link into the NEWS file and announcement mails with infos about bugs found after the release. Please always check this page before posting a report. Salam-Shalom, Werner -- * Free Assange and protect free journalism! * Germany: Sign the Treaty on the Prohibition of Nuclear Weapons! -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From jussi.kivilinna at iki.fi Fri Jan 22 18:18:48 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Fri, 22 Jan 2021 19:18:48 +0200 Subject: [David Michael] [PATCH libgcypt 2/2] cipher/poly1305: Fix 32-bit x86 compilation In-Reply-To: References: <8735yt5i9n.fsf@wheatstone.g10code.de> <9e99f475-a650-5d0f-5119-a57be2fe3963@iki.fi> Message-ID: <2feb5eee-86ee-93bc-47fa-6f5072d4d9e7@iki.fi> On 22.1.2021 18.28, David Michael wrote: > On Fri, Jan 22, 2021 at 10:32 AM Jussi Kivilinna wrote: >> Hello, >> >> On 22.1.2021 13.21, Werner Koch via Gcrypt-devel wrote: >>> ForwardedMessage.eml / David Michael : >>> >>> * cipher/poly1305.c [HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS]: Also >>> conditionalize on whether __arm__ is defined. >>> >>> -- >>> >>> When building for i686, configure detects that the assembler can >>> use the different architectures, so it defined everything under the >>> HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS conditional block. Since that >>> block is first and the following x86 block only defines UMUL_ADD_32 >>> if it's not already defined, the lto-wrapper failed during linking >>> with a pile of "no such instruction: umlal ..." errors. Gating on >>> __arm__ prevents that initial defintion and fixes the errors. >>> >> >> HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS should not be defined on x86/i686 >> in first place. > > Using i686 GCC has these in config.log: > > #define HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS 1 > #define HAVE_COMPATIBLE_GCC_AARCH64_PLATFORM_AS 1 > #define HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS 1 > >> Problem must be in 'configure.ac' in the HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS >> check. Does this issue happen with "clang -flto"? > Hm, ok. Maybe the LTO/configure.ac problem manifests itself also with GCC in some system configurations. Does attached patch help? -Jussi > If I set CC to i686 clang, it first fails because it doesn't undestand > this line in jitterentropy-base.c > > #pragma GCC optimize ("O0") > > If I use -O0 in CFLAGS, then it fails due to the ARM instructions > while linking libgcrypt.so again. It has the same HAVE_COMPATIBLE_GCC > definitions in config.log as with GCC. > > :1:2: error: invalid instruction mnemonic 'umlal' > umlal %ecx, %eax, %edx, %esi > ^~~~~ > LLVM ERROR: Error parsing inline asm > > Thanks. > > David > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 0001-configure.ac-run-assembler-checks-through-linker-for.patch Type: text/x-patch Size: 19162 bytes Desc: not available URL: From fedora.dm0 at gmail.com Fri Jan 22 17:28:57 2021 From: fedora.dm0 at gmail.com (David Michael) Date: Fri, 22 Jan 2021 11:28:57 -0500 Subject: [David Michael] [PATCH libgcypt 2/2] cipher/poly1305: Fix 32-bit x86 compilation In-Reply-To: <9e99f475-a650-5d0f-5119-a57be2fe3963@iki.fi> References: <8735yt5i9n.fsf@wheatstone.g10code.de> <9e99f475-a650-5d0f-5119-a57be2fe3963@iki.fi> Message-ID: On Fri, Jan 22, 2021 at 10:32 AM Jussi Kivilinna wrote: > Hello, > > On 22.1.2021 13.21, Werner Koch via Gcrypt-devel wrote: > > ForwardedMessage.eml / David Michael : > > > > * cipher/poly1305.c [HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS]: Also > > conditionalize on whether __arm__ is defined. > > > > -- > > > > When building for i686, configure detects that the assembler can > > use the different architectures, so it defined everything under the > > HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS conditional block. Since that > > block is first and the following x86 block only defines UMUL_ADD_32 > > if it's not already defined, the lto-wrapper failed during linking > > with a pile of "no such instruction: umlal ..." errors. Gating on > > __arm__ prevents that initial defintion and fixes the errors. > > > > HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS should not be defined on x86/i686 > in first place. Using i686 GCC has these in config.log: #define HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS 1 #define HAVE_COMPATIBLE_GCC_AARCH64_PLATFORM_AS 1 #define HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS 1 > Problem must be in 'configure.ac' in the HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS > check. Does this issue happen with "clang -flto"? If I set CC to i686 clang, it first fails because it doesn't undestand this line in jitterentropy-base.c #pragma GCC optimize ("O0") If I use -O0 in CFLAGS, then it fails due to the ARM instructions while linking libgcrypt.so again. It has the same HAVE_COMPATIBLE_GCC definitions in config.log as with GCC. :1:2: error: invalid instruction mnemonic 'umlal' umlal %ecx, %eax, %edx, %esi ^~~~~ LLVM ERROR: Error parsing inline asm Thanks. David From fedora.dm0 at gmail.com Fri Jan 22 18:32:50 2021 From: fedora.dm0 at gmail.com (David Michael) Date: Fri, 22 Jan 2021 12:32:50 -0500 Subject: [David Michael] [PATCH libgcypt 2/2] cipher/poly1305: Fix 32-bit x86 compilation In-Reply-To: <2feb5eee-86ee-93bc-47fa-6f5072d4d9e7@iki.fi> References: <8735yt5i9n.fsf@wheatstone.g10code.de> <9e99f475-a650-5d0f-5119-a57be2fe3963@iki.fi> <2feb5eee-86ee-93bc-47fa-6f5072d4d9e7@iki.fi> Message-ID: On Fri, Jan 22, 2021 at 12:18 PM Jussi Kivilinna wrote: > On 22.1.2021 18.28, David Michael wrote: > > On Fri, Jan 22, 2021 at 10:32 AM Jussi Kivilinna wrote: > >> Hello, > >> > >> On 22.1.2021 13.21, Werner Koch via Gcrypt-devel wrote: > >>> ForwardedMessage.eml / David Michael : > >>> > >>> * cipher/poly1305.c [HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS]: Also > >>> conditionalize on whether __arm__ is defined. > >>> > >>> -- > >>> > >>> When building for i686, configure detects that the assembler can > >>> use the different architectures, so it defined everything under the > >>> HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS conditional block. Since that > >>> block is first and the following x86 block only defines UMUL_ADD_32 > >>> if it's not already defined, the lto-wrapper failed during linking > >>> with a pile of "no such instruction: umlal ..." errors. Gating on > >>> __arm__ prevents that initial defintion and fixes the errors. 
> >>> > >> > >> HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS should not be defined on x86/i686 > >> in first place. > > > > Using i686 GCC has these in config.log: > > > > #define HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS 1 > > #define HAVE_COMPATIBLE_GCC_AARCH64_PLATFORM_AS 1 > > #define HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS 1 > > > >> Problem must be in 'configure.ac' in the HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS > >> check. Does this issue happen with "clang -flto"? > > > > Hm, ok. Maybe the LTO/configure.ac problem manifests itself also with GCC > in some system configurations. Does attached patch help? Yes, that seems to result in only HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS being defined for i686, so it successfully builds. Thanks. David From jussi.kivilinna at iki.fi Fri Jan 22 18:39:23 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Fri, 22 Jan 2021 19:39:23 +0200 Subject: [David Michael] [PATCH libgcypt 2/2] cipher/poly1305: Fix 32-bit x86 compilation In-Reply-To: References: <8735yt5i9n.fsf@wheatstone.g10code.de> <9e99f475-a650-5d0f-5119-a57be2fe3963@iki.fi> <2feb5eee-86ee-93bc-47fa-6f5072d4d9e7@iki.fi> Message-ID: <4c70eafb-9266-38ba-08cc-07af4cff65ae@iki.fi> On 22.1.2021 19.32, David Michael wrote: > On Fri, Jan 22, 2021 at 12:18 PM Jussi Kivilinna wrote: >> On 22.1.2021 18.28, David Michael wrote: >>> On Fri, Jan 22, 2021 at 10:32 AM Jussi Kivilinna wrote: >>>> Hello, >>>> >>>> On 22.1.2021 13.21, Werner Koch via Gcrypt-devel wrote: >>>>> ForwardedMessage.eml / David Michael : >>>>> >>>>> * cipher/poly1305.c [HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS]: Also >>>>> conditionalize on whether __arm__ is defined. >>>>> >>>>> -- >>>>> >>>>> When building for i686, configure detects that the assembler can >>>>> use the different architectures, so it defined everything under the >>>>> HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS conditional block. Since that >>>>> block is first and the following x86 block only defines UMUL_ADD_32 >>>>> if it's not already defined, the lto-wrapper failed during linking >>>>> with a pile of "no such instruction: umlal ..." errors. Gating on >>>>> __arm__ prevents that initial defintion and fixes the errors. >>>>> >>>> >>>> HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS should not be defined on x86/i686 >>>> in first place. >>> >>> Using i686 GCC has these in config.log: >>> >>> #define HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS 1 >>> #define HAVE_COMPATIBLE_GCC_AARCH64_PLATFORM_AS 1 >>> #define HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS 1 >>> >>>> Problem must be in 'configure.ac' in the HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS >>>> check. Does this issue happen with "clang -flto"? >>> >> >> Hm, ok. Maybe the LTO/configure.ac problem manifests itself also with GCC >> in some system configurations. Does attached patch help? > > Yes, that seems to result in only > HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS being defined for i686, so it > successfully builds. Great! That is the same as I have for non-LTO GCC build: $ cat config.h|grep COMPATIBLE /* #undef HAVE_COMPATIBLE_CC_PPC_ALTIVEC */ /* #undef HAVE_COMPATIBLE_CC_PPC_ALTIVEC_WITH_CFLAGS */ /* #undef HAVE_COMPATIBLE_GCC_AARCH64_PLATFORM_AS */ #define HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS 1 /* #undef HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS */ /* #undef HAVE_COMPATIBLE_GCC_WIN64_PLATFORM_AS */ HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS check currently has 32-bit instruction to check that system is x86. Therefore check succeeds on x86_64 and on (some) i686 systems and that is expected. -Jussi > > Thanks. 
> > David > From jussi.kivilinna at iki.fi Fri Jan 22 18:41:42 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Fri, 22 Jan 2021 19:41:42 +0200 Subject: [PATCH 1/2] configure.ac: run assembler checks through linker for better LTO support Message-ID: <20210122174143.858566-1-jussi.kivilinna@iki.fi> * configure.ac (gcry_cv_gcc_arm_platform_as_ok) (gcry_cv_gcc_aarch64_platform_as_ok) (gcry_cv_gcc_inline_asm_ssse3, gcry_cv_gcc_inline_asm_pclmul) (gcry_cv_gcc_inline_asm_shaext, gcry_cv_gcc_inline_asm_sse41) (gcry_cv_gcc_inline_asm_avx, gcry_cv_gcc_inline_asm_avx2) (gcry_cv_gcc_inline_asm_bmi2, gcry_cv_gcc_as_const_division_ok) (gcry_cv_gcc_as_const_division_with_wadivide_ok) (gcry_cv_gcc_amd64_platform_as_ok, gcry_cv_gcc_win64_platform_as_ok) (gcry_cv_gcc_platform_as_ok_for_intel_syntax) (gcry_cv_gcc_inline_asm_neon, gcry_cv_gcc_inline_asm_aarch32_crypto) (gcry_cv_gcc_inline_asm_aarch64_neon) (gcry_cv_gcc_inline_asm_aarch64_crypto) (gcry_cv_gcc_inline_asm_ppc_altivec) (gcry_cv_gcc_inline_asm_ppc_arch_3_00) (gcry_cv_gcc_inline_asm_s390x, gcry_cv_gcc_inline_asm_s390x): Use AC_LINK_IFELSE check instead of AC_COMPILE_IFELSE. -- LTO may defer assembly checking to linker stage, thus we need to use AC_LINK_IFELSE instead of AC_COMPILE_IFELSE for these checks. Signed-off-by: Jussi Kivilinna --- configure.ac | 111 ++++++++++++++++++++++++++++++--------------------- 1 file changed, 65 insertions(+), 46 deletions(-) diff --git a/configure.ac b/configure.ac index 97abcf54..f7339a3e 100644 --- a/configure.ac +++ b/configure.ac @@ -1203,11 +1203,12 @@ AC_CACHE_CHECK([whether GCC assembler is compatible for ARM assembly implementat gcry_cv_gcc_arm_platform_as_ok="n/a" else gcry_cv_gcc_arm_platform_as_ok=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[__asm__( /* Test if assembler supports UAL syntax. */ ".syntax unified\n\t" ".arm\n\t" /* our assembly code is in ARM mode */ + ".text\n\t" /* Following causes error if assembler ignored '.syntax unified'. */ "asmfunc:\n\t" "add %r0, %r0, %r4, ror #12;\n\t" @@ -1215,7 +1216,7 @@ AC_CACHE_CHECK([whether GCC assembler is compatible for ARM assembly implementat /* Test if '.type' and '.size' are supported. */ ".size asmfunc,.-asmfunc;\n\t" ".type asmfunc,%function;\n\t" - );]])], + );]], [ asmfunc(); ] )], [gcry_cv_gcc_arm_platform_as_ok=yes]) fi]) if test "$gcry_cv_gcc_arm_platform_as_ok" = "yes" ; then @@ -1235,13 +1236,14 @@ AC_CACHE_CHECK([whether GCC assembler is compatible for ARMv8/Aarch64 assembly i gcry_cv_gcc_aarch64_platform_as_ok="n/a" else gcry_cv_gcc_aarch64_platform_as_ok=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[__asm__( + ".text\n\t" "asmfunc:\n\t" "eor x0, x0, x30, ror #12;\n\t" "add x0, x0, x30, asr #12;\n\t" "eor v0.16b, v0.16b, v31.16b;\n\t" - );]])], + );]], [ asmfunc(); ] )], [gcry_cv_gcc_aarch64_platform_as_ok=yes]) fi]) if test "$gcry_cv_gcc_aarch64_platform_as_ok" = "yes" ; then @@ -1287,6 +1289,7 @@ AC_CACHE_CHECK([whether GCC assembler supports for ELF directives], AC_LINK_IFELSE([AC_LANG_PROGRAM( [[__asm__( /* Test if ELF directives '.type' and '.size' are supported. 
*/ + ".text\n\t" "asmfunc:\n\t" ".size asmfunc,.-asmfunc;\n\t" ".type asmfunc,STT_FUNC;\n\t" @@ -1474,12 +1477,12 @@ AC_CACHE_CHECK([whether GCC inline assembler supports SSSE3 instructions], gcry_cv_gcc_inline_asm_ssse3="n/a" else gcry_cv_gcc_inline_asm_ssse3=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[static unsigned char be_mask[16] __attribute__ ((aligned (16))) = { 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 }; void a(void) { __asm__("pshufb %[mask], %%xmm2\n\t"::[mask]"m"(*be_mask):); - }]])], + }]], [ a(); ] )], [gcry_cv_gcc_inline_asm_ssse3=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_ssse3" = "yes" ; then @@ -1498,10 +1501,10 @@ AC_CACHE_CHECK([whether GCC inline assembler supports PCLMUL instructions], gcry_cv_gcc_inline_asm_pclmul="n/a" else gcry_cv_gcc_inline_asm_pclmul=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[void a(void) { __asm__("pclmulqdq \$0, %%xmm1, %%xmm3\n\t":::"cc"); - }]])], + }]], [ a(); ] )], [gcry_cv_gcc_inline_asm_pclmul=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_pclmul" = "yes" ; then @@ -1520,7 +1523,7 @@ AC_CACHE_CHECK([whether GCC inline assembler supports SHA Extensions instruction gcry_cv_gcc_inline_asm_shaext="n/a" else gcry_cv_gcc_inline_asm_shaext=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[void a(void) { __asm__("sha1rnds4 \$0, %%xmm1, %%xmm3\n\t":::"cc"); __asm__("sha1nexte %%xmm1, %%xmm3\n\t":::"cc"); @@ -1529,7 +1532,7 @@ AC_CACHE_CHECK([whether GCC inline assembler supports SHA Extensions instruction __asm__("sha256rnds2 %%xmm0, %%xmm1, %%xmm3\n\t":::"cc"); __asm__("sha256msg1 %%xmm1, %%xmm3\n\t":::"cc"); __asm__("sha256msg2 %%xmm1, %%xmm3\n\t":::"cc"); - }]])], + }]], [ a(); ] )], [gcry_cv_gcc_inline_asm_shaext=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_shaext" = "yes" ; then @@ -1548,11 +1551,11 @@ AC_CACHE_CHECK([whether GCC inline assembler supports SSE4.1 instructions], gcry_cv_gcc_inline_asm_sse41="n/a" else gcry_cv_gcc_inline_asm_sse41=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[void a(void) { int i; __asm__("pextrd \$2, %%xmm0, %[out]\n\t" : [out] "=m" (i)); - }]])], + }]], [ a(); ] )], [gcry_cv_gcc_inline_asm_sse41=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_sse41" = "yes" ; then @@ -1571,10 +1574,10 @@ AC_CACHE_CHECK([whether GCC inline assembler supports AVX instructions], gcry_cv_gcc_inline_asm_avx="n/a" else gcry_cv_gcc_inline_asm_avx=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[void a(void) { __asm__("xgetbv; vaesdeclast (%[mem]),%%xmm0,%%xmm7\n\t"::[mem]"r"(0):); - }]])], + }]], [ a(); ] )], [gcry_cv_gcc_inline_asm_avx=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_avx" = "yes" ; then @@ -1593,10 +1596,10 @@ AC_CACHE_CHECK([whether GCC inline assembler supports AVX2 instructions], gcry_cv_gcc_inline_asm_avx2="n/a" else gcry_cv_gcc_inline_asm_avx2=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[void a(void) { __asm__("xgetbv; vpbroadcastb %%xmm7,%%ymm1\n\t":::"cc"); - }]])], + }]], [ a(); ] )], [gcry_cv_gcc_inline_asm_avx2=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_avx2" = "yes" ; then @@ -1615,7 +1618,7 @@ AC_CACHE_CHECK([whether GCC inline assembler supports BMI2 instructions], gcry_cv_gcc_inline_asm_bmi2="n/a" else gcry_cv_gcc_inline_asm_bmi2=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[unsigned int a(unsigned int x, unsigned int y) { unsigned int tmp1, tmp2; asm ("rorxl %2, %1, %0" @@ -1625,7 +1628,7 
@@ AC_CACHE_CHECK([whether GCC inline assembler supports BMI2 instructions], : "=r" (tmp2) : "r0" (x), "rm" (y)); return tmp1 + tmp2; - }]])], + }]], [ a(1, 2); ] )], [gcry_cv_gcc_inline_asm_bmi2=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_bmi2" = "yes" ; then @@ -1642,8 +1645,9 @@ if test $amd64_as_feature_detection = yes; then AC_CACHE_CHECK([whether GCC assembler handles division correctly], [gcry_cv_gcc_as_const_division_ok], [gcry_cv_gcc_as_const_division_ok=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( - [[__asm__("xorl \$(123456789/12345678), %ebp;\n\t");]])], + AC_LINK_IFELSE([AC_LANG_PROGRAM( + [[__asm__(".text\n\tfn:\n\t xorl \$(123456789/12345678), %ebp;\n\t");]], + [fn();])], [gcry_cv_gcc_as_const_division_ok=yes])]) if test "$gcry_cv_gcc_as_const_division_ok" = "no" ; then # @@ -1654,8 +1658,9 @@ if test $amd64_as_feature_detection = yes; then AC_CACHE_CHECK([whether GCC assembler handles division correctly with "-Wa,--divide"], [gcry_cv_gcc_as_const_division_with_wadivide_ok], [gcry_cv_gcc_as_const_division_with_wadivide_ok=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( - [[__asm__("xorl \$(123456789/12345678), %ebp;\n\t");]])], + AC_LINK_IFELSE([AC_LANG_PROGRAM( + [[__asm__(".text\n\tfn:\n\t xorl \$(123456789/12345678), %ebp;\n\t");]], + [fn();])], [gcry_cv_gcc_as_const_division_with_wadivide_ok=yes])]) if test "$gcry_cv_gcc_as_const_division_with_wadivide_ok" = "no" ; then # '-Wa,--divide' did not work, restore old flags. @@ -1677,10 +1682,11 @@ if test $amd64_as_feature_detection = yes; then gcry_cv_gcc_amd64_platform_as_ok="n/a" else gcry_cv_gcc_amd64_platform_as_ok=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[__asm__( /* Test if '.type' and '.size' are supported. */ /* These work only on ELF targets. */ + ".text\n\t" "asmfunc:\n\t" ".size asmfunc,.-asmfunc;\n\t" ".type asmfunc, at function;\n\t" @@ -1689,7 +1695,7 @@ if test $amd64_as_feature_detection = yes; then * and "-Wa,--divide" workaround failed, this causes assembly * to be disable on this machine. */ "xorl \$(123456789/12345678), %ebp;\n\t" - );]])], + );]], [ asmfunc(); ])], [gcry_cv_gcc_amd64_platform_as_ok=yes]) fi]) if test "$gcry_cv_gcc_amd64_platform_as_ok" = "yes" ; then @@ -1702,12 +1708,13 @@ if test $amd64_as_feature_detection = yes; then AC_CACHE_CHECK([whether GCC assembler is compatible for WIN64 assembly implementations], [gcry_cv_gcc_win64_platform_as_ok], [gcry_cv_gcc_win64_platform_as_ok=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[__asm__( + ".text\n\t" ".globl asmfunc\n\t" "asmfunc:\n\t" "xorq \$(1234), %rbp;\n\t" - );]])], + );]], [ asmfunc(); ])], [gcry_cv_gcc_win64_platform_as_ok=yes])]) if test "$gcry_cv_gcc_win64_platform_as_ok" = "yes" ; then AC_DEFINE(HAVE_COMPATIBLE_GCC_WIN64_PLATFORM_AS,1, @@ -1728,9 +1735,11 @@ AC_CACHE_CHECK([whether GCC assembler is compatible for Intel syntax assembly im gcry_cv_gcc_platform_as_ok_for_intel_syntax="n/a" else gcry_cv_gcc_platform_as_ok_for_intel_syntax=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[__asm__( ".intel_syntax noprefix\n\t" + ".text\n\t" + "actest:\n\t" "pxor xmm1, xmm7;\n\t" /* Intel syntax implementation also use GAS macros, so check * for them here. 
*/ @@ -1747,7 +1756,8 @@ AC_CACHE_CHECK([whether GCC assembler is compatible for Intel syntax assembly im "SET_VAL_B ebp\n\t" "add VAL_A, VAL_B;\n\t" "add VAL_B, 0b10101;\n\t" - );]])], + ".att_syntax prefix\n\t" + );]], [ actest(); ])], [gcry_cv_gcc_platform_as_ok_for_intel_syntax=yes]) fi]) if test "$gcry_cv_gcc_platform_as_ok_for_intel_syntax" = "yes" ; then @@ -1800,17 +1810,19 @@ AC_CACHE_CHECK([whether GCC inline assembler supports NEON instructions], gcry_cv_gcc_inline_asm_neon="n/a" else gcry_cv_gcc_inline_asm_neon=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[__asm__( ".syntax unified\n\t" ".arm\n\t" ".fpu neon\n\t" + ".text\n\t" + "testfn:\n\t" "vld1.64 {%q0-%q1}, [%r0]!;\n\t" "vrev64.8 %q0, %q3;\n\t" "vadd.u64 %q0, %q1;\n\t" "vadd.s64 %d3, %d2, %d3;\n\t" ); - ]])], + ]], [ testfn(); ])], [gcry_cv_gcc_inline_asm_neon=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_neon" = "yes" ; then @@ -1829,13 +1841,15 @@ AC_CACHE_CHECK([whether GCC inline assembler supports AArch32 Crypto Extension i gcry_cv_gcc_inline_asm_aarch32_crypto="n/a" else gcry_cv_gcc_inline_asm_aarch32_crypto=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[__asm__( ".syntax unified\n\t" ".arch armv8-a\n\t" ".arm\n\t" ".fpu crypto-neon-fp-armv8\n\t" + ".text\n\t" + "testfn:\n\t" "sha1h.32 q0, q0;\n\t" "sha1c.32 q0, q0, q0;\n\t" "sha1p.32 q0, q0, q0;\n\t" @@ -1855,7 +1869,7 @@ AC_CACHE_CHECK([whether GCC inline assembler supports AArch32 Crypto Extension i "vmull.p64 q0, d0, d0;\n\t" ); - ]])], + ]], [ testfn(); ])], [gcry_cv_gcc_inline_asm_aarch32_crypto=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_aarch32_crypto" = "yes" ; then @@ -1874,14 +1888,16 @@ AC_CACHE_CHECK([whether GCC inline assembler supports AArch64 NEON instructions] gcry_cv_gcc_inline_asm_aarch64_neon="n/a" else gcry_cv_gcc_inline_asm_aarch64_neon=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[__asm__( ".cpu generic+simd\n\t" + ".text\n\t" + "testfn:\n\t" "mov w0, \#42;\n\t" "dup v0.8b, w0;\n\t" "ld4 {v0.8b,v1.8b,v2.8b,v3.8b},[x0],\#32;\n\t" ); - ]])], + ]], [ testfn(); ])], [gcry_cv_gcc_inline_asm_aarch64_neon=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_aarch64_neon" = "yes" ; then @@ -1900,10 +1916,11 @@ AC_CACHE_CHECK([whether GCC inline assembler supports AArch64 Crypto Extension i gcry_cv_gcc_inline_asm_aarch64_crypto="n/a" else gcry_cv_gcc_inline_asm_aarch64_crypto=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[__asm__( ".cpu generic+simd+crypto\n\t" - + ".text\n\t" + "testfn:\n\t" "mov w0, \#42;\n\t" "dup v0.8b, w0;\n\t" "ld4 {v0.8b,v1.8b,v2.8b,v3.8b},[x0],\#32;\n\t" @@ -1928,7 +1945,7 @@ AC_CACHE_CHECK([whether GCC inline assembler supports AArch64 Crypto Extension i "pmull v0.1q, v0.1d, v31.1d;\n\t" "pmull2 v0.1q, v0.2d, v31.2d;\n\t" ); - ]])], + ]], [ testfn(); ])], [gcry_cv_gcc_inline_asm_aarch64_crypto=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_aarch64_crypto" = "yes" ; then @@ -2010,8 +2027,9 @@ AC_CACHE_CHECK([whether GCC inline assembler supports PowerPC AltiVec/VSX/crypto gcry_cv_gcc_inline_asm_ppc_altivec="n/a" else gcry_cv_gcc_inline_asm_ppc_altivec=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[__asm__(".globl testfn;\n" + ".text\n\t" "testfn:\n" "stvx %v31,%r12,%r0;\n" "lvx %v20,%r12,%r0;\n" @@ -2022,7 +2040,7 @@ AC_CACHE_CHECK([whether GCC inline assembler supports PowerPC AltiVec/VSX/crypto "vshasigmad %v0, %v1, 0, 15;\n" "vpmsumd %v11, %v11, %v11;\n" ); - ]])], + ]], [ testfn(); ] )], 
[gcry_cv_gcc_inline_asm_ppc_altivec=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_ppc_altivec" = "yes" ; then @@ -2041,12 +2059,13 @@ AC_CACHE_CHECK([whether GCC inline assembler supports PowerISA 3.00 instructions gcry_cv_gcc_inline_asm_ppc_arch_3_00="n/a" else gcry_cv_gcc_inline_asm_ppc_arch_3_00=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( - [[__asm__(".globl testfn;\n" + AC_LINK_IFELSE([AC_LANG_PROGRAM( + [[__asm__(".text\n\t" + ".globl testfn;\n" "testfn:\n" "stxvb16x %r1,%v12,%v30;\n" ); - ]])], + ]], [ testfn(); ])], [gcry_cv_gcc_inline_asm_ppc_arch_3_00=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_ppc_arch_3_00" = "yes" ; then @@ -2065,7 +2084,7 @@ AC_CACHE_CHECK([whether GCC inline assembler supports zSeries instructions], gcry_cv_gcc_inline_asm_s390x="n/a" else gcry_cv_gcc_inline_asm_s390x=no - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[typedef unsigned int u128_t __attribute__ ((mode (TI))); unsigned int testfunc(unsigned int x, void *y, unsigned int z) { @@ -2106,7 +2125,7 @@ AC_CACHE_CHECK([whether GCC inline assembler supports zSeries instructions], : "memory", "r14"); return (unsigned int)r1 ^ reg0; } - ]])], + ]] , [ testfunc(0, 0, 0); ])], [gcry_cv_gcc_inline_asm_s390x=yes]) fi]) if test "$gcry_cv_gcc_inline_asm_s390x" = "yes" ; then @@ -2126,7 +2145,7 @@ AC_CACHE_CHECK([whether GCC inline assembler supports zSeries vector instruction else gcry_cv_gcc_inline_asm_s390x_vx=no if test "$gcry_cv_gcc_inline_asm_s390x" = "yes" ; then - AC_COMPILE_IFELSE([AC_LANG_SOURCE( + AC_LINK_IFELSE([AC_LANG_PROGRAM( [[void testfunc(void) { asm volatile (".machine \"z13+vx\"\n\t" @@ -2136,7 +2155,7 @@ AC_CACHE_CHECK([whether GCC inline assembler supports zSeries vector instruction : : "memory"); } - ]])], + ]], [ testfunc(); ])], [gcry_cv_gcc_inline_asm_s390x_vx=yes]) fi fi]) -- 2.27.0 From jussi.kivilinna at iki.fi Fri Jan 22 18:41:43 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Fri, 22 Jan 2021 19:41:43 +0200 Subject: [PATCH 2/2] sha512/sha256: remove assembler macros from AMD64 implementations In-Reply-To: <20210122174143.858566-1-jussi.kivilinna@iki.fi> References: <20210122174143.858566-1-jussi.kivilinna@iki.fi> Message-ID: <20210122174143.858566-2-jussi.kivilinna@iki.fi> * configure.ac (gcry_cv_gcc_platform_as_ok_for_intel_syntax): Remove assembler macro check from Intel syntax assembly support check. * cipher/sha256-avx-amd64.S: Replace assembler macros with C preprocessor counterparts. * cipher/sha256-avx2-bmi2-amd64.S: Ditto. * cipher/sha256-ssse3-amd64.S: Ditto. * cipher/sha512-avx-amd64.S: Ditto. * cipher/sha512-avx2-bmi2-amd64.S: Ditto. * cipher/sha512-ssse3-amd64.S: Ditto. -- Removing GNU assembler macros allows building these implementations with clang. 
GnuPG-bug-id: 5255 Signed-off-by: Jussi Kivilinna --- cipher/sha256-avx-amd64.S | 516 +++++++++++++++---------------- cipher/sha256-avx2-bmi2-amd64.S | 421 +++++++++++-------------- cipher/sha256-ssse3-amd64.S | 529 +++++++++++++++----------------- cipher/sha512-avx-amd64.S | 456 ++++++++++++++------------- cipher/sha512-avx2-bmi2-amd64.S | 498 +++++++++++++----------------- cipher/sha512-ssse3-amd64.S | 455 ++++++++++++++------------- configure.ac | 20 +- 7 files changed, 1387 insertions(+), 1508 deletions(-) diff --git a/cipher/sha256-avx-amd64.S b/cipher/sha256-avx-amd64.S index 77143ff0..ec945f84 100644 --- a/cipher/sha256-avx-amd64.S +++ b/cipher/sha256-avx-amd64.S @@ -65,67 +65,64 @@ #define VMOVDQ vmovdqu /* assume buffers not aligned */ -.macro ROR p1 p2 - /* shld is faster than ror on Intel Sandybridge */ - shld \p1, \p1, (32 - \p2) -.endm +#define ROR(p1, p2) \ + /* shld is faster than ror on Intel Sandybridge */ \ + shld p1, p1, (32 - p2); /*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; Define Macros*/ /* addm [mem], reg * Add reg to mem using reg-mem add and store */ -.macro addm p1 p2 - add \p2, \p1 - mov \p1, \p2 -.endm +#define addm(p1, p2) \ + add p2, p1; \ + mov p1, p2; /*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;*/ /* COPY_XMM_AND_BSWAP xmm, [mem], byte_flip_mask * Load xmm with mem and byte swap each dword */ -.macro COPY_XMM_AND_BSWAP p1 p2 p3 - VMOVDQ \p1, \p2 - vpshufb \p1, \p1, \p3 -.endm +#define COPY_XMM_AND_BSWAP(p1, p2, p3) \ + VMOVDQ p1, p2; \ + vpshufb p1, p1, p3; /*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;*/ -X0 = xmm4 -X1 = xmm5 -X2 = xmm6 -X3 = xmm7 +#define X0 xmm4 +#define X1 xmm5 +#define X2 xmm6 +#define X3 xmm7 -XTMP0 = xmm0 -XTMP1 = xmm1 -XTMP2 = xmm2 -XTMP3 = xmm3 -XTMP4 = xmm8 -XFER = xmm9 +#define XTMP0 xmm0 +#define XTMP1 xmm1 +#define XTMP2 xmm2 +#define XTMP3 xmm3 +#define XTMP4 xmm8 +#define XFER xmm9 -SHUF_00BA = xmm10 /* shuffle xBxA -> 00BA */ -SHUF_DC00 = xmm11 /* shuffle xDxC -> DC00 */ -BYTE_FLIP_MASK = xmm12 +#define SHUF_00BA xmm10 /* shuffle xBxA -> 00BA */ +#define SHUF_DC00 xmm11 /* shuffle xDxC -> DC00 */ +#define BYTE_FLIP_MASK xmm12 -NUM_BLKS = rdx /* 3rd arg */ -CTX = rsi /* 2nd arg */ -INP = rdi /* 1st arg */ +#define NUM_BLKS rdx /* 3rd arg */ +#define CTX rsi /* 2nd arg */ +#define INP rdi /* 1st arg */ -SRND = rdi /* clobbers INP */ -c = ecx -d = r8d -e = edx +#define SRND rdi /* clobbers INP */ +#define c ecx +#define d r8d +#define e edx -TBL = rbp -a = eax -b = ebx +#define TBL rbp +#define a eax +#define b ebx -f = r9d -g = r10d -h = r11d +#define f r9d +#define g r10d +#define h r11d -y0 = r13d -y1 = r14d -y2 = r15d +#define y0 r13d +#define y1 r14d +#define y2 r15d @@ -142,220 +139,197 @@ y2 = r15d #define _XMM_SAVE (_XFER + _XFER_SIZE + _ALIGN_SIZE) #define STACK_SIZE (_XMM_SAVE + _XMM_SAVE_SIZE) -/* rotate_Xs - * Rotate values of symbols X0...X3 */ -.macro rotate_Xs -X_ = X0 -X0 = X1 -X1 = X2 -X2 = X3 -X3 = X_ -.endm - -/* ROTATE_ARGS - * Rotate values of symbols a...h */ -.macro ROTATE_ARGS -TMP_ = h -h = g -g = f -f = e -e = d -d = c -c = b -b = a -a = TMP_ -.endm - -.macro FOUR_ROUNDS_AND_SCHED - /* compute s0 four at a time and s1 two at a time - * compute W[-16] + W[-7] 4 at a time */ - mov y0, e /* y0 = e */ - ROR y0, (25-11) /* y0 = e >> (25-11) */ - mov y1, a /* y1 = a */ - vpalignr XTMP0, X3, X2, 4 /* XTMP0 = W[-7] */ - ROR y1, (22-13) /* y1 = a >> (22-13) */ - xor y0, e /* y0 = e ^ (e >> (25-11)) */ - mov y2, f /* y2 = f */ - ROR y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */ - xor y1, a /* y1 = a ^ (a >> (22-13) */ - xor y2, g /* y2 = 
f^g */ - vpaddd XTMP0, XTMP0, X0 /* XTMP0 = W[-7] + W[-16] */ - xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */ - and y2, e /* y2 = (f^g)&e */ - ROR y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */ - /* compute s0 */ - vpalignr XTMP1, X1, X0, 4 /* XTMP1 = W[-15] */ - xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */ - ROR y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */ - xor y2, g /* y2 = CH = ((f^g)&e)^g */ - ROR y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */ - add y2, y0 /* y2 = S1 + CH */ - add y2, [rsp + _XFER + 0*4] /* y2 = k + w + S1 + CH */ - mov y0, a /* y0 = a */ - add h, y2 /* h = h + S1 + CH + k + w */ - mov y2, a /* y2 = a */ - vpslld XTMP2, XTMP1, (32-7) - or y0, c /* y0 = a|c */ - add d, h /* d = d + h + S1 + CH + k + w */ - and y2, c /* y2 = a&c */ - vpsrld XTMP3, XTMP1, 7 - and y0, b /* y0 = (a|c)&b */ - add h, y1 /* h = h + S1 + CH + k + w + S0 */ - vpor XTMP3, XTMP3, XTMP2 /* XTMP1 = W[-15] ror 7 */ - or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */ + +#define FOUR_ROUNDS_AND_SCHED_0(X0, X1, X2, X3, a, b, c, d, e, f, g, h) \ + /* compute s0 four at a time and s1 two at a time */; \ + /* compute W[-16] + W[-7] 4 at a time */; \ + mov y0, e /* y0 = e */; \ + ROR( y0, (25-11)) /* y0 = e >> (25-11) */; \ + mov y1, a /* y1 = a */; \ + vpalignr XTMP0, X3, X2, 4 /* XTMP0 = W[-7] */; \ + ROR( y1, (22-13)) /* y1 = a >> (22-13) */; \ + xor y0, e /* y0 = e ^ (e >> (25-11)) */; \ + mov y2, f /* y2 = f */; \ + ROR( y0, (11-6)) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */; \ + xor y1, a /* y1 = a ^ (a >> (22-13) */; \ + xor y2, g /* y2 = f^g */; \ + vpaddd XTMP0, XTMP0, X0 /* XTMP0 = W[-7] + W[-16] */; \ + xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */; \ + and y2, e /* y2 = (f^g)&e */; \ + ROR( y1, (13-2)) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */; \ + /* compute s0 */; \ + vpalignr XTMP1, X1, X0, 4 /* XTMP1 = W[-15] */; \ + xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */; \ + ROR( y0, 6) /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */; \ + xor y2, g /* y2 = CH = ((f^g)&e)^g */; \ + ROR( y1, 2) /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */; \ + add y2, y0 /* y2 = S1 + CH */; \ + add y2, [rsp + _XFER + 0*4] /* y2 = k + w + S1 + CH */; \ + mov y0, a /* y0 = a */; \ + add h, y2 /* h = h + S1 + CH + k + w */; \ + mov y2, a /* y2 = a */; \ + vpslld XTMP2, XTMP1, (32-7); \ + or y0, c /* y0 = a|c */; \ + add d, h /* d = d + h + S1 + CH + k + w */; \ + and y2, c /* y2 = a&c */; \ + vpsrld XTMP3, XTMP1, 7; \ + and y0, b /* y0 = (a|c)&b */; \ + add h, y1 /* h = h + S1 + CH + k + w + S0 */; \ + vpor XTMP3, XTMP3, XTMP2 /* XTMP1 = W[-15] ror 7 */; \ + or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */; \ lea h, [h + y0] /* h = h + S1 + CH + k + w + S0 + MAJ */ -ROTATE_ARGS - mov y0, e /* y0 = e */ - mov y1, a /* y1 = a */ - ROR y0, (25-11) /* y0 = e >> (25-11) */ - xor y0, e /* y0 = e ^ (e >> (25-11)) */ - mov y2, f /* y2 = f */ - ROR y1, (22-13) /* y1 = a >> (22-13) */ - vpslld XTMP2, XTMP1, (32-18) - xor y1, a /* y1 = a ^ (a >> (22-13) */ - ROR y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */ - xor y2, g /* y2 = f^g */ - vpsrld XTMP4, XTMP1, 18 - ROR y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */ - xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */ - and y2, e /* y2 = (f^g)&e */ - ROR y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */ - vpxor XTMP4, XTMP4, XTMP3 - xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */ - xor y2, g /* y2 = CH = ((f^g)&e)^g */ - vpsrld XTMP1, XTMP1, 3 /* XTMP4 = W[-15] >> 3 */ - add y2, y0 /* y2 = S1 + CH */ - add y2, [rsp + _XFER + 1*4] /* y2 = k + 
w + S1 + CH */ - ROR y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */ - vpxor XTMP1, XTMP1, XTMP2 /* XTMP1 = W[-15] ror 7 ^ W[-15] ror 18 */ - mov y0, a /* y0 = a */ - add h, y2 /* h = h + S1 + CH + k + w */ - mov y2, a /* y2 = a */ - vpxor XTMP1, XTMP1, XTMP4 /* XTMP1 = s0 */ - or y0, c /* y0 = a|c */ - add d, h /* d = d + h + S1 + CH + k + w */ - and y2, c /* y2 = a&c */ - /* compute low s1 */ - vpshufd XTMP2, X3, 0b11111010 /* XTMP2 = W[-2] {BBAA} */ - and y0, b /* y0 = (a|c)&b */ - add h, y1 /* h = h + S1 + CH + k + w + S0 */ - vpaddd XTMP0, XTMP0, XTMP1 /* XTMP0 = W[-16] + W[-7] + s0 */ - or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */ +#define FOUR_ROUNDS_AND_SCHED_1(X0, X1, X2, X3, a, b, c, d, e, f, g, h) \ + mov y0, e /* y0 = e */; \ + mov y1, a /* y1 = a */; \ + ROR( y0, (25-11)) /* y0 = e >> (25-11) */; \ + xor y0, e /* y0 = e ^ (e >> (25-11)) */; \ + mov y2, f /* y2 = f */; \ + ROR( y1, (22-13)) /* y1 = a >> (22-13) */; \ + vpslld XTMP2, XTMP1, (32-18); \ + xor y1, a /* y1 = a ^ (a >> (22-13) */; \ + ROR( y0, (11-6)) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */; \ + xor y2, g /* y2 = f^g */; \ + vpsrld XTMP4, XTMP1, 18; \ + ROR( y1, (13-2)) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */; \ + xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */; \ + and y2, e /* y2 = (f^g)&e */; \ + ROR( y0, 6) /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */; \ + vpxor XTMP4, XTMP4, XTMP3; \ + xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */; \ + xor y2, g /* y2 = CH = ((f^g)&e)^g */; \ + vpsrld XTMP1, XTMP1, 3 /* XTMP4 = W[-15] >> 3 */; \ + add y2, y0 /* y2 = S1 + CH */; \ + add y2, [rsp + _XFER + 1*4] /* y2 = k + w + S1 + CH */; \ + ROR( y1, 2) /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */; \ + vpxor XTMP1, XTMP1, XTMP2 /* XTMP1 = W[-15] ror 7 ^ W[-15] ror 18 */; \ + mov y0, a /* y0 = a */; \ + add h, y2 /* h = h + S1 + CH + k + w */; \ + mov y2, a /* y2 = a */; \ + vpxor XTMP1, XTMP1, XTMP4 /* XTMP1 = s0 */; \ + or y0, c /* y0 = a|c */; \ + add d, h /* d = d + h + S1 + CH + k + w */; \ + and y2, c /* y2 = a&c */; \ + /* compute low s1 */; \ + vpshufd XTMP2, X3, 0b11111010 /* XTMP2 = W[-2] {BBAA} */; \ + and y0, b /* y0 = (a|c)&b */; \ + add h, y1 /* h = h + S1 + CH + k + w + S0 */; \ + vpaddd XTMP0, XTMP0, XTMP1 /* XTMP0 = W[-16] + W[-7] + s0 */; \ + or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */; \ lea h, [h + y0] /* h = h + S1 + CH + k + w + S0 + MAJ */ -ROTATE_ARGS - mov y0, e /* y0 = e */ - mov y1, a /* y1 = a */ - ROR y0, (25-11) /* y0 = e >> (25-11) */ - xor y0, e /* y0 = e ^ (e >> (25-11)) */ - ROR y1, (22-13) /* y1 = a >> (22-13) */ - mov y2, f /* y2 = f */ - xor y1, a /* y1 = a ^ (a >> (22-13) */ - ROR y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */ - vpsrlq XTMP3, XTMP2, 17 /* XTMP2 = W[-2] ror 17 {xBxA} */ - xor y2, g /* y2 = f^g */ - vpsrlq XTMP4, XTMP2, 19 /* XTMP3 = W[-2] ror 19 {xBxA} */ - xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */ - and y2, e /* y2 = (f^g)&e */ - vpsrld XTMP2, XTMP2, 10 /* XTMP4 = W[-2] >> 10 {BBAA} */ - ROR y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */ - xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */ - xor y2, g /* y2 = CH = ((f^g)&e)^g */ - ROR y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */ - vpxor XTMP2, XTMP2, XTMP3 - add y2, y0 /* y2 = S1 + CH */ - ROR y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */ - add y2, [rsp + _XFER + 2*4] /* y2 = k + w + S1 + CH */ - vpxor XTMP4, XTMP4, XTMP2 /* XTMP4 = s1 {xBxA} */ - mov y0, a /* y0 = a */ - add h, y2 /* h = h + S1 + CH + k + w */ - mov y2, a /* y2 = a */ - vpshufb XTMP4, XTMP4, SHUF_00BA /* XTMP4 = s1 
{00BA} */ - or y0, c /* y0 = a|c */ - add d, h /* d = d + h + S1 + CH + k + w */ - and y2, c /* y2 = a&c */ - vpaddd XTMP0, XTMP0, XTMP4 /* XTMP0 = {..., ..., W[1], W[0]} */ - and y0, b /* y0 = (a|c)&b */ - add h, y1 /* h = h + S1 + CH + k + w + S0 */ - /* compute high s1 */ - vpshufd XTMP2, XTMP0, 0b01010000 /* XTMP2 = W[-2] {DDCC} */ - or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */ +#define FOUR_ROUNDS_AND_SCHED_2(X0, X1, X2, X3, a, b, c, d, e, f, g, h) \ + mov y0, e /* y0 = e */; \ + mov y1, a /* y1 = a */; \ + ROR( y0, (25-11)) /* y0 = e >> (25-11) */; \ + xor y0, e /* y0 = e ^ (e >> (25-11)) */; \ + ROR( y1, (22-13)) /* y1 = a >> (22-13) */; \ + mov y2, f /* y2 = f */; \ + xor y1, a /* y1 = a ^ (a >> (22-13) */; \ + ROR( y0, (11-6)) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */; \ + vpsrlq XTMP3, XTMP2, 17 /* XTMP2 = W[-2] ror 17 {xBxA} */; \ + xor y2, g /* y2 = f^g */; \ + vpsrlq XTMP4, XTMP2, 19 /* XTMP3 = W[-2] ror 19 {xBxA} */; \ + xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */; \ + and y2, e /* y2 = (f^g)&e */; \ + vpsrld XTMP2, XTMP2, 10 /* XTMP4 = W[-2] >> 10 {BBAA} */; \ + ROR( y1, (13-2)) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */; \ + xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */; \ + xor y2, g /* y2 = CH = ((f^g)&e)^g */; \ + ROR( y0, 6) /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */; \ + vpxor XTMP2, XTMP2, XTMP3; \ + add y2, y0 /* y2 = S1 + CH */; \ + ROR( y1, 2) /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */; \ + add y2, [rsp + _XFER + 2*4] /* y2 = k + w + S1 + CH */; \ + vpxor XTMP4, XTMP4, XTMP2 /* XTMP4 = s1 {xBxA} */; \ + mov y0, a /* y0 = a */; \ + add h, y2 /* h = h + S1 + CH + k + w */; \ + mov y2, a /* y2 = a */; \ + vpshufb XTMP4, XTMP4, SHUF_00BA /* XTMP4 = s1 {00BA} */; \ + or y0, c /* y0 = a|c */; \ + add d, h /* d = d + h + S1 + CH + k + w */; \ + and y2, c /* y2 = a&c */; \ + vpaddd XTMP0, XTMP0, XTMP4 /* XTMP0 = {..., ..., W[1], W[0]} */; \ + and y0, b /* y0 = (a|c)&b */; \ + add h, y1 /* h = h + S1 + CH + k + w + S0 */; \ + /* compute high s1 */; \ + vpshufd XTMP2, XTMP0, 0b01010000 /* XTMP2 = W[-2] {DDCC} */; \ + or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */; \ lea h, [h + y0] /* h = h + S1 + CH + k + w + S0 + MAJ */ -ROTATE_ARGS - mov y0, e /* y0 = e */ - ROR y0, (25-11) /* y0 = e >> (25-11) */ - mov y1, a /* y1 = a */ - ROR y1, (22-13) /* y1 = a >> (22-13) */ - xor y0, e /* y0 = e ^ (e >> (25-11)) */ - mov y2, f /* y2 = f */ - ROR y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */ - vpsrlq XTMP3, XTMP2, 17 /* XTMP2 = W[-2] ror 17 {xDxC} */ - xor y1, a /* y1 = a ^ (a >> (22-13) */ - xor y2, g /* y2 = f^g */ - vpsrlq X0, XTMP2, 19 /* XTMP3 = W[-2] ror 19 {xDxC} */ - xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */ - and y2, e /* y2 = (f^g)&e */ - ROR y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */ - vpsrld XTMP2, XTMP2, 10 /* X0 = W[-2] >> 10 {DDCC} */ - xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */ - ROR y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */ - xor y2, g /* y2 = CH = ((f^g)&e)^g */ - vpxor XTMP2, XTMP2, XTMP3 - ROR y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */ - add y2, y0 /* y2 = S1 + CH */ - add y2, [rsp + _XFER + 3*4] /* y2 = k + w + S1 + CH */ - vpxor X0, X0, XTMP2 /* X0 = s1 {xDxC} */ - mov y0, a /* y0 = a */ - add h, y2 /* h = h + S1 + CH + k + w */ - mov y2, a /* y2 = a */ - vpshufb X0, X0, SHUF_DC00 /* X0 = s1 {DC00} */ - or y0, c /* y0 = a|c */ - add d, h /* d = d + h + S1 + CH + k + w */ - and y2, c /* y2 = a&c */ - vpaddd X0, X0, XTMP0 /* X0 = {W[3], W[2], W[1], W[0]} */ - and y0, b /* y0 = (a|c)&b */ - add h, y1 /* 
h = h + S1 + CH + k + w + S0 */ - or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */ +#define FOUR_ROUNDS_AND_SCHED_3(X0, X1, X2, X3, a, b, c, d, e, f, g, h) \ + mov y0, e /* y0 = e */; \ + ROR( y0, (25-11)) /* y0 = e >> (25-11) */; \ + mov y1, a /* y1 = a */; \ + ROR( y1, (22-13)) /* y1 = a >> (22-13) */; \ + xor y0, e /* y0 = e ^ (e >> (25-11)) */; \ + mov y2, f /* y2 = f */; \ + ROR( y0, (11-6)) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */; \ + vpsrlq XTMP3, XTMP2, 17 /* XTMP2 = W[-2] ror 17 {xDxC} */; \ + xor y1, a /* y1 = a ^ (a >> (22-13) */; \ + xor y2, g /* y2 = f^g */; \ + vpsrlq X0, XTMP2, 19 /* XTMP3 = W[-2] ror 19 {xDxC} */; \ + xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */; \ + and y2, e /* y2 = (f^g)&e */; \ + ROR( y1, (13-2)) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */; \ + vpsrld XTMP2, XTMP2, 10 /* X0 = W[-2] >> 10 {DDCC} */; \ + xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */; \ + ROR( y0, 6) /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */; \ + xor y2, g /* y2 = CH = ((f^g)&e)^g */; \ + vpxor XTMP2, XTMP2, XTMP3; \ + ROR( y1, 2) /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */; \ + add y2, y0 /* y2 = S1 + CH */; \ + add y2, [rsp + _XFER + 3*4] /* y2 = k + w + S1 + CH */; \ + vpxor X0, X0, XTMP2 /* X0 = s1 {xDxC} */; \ + mov y0, a /* y0 = a */; \ + add h, y2 /* h = h + S1 + CH + k + w */; \ + mov y2, a /* y2 = a */; \ + vpshufb X0, X0, SHUF_DC00 /* X0 = s1 {DC00} */; \ + or y0, c /* y0 = a|c */; \ + add d, h /* d = d + h + S1 + CH + k + w */; \ + and y2, c /* y2 = a&c */; \ + vpaddd X0, X0, XTMP0 /* X0 = {W[3], W[2], W[1], W[0]} */; \ + and y0, b /* y0 = (a|c)&b */; \ + add h, y1 /* h = h + S1 + CH + k + w + S0 */; \ + or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */; \ lea h, [h + y0] /* h = h + S1 + CH + k + w + S0 + MAJ */ -ROTATE_ARGS -rotate_Xs -.endm +#define FOUR_ROUNDS_AND_SCHED(X0, X1, X2, X3, a, b, c, d, e, f, g, h) \ + FOUR_ROUNDS_AND_SCHED_0(X0, X1, X2, X3, a, b, c, d, e, f, g, h); \ + FOUR_ROUNDS_AND_SCHED_1(X0, X1, X2, X3, h, a, b, c, d, e, f, g); \ + FOUR_ROUNDS_AND_SCHED_2(X0, X1, X2, X3, g, h, a, b, c, d, e, f); \ + FOUR_ROUNDS_AND_SCHED_3(X0, X1, X2, X3, f, g, h, a, b, c, d, e); /* input is [rsp + _XFER + %1 * 4] */ -.macro DO_ROUND i1 - mov y0, e /* y0 = e */ - ROR y0, (25-11) /* y0 = e >> (25-11) */ - mov y1, a /* y1 = a */ - xor y0, e /* y0 = e ^ (e >> (25-11)) */ - ROR y1, (22-13) /* y1 = a >> (22-13) */ - mov y2, f /* y2 = f */ - xor y1, a /* y1 = a ^ (a >> (22-13) */ - ROR y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */ - xor y2, g /* y2 = f^g */ - xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */ - ROR y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */ - and y2, e /* y2 = (f^g)&e */ - xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */ - ROR y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */ - xor y2, g /* y2 = CH = ((f^g)&e)^g */ - add y2, y0 /* y2 = S1 + CH */ - ROR y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */ - add y2, [rsp + _XFER + \i1 * 4] /* y2 = k + w + S1 + CH */ - mov y0, a /* y0 = a */ - add h, y2 /* h = h + S1 + CH + k + w */ - mov y2, a /* y2 = a */ - or y0, c /* y0 = a|c */ - add d, h /* d = d + h + S1 + CH + k + w */ - and y2, c /* y2 = a&c */ - and y0, b /* y0 = (a|c)&b */ - add h, y1 /* h = h + S1 + CH + k + w + S0 */ - or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */ +#define DO_ROUND(i1, a, b, c, d, e, f, g, h) \ + mov y0, e /* y0 = e */; \ + ROR( y0, (25-11)) /* y0 = e >> (25-11) */; \ + mov y1, a /* y1 = a */; \ + xor y0, e /* y0 = e ^ (e >> (25-11)) */; \ + ROR( y1, (22-13)) /* y1 = a >> (22-13) */; \ + mov y2, f /* y2 = f */; \ + xor 
y1, a /* y1 = a ^ (a >> (22-13) */; \ + ROR( y0, (11-6)) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */; \ + xor y2, g /* y2 = f^g */; \ + xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */; \ + ROR( y1, (13-2)) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */; \ + and y2, e /* y2 = (f^g)&e */; \ + xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */; \ + ROR( y0, 6) /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */; \ + xor y2, g /* y2 = CH = ((f^g)&e)^g */; \ + add y2, y0 /* y2 = S1 + CH */; \ + ROR( y1, 2) /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */; \ + add y2, [rsp + _XFER + i1 * 4] /* y2 = k + w + S1 + CH */; \ + mov y0, a /* y0 = a */; \ + add h, y2 /* h = h + S1 + CH + k + w */; \ + mov y2, a /* y2 = a */; \ + or y0, c /* y0 = a|c */; \ + add d, h /* d = d + h + S1 + CH + k + w */; \ + and y2, c /* y2 = a&c */; \ + and y0, b /* y0 = (a|c)&b */; \ + add h, y1 /* h = h + S1 + CH + k + w + S0 */; \ + or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */; \ lea h, [h + y0] /* h = h + S1 + CH + k + w + S0 + MAJ */ - ROTATE_ARGS -.endm /* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; @@ -410,10 +384,10 @@ _gcry_sha256_transform_amd64_avx: lea TBL, [.LK256 ADD_RIP] /* byte swap first 16 dwords */ - COPY_XMM_AND_BSWAP X0, [INP + 0*16], BYTE_FLIP_MASK - COPY_XMM_AND_BSWAP X1, [INP + 1*16], BYTE_FLIP_MASK - COPY_XMM_AND_BSWAP X2, [INP + 2*16], BYTE_FLIP_MASK - COPY_XMM_AND_BSWAP X3, [INP + 3*16], BYTE_FLIP_MASK + COPY_XMM_AND_BSWAP(X0, [INP + 0*16], BYTE_FLIP_MASK) + COPY_XMM_AND_BSWAP(X1, [INP + 1*16], BYTE_FLIP_MASK) + COPY_XMM_AND_BSWAP(X2, [INP + 2*16], BYTE_FLIP_MASK) + COPY_XMM_AND_BSWAP(X3, [INP + 3*16], BYTE_FLIP_MASK) mov [rsp + _INP], INP @@ -423,20 +397,20 @@ _gcry_sha256_transform_amd64_avx: .Loop1: vpaddd XFER, X0, [TBL + 0*16] vmovdqa [rsp + _XFER], XFER - FOUR_ROUNDS_AND_SCHED + FOUR_ROUNDS_AND_SCHED(X0, X1, X2, X3, a, b, c, d, e, f, g, h) - vpaddd XFER, X0, [TBL + 1*16] + vpaddd XFER, X1, [TBL + 1*16] vmovdqa [rsp + _XFER], XFER - FOUR_ROUNDS_AND_SCHED + FOUR_ROUNDS_AND_SCHED(X1, X2, X3, X0, e, f, g, h, a, b, c, d) - vpaddd XFER, X0, [TBL + 2*16] + vpaddd XFER, X2, [TBL + 2*16] vmovdqa [rsp + _XFER], XFER - FOUR_ROUNDS_AND_SCHED + FOUR_ROUNDS_AND_SCHED(X2, X3, X0, X1, a, b, c, d, e, f, g, h) - vpaddd XFER, X0, [TBL + 3*16] + vpaddd XFER, X3, [TBL + 3*16] vmovdqa [rsp + _XFER], XFER add TBL, 4*16 - FOUR_ROUNDS_AND_SCHED + FOUR_ROUNDS_AND_SCHED(X3, X0, X1, X2, e, f, g, h, a, b, c, d) sub SRND, 1 jne .Loop1 @@ -445,17 +419,17 @@ _gcry_sha256_transform_amd64_avx: .Loop2: vpaddd X0, X0, [TBL + 0*16] vmovdqa [rsp + _XFER], X0 - DO_ROUND 0 - DO_ROUND 1 - DO_ROUND 2 - DO_ROUND 3 + DO_ROUND(0, a, b, c, d, e, f, g, h) + DO_ROUND(1, h, a, b, c, d, e, f, g) + DO_ROUND(2, g, h, a, b, c, d, e, f) + DO_ROUND(3, f, g, h, a, b, c, d, e) vpaddd X1, X1, [TBL + 1*16] vmovdqa [rsp + _XFER], X1 add TBL, 2*16 - DO_ROUND 0 - DO_ROUND 1 - DO_ROUND 2 - DO_ROUND 3 + DO_ROUND(0, e, f, g, h, a, b, c, d) + DO_ROUND(1, d, e, f, g, h, a, b, c) + DO_ROUND(2, c, d, e, f, g, h, a, b) + DO_ROUND(3, b, c, d, e, f, g, h, a) vmovdqa X0, X2 vmovdqa X1, X3 @@ -463,14 +437,14 @@ _gcry_sha256_transform_amd64_avx: sub SRND, 1 jne .Loop2 - addm [4*0 + CTX],a - addm [4*1 + CTX],b - addm [4*2 + CTX],c - addm [4*3 + CTX],d - addm [4*4 + CTX],e - addm [4*5 + CTX],f - addm [4*6 + CTX],g - addm [4*7 + CTX],h + addm([4*0 + CTX],a) + addm([4*1 + CTX],b) + addm([4*2 + CTX],c) + addm([4*3 + CTX],d) + addm([4*4 + CTX],e) + addm([4*5 + CTX],f) + addm([4*6 + CTX],g) + addm([4*7 + CTX],h) mov INP, [rsp + _INP] add INP, 64 diff --git 
a/cipher/sha256-avx2-bmi2-amd64.S b/cipher/sha256-avx2-bmi2-amd64.S index 52be1a07..faefba17 100644 --- a/cipher/sha256-avx2-bmi2-amd64.S +++ b/cipher/sha256-avx2-bmi2-amd64.S @@ -70,226 +70,171 @@ /* addm [mem], reg */ /* Add reg to mem using reg-mem add and store */ -.macro addm p1 p2 - add \p2, \p1 - mov \p1, \p2 -.endm +#define addm(p1, p2) \ + add p2, p1; \ + mov p1, p2; /* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ -X0 = ymm4 -X1 = ymm5 -X2 = ymm6 -X3 = ymm7 +#define X0 ymm4 +#define X1 ymm5 +#define X2 ymm6 +#define X3 ymm7 /* XMM versions of above */ -XWORD0 = xmm4 -XWORD1 = xmm5 -XWORD2 = xmm6 -XWORD3 = xmm7 - -XTMP0 = ymm0 -XTMP1 = ymm1 -XTMP2 = ymm2 -XTMP3 = ymm3 -XTMP4 = ymm8 -XFER = ymm9 -XTMP5 = ymm11 - -SHUF_00BA = ymm10 /* shuffle xBxA -> 00BA */ -SHUF_DC00 = ymm12 /* shuffle xDxC -> DC00 */ -BYTE_FLIP_MASK = ymm13 - -X_BYTE_FLIP_MASK = xmm13 /* XMM version of BYTE_FLIP_MASK */ - -NUM_BLKS = rdx /* 3rd arg */ -CTX = rsi /* 2nd arg */ -INP = rdi /* 1st arg */ -c = ecx -d = r8d -e = edx /* clobbers NUM_BLKS */ -y3 = edi /* clobbers INP */ - -TBL = rbp -SRND = CTX /* SRND is same register as CTX */ - -a = eax -b = ebx -f = r9d -g = r10d -h = r11d -old_h = r11d - -T1 = r12d -y0 = r13d -y1 = r14d -y2 = r15d - - -_XFER_SIZE = 2*64*4 /* 2 blocks, 64 rounds, 4 bytes/round */ -_XMM_SAVE_SIZE = 0 -_INP_END_SIZE = 8 -_INP_SIZE = 8 -_CTX_SIZE = 8 -_RSP_SIZE = 8 - -_XFER = 0 -_XMM_SAVE = _XFER + _XFER_SIZE -_INP_END = _XMM_SAVE + _XMM_SAVE_SIZE -_INP = _INP_END + _INP_END_SIZE -_CTX = _INP + _INP_SIZE -_RSP = _CTX + _CTX_SIZE -STACK_SIZE = _RSP + _RSP_SIZE - -/* rotate_Xs */ -/* Rotate values of symbols X0...X3 */ -.macro rotate_Xs -X_ = X0 -X0 = X1 -X1 = X2 -X2 = X3 -X3 = X_ -.endm - -/* ROTATE_ARGS */ -/* Rotate values of symbols a...h */ -.macro ROTATE_ARGS -old_h = h -TMP_ = h -h = g -g = f -f = e -e = d -d = c -c = b -b = a -a = TMP_ -.endm - -.macro ONE_ROUND_PART1 XFER - /* h += Sum1 (e) + Ch (e, f, g) + (k[t] + w[0]); - * d += h; - * h += Sum0 (a) + Maj (a, b, c); - * - * Ch(x, y, z) => ((x & y) + (~x & z)) - * Maj(x, y, z) => ((x & y) + (z & (x ^ y))) - */ - - mov y3, e - add h, [\XFER] - and y3, f - rorx y0, e, 25 - rorx y1, e, 11 +#define XWORD0 xmm4 +#define XWORD1 xmm5 +#define XWORD2 xmm6 +#define XWORD3 xmm7 + +#define XTMP0 ymm0 +#define XTMP1 ymm1 +#define XTMP2 ymm2 +#define XTMP3 ymm3 +#define XTMP4 ymm8 +#define XFER ymm9 +#define XTMP5 ymm11 + +#define SHUF_00BA ymm10 /* shuffle xBxA -> 00BA */ +#define SHUF_DC00 ymm12 /* shuffle xDxC -> DC00 */ +#define BYTE_FLIP_MASK ymm13 + +#define X_BYTE_FLIP_MASK xmm13 /* XMM version of BYTE_FLIP_MASK */ + +#define NUM_BLKS rdx /* 3rd arg */ +#define CTX rsi /* 2nd arg */ +#define INP rdi /* 1st arg */ +#define c ecx +#define d r8d +#define e edx /* clobbers NUM_BLKS */ +#define y3 edi /* clobbers INP */ + +#define TBL rbp +#define SRND CTX /* SRND is same register as CTX */ + +#define a eax +#define b ebx +#define f r9d +#define g r10d +#define h r11d +#define old_h r11d + +#define T1 r12d +#define y0 r13d +#define y1 r14d +#define y2 r15d + + +#define _XFER_SIZE 2*64*4 /* 2 blocks, 64 rounds, 4 bytes/round */ +#define _XMM_SAVE_SIZE 0 +#define _INP_END_SIZE 8 +#define _INP_SIZE 8 +#define _CTX_SIZE 8 +#define _RSP_SIZE 8 + +#define _XFER 0 +#define _XMM_SAVE _XFER + _XFER_SIZE +#define _INP_END _XMM_SAVE + _XMM_SAVE_SIZE +#define _INP _INP_END + _INP_END_SIZE +#define _CTX _INP + _INP_SIZE +#define _RSP _CTX + _CTX_SIZE +#define STACK_SIZE _RSP + _RSP_SIZE + +#define ONE_ROUND_PART1(XFERIN, a, b, c, d, e, f, g, h) \ + /* h += Sum1 
(e) + Ch (e, f, g) + (k[t] + w[0]); */ \ + /* d += h; */ \ + /* h += Sum0 (a) + Maj (a, b, c); */ \ + \ + /* Ch(x, y, z) => ((x & y) + (~x & z)) */ \ + /* Maj(x, y, z) => ((x & y) + (z & (x ^ y))) */ \ + \ + mov y3, e; \ + add h, [XFERIN]; \ + and y3, f; \ + rorx y0, e, 25; \ + rorx y1, e, 11; \ + lea h, [h + y3]; \ + andn y3, e, g; \ + rorx T1, a, 13; \ + xor y0, y1; \ lea h, [h + y3] - andn y3, e, g - rorx T1, a, 13 - xor y0, y1 - lea h, [h + y3] -.endm -.macro ONE_ROUND_PART2 - rorx y2, a, 22 - rorx y1, e, 6 - mov y3, a - xor T1, y2 - xor y0, y1 - xor y3, b - lea h, [h + y0] - mov y0, a - rorx y2, a, 2 - add d, h - and y3, c - xor T1, y2 - lea h, [h + y3] - lea h, [h + T1] - and y0, b - lea h, [h + y0] -.endm - -.macro ONE_ROUND XFER - ONE_ROUND_PART1 \XFER - ONE_ROUND_PART2 -.endm - -.macro FOUR_ROUNDS_AND_SCHED XFER, XFEROUT -/* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 0 ;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - - vpalignr XTMP0, X3, X2, 4 /* XTMP0 = W[-7] */ - vpaddd XTMP0, XTMP0, X0 /* XTMP0 = W[-7] + W[-16]; y1 = (e >> 6); S1 */ - vpalignr XTMP1, X1, X0, 4 /* XTMP1 = W[-15] */ - vpsrld XTMP2, XTMP1, 7 - vpslld XTMP3, XTMP1, (32-7) - vpor XTMP3, XTMP3, XTMP2 /* XTMP3 = W[-15] ror 7 */ - vpsrld XTMP2, XTMP1,18 - - ONE_ROUND 0*4+\XFER - ROTATE_ARGS - -/* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 1 ;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - - vpsrld XTMP4, XTMP1, 3 /* XTMP4 = W[-15] >> 3 */ - vpslld XTMP1, XTMP1, (32-18) - vpxor XTMP3, XTMP3, XTMP1 - vpxor XTMP3, XTMP3, XTMP2 /* XTMP3 = W[-15] ror 7 ^ W[-15] ror 18 */ - vpxor XTMP1, XTMP3, XTMP4 /* XTMP1 = s0 */ - vpshufd XTMP2, X3, 0b11111010 /* XTMP2 = W[-2] {BBAA} */ - vpaddd XTMP0, XTMP0, XTMP1 /* XTMP0 = W[-16] + W[-7] + s0 */ - vpsrld XTMP4, XTMP2, 10 /* XTMP4 = W[-2] >> 10 {BBAA} */ - - ONE_ROUND 1*4+\XFER - ROTATE_ARGS - -/* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 2 ;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - vpsrlq XTMP3, XTMP2, 19 /* XTMP3 = W[-2] ror 19 {xBxA} */ - vpsrlq XTMP2, XTMP2, 17 /* XTMP2 = W[-2] ror 17 {xBxA} */ - vpxor XTMP2, XTMP2, XTMP3 - vpxor XTMP4, XTMP4, XTMP2 /* XTMP4 = s1 {xBxA} */ - vpshufb XTMP4, XTMP4, SHUF_00BA /* XTMP4 = s1 {00BA} */ - vpaddd XTMP0, XTMP0, XTMP4 /* XTMP0 = {..., ..., W[1], W[0]} */ - vpshufd XTMP2, XTMP0, 0b1010000 /* XTMP2 = W[-2] {DDCC} */ - - ONE_ROUND 2*4+\XFER - ROTATE_ARGS - -/* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 3 ;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - - vpsrld XTMP5, XTMP2, 10 /* XTMP5 = W[-2] >> 10 {DDCC} */ - vpsrlq XTMP3, XTMP2, 19 /* XTMP3 = W[-2] ror 19 {xDxC} */ - vpsrlq XTMP2, XTMP2, 17 /* XTMP2 = W[-2] ror 17 {xDxC} */ - vpxor XTMP2, XTMP2, XTMP3 - vpxor XTMP5, XTMP5, XTMP2 /* XTMP5 = s1 {xDxC} */ - vpshufb XTMP5, XTMP5, SHUF_DC00 /* XTMP5 = s1 {DC00} */ - vpaddd X0, XTMP5, XTMP0 /* X0 = {W[3], W[2], W[1], W[0]} */ - vpaddd XFER, X0, [TBL + \XFEROUT] - - ONE_ROUND_PART1 3*4+\XFER - vmovdqa [rsp + _XFER + \XFEROUT], XFER - ONE_ROUND_PART2 - ROTATE_ARGS - rotate_Xs -.endm - -.macro DO_4ROUNDS XFER -/* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 0 ;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - - ONE_ROUND 0*4+\XFER - ROTATE_ARGS - -/* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 1 ;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - - ONE_ROUND 1*4+\XFER - ROTATE_ARGS - -/* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 2 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - - ONE_ROUND 2*4+\XFER - ROTATE_ARGS - -/* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 3 ;;;;;;;;;;;;;;;;;;;;;;;;;;; */ +#define ONE_ROUND_PART2(a, b, c, d, e, f, g, h) \ + rorx y2, a, 22; \ + rorx y1, e, 6; \ + mov y3, a; \ + xor T1, y2; \ + xor y0, y1; \ + xor y3, b; \ + lea 
h, [h + y0]; \ + mov y0, a; \ + rorx y2, a, 2; \ + add d, h; \ + and y3, c; \ + xor T1, y2; \ + lea h, [h + y3]; \ + lea h, [h + T1]; \ + and y0, b; \ + lea h, [h + y0] - ONE_ROUND 3*4+\XFER - ROTATE_ARGS -.endm +#define ONE_ROUND(XFER, a, b, c, d, e, f, g, h) \ + ONE_ROUND_PART1(XFER, a, b, c, d, e, f, g, h); \ + ONE_ROUND_PART2(a, b, c, d, e, f, g, h) + +#define FOUR_ROUNDS_AND_SCHED(XFERIN, XFEROUT, X0, X1, X2, X3, a, b, c, d, e, f, g, h) \ + /* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 0 ;;;;;;;;;;;;;;;;;;;;;;;;;;;; */; \ + vpalignr XTMP0, X3, X2, 4 /* XTMP0 = W[-7] */; \ + vpaddd XTMP0, XTMP0, X0 /* XTMP0 = W[-7] + W[-16]; y1 = (e >> 6); S1 */; \ + vpalignr XTMP1, X1, X0, 4 /* XTMP1 = W[-15] */; \ + vpsrld XTMP2, XTMP1, 7; \ + vpslld XTMP3, XTMP1, (32-7); \ + vpor XTMP3, XTMP3, XTMP2 /* XTMP3 = W[-15] ror 7 */; \ + vpsrld XTMP2, XTMP1,18; \ + \ + ONE_ROUND(0*4+XFERIN, a, b, c, d, e, f, g, h); \ + \ + /* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 1 ;;;;;;;;;;;;;;;;;;;;;;;;;;;; */; \ + vpsrld XTMP4, XTMP1, 3 /* XTMP4 = W[-15] >> 3 */; \ + vpslld XTMP1, XTMP1, (32-18); \ + vpxor XTMP3, XTMP3, XTMP1; \ + vpxor XTMP3, XTMP3, XTMP2 /* XTMP3 = W[-15] ror 7 ^ W[-15] ror 18 */; \ + vpxor XTMP1, XTMP3, XTMP4 /* XTMP1 = s0 */; \ + vpshufd XTMP2, X3, 0b11111010 /* XTMP2 = W[-2] {BBAA} */; \ + vpaddd XTMP0, XTMP0, XTMP1 /* XTMP0 = W[-16] + W[-7] + s0 */; \ + vpsrld XTMP4, XTMP2, 10 /* XTMP4 = W[-2] >> 10 {BBAA} */; \ + \ + ONE_ROUND(1*4+XFERIN, h, a, b, c, d, e, f, g); \ + \ + /* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 2 ;;;;;;;;;;;;;;;;;;;;;;;;;;;; */; \ + vpsrlq XTMP3, XTMP2, 19 /* XTMP3 = W[-2] ror 19 {xBxA} */; \ + vpsrlq XTMP2, XTMP2, 17 /* XTMP2 = W[-2] ror 17 {xBxA} */; \ + vpxor XTMP2, XTMP2, XTMP3; \ + vpxor XTMP4, XTMP4, XTMP2 /* XTMP4 = s1 {xBxA} */; \ + vpshufb XTMP4, XTMP4, SHUF_00BA /* XTMP4 = s1 {00BA} */; \ + vpaddd XTMP0, XTMP0, XTMP4 /* XTMP0 = {..., ..., W[1], W[0]} */; \ + vpshufd XTMP2, XTMP0, 0b1010000 /* XTMP2 = W[-2] {DDCC} */; \ + \ + ONE_ROUND(2*4+XFERIN, g, h, a, b, c, d, e, f); \ + \ + /* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 3 ;;;;;;;;;;;;;;;;;;;;;;;;;;;; */; \ + vpsrld XTMP5, XTMP2, 10 /* XTMP5 = W[-2] >> 10 {DDCC} */; \ + vpsrlq XTMP3, XTMP2, 19 /* XTMP3 = W[-2] ror 19 {xDxC} */; \ + vpsrlq XTMP2, XTMP2, 17 /* XTMP2 = W[-2] ror 17 {xDxC} */; \ + vpxor XTMP2, XTMP2, XTMP3; \ + vpxor XTMP5, XTMP5, XTMP2 /* XTMP5 = s1 {xDxC} */; \ + vpshufb XTMP5, XTMP5, SHUF_DC00 /* XTMP5 = s1 {DC00} */; \ + vpaddd X0, XTMP5, XTMP0 /* X0 = {W[3], W[2], W[1], W[0]} */; \ + vpaddd XFER, X0, [TBL + XFEROUT]; \ + \ + ONE_ROUND_PART1(3*4+XFERIN, f, g, h, a, b, c, d, e); \ + vmovdqa [rsp + _XFER + XFEROUT], XFER; \ + ONE_ROUND_PART2(f, g, h, a, b, c, d, e); + +#define DO_4ROUNDS(XFERIN, a, b, c, d, e, f, g, h) \ + ONE_ROUND(0*4+XFERIN, a, b, c, d, e, f, g, h); \ + ONE_ROUND(1*4+XFERIN, h, a, b, c, d, e, f, g); \ + ONE_ROUND(2*4+XFERIN, g, h, a, b, c, d, e, f); \ + ONE_ROUND(3*4+XFERIN, f, g, h, a, b, c, d, e) /* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; @@ -391,32 +336,32 @@ _gcry_sha256_transform_amd64_avx2: .align 16 .Loop1: - FOUR_ROUNDS_AND_SCHED rsp + _XFER + SRND + 0*32, SRND + 4*32 - FOUR_ROUNDS_AND_SCHED rsp + _XFER + SRND + 1*32, SRND + 5*32 - FOUR_ROUNDS_AND_SCHED rsp + _XFER + SRND + 2*32, SRND + 6*32 - FOUR_ROUNDS_AND_SCHED rsp + _XFER + SRND + 3*32, SRND + 7*32 + FOUR_ROUNDS_AND_SCHED(rsp + _XFER + SRND + 0*32, SRND + 4*32, X0, X1, X2, X3, a, b, c, d, e, f, g, h) + FOUR_ROUNDS_AND_SCHED(rsp + _XFER + SRND + 1*32, SRND + 5*32, X1, X2, X3, 
X0, e, f, g, h, a, b, c, d) + FOUR_ROUNDS_AND_SCHED(rsp + _XFER + SRND + 2*32, SRND + 6*32, X2, X3, X0, X1, a, b, c, d, e, f, g, h) + FOUR_ROUNDS_AND_SCHED(rsp + _XFER + SRND + 3*32, SRND + 7*32, X3, X0, X1, X2, e, f, g, h, a, b, c, d) add SRND, 4*32 cmp SRND, 3 * 4*32 jb .Loop1 /* ; Do last 16 rounds with no scheduling */ - DO_4ROUNDS rsp + _XFER + (3*4*32 + 0*32) - DO_4ROUNDS rsp + _XFER + (3*4*32 + 1*32) - DO_4ROUNDS rsp + _XFER + (3*4*32 + 2*32) - DO_4ROUNDS rsp + _XFER + (3*4*32 + 3*32) + DO_4ROUNDS(rsp + _XFER + (3*4*32 + 0*32), a, b, c, d, e, f, g, h) + DO_4ROUNDS(rsp + _XFER + (3*4*32 + 1*32), e, f, g, h, a, b, c, d) + DO_4ROUNDS(rsp + _XFER + (3*4*32 + 2*32), a, b, c, d, e, f, g, h) + DO_4ROUNDS(rsp + _XFER + (3*4*32 + 3*32), e, f, g, h, a, b, c, d) mov CTX, [rsp + _CTX] mov INP, [rsp + _INP] - addm [4*0 + CTX],a - addm [4*1 + CTX],b - addm [4*2 + CTX],c - addm [4*3 + CTX],d - addm [4*4 + CTX],e - addm [4*5 + CTX],f - addm [4*6 + CTX],g - addm [4*7 + CTX],h + addm([4*0 + CTX],a) + addm([4*1 + CTX],b) + addm([4*2 + CTX],c) + addm([4*3 + CTX],d) + addm([4*4 + CTX],e) + addm([4*5 + CTX],f) + addm([4*6 + CTX],g) + addm([4*7 + CTX],h) cmp INP, [rsp + _INP_END] ja .Ldone_hash @@ -425,8 +370,8 @@ _gcry_sha256_transform_amd64_avx2: xor SRND, SRND .align 16 .Loop3: - DO_4ROUNDS rsp + _XFER + SRND + 0*32 + 16 - DO_4ROUNDS rsp + _XFER + SRND + 1*32 + 16 + DO_4ROUNDS(rsp + _XFER + SRND + 0*32 + 16, a, b, c, d, e, f, g, h) + DO_4ROUNDS(rsp + _XFER + SRND + 1*32 + 16, e, f, g, h, a, b, c, d) add SRND, 2*32 cmp SRND, 4 * 4*32 jb .Loop3 @@ -435,14 +380,14 @@ _gcry_sha256_transform_amd64_avx2: mov INP, [rsp + _INP] add INP, 64 - addm [4*0 + CTX],a - addm [4*1 + CTX],b - addm [4*2 + CTX],c - addm [4*3 + CTX],d - addm [4*4 + CTX],e - addm [4*5 + CTX],f - addm [4*6 + CTX],g - addm [4*7 + CTX],h + addm([4*0 + CTX],a) + addm([4*1 + CTX],b) + addm([4*2 + CTX],c) + addm([4*3 + CTX],d) + addm([4*4 + CTX],e) + addm([4*5 + CTX],f) + addm([4*6 + CTX],g) + addm([4*7 + CTX],h) cmp INP, [rsp + _INP_END] jb .Loop0 diff --git a/cipher/sha256-ssse3-amd64.S b/cipher/sha256-ssse3-amd64.S index 0fb94c1b..098b0eb6 100644 --- a/cipher/sha256-ssse3-amd64.S +++ b/cipher/sha256-ssse3-amd64.S @@ -70,58 +70,56 @@ /* addm [mem], reg * Add reg to mem using reg-mem add and store */ -.macro addm p1 p2 - add \p2, \p1 - mov \p1, \p2 -.endm +#define addm(p1, p2) \ + add p2, p1; \ + mov p1, p2; /*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;*/ /* COPY_XMM_AND_BSWAP xmm, [mem], byte_flip_mask * Load xmm with mem and byte swap each dword */ -.macro COPY_XMM_AND_BSWAP p1 p2 p3 - MOVDQ \p1, \p2 - pshufb \p1, \p3 -.endm +#define COPY_XMM_AND_BSWAP(p1, p2, p3) \ + MOVDQ p1, p2; \ + pshufb p1, p3; /*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;*/ -X0 = xmm4 -X1 = xmm5 -X2 = xmm6 -X3 = xmm7 +#define X0 xmm4 +#define X1 xmm5 +#define X2 xmm6 +#define X3 xmm7 -XTMP0 = xmm0 -XTMP1 = xmm1 -XTMP2 = xmm2 -XTMP3 = xmm3 -XTMP4 = xmm8 -XFER = xmm9 +#define XTMP0 xmm0 +#define XTMP1 xmm1 +#define XTMP2 xmm2 +#define XTMP3 xmm3 +#define XTMP4 xmm8 +#define XFER xmm9 -SHUF_00BA = xmm10 /* shuffle xBxA -> 00BA */ -SHUF_DC00 = xmm11 /* shuffle xDxC -> DC00 */ -BYTE_FLIP_MASK = xmm12 +#define SHUF_00BA xmm10 /* shuffle xBxA -> 00BA */ +#define SHUF_DC00 xmm11 /* shuffle xDxC -> DC00 */ +#define BYTE_FLIP_MASK xmm12 -NUM_BLKS = rdx /* 3rd arg */ -CTX = rsi /* 2nd arg */ -INP = rdi /* 1st arg */ +#define NUM_BLKS rdx /* 3rd arg */ +#define CTX rsi /* 2nd arg */ +#define INP rdi /* 1st arg */ -SRND = rdi /* clobbers INP */ -c = ecx -d = r8d -e = edx +#define SRND rdi /* clobbers INP 
*/ +#define c ecx +#define d r8d +#define e edx -TBL = rbp -a = eax -b = ebx +#define TBL rbp +#define a eax +#define b ebx -f = r9d -g = r10d -h = r11d +#define f r9d +#define g r10d +#define h r11d -y0 = r13d -y1 = r14d -y2 = r15d +#define y0 r13d +#define y1 r14d +#define y2 r15d @@ -138,230 +136,207 @@ y2 = r15d #define _XMM_SAVE (_XFER + _XFER_SIZE + _ALIGN_SIZE) #define STACK_SIZE (_XMM_SAVE + _XMM_SAVE_SIZE) -/* rotate_Xs - * Rotate values of symbols X0...X3 */ -.macro rotate_Xs -X_ = X0 -X0 = X1 -X1 = X2 -X2 = X3 -X3 = X_ -.endm - -/* ROTATE_ARGS - * Rotate values of symbols a...h */ -.macro ROTATE_ARGS -TMP_ = h -h = g -g = f -f = e -e = d -d = c -c = b -b = a -a = TMP_ -.endm - -.macro FOUR_ROUNDS_AND_SCHED - /* compute s0 four at a time and s1 two at a time - * compute W[-16] + W[-7] 4 at a time */ - movdqa XTMP0, X3 - mov y0, e /* y0 = e */ - ror y0, (25-11) /* y0 = e >> (25-11) */ - mov y1, a /* y1 = a */ - palignr XTMP0, X2, 4 /* XTMP0 = W[-7] */ - ror y1, (22-13) /* y1 = a >> (22-13) */ - xor y0, e /* y0 = e ^ (e >> (25-11)) */ - mov y2, f /* y2 = f */ - ror y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */ - movdqa XTMP1, X1 - xor y1, a /* y1 = a ^ (a >> (22-13) */ - xor y2, g /* y2 = f^g */ - paddd XTMP0, X0 /* XTMP0 = W[-7] + W[-16] */ - xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */ - and y2, e /* y2 = (f^g)&e */ - ror y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */ - /* compute s0 */ - palignr XTMP1, X0, 4 /* XTMP1 = W[-15] */ - xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */ - ror y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */ - xor y2, g /* y2 = CH = ((f^g)&e)^g */ - movdqa XTMP2, XTMP1 /* XTMP2 = W[-15] */ - ror y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */ - add y2, y0 /* y2 = S1 + CH */ - add y2, [rsp + _XFER + 0*4] /* y2 = k + w + S1 + CH */ - movdqa XTMP3, XTMP1 /* XTMP3 = W[-15] */ - mov y0, a /* y0 = a */ - add h, y2 /* h = h + S1 + CH + k + w */ - mov y2, a /* y2 = a */ - pslld XTMP1, (32-7) - or y0, c /* y0 = a|c */ - add d, h /* d = d + h + S1 + CH + k + w */ - and y2, c /* y2 = a&c */ - psrld XTMP2, 7 - and y0, b /* y0 = (a|c)&b */ - add h, y1 /* h = h + S1 + CH + k + w + S0 */ - por XTMP1, XTMP2 /* XTMP1 = W[-15] ror 7 */ - or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */ + +#define FOUR_ROUNDS_AND_SCHED_0(X0, X1, X2, X3, a, b, c, d, e, f, g, h) \ + /* compute s0 four at a time and s1 two at a time */; \ + /* compute W[-16] + W[-7] 4 at a time */; \ + movdqa XTMP0, X3; \ + mov y0, e /* y0 = e */; \ + ror y0, (25-11) /* y0 = e >> (25-11) */; \ + mov y1, a /* y1 = a */; \ + palignr XTMP0, X2, 4 /* XTMP0 = W[-7] */; \ + ror y1, (22-13) /* y1 = a >> (22-13) */; \ + xor y0, e /* y0 = e ^ (e >> (25-11)) */; \ + mov y2, f /* y2 = f */; \ + ror y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */; \ + movdqa XTMP1, X1; \ + xor y1, a /* y1 = a ^ (a >> (22-13) */; \ + xor y2, g /* y2 = f^g */; \ + paddd XTMP0, X0 /* XTMP0 = W[-7] + W[-16] */; \ + xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */; \ + and y2, e /* y2 = (f^g)&e */; \ + ror y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */; \ + /* compute s0 */; \ + palignr XTMP1, X0, 4 /* XTMP1 = W[-15] */; \ + xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */; \ + ror y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */; \ + xor y2, g /* y2 = CH = ((f^g)&e)^g */; \ + movdqa XTMP2, XTMP1 /* XTMP2 = W[-15] */; \ + ror y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */; \ + add y2, y0 /* y2 = S1 + CH */; \ + add y2, [rsp + _XFER + 0*4] /* y2 = k + w + S1 + CH */; \ + movdqa XTMP3, XTMP1 /* XTMP3 
= W[-15] */; \ + mov y0, a /* y0 = a */; \ + add h, y2 /* h = h + S1 + CH + k + w */; \ + mov y2, a /* y2 = a */; \ + pslld XTMP1, (32-7); \ + or y0, c /* y0 = a|c */; \ + add d, h /* d = d + h + S1 + CH + k + w */; \ + and y2, c /* y2 = a&c */; \ + psrld XTMP2, 7; \ + and y0, b /* y0 = (a|c)&b */; \ + add h, y1 /* h = h + S1 + CH + k + w + S0 */; \ + por XTMP1, XTMP2 /* XTMP1 = W[-15] ror 7 */; \ + or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */; \ lea h, [h + y0] /* h = h + S1 + CH + k + w + S0 + MAJ */ -ROTATE_ARGS - movdqa XTMP2, XTMP3 /* XTMP2 = W[-15] */ - mov y0, e /* y0 = e */ - mov y1, a /* y1 = a */ - movdqa XTMP4, XTMP3 /* XTMP4 = W[-15] */ - ror y0, (25-11) /* y0 = e >> (25-11) */ - xor y0, e /* y0 = e ^ (e >> (25-11)) */ - mov y2, f /* y2 = f */ - ror y1, (22-13) /* y1 = a >> (22-13) */ - pslld XTMP3, (32-18) - xor y1, a /* y1 = a ^ (a >> (22-13) */ - ror y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */ - xor y2, g /* y2 = f^g */ - psrld XTMP2, 18 - ror y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */ - xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */ - and y2, e /* y2 = (f^g)&e */ - ror y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */ - pxor XTMP1, XTMP3 - xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */ - xor y2, g /* y2 = CH = ((f^g)&e)^g */ - psrld XTMP4, 3 /* XTMP4 = W[-15] >> 3 */ - add y2, y0 /* y2 = S1 + CH */ - add y2, [rsp + _XFER + 1*4] /* y2 = k + w + S1 + CH */ - ror y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */ - pxor XTMP1, XTMP2 /* XTMP1 = W[-15] ror 7 ^ W[-15] ror 18 */ - mov y0, a /* y0 = a */ - add h, y2 /* h = h + S1 + CH + k + w */ - mov y2, a /* y2 = a */ - pxor XTMP1, XTMP4 /* XTMP1 = s0 */ - or y0, c /* y0 = a|c */ - add d, h /* d = d + h + S1 + CH + k + w */ - and y2, c /* y2 = a&c */ - /* compute low s1 */ - pshufd XTMP2, X3, 0b11111010 /* XTMP2 = W[-2] {BBAA} */ - and y0, b /* y0 = (a|c)&b */ - add h, y1 /* h = h + S1 + CH + k + w + S0 */ - paddd XTMP0, XTMP1 /* XTMP0 = W[-16] + W[-7] + s0 */ - or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */ +#define FOUR_ROUNDS_AND_SCHED_1(X0, X1, X2, X3, a, b, c, d, e, f, g, h) \ + movdqa XTMP2, XTMP3 /* XTMP2 = W[-15] */; \ + mov y0, e /* y0 = e */; \ + mov y1, a /* y1 = a */; \ + movdqa XTMP4, XTMP3 /* XTMP4 = W[-15] */; \ + ror y0, (25-11) /* y0 = e >> (25-11) */; \ + xor y0, e /* y0 = e ^ (e >> (25-11)) */; \ + mov y2, f /* y2 = f */; \ + ror y1, (22-13) /* y1 = a >> (22-13) */; \ + pslld XTMP3, (32-18); \ + xor y1, a /* y1 = a ^ (a >> (22-13) */; \ + ror y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */; \ + xor y2, g /* y2 = f^g */; \ + psrld XTMP2, 18; \ + ror y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */; \ + xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */; \ + and y2, e /* y2 = (f^g)&e */; \ + ror y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */; \ + pxor XTMP1, XTMP3; \ + xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */; \ + xor y2, g /* y2 = CH = ((f^g)&e)^g */; \ + psrld XTMP4, 3 /* XTMP4 = W[-15] >> 3 */; \ + add y2, y0 /* y2 = S1 + CH */; \ + add y2, [rsp + _XFER + 1*4] /* y2 = k + w + S1 + CH */; \ + ror y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */; \ + pxor XTMP1, XTMP2 /* XTMP1 = W[-15] ror 7 ^ W[-15] ror 18 */; \ + mov y0, a /* y0 = a */; \ + add h, y2 /* h = h + S1 + CH + k + w */; \ + mov y2, a /* y2 = a */; \ + pxor XTMP1, XTMP4 /* XTMP1 = s0 */; \ + or y0, c /* y0 = a|c */; \ + add d, h /* d = d + h + S1 + CH + k + w */; \ + and y2, c /* y2 = a&c */; \ + /* compute low s1 */; \ + pshufd XTMP2, X3, 0b11111010 /* XTMP2 = W[-2] {BBAA} */; \ + and y0, b /* y0 = 
(a|c)&b */; \ + add h, y1 /* h = h + S1 + CH + k + w + S0 */; \ + paddd XTMP0, XTMP1 /* XTMP0 = W[-16] + W[-7] + s0 */; \ + or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */; \ lea h, [h + y0] /* h = h + S1 + CH + k + w + S0 + MAJ */ -ROTATE_ARGS - movdqa XTMP3, XTMP2 /* XTMP3 = W[-2] {BBAA} */ - mov y0, e /* y0 = e */ - mov y1, a /* y1 = a */ - ror y0, (25-11) /* y0 = e >> (25-11) */ - movdqa XTMP4, XTMP2 /* XTMP4 = W[-2] {BBAA} */ - xor y0, e /* y0 = e ^ (e >> (25-11)) */ - ror y1, (22-13) /* y1 = a >> (22-13) */ - mov y2, f /* y2 = f */ - xor y1, a /* y1 = a ^ (a >> (22-13) */ - ror y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */ - psrlq XTMP2, 17 /* XTMP2 = W[-2] ror 17 {xBxA} */ - xor y2, g /* y2 = f^g */ - psrlq XTMP3, 19 /* XTMP3 = W[-2] ror 19 {xBxA} */ - xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */ - and y2, e /* y2 = (f^g)&e */ - psrld XTMP4, 10 /* XTMP4 = W[-2] >> 10 {BBAA} */ - ror y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */ - xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */ - xor y2, g /* y2 = CH = ((f^g)&e)^g */ - ror y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */ - pxor XTMP2, XTMP3 - add y2, y0 /* y2 = S1 + CH */ - ror y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */ - add y2, [rsp + _XFER + 2*4] /* y2 = k + w + S1 + CH */ - pxor XTMP4, XTMP2 /* XTMP4 = s1 {xBxA} */ - mov y0, a /* y0 = a */ - add h, y2 /* h = h + S1 + CH + k + w */ - mov y2, a /* y2 = a */ - pshufb XTMP4, SHUF_00BA /* XTMP4 = s1 {00BA} */ - or y0, c /* y0 = a|c */ - add d, h /* d = d + h + S1 + CH + k + w */ - and y2, c /* y2 = a&c */ - paddd XTMP0, XTMP4 /* XTMP0 = {..., ..., W[1], W[0]} */ - and y0, b /* y0 = (a|c)&b */ - add h, y1 /* h = h + S1 + CH + k + w + S0 */ - /* compute high s1 */ - pshufd XTMP2, XTMP0, 0b01010000 /* XTMP2 = W[-2] {DDCC} */ - or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */ +#define FOUR_ROUNDS_AND_SCHED_2(X0, X1, X2, X3, a, b, c, d, e, f, g, h) \ + movdqa XTMP3, XTMP2 /* XTMP3 = W[-2] {BBAA} */; \ + mov y0, e /* y0 = e */; \ + mov y1, a /* y1 = a */; \ + ror y0, (25-11) /* y0 = e >> (25-11) */; \ + movdqa XTMP4, XTMP2 /* XTMP4 = W[-2] {BBAA} */; \ + xor y0, e /* y0 = e ^ (e >> (25-11)) */; \ + ror y1, (22-13) /* y1 = a >> (22-13) */; \ + mov y2, f /* y2 = f */; \ + xor y1, a /* y1 = a ^ (a >> (22-13) */; \ + ror y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */; \ + psrlq XTMP2, 17 /* XTMP2 = W[-2] ror 17 {xBxA} */; \ + xor y2, g /* y2 = f^g */; \ + psrlq XTMP3, 19 /* XTMP3 = W[-2] ror 19 {xBxA} */; \ + xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */; \ + and y2, e /* y2 = (f^g)&e */; \ + psrld XTMP4, 10 /* XTMP4 = W[-2] >> 10 {BBAA} */; \ + ror y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */; \ + xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */; \ + xor y2, g /* y2 = CH = ((f^g)&e)^g */; \ + ror y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */; \ + pxor XTMP2, XTMP3; \ + add y2, y0 /* y2 = S1 + CH */; \ + ror y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */; \ + add y2, [rsp + _XFER + 2*4] /* y2 = k + w + S1 + CH */; \ + pxor XTMP4, XTMP2 /* XTMP4 = s1 {xBxA} */; \ + mov y0, a /* y0 = a */; \ + add h, y2 /* h = h + S1 + CH + k + w */; \ + mov y2, a /* y2 = a */; \ + pshufb XTMP4, SHUF_00BA /* XTMP4 = s1 {00BA} */; \ + or y0, c /* y0 = a|c */; \ + add d, h /* d = d + h + S1 + CH + k + w */; \ + and y2, c /* y2 = a&c */; \ + paddd XTMP0, XTMP4 /* XTMP0 = {..., ..., W[1], W[0]} */; \ + and y0, b /* y0 = (a|c)&b */; \ + add h, y1 /* h = h + S1 + CH + k + w + S0 */; \ + /* compute high s1 */; \ + pshufd XTMP2, XTMP0, 0b01010000 /* XTMP2 = W[-2] 
{DDCC} */; \ + or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */; \ lea h, [h + y0] /* h = h + S1 + CH + k + w + S0 + MAJ */ -ROTATE_ARGS - movdqa XTMP3, XTMP2 /* XTMP3 = W[-2] {DDCC} */ - mov y0, e /* y0 = e */ - ror y0, (25-11) /* y0 = e >> (25-11) */ - mov y1, a /* y1 = a */ - movdqa X0, XTMP2 /* X0 = W[-2] {DDCC} */ - ror y1, (22-13) /* y1 = a >> (22-13) */ - xor y0, e /* y0 = e ^ (e >> (25-11)) */ - mov y2, f /* y2 = f */ - ror y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */ - psrlq XTMP2, 17 /* XTMP2 = W[-2] ror 17 {xDxC} */ - xor y1, a /* y1 = a ^ (a >> (22-13) */ - xor y2, g /* y2 = f^g */ - psrlq XTMP3, 19 /* XTMP3 = W[-2] ror 19 {xDxC} */ - xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */ - and y2, e /* y2 = (f^g)&e */ - ror y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */ - psrld X0, 10 /* X0 = W[-2] >> 10 {DDCC} */ - xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */ - ror y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */ - xor y2, g /* y2 = CH = ((f^g)&e)^g */ - pxor XTMP2, XTMP3 - ror y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */ - add y2, y0 /* y2 = S1 + CH */ - add y2, [rsp + _XFER + 3*4] /* y2 = k + w + S1 + CH */ - pxor X0, XTMP2 /* X0 = s1 {xDxC} */ - mov y0, a /* y0 = a */ - add h, y2 /* h = h + S1 + CH + k + w */ - mov y2, a /* y2 = a */ - pshufb X0, SHUF_DC00 /* X0 = s1 {DC00} */ - or y0, c /* y0 = a|c */ - add d, h /* d = d + h + S1 + CH + k + w */ - and y2, c /* y2 = a&c */ - paddd X0, XTMP0 /* X0 = {W[3], W[2], W[1], W[0]} */ - and y0, b /* y0 = (a|c)&b */ - add h, y1 /* h = h + S1 + CH + k + w + S0 */ - or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */ +#define FOUR_ROUNDS_AND_SCHED_3(X0, X1, X2, X3, a, b, c, d, e, f, g, h) \ + movdqa XTMP3, XTMP2 /* XTMP3 = W[-2] {DDCC} */; \ + mov y0, e /* y0 = e */; \ + ror y0, (25-11) /* y0 = e >> (25-11) */; \ + mov y1, a /* y1 = a */; \ + movdqa X0, XTMP2 /* X0 = W[-2] {DDCC} */; \ + ror y1, (22-13) /* y1 = a >> (22-13) */; \ + xor y0, e /* y0 = e ^ (e >> (25-11)) */; \ + mov y2, f /* y2 = f */; \ + ror y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */; \ + psrlq XTMP2, 17 /* XTMP2 = W[-2] ror 17 {xDxC} */; \ + xor y1, a /* y1 = a ^ (a >> (22-13) */; \ + xor y2, g /* y2 = f^g */; \ + psrlq XTMP3, 19 /* XTMP3 = W[-2] ror 19 {xDxC} */; \ + xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */; \ + and y2, e /* y2 = (f^g)&e */; \ + ror y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */; \ + psrld X0, 10 /* X0 = W[-2] >> 10 {DDCC} */; \ + xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */; \ + ror y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */; \ + xor y2, g /* y2 = CH = ((f^g)&e)^g */; \ + pxor XTMP2, XTMP3; \ + ror y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */; \ + add y2, y0 /* y2 = S1 + CH */; \ + add y2, [rsp + _XFER + 3*4] /* y2 = k + w + S1 + CH */; \ + pxor X0, XTMP2 /* X0 = s1 {xDxC} */; \ + mov y0, a /* y0 = a */; \ + add h, y2 /* h = h + S1 + CH + k + w */; \ + mov y2, a /* y2 = a */; \ + pshufb X0, SHUF_DC00 /* X0 = s1 {DC00} */; \ + or y0, c /* y0 = a|c */; \ + add d, h /* d = d + h + S1 + CH + k + w */; \ + and y2, c /* y2 = a&c */; \ + paddd X0, XTMP0 /* X0 = {W[3], W[2], W[1], W[0]} */; \ + and y0, b /* y0 = (a|c)&b */; \ + add h, y1 /* h = h + S1 + CH + k + w + S0 */; \ + or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */; \ lea h, [h + y0] /* h = h + S1 + CH + k + w + S0 + MAJ */ -ROTATE_ARGS -rotate_Xs -.endm +#define FOUR_ROUNDS_AND_SCHED(X0, X1, X2, X3, a, b, c, d, e, f, g, h) \ + FOUR_ROUNDS_AND_SCHED_0(X0, X1, X2, X3, a, b, c, d, e, f, g, h); \ + FOUR_ROUNDS_AND_SCHED_1(X0, X1, X2, X3, h, a, b, c, 
d, e, f, g); \ + FOUR_ROUNDS_AND_SCHED_2(X0, X1, X2, X3, g, h, a, b, c, d, e, f); \ + FOUR_ROUNDS_AND_SCHED_3(X0, X1, X2, X3, f, g, h, a, b, c, d, e); /* input is [rsp + _XFER + %1 * 4] */ -.macro DO_ROUND i1 - mov y0, e /* y0 = e */ - ror y0, (25-11) /* y0 = e >> (25-11) */ - mov y1, a /* y1 = a */ - xor y0, e /* y0 = e ^ (e >> (25-11)) */ - ror y1, (22-13) /* y1 = a >> (22-13) */ - mov y2, f /* y2 = f */ - xor y1, a /* y1 = a ^ (a >> (22-13) */ - ror y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */ - xor y2, g /* y2 = f^g */ - xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */ - ror y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */ - and y2, e /* y2 = (f^g)&e */ - xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */ - ror y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */ - xor y2, g /* y2 = CH = ((f^g)&e)^g */ - add y2, y0 /* y2 = S1 + CH */ - ror y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */ - add y2, [rsp + _XFER + \i1 * 4] /* y2 = k + w + S1 + CH */ - mov y0, a /* y0 = a */ - add h, y2 /* h = h + S1 + CH + k + w */ - mov y2, a /* y2 = a */ - or y0, c /* y0 = a|c */ - add d, h /* d = d + h + S1 + CH + k + w */ - and y2, c /* y2 = a&c */ - and y0, b /* y0 = (a|c)&b */ - add h, y1 /* h = h + S1 + CH + k + w + S0 */ - or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */ +#define DO_ROUND(i1, a, b, c, d, e, f, g, h) \ + mov y0, e /* y0 = e */; \ + ror y0, (25-11) /* y0 = e >> (25-11) */; \ + mov y1, a /* y1 = a */; \ + xor y0, e /* y0 = e ^ (e >> (25-11)) */; \ + ror y1, (22-13) /* y1 = a >> (22-13) */; \ + mov y2, f /* y2 = f */; \ + xor y1, a /* y1 = a ^ (a >> (22-13) */; \ + ror y0, (11-6) /* y0 = (e >> (11-6)) ^ (e >> (25-6)) */; \ + xor y2, g /* y2 = f^g */; \ + xor y0, e /* y0 = e ^ (e >> (11-6)) ^ (e >> (25-6)) */; \ + ror y1, (13-2) /* y1 = (a >> (13-2)) ^ (a >> (22-2)) */; \ + and y2, e /* y2 = (f^g)&e */; \ + xor y1, a /* y1 = a ^ (a >> (13-2)) ^ (a >> (22-2)) */; \ + ror y0, 6 /* y0 = S1 = (e>>6) & (e>>11) ^ (e>>25) */; \ + xor y2, g /* y2 = CH = ((f^g)&e)^g */; \ + add y2, y0 /* y2 = S1 + CH */; \ + ror y1, 2 /* y1 = S0 = (a>>2) ^ (a>>13) ^ (a>>22) */; \ + add y2, [rsp + _XFER + i1 * 4] /* y2 = k + w + S1 + CH */; \ + mov y0, a /* y0 = a */; \ + add h, y2 /* h = h + S1 + CH + k + w */; \ + mov y2, a /* y2 = a */; \ + or y0, c /* y0 = a|c */; \ + add d, h /* d = d + h + S1 + CH + k + w */; \ + and y2, c /* y2 = a&c */; \ + and y0, b /* y0 = (a|c)&b */; \ + add h, y1 /* h = h + S1 + CH + k + w + S0 */; \ + or y0, y2 /* y0 = MAJ = (a|c)&b)|(a&c) */; \ lea h, [h + y0] /* h = h + S1 + CH + k + w + S0 + MAJ */ - ROTATE_ARGS -.endm /* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; @@ -414,10 +389,10 @@ _gcry_sha256_transform_amd64_ssse3: lea TBL, [.LK256 ADD_RIP] /* byte swap first 16 dwords */ - COPY_XMM_AND_BSWAP X0, [INP + 0*16], BYTE_FLIP_MASK - COPY_XMM_AND_BSWAP X1, [INP + 1*16], BYTE_FLIP_MASK - COPY_XMM_AND_BSWAP X2, [INP + 2*16], BYTE_FLIP_MASK - COPY_XMM_AND_BSWAP X3, [INP + 3*16], BYTE_FLIP_MASK + COPY_XMM_AND_BSWAP(X0, [INP + 0*16], BYTE_FLIP_MASK) + COPY_XMM_AND_BSWAP(X1, [INP + 1*16], BYTE_FLIP_MASK) + COPY_XMM_AND_BSWAP(X2, [INP + 2*16], BYTE_FLIP_MASK) + COPY_XMM_AND_BSWAP(X3, [INP + 3*16], BYTE_FLIP_MASK) mov [rsp + _INP], INP @@ -428,23 +403,23 @@ _gcry_sha256_transform_amd64_ssse3: movdqa XFER, [TBL + 0*16] paddd XFER, X0 movdqa [rsp + _XFER], XFER - FOUR_ROUNDS_AND_SCHED + FOUR_ROUNDS_AND_SCHED(X0, X1, X2, X3, a, b, c, d, e, f, g, h) movdqa XFER, [TBL + 1*16] - paddd XFER, X0 + paddd XFER, X1 movdqa [rsp + _XFER], XFER - 
FOUR_ROUNDS_AND_SCHED + FOUR_ROUNDS_AND_SCHED(X1, X2, X3, X0, e, f, g, h, a, b, c, d) movdqa XFER, [TBL + 2*16] - paddd XFER, X0 + paddd XFER, X2 movdqa [rsp + _XFER], XFER - FOUR_ROUNDS_AND_SCHED + FOUR_ROUNDS_AND_SCHED(X2, X3, X0, X1, a, b, c, d, e, f, g, h) movdqa XFER, [TBL + 3*16] - paddd XFER, X0 + paddd XFER, X3 movdqa [rsp + _XFER], XFER add TBL, 4*16 - FOUR_ROUNDS_AND_SCHED + FOUR_ROUNDS_AND_SCHED(X3, X0, X1, X2, e, f, g, h, a, b, c, d) sub SRND, 1 jne .Loop1 @@ -453,17 +428,17 @@ _gcry_sha256_transform_amd64_ssse3: .Loop2: paddd X0, [TBL + 0*16] movdqa [rsp + _XFER], X0 - DO_ROUND 0 - DO_ROUND 1 - DO_ROUND 2 - DO_ROUND 3 + DO_ROUND(0, a, b, c, d, e, f, g, h) + DO_ROUND(1, h, a, b, c, d, e, f, g) + DO_ROUND(2, g, h, a, b, c, d, e, f) + DO_ROUND(3, f, g, h, a, b, c, d, e) paddd X1, [TBL + 1*16] movdqa [rsp + _XFER], X1 add TBL, 2*16 - DO_ROUND 0 - DO_ROUND 1 - DO_ROUND 2 - DO_ROUND 3 + DO_ROUND(0, e, f, g, h, a, b, c, d) + DO_ROUND(1, d, e, f, g, h, a, b, c) + DO_ROUND(2, c, d, e, f, g, h, a, b) + DO_ROUND(3, b, c, d, e, f, g, h, a) movdqa X0, X2 movdqa X1, X3 @@ -471,14 +446,14 @@ _gcry_sha256_transform_amd64_ssse3: sub SRND, 1 jne .Loop2 - addm [4*0 + CTX],a - addm [4*1 + CTX],b - addm [4*2 + CTX],c - addm [4*3 + CTX],d - addm [4*4 + CTX],e - addm [4*5 + CTX],f - addm [4*6 + CTX],g - addm [4*7 + CTX],h + addm([4*0 + CTX],a) + addm([4*1 + CTX],b) + addm([4*2 + CTX],c) + addm([4*3 + CTX],d) + addm([4*4 + CTX],e) + addm([4*5 + CTX],f) + addm([4*6 + CTX],g) + addm([4*7 + CTX],h) mov INP, [rsp + _INP] add INP, 64 diff --git a/cipher/sha512-avx-amd64.S b/cipher/sha512-avx-amd64.S index 991fd639..75f7b070 100644 --- a/cipher/sha512-avx-amd64.S +++ b/cipher/sha512-avx-amd64.S @@ -53,32 +53,32 @@ .text /* Virtual Registers */ -msg = rdi /* ARG1 */ -digest = rsi /* ARG2 */ -msglen = rdx /* ARG3 */ -T1 = rcx -T2 = r8 -a_64 = r9 -b_64 = r10 -c_64 = r11 -d_64 = r12 -e_64 = r13 -f_64 = r14 -g_64 = r15 -h_64 = rbx -tmp0 = rax +#define msg rdi /* ARG1 */ +#define digest rsi /* ARG2 */ +#define msglen rdx /* ARG3 */ +#define T1 rcx +#define T2 r8 +#define a_64 r9 +#define b_64 r10 +#define c_64 r11 +#define d_64 r12 +#define e_64 r13 +#define f_64 r14 +#define g_64 r15 +#define h_64 rbx +#define tmp0 rax /* ; Local variables (stack frame) ; Note: frame_size must be an odd multiple of 8 bytes to XMM align RSP */ -frame_W = 0 /* Message Schedule */ -frame_W_size = (80 * 8) -frame_WK = ((frame_W) + (frame_W_size)) /* W[t] + K[t] | W[t+1] + K[t+1] */ -frame_WK_size = (2 * 8) -frame_GPRSAVE = ((frame_WK) + (frame_WK_size)) -frame_GPRSAVE_size = (5 * 8) -frame_size = ((frame_GPRSAVE) + (frame_GPRSAVE_size)) +#define frame_W 0 /* Message Schedule */ +#define frame_W_size (80 * 8) +#define frame_WK ((frame_W) + (frame_W_size)) /* W[t] + K[t] | W[t+1] + K[t+1] */ +#define frame_WK_size (2 * 8) +#define frame_GPRSAVE ((frame_WK) + (frame_WK_size)) +#define frame_GPRSAVE_size (5 * 8) +#define frame_size ((frame_GPRSAVE) + (frame_GPRSAVE_size)) /* Useful QWORD "arrays" for simpler memory references */ @@ -90,162 +90,151 @@ frame_size = ((frame_GPRSAVE) + (frame_GPRSAVE_size)) /* MSG, DIGEST, K_t, W_t are arrays */ /* WK_2(t) points to 1 of 2 qwords at frame.WK depdending on t being odd/even */ -.macro RotateState - /* Rotate symbles a..h right */ - __TMP = h_64 - h_64 = g_64 - g_64 = f_64 - f_64 = e_64 - e_64 = d_64 - d_64 = c_64 - c_64 = b_64 - b_64 = a_64 - a_64 = __TMP -.endm - -.macro RORQ p1 p2 - /* shld is faster than ror on Intel Sandybridge */ - shld \p1, \p1, (64 - \p2) -.endm - -.macro SHA512_Round 
t - /* Compute Round %%t */ - mov T1, f_64 /* T1 = f */ - mov tmp0, e_64 /* tmp = e */ - xor T1, g_64 /* T1 = f ^ g */ - RORQ tmp0, 23 /* 41 ; tmp = e ror 23 */ - and T1, e_64 /* T1 = (f ^ g) & e */ - xor tmp0, e_64 /* tmp = (e ror 23) ^ e */ - xor T1, g_64 /* T1 = ((f ^ g) & e) ^ g = CH(e,f,g) */ - add T1, [WK_2(\t)] /* W[t] + K[t] from message scheduler */ - RORQ tmp0, 4 /* 18 ; tmp = ((e ror 23) ^ e) ror 4 */ - xor tmp0, e_64 /* tmp = (((e ror 23) ^ e) ror 4) ^ e */ - mov T2, a_64 /* T2 = a */ - add T1, h_64 /* T1 = CH(e,f,g) + W[t] + K[t] + h */ - RORQ tmp0, 14 /* 14 ; tmp = ((((e ror23)^e)ror4)^e)ror14 = S1(e) */ - add T1, tmp0 /* T1 = CH(e,f,g) + W[t] + K[t] + S1(e) */ - mov tmp0, a_64 /* tmp = a */ - xor T2, c_64 /* T2 = a ^ c */ - and tmp0, c_64 /* tmp = a & c */ - and T2, b_64 /* T2 = (a ^ c) & b */ - xor T2, tmp0 /* T2 = ((a ^ c) & b) ^ (a & c) = Maj(a,b,c) */ - mov tmp0, a_64 /* tmp = a */ - RORQ tmp0, 5 /* 39 ; tmp = a ror 5 */ - xor tmp0, a_64 /* tmp = (a ror 5) ^ a */ - add d_64, T1 /* e(next_state) = d + T1 */ - RORQ tmp0, 6 /* 34 ; tmp = ((a ror 5) ^ a) ror 6 */ - xor tmp0, a_64 /* tmp = (((a ror 5) ^ a) ror 6) ^ a */ - lea h_64, [T1 + T2] /* a(next_state) = T1 + Maj(a,b,c) */ - RORQ tmp0, 28 /* 28 ; tmp = ((((a ror5)^a)ror6)^a)ror28 = S0(a) */ - add h_64, tmp0 /* a(next_state) = T1 + Maj(a,b,c) S0(a) */ - RotateState -.endm - -.macro SHA512_2Sched_2Round_avx t -/* ; Compute rounds %%t-2 and %%t-1 - ; Compute message schedule QWORDS %%t and %%t+1 - - ; Two rounds are computed based on the values for K[t-2]+W[t-2] and - ; K[t-1]+W[t-1] which were previously stored at WK_2 by the message - ; scheduler. - ; The two new schedule QWORDS are stored at [W_t(%%t)] and [W_t(%%t+1)]. - ; They are then added to their respective SHA512 constants at - ; [K_t(%%t)] and [K_t(%%t+1)] and stored at dqword [WK_2(%%t)] - ; For brievity, the comments following vectored instructions only refer to - ; the first of a pair of QWORDS. - ; Eg. XMM4=W[t-2] really means XMM4={W[t-2]|W[t-1]} - ; The computation of the message schedule and the rounds are tightly - ; stitched to take advantage of instruction-level parallelism. - ; For clarity, integer instructions (for the rounds calculation) are indented - ; by one tab. Vectored instructions (for the message scheduler) are indented - ; by two tabs. 
*/ - - vmovdqa xmm4, [W_t(\t-2)] /* XMM4 = W[t-2] */ - vmovdqu xmm5, [W_t(\t-15)] /* XMM5 = W[t-15] */ - mov T1, f_64 - vpsrlq xmm0, xmm4, 61 /* XMM0 = W[t-2]>>61 */ - mov tmp0, e_64 - vpsrlq xmm6, xmm5, 1 /* XMM6 = W[t-15]>>1 */ - xor T1, g_64 - RORQ tmp0, 23 /* 41 */ - vpsrlq xmm1, xmm4, 19 /* XMM1 = W[t-2]>>19 */ - and T1, e_64 - xor tmp0, e_64 - vpxor xmm0, xmm0, xmm1 /* XMM0 = W[t-2]>>61 ^ W[t-2]>>19 */ - xor T1, g_64 - add T1, [WK_2(\t)]; - vpsrlq xmm7, xmm5, 8 /* XMM7 = W[t-15]>>8 */ - RORQ tmp0, 4 /* 18 */ - vpsrlq xmm2, xmm4, 6 /* XMM2 = W[t-2]>>6 */ - xor tmp0, e_64 - mov T2, a_64 - add T1, h_64 - vpxor xmm6, xmm6, xmm7 /* XMM6 = W[t-15]>>1 ^ W[t-15]>>8 */ - RORQ tmp0, 14 /* 14 */ - add T1, tmp0 - vpsrlq xmm8, xmm5, 7 /* XMM8 = W[t-15]>>7 */ - mov tmp0, a_64 - xor T2, c_64 - vpsllq xmm3, xmm4, (64-61) /* XMM3 = W[t-2]<<3 */ - and tmp0, c_64 - and T2, b_64 - vpxor xmm2, xmm2, xmm3 /* XMM2 = W[t-2]>>6 ^ W[t-2]<<3 */ - xor T2, tmp0 - mov tmp0, a_64 - vpsllq xmm9, xmm5, (64-1) /* XMM9 = W[t-15]<<63 */ - RORQ tmp0, 5 /* 39 */ - vpxor xmm8, xmm8, xmm9 /* XMM8 = W[t-15]>>7 ^ W[t-15]<<63 */ - xor tmp0, a_64 - add d_64, T1 - RORQ tmp0, 6 /* 34 */ - xor tmp0, a_64 - vpxor xmm6, xmm6, xmm8 /* XMM6 = W[t-15]>>1 ^ W[t-15]>>8 ^ W[t-15]>>7 ^ W[t-15]<<63 */ - lea h_64, [T1 + T2] - RORQ tmp0, 28 /* 28 */ - vpsllq xmm4, xmm4, (64-19) /* XMM4 = W[t-2]<<25 */ - add h_64, tmp0 - RotateState - vpxor xmm0, xmm0, xmm4 /* XMM0 = W[t-2]>>61 ^ W[t-2]>>19 ^ W[t-2]<<25 */ - mov T1, f_64 - vpxor xmm0, xmm0, xmm2 /* XMM0 = s1(W[t-2]) */ - mov tmp0, e_64 - xor T1, g_64 - vpaddq xmm0, xmm0, [W_t(\t-16)] /* XMM0 = s1(W[t-2]) + W[t-16] */ - vmovdqu xmm1, [W_t(\t- 7)] /* XMM1 = W[t-7] */ - RORQ tmp0, 23 /* 41 */ - and T1, e_64 - xor tmp0, e_64 - xor T1, g_64 - vpsllq xmm5, xmm5, (64-8) /* XMM5 = W[t-15]<<56 */ - add T1, [WK_2(\t+1)] - vpxor xmm6, xmm6, xmm5 /* XMM6 = s0(W[t-15]) */ - RORQ tmp0, 4 /* 18 */ - vpaddq xmm0, xmm0, xmm6 /* XMM0 = s1(W[t-2]) + W[t-16] + s0(W[t-15]) */ - xor tmp0, e_64 - vpaddq xmm0, xmm0, xmm1 /* XMM0 = W[t] = s1(W[t-2]) + W[t-7] + s0(W[t-15]) + W[t-16] */ - mov T2, a_64 - add T1, h_64 - RORQ tmp0, 14 /* 14 */ - add T1, tmp0 - vmovdqa [W_t(\t)], xmm0 /* Store W[t] */ - vpaddq xmm0, xmm0, [K_t(t)] /* Compute W[t]+K[t] */ - vmovdqa [WK_2(t)], xmm0 /* Store W[t]+K[t] for next rounds */ - mov tmp0, a_64 - xor T2, c_64 - and tmp0, c_64 - and T2, b_64 - xor T2, tmp0 - mov tmp0, a_64 - RORQ tmp0, 5 /* 39 */ - xor tmp0, a_64 - add d_64, T1 - RORQ tmp0, 6 /* 34 */ - xor tmp0, a_64 - lea h_64, [T1 + T2] - RORQ tmp0, 28 /* 28 */ - add h_64, tmp0 - RotateState -.endm +#define RORQ(p1, p2) \ + /* shld is faster than ror on Intel Sandybridge */ \ + shld p1, p1, (64 - p2) + +#define SHA512_Round(t, a, b, c, d, e, f, g, h) \ + /* Compute Round %%t */; \ + mov T1, f /* T1 = f */; \ + mov tmp0, e /* tmp = e */; \ + xor T1, g /* T1 = f ^ g */; \ + RORQ( tmp0, 23) /* 41 ; tmp = e ror 23 */; \ + and T1, e /* T1 = (f ^ g) & e */; \ + xor tmp0, e /* tmp = (e ror 23) ^ e */; \ + xor T1, g /* T1 = ((f ^ g) & e) ^ g = CH(e,f,g) */; \ + add T1, [WK_2(t)] /* W[t] + K[t] from message scheduler */; \ + RORQ( tmp0, 4) /* 18 ; tmp = ((e ror 23) ^ e) ror 4 */; \ + xor tmp0, e /* tmp = (((e ror 23) ^ e) ror 4) ^ e */; \ + mov T2, a /* T2 = a */; \ + add T1, h /* T1 = CH(e,f,g) + W[t] + K[t] + h */; \ + RORQ( tmp0, 14) /* 14 ; tmp = ((((e ror23)^e)ror4)^e)ror14 = S1(e) */; \ + add T1, tmp0 /* T1 = CH(e,f,g) + W[t] + K[t] + S1(e) */; \ + mov tmp0, a /* tmp = a */; \ + xor T2, c /* T2 = a ^ c */; \ + and tmp0, c /* tmp = a & 
c */; \ + and T2, b /* T2 = (a ^ c) & b */; \ + xor T2, tmp0 /* T2 = ((a ^ c) & b) ^ (a & c) = Maj(a,b,c) */; \ + mov tmp0, a /* tmp = a */; \ + RORQ( tmp0, 5) /* 39 ; tmp = a ror 5 */; \ + xor tmp0, a /* tmp = (a ror 5) ^ a */; \ + add d, T1 /* e(next_state) = d + T1 */; \ + RORQ( tmp0, 6) /* 34 ; tmp = ((a ror 5) ^ a) ror 6 */; \ + xor tmp0, a /* tmp = (((a ror 5) ^ a) ror 6) ^ a */; \ + lea h, [T1 + T2] /* a(next_state) = T1 + Maj(a,b,c) */; \ + RORQ( tmp0, 28) /* 28 ; tmp = ((((a ror5)^a)ror6)^a)ror28 = S0(a) */; \ + add h, tmp0 /* a(next_state) = T1 + Maj(a,b,c) S0(a) */ + +#define SHA512_2Sched_2Round_avx_PART1(t, a, b, c, d, e, f, g, h) \ + /* \ + ; Compute rounds %%t-2 and %%t-1 \ + ; Compute message schedule QWORDS %%t and %%t+1 \ + ; \ + ; Two rounds are computed based on the values for K[t-2]+W[t-2] and \ + ; K[t-1]+W[t-1] which were previously stored at WK_2 by the message \ + ; scheduler. \ + ; The two new schedule QWORDS are stored at [W_t(%%t)] and [W_t(%%t+1)]. \ + ; They are then added to their respective SHA512 constants at \ + ; [K_t(%%t)] and [K_t(%%t+1)] and stored at dqword [WK_2(%%t)] \ + ; For brievity, the comments following vectored instructions only refer to \ + ; the first of a pair of QWORDS. \ + ; Eg. XMM4=W[t-2] really means XMM4={W[t-2]|W[t-1]} \ + ; The computation of the message schedule and the rounds are tightly \ + ; stitched to take advantage of instruction-level parallelism. \ + ; For clarity, integer instructions (for the rounds calculation) are indented \ + ; by one tab. Vectored instructions (for the message scheduler) are indented \ + ; by two tabs. \ + */ \ + \ + vmovdqa xmm4, [W_t(t-2)] /* XMM4 = W[t-2] */; \ + vmovdqu xmm5, [W_t(t-15)] /* XMM5 = W[t-15] */; \ + mov T1, f; \ + vpsrlq xmm0, xmm4, 61 /* XMM0 = W[t-2]>>61 */; \ + mov tmp0, e; \ + vpsrlq xmm6, xmm5, 1 /* XMM6 = W[t-15]>>1 */; \ + xor T1, g; \ + RORQ( tmp0, 23) /* 41 */; \ + vpsrlq xmm1, xmm4, 19 /* XMM1 = W[t-2]>>19 */; \ + and T1, e; \ + xor tmp0, e; \ + vpxor xmm0, xmm0, xmm1 /* XMM0 = W[t-2]>>61 ^ W[t-2]>>19 */; \ + xor T1, g; \ + add T1, [WK_2(t)]; \ + vpsrlq xmm7, xmm5, 8 /* XMM7 = W[t-15]>>8 */; \ + RORQ( tmp0, 4) /* 18 */; \ + vpsrlq xmm2, xmm4, 6 /* XMM2 = W[t-2]>>6 */; \ + xor tmp0, e; \ + mov T2, a; \ + add T1, h; \ + vpxor xmm6, xmm6, xmm7 /* XMM6 = W[t-15]>>1 ^ W[t-15]>>8 */; \ + RORQ( tmp0, 14) /* 14 */; \ + add T1, tmp0; \ + vpsrlq xmm8, xmm5, 7 /* XMM8 = W[t-15]>>7 */; \ + mov tmp0, a; \ + xor T2, c; \ + vpsllq xmm3, xmm4, (64-61) /* XMM3 = W[t-2]<<3 */; \ + and tmp0, c; \ + and T2, b; \ + vpxor xmm2, xmm2, xmm3 /* XMM2 = W[t-2]>>6 ^ W[t-2]<<3 */; \ + xor T2, tmp0; \ + mov tmp0, a; \ + vpsllq xmm9, xmm5, (64-1) /* XMM9 = W[t-15]<<63 */; \ + RORQ( tmp0, 5) /* 39 */; \ + vpxor xmm8, xmm8, xmm9 /* XMM8 = W[t-15]>>7 ^ W[t-15]<<63 */; \ + xor tmp0, a; \ + add d, T1; \ + RORQ( tmp0, 6) /* 34 */; \ + xor tmp0, a; \ + vpxor xmm6, xmm6, xmm8 /* XMM6 = W[t-15]>>1 ^ W[t-15]>>8 ^ W[t-15]>>7 ^ W[t-15]<<63 */; \ + lea h, [T1 + T2]; \ + RORQ( tmp0, 28) /* 28 */; \ + vpsllq xmm4, xmm4, (64-19) /* XMM4 = W[t-2]<<25 */; \ + add h, tmp0 + +#define SHA512_2Sched_2Round_avx_PART2(t, a, b, c, d, e, f, g, h) \ + vpxor xmm0, xmm0, xmm4 /* XMM0 = W[t-2]>>61 ^ W[t-2]>>19 ^ W[t-2]<<25 */; \ + mov T1, f; \ + vpxor xmm0, xmm0, xmm2 /* XMM0 = s1(W[t-2]) */; \ + mov tmp0, e; \ + xor T1, g; \ + vpaddq xmm0, xmm0, [W_t(t-16)] /* XMM0 = s1(W[t-2]) + W[t-16] */; \ + vmovdqu xmm1, [W_t(t- 7)] /* XMM1 = W[t-7] */; \ + RORQ( tmp0, 23) /* 41 */; \ + and T1, e; \ + xor tmp0, e; \ + xor T1, g; \ + vpsllq 
xmm5, xmm5, (64-8) /* XMM5 = W[t-15]<<56 */; \ + add T1, [WK_2(t+1)]; \ + vpxor xmm6, xmm6, xmm5 /* XMM6 = s0(W[t-15]) */; \ + RORQ( tmp0, 4) /* 18 */; \ + vpaddq xmm0, xmm0, xmm6 /* XMM0 = s1(W[t-2]) + W[t-16] + s0(W[t-15]) */; \ + xor tmp0, e; \ + vpaddq xmm0, xmm0, xmm1 /* XMM0 = W[t] = s1(W[t-2]) + W[t-7] + s0(W[t-15]) + W[t-16] */; \ + mov T2, a; \ + add T1, h; \ + RORQ( tmp0, 14) /* 14 */; \ + add T1, tmp0; \ + vmovdqa [W_t(t)], xmm0 /* Store W[t] */; \ + vpaddq xmm0, xmm0, [K_t(t)] /* Compute W[t]+K[t] */; \ + vmovdqa [WK_2(t)], xmm0 /* Store W[t]+K[t] for next rounds */; \ + mov tmp0, a; \ + xor T2, c; \ + and tmp0, c; \ + and T2, b; \ + xor T2, tmp0; \ + mov tmp0, a; \ + RORQ( tmp0, 5) /* 39 */; \ + xor tmp0, a; \ + add d, T1; \ + RORQ( tmp0, 6) /* 34 */; \ + xor tmp0, a; \ + lea h, [T1 + T2]; \ + RORQ( tmp0, 28) /* 28 */; \ + add h, tmp0 + +#define SHA512_2Sched_2Round_avx(t, a, b, c, d, e, f, g, h) \ + SHA512_2Sched_2Round_avx_PART1(t, a, b, c, d, e, f, g, h); \ + SHA512_2Sched_2Round_avx_PART2(t, h, a, b, c, d, e, f, g) /* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; @@ -295,37 +284,77 @@ _gcry_sha512_transform_amd64_avx: mov g_64, [DIGEST(6)] mov h_64, [DIGEST(7)] - t = 0 - .rept 80/2 + 1 - /* (80 rounds) / (2 rounds/iteration) + (1 iteration) */ - /* +1 iteration because the scheduler leads hashing by 1 iteration */ - .if t < 2 - /* BSWAP 2 QWORDS */ - vmovdqa xmm1, [.LXMM_QWORD_BSWAP ADD_RIP] - vmovdqu xmm0, [MSG(t)] - vpshufb xmm0, xmm0, xmm1 /* BSWAP */ - vmovdqa [W_t(t)], xmm0 /* Store Scheduled Pair */ - vpaddq xmm0, xmm0, [K_t(t)] /* Compute W[t]+K[t] */ - vmovdqa [WK_2(t)], xmm0 /* Store into WK for rounds */ - .elseif t < 16 - /* BSWAP 2 QWORDS, Compute 2 Rounds */ - vmovdqu xmm0, [MSG(t)] - vpshufb xmm0, xmm0, xmm1 /* BSWAP */ - SHA512_Round (t - 2) /* Round t-2 */ - vmovdqa [W_t(t)], xmm0 /* Store Scheduled Pair */ - vpaddq xmm0, xmm0, [K_t(t)] /* Compute W[t]+K[t] */ - SHA512_Round (t - 1) /* Round t-1 */ - vmovdqa [WK_2(t)], xmm0 /* W[t]+K[t] into WK */ - .elseif t < 79 - /* Schedule 2 QWORDS; Compute 2 Rounds */ - SHA512_2Sched_2Round_avx t - .else - /* Compute 2 Rounds */ - SHA512_Round (t - 2) - SHA512_Round (t - 1) - .endif - t = ((t)+2) - .endr + /* BSWAP 2 QWORDS */ + vmovdqa xmm1, [.LXMM_QWORD_BSWAP ADD_RIP] + vmovdqu xmm0, [MSG(0)] + vpshufb xmm0, xmm0, xmm1 /* BSWAP */ + vmovdqa [W_t(0)], xmm0 /* Store Scheduled Pair */ + vpaddq xmm0, xmm0, [K_t(0)] /* Compute W[t]+K[t] */ + vmovdqa [WK_2(0)], xmm0 /* Store into WK for rounds */ + + #define T_2_14(t, a, b, c, d, e, f, g, h) \ + /* BSWAP 2 QWORDS, Compute 2 Rounds */; \ + vmovdqu xmm0, [MSG(t)]; \ + vpshufb xmm0, xmm0, xmm1 /* BSWAP */; \ + SHA512_Round(((t) - 2), a##_64, b##_64, c##_64, d##_64, \ + e##_64, f##_64, g##_64, h##_64); \ + vmovdqa [W_t(t)], xmm0 /* Store Scheduled Pair */; \ + vpaddq xmm0, xmm0, [K_t(t)] /* Compute W[t]+K[t] */; \ + SHA512_Round(((t) - 1), h##_64, a##_64, b##_64, c##_64, \ + d##_64, e##_64, f##_64, g##_64); \ + vmovdqa [WK_2(t)], xmm0 /* W[t]+K[t] into WK */ + + #define T_16_78(t, a, b, c, d, e, f, g, h) \ + SHA512_2Sched_2Round_avx((t), a##_64, b##_64, c##_64, d##_64, \ + e##_64, f##_64, g##_64, h##_64) + + #define T_80(t, a, b, c, d, e, f, g, h) \ + /* Compute 2 Rounds */; \ + SHA512_Round((t - 2), a##_64, b##_64, c##_64, d##_64, \ + e##_64, f##_64, g##_64, h##_64); \ + SHA512_Round((t - 1), h##_64, a##_64, b##_64, c##_64, \ + d##_64, e##_64, f##_64, g##_64) + + T_2_14(2, a, b, c, d, e, f, g, h) + T_2_14(4, g, h, a, b, c, d, e, f) + T_2_14(6, e, 
f, g, h, a, b, c, d) + T_2_14(8, c, d, e, f, g, h, a, b) + T_2_14(10, a, b, c, d, e, f, g, h) + T_2_14(12, g, h, a, b, c, d, e, f) + T_2_14(14, e, f, g, h, a, b, c, d) + T_16_78(16, c, d, e, f, g, h, a, b) + T_16_78(18, a, b, c, d, e, f, g, h) + T_16_78(20, g, h, a, b, c, d, e, f) + T_16_78(22, e, f, g, h, a, b, c, d) + T_16_78(24, c, d, e, f, g, h, a, b) + T_16_78(26, a, b, c, d, e, f, g, h) + T_16_78(28, g, h, a, b, c, d, e, f) + T_16_78(30, e, f, g, h, a, b, c, d) + T_16_78(32, c, d, e, f, g, h, a, b) + T_16_78(34, a, b, c, d, e, f, g, h) + T_16_78(36, g, h, a, b, c, d, e, f) + T_16_78(38, e, f, g, h, a, b, c, d) + T_16_78(40, c, d, e, f, g, h, a, b) + T_16_78(42, a, b, c, d, e, f, g, h) + T_16_78(44, g, h, a, b, c, d, e, f) + T_16_78(46, e, f, g, h, a, b, c, d) + T_16_78(48, c, d, e, f, g, h, a, b) + T_16_78(50, a, b, c, d, e, f, g, h) + T_16_78(52, g, h, a, b, c, d, e, f) + T_16_78(54, e, f, g, h, a, b, c, d) + T_16_78(56, c, d, e, f, g, h, a, b) + T_16_78(58, a, b, c, d, e, f, g, h) + T_16_78(60, g, h, a, b, c, d, e, f) + T_16_78(62, e, f, g, h, a, b, c, d) + T_16_78(64, c, d, e, f, g, h, a, b) + T_16_78(66, a, b, c, d, e, f, g, h) + T_16_78(68, g, h, a, b, c, d, e, f) + T_16_78(70, e, f, g, h, a, b, c, d) + T_16_78(72, c, d, e, f, g, h, a, b) + T_16_78(74, a, b, c, d, e, f, g, h) + T_16_78(76, g, h, a, b, c, d, e, f) + T_16_78(78, e, f, g, h, a, b, c, d) + T_80(80, c, d, e, f, g, h, a, b) /* Update digest */ add [DIGEST(0)], a_64 @@ -357,11 +386,12 @@ _gcry_sha512_transform_amd64_avx: vzeroall /* Burn stack */ - t = 0 - .rept frame_W_size / 32 - vmovups [rsp + frame_W + (t) * 32], ymm0 - t = ((t)+1) - .endr + mov eax, 0 +.Lerase_stack: + vmovdqu [rsp + rax], ymm0 + add eax, 32 + cmp eax, frame_W_size + jne .Lerase_stack vmovdqu [rsp + frame_WK], xmm0 xor eax, eax diff --git a/cipher/sha512-avx2-bmi2-amd64.S b/cipher/sha512-avx2-bmi2-amd64.S index 3b28ab6c..7f119e6c 100644 --- a/cipher/sha512-avx2-bmi2-amd64.S +++ b/cipher/sha512-avx2-bmi2-amd64.S @@ -56,46 +56,45 @@ .text /* Virtual Registers */ -Y_0 = ymm4 -Y_1 = ymm5 -Y_2 = ymm6 -Y_3 = ymm7 - -YTMP0 = ymm0 -YTMP1 = ymm1 -YTMP2 = ymm2 -YTMP3 = ymm3 -YTMP4 = ymm8 -XFER = YTMP0 - -BYTE_FLIP_MASK = ymm9 -MASK_YMM_LO = ymm10 -MASK_YMM_LOx = xmm10 - -INP = rdi /* 1st arg */ -CTX = rsi /* 2nd arg */ -NUM_BLKS = rdx /* 3rd arg */ -c = rcx -d = r8 -e = rdx -y3 = rdi - -TBL = rbp - -a = rax -b = rbx - -f = r9 -g = r10 -h = r11 -old_h = rax - -T1 = r12 -y0 = r13 -y1 = r14 -y2 = r15 - -y4 = r12 +#define Y_0 ymm4 +#define Y_1 ymm5 +#define Y_2 ymm6 +#define Y_3 ymm7 + +#define YTMP0 ymm0 +#define YTMP1 ymm1 +#define YTMP2 ymm2 +#define YTMP3 ymm3 +#define YTMP4 ymm8 +#define XFER YTMP0 + +#define BYTE_FLIP_MASK ymm9 +#define MASK_YMM_LO ymm10 +#define MASK_YMM_LOx xmm10 + +#define INP rdi /* 1st arg */ +#define CTX rsi /* 2nd arg */ +#define NUM_BLKS rdx /* 3rd arg */ +#define c rcx +#define d r8 +#define e rdx +#define y3 rdi + +#define TBL rbp + +#define a rax +#define b rbx + +#define f r9 +#define g r10 +#define h r11 + +#define T1 r12 +#define y0 r13 +#define y1 r14 +#define y2 r15 + +#define y4 r12 /* Local variables (stack frame) */ #define frame_XFER 0 @@ -116,218 +115,153 @@ y4 = r12 /* addm [mem], reg */ /* Add reg to mem using reg-mem add and store */ -.macro addm p1 p2 - add \p2, \p1 - mov \p1, \p2 -.endm +#define addm(p1, p2) \ + add p2, p1; \ + mov p1, p2; /* COPY_YMM_AND_BSWAP ymm, [mem], byte_flip_mask */ /* Load ymm with mem and byte swap each dword */ -.macro COPY_YMM_AND_BSWAP p1 p2 p3 - VMOVDQ \p1, \p2 - vpshufb \p1, \p1, 
\p3 -.endm -/* rotate_Ys */ -/* Rotate values of symbols Y0...Y3 */ -.macro rotate_Ys - __Y_ = Y_0 - Y_0 = Y_1 - Y_1 = Y_2 - Y_2 = Y_3 - Y_3 = __Y_ -.endm - -/* RotateState */ -.macro RotateState - /* Rotate symbles a..h right */ - old_h = h - __TMP_ = h - h = g - g = f - f = e - e = d - d = c - c = b - b = a - a = __TMP_ -.endm +#define COPY_YMM_AND_BSWAP(p1, p2, p3) \ + VMOVDQ p1, p2; \ + vpshufb p1, p1, p3 /* %macro MY_VPALIGNR YDST, YSRC1, YSRC2, RVAL */ /* YDST = {YSRC1, YSRC2} >> RVAL*8 */ -.macro MY_VPALIGNR YDST, YSRC1, YSRC2, RVAL - vperm2f128 \YDST, \YSRC1, \YSRC2, 0x3 /* YDST = {YS1_LO, YS2_HI} */ - vpalignr \YDST, \YDST, \YSRC2, \RVAL /* YDST = {YDS1, YS2} >> RVAL*8 */ -.endm - -.macro ONE_ROUND_PART1 XFER - /* h += Sum1 (e) + Ch (e, f, g) + (k[t] + w[0]); - * d += h; - * h += Sum0 (a) + Maj (a, b, c); - * - * Ch(x, y, z) => ((x & y) + (~x & z)) - * Maj(x, y, z) => ((x & y) + (z & (x ^ y))) - */ - - mov y3, e - add h, [\XFER] - and y3, f - rorx y0, e, 41 - rorx y1, e, 18 +#define MY_VPALIGNR(YDST, YSRC1, YSRC2, RVAL) \ + vperm2i128 YDST, YSRC1, YSRC2, 0x3 /* YDST = {YS1_LO, YS2_HI} */; \ + vpalignr YDST, YDST, YSRC2, RVAL /* YDST = {YDS1, YS2} >> RVAL*8 */ + +#define ONE_ROUND_PART1(XFERIN, a, b, c, d, e, f, g, h) \ + /* h += Sum1 (e) + Ch (e, f, g) + (k[t] + w[0]); \ + * d += h; \ + * h += Sum0 (a) + Maj (a, b, c); \ + * \ + * Ch(x, y, z) => ((x & y) + (~x & z)) \ + * Maj(x, y, z) => ((x & y) + (z & (x ^ y))) \ + */ \ + \ + mov y3, e; \ + add h, [XFERIN]; \ + and y3, f; \ + rorx y0, e, 41; \ + rorx y1, e, 18; \ + lea h, [h + y3]; \ + andn y3, e, g; \ + rorx T1, a, 34; \ + xor y0, y1; \ lea h, [h + y3] - andn y3, e, g - rorx T1, a, 34 - xor y0, y1 - lea h, [h + y3] -.endm -.macro ONE_ROUND_PART2 - rorx y2, a, 39 - rorx y1, e, 14 - mov y3, a - xor T1, y2 - xor y0, y1 - xor y3, b - lea h, [h + y0] - mov y0, a - rorx y2, a, 28 - add d, h - and y3, c - xor T1, y2 - lea h, [h + y3] - lea h, [h + T1] - and y0, b - lea h, [h + y0] -.endm - -.macro ONE_ROUND XFER - ONE_ROUND_PART1 \XFER - ONE_ROUND_PART2 -.endm - -.macro FOUR_ROUNDS_AND_SCHED X -/*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 0 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - - /* Extract w[t-7] */ - MY_VPALIGNR YTMP0, Y_3, Y_2, 8 /* YTMP0 = W[-7] */ - /* Calculate w[t-16] + w[t-7] */ - vpaddq YTMP0, YTMP0, Y_0 /* YTMP0 = W[-7] + W[-16] */ - /* Extract w[t-15] */ - MY_VPALIGNR YTMP1, Y_1, Y_0, 8 /* YTMP1 = W[-15] */ - - /* Calculate sigma0 */ - - /* Calculate w[t-15] ror 1 */ - vpsrlq YTMP2, YTMP1, 1 - vpsllq YTMP3, YTMP1, (64-1) - vpor YTMP3, YTMP3, YTMP2 /* YTMP3 = W[-15] ror 1 */ - /* Calculate w[t-15] shr 7 */ - vpsrlq YTMP4, YTMP1, 7 /* YTMP4 = W[-15] >> 7 */ - - ONE_ROUND rsp+frame_XFER+0*8+\X*32 - RotateState - -/*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 1 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - -/*;;;;;;;;;;;;;;;;;;;;;;;;; */ - - /* Calculate w[t-15] ror 8 */ - vpsrlq YTMP2, YTMP1, 8 - vpsllq YTMP1, YTMP1, (64-8) - vpor YTMP1, YTMP1, YTMP2 /* YTMP1 = W[-15] ror 8 */ - /* XOR the three components */ - vpxor YTMP3, YTMP3, YTMP4 /* YTMP3 = W[-15] ror 1 ^ W[-15] >> 7 */ - vpxor YTMP1, YTMP3, YTMP1 /* YTMP1 = s0 */ - - - /* Add three components, w[t-16], w[t-7] and sigma0 */ - vpaddq YTMP0, YTMP0, YTMP1 /* YTMP0 = W[-16] + W[-7] + s0 */ - /* Move to appropriate lanes for calculating w[16] and w[17] */ - vperm2f128 Y_0, YTMP0, YTMP0, 0x0 /* Y_0 = W[-16] + W[-7] + s0 {BABA} */ - /* Move to appropriate lanes for calculating w[18] and w[19] */ - vpand YTMP0, YTMP0, MASK_YMM_LO /* YTMP0 = W[-16] + W[-7] + s0 {DC00} 
*/ - - /* Calculate w[16] and w[17] in both 128 bit lanes */ - - /* Calculate sigma1 for w[16] and w[17] on both 128 bit lanes */ - vperm2f128 YTMP2, Y_3, Y_3, 0x11 /* YTMP2 = W[-2] {BABA} */ - vpsrlq YTMP4, YTMP2, 6 /* YTMP4 = W[-2] >> 6 {BABA} */ - - ONE_ROUND rsp+frame_XFER+1*8+\X*32 - RotateState - -/*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 2 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - -/*;;;;;;;;;;;;;;;;;;;;;;;;; */ +#define ONE_ROUND_PART2(a, b, c, d, e, f, g, h) \ + rorx y2, a, 39; \ + rorx y1, e, 14; \ + mov y3, a; \ + xor T1, y2; \ + xor y0, y1; \ + xor y3, b; \ + lea h, [h + y0]; \ + mov y0, a; \ + rorx y2, a, 28; \ + add d, h; \ + and y3, c; \ + xor T1, y2; \ + lea h, [h + y3]; \ + lea h, [h + T1]; \ + and y0, b; \ + lea h, [h + y0] - vpsrlq YTMP3, YTMP2, 19 /* YTMP3 = W[-2] >> 19 {BABA} */ - vpsllq YTMP1, YTMP2, (64-19) /* YTMP1 = W[-2] << 19 {BABA} */ - vpor YTMP3, YTMP3, YTMP1 /* YTMP3 = W[-2] ror 19 {BABA} */ - vpxor YTMP4, YTMP4, YTMP3 /* YTMP4 = W[-2] ror 19 ^ W[-2] >> 6 {BABA} */ - vpsrlq YTMP3, YTMP2, 61 /* YTMP3 = W[-2] >> 61 {BABA} */ - vpsllq YTMP1, YTMP2, (64-61) /* YTMP1 = W[-2] << 61 {BABA} */ - vpor YTMP3, YTMP3, YTMP1 /* YTMP3 = W[-2] ror 61 {BABA} */ - vpxor YTMP4, YTMP4, YTMP3 /* YTMP4 = s1 = (W[-2] ror 19) ^ (W[-2] ror 61) ^ (W[-2] >> 6) {BABA} */ - - /* Add sigma1 to the other compunents to get w[16] and w[17] */ - vpaddq Y_0, Y_0, YTMP4 /* Y_0 = {W[1], W[0], W[1], W[0]} */ - - /* Calculate sigma1 for w[18] and w[19] for upper 128 bit lane */ - vpsrlq YTMP4, Y_0, 6 /* YTMP4 = W[-2] >> 6 {DC--} */ - - ONE_ROUND rsp+frame_XFER+2*8+\X*32 - RotateState - -/*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 3 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - -/*;;;;;;;;;;;;;;;;;;;;;;;;; */ - - vpsrlq YTMP3, Y_0, 19 /* YTMP3 = W[-2] >> 19 {DC--} */ - vpsllq YTMP1, Y_0, (64-19) /* YTMP1 = W[-2] << 19 {DC--} */ - vpor YTMP3, YTMP3, YTMP1 /* YTMP3 = W[-2] ror 19 {DC--} */ - vpxor YTMP4, YTMP4, YTMP3 /* YTMP4 = W[-2] ror 19 ^ W[-2] >> 6 {DC--} */ - vpsrlq YTMP3, Y_0, 61 /* YTMP3 = W[-2] >> 61 {DC--} */ - vpsllq YTMP1, Y_0, (64-61) /* YTMP1 = W[-2] << 61 {DC--} */ - vpor YTMP3, YTMP3, YTMP1 /* YTMP3 = W[-2] ror 61 {DC--} */ - vpxor YTMP4, YTMP4, YTMP3 /* YTMP4 = s1 = (W[-2] ror 19) ^ (W[-2] ror 61) ^ (W[-2] >> 6) {DC--} */ - - /* Add the sigma0 + w[t-7] + w[t-16] for w[18] and w[19] to newly calculated sigma1 to get w[18] and w[19] */ - vpaddq YTMP2, YTMP0, YTMP4 /* YTMP2 = {W[3], W[2], --, --} */ - - /* Form w[19, w[18], w17], w[16] */ - vpblendd Y_0, Y_0, YTMP2, 0xF0 /* Y_0 = {W[3], W[2], W[1], W[0]} */ - - ONE_ROUND_PART1 rsp+frame_XFER+3*8+\X*32 - vpaddq XFER, Y_0, [TBL + (4+\X)*32] - vmovdqa [rsp + frame_XFER + \X*32], XFER - ONE_ROUND_PART2 - RotateState - rotate_Ys -.endm - -.macro DO_4ROUNDS X - -/*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 0 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - - ONE_ROUND rsp+frame_XFER+0*8+\X*32 - RotateState - -/*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 1 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - - ONE_ROUND rsp+frame_XFER+1*8+\X*32 - RotateState - -/*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 2 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - - ONE_ROUND rsp+frame_XFER+2*8+\X*32 - RotateState - -/*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 3 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */ - - ONE_ROUND rsp+frame_XFER+3*8+\X*32 - RotateState - -.endm +#define ONE_ROUND(XFERIN, a, b, c, d, e, f, g, h) \ + ONE_ROUND_PART1(XFERIN, a, b, c, d, e, f, g, h); \ + ONE_ROUND_PART2(a, b, c, d, e, f, g, h) + +#define FOUR_ROUNDS_AND_SCHED(X, 
Y_0, Y_1, Y_2, Y_3, a, b, c, d, e, f, g, h) \ + /*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 0 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */; \ + /* Extract w[t-7] */; \ + MY_VPALIGNR( YTMP0, Y_3, Y_2, 8) /* YTMP0 = W[-7] */; \ + /* Calculate w[t-16] + w[t-7] */; \ + vpaddq YTMP0, YTMP0, Y_0 /* YTMP0 = W[-7] + W[-16] */; \ + /* Extract w[t-15] */; \ + MY_VPALIGNR( YTMP1, Y_1, Y_0, 8) /* YTMP1 = W[-15] */; \ + \ + /* Calculate sigma0 */; \ + \ + /* Calculate w[t-15] ror 1 */; \ + vpsrlq YTMP2, YTMP1, 1; \ + vpsllq YTMP3, YTMP1, (64-1); \ + vpor YTMP3, YTMP3, YTMP2 /* YTMP3 = W[-15] ror 1 */; \ + /* Calculate w[t-15] shr 7 */; \ + vpsrlq YTMP4, YTMP1, 7 /* YTMP4 = W[-15] >> 7 */; \ + \ + ONE_ROUND(rsp+frame_XFER+0*8+X*32, a, b, c, d, e, f, g, h); \ + \ + /*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 1 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */; \ + /* Calculate w[t-15] ror 8 */; \ + vpsrlq YTMP2, YTMP1, 8; \ + vpsllq YTMP1, YTMP1, (64-8); \ + vpor YTMP1, YTMP1, YTMP2 /* YTMP1 = W[-15] ror 8 */; \ + /* XOR the three components */; \ + vpxor YTMP3, YTMP3, YTMP4 /* YTMP3 = W[-15] ror 1 ^ W[-15] >> 7 */; \ + vpxor YTMP1, YTMP3, YTMP1 /* YTMP1 = s0 */; \ + \ + /* Add three components, w[t-16], w[t-7] and sigma0 */; \ + vpaddq YTMP0, YTMP0, YTMP1 /* YTMP0 = W[-16] + W[-7] + s0 */; \ + /* Move to appropriate lanes for calculating w[16] and w[17] */; \ + vperm2i128 Y_0, YTMP0, YTMP0, 0x0 /* Y_0 = W[-16] + W[-7] + s0 {BABA} */; \ + /* Move to appropriate lanes for calculating w[18] and w[19] */; \ + vpand YTMP0, YTMP0, MASK_YMM_LO /* YTMP0 = W[-16] + W[-7] + s0 {DC00} */; \ + \ + /* Calculate w[16] and w[17] in both 128 bit lanes */; \ + \ + /* Calculate sigma1 for w[16] and w[17] on both 128 bit lanes */; \ + vperm2i128 YTMP2, Y_3, Y_3, 0x11 /* YTMP2 = W[-2] {BABA} */; \ + vpsrlq YTMP4, YTMP2, 6 /* YTMP4 = W[-2] >> 6 {BABA} */; \ + \ + ONE_ROUND(rsp+frame_XFER+1*8+X*32, h, a, b, c, d, e, f, g); \ + \ + /*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 2 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */; \ + vpsrlq YTMP3, YTMP2, 19 /* YTMP3 = W[-2] >> 19 {BABA} */; \ + vpsllq YTMP1, YTMP2, (64-19) /* YTMP1 = W[-2] << 19 {BABA} */; \ + vpor YTMP3, YTMP3, YTMP1 /* YTMP3 = W[-2] ror 19 {BABA} */; \ + vpxor YTMP4, YTMP4, YTMP3 /* YTMP4 = W[-2] ror 19 ^ W[-2] >> 6 {BABA} */; \ + vpsrlq YTMP3, YTMP2, 61 /* YTMP3 = W[-2] >> 61 {BABA} */; \ + vpsllq YTMP1, YTMP2, (64-61) /* YTMP1 = W[-2] << 61 {BABA} */; \ + vpor YTMP3, YTMP3, YTMP1 /* YTMP3 = W[-2] ror 61 {BABA} */; \ + vpxor YTMP4, YTMP4, YTMP3 /* YTMP4 = s1 = (W[-2] ror 19) ^ (W[-2] ror 61) ^ (W[-2] >> 6) {BABA} */; \ + \ + /* Add sigma1 to the other compunents to get w[16] and w[17] */; \ + vpaddq Y_0, Y_0, YTMP4 /* Y_0 = {W[1], W[0], W[1], W[0]} */; \ + \ + /* Calculate sigma1 for w[18] and w[19] for upper 128 bit lane */; \ + vpsrlq YTMP4, Y_0, 6 /* YTMP4 = W[-2] >> 6 {DC--} */; \ + \ + ONE_ROUND(rsp+frame_XFER+2*8+X*32, g, h, a, b, c, d, e, f); \ + \ + /*;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; RND N + 3 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; */; \ + vpsrlq YTMP3, Y_0, 19 /* YTMP3 = W[-2] >> 19 {DC--} */; \ + vpsllq YTMP1, Y_0, (64-19) /* YTMP1 = W[-2] << 19 {DC--} */; \ + vpor YTMP3, YTMP3, YTMP1 /* YTMP3 = W[-2] ror 19 {DC--} */; \ + vpxor YTMP4, YTMP4, YTMP3 /* YTMP4 = W[-2] ror 19 ^ W[-2] >> 6 {DC--} */; \ + vpsrlq YTMP3, Y_0, 61 /* YTMP3 = W[-2] >> 61 {DC--} */; \ + vpsllq YTMP1, Y_0, (64-61) /* YTMP1 = W[-2] << 61 {DC--} */; \ + vpor YTMP3, YTMP3, YTMP1 /* YTMP3 = W[-2] ror 61 {DC--} */; \ + vpxor YTMP4, YTMP4, YTMP3 /* YTMP4 = s1 = (W[-2] ror 19) ^ (W[-2] 
ror 61) ^ (W[-2] >> 6) {DC--} */; \ + \ + /* Add the sigma0 + w[t-7] + w[t-16] for w[18] and w[19] to newly calculated sigma1 to get w[18] and w[19] */; \ + vpaddq YTMP2, YTMP0, YTMP4 /* YTMP2 = {W[3], W[2], --, --} */; \ + \ + /* Form w[19, w[18], w17], w[16] */; \ + vpblendd Y_0, Y_0, YTMP2, 0xF0 /* Y_0 = {W[3], W[2], W[1], W[0]} */; \ + \ + ONE_ROUND_PART1(rsp+frame_XFER+3*8+X*32, f, g, h, a, b, c, d, e); \ + vpaddq XFER, Y_0, [TBL + (4+X)*32]; \ + vmovdqa [rsp + frame_XFER + X*32], XFER; \ + ONE_ROUND_PART2(f, g, h, a, b, c, d, e) + +#define DO_4ROUNDS(X, a, b, c, d, e, f, g, h) \ + ONE_ROUND(rsp+frame_XFER+0*8+X*32, a, b, c, d, e, f, g, h); \ + ONE_ROUND(rsp+frame_XFER+1*8+X*32, h, a, b, c, d, e, f, g); \ + ONE_ROUND(rsp+frame_XFER+2*8+X*32, g, h, a, b, c, d, e, f); \ + ONE_ROUND(rsp+frame_XFER+3*8+X*32, f, g, h, a, b, c, d, e) /* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; @@ -390,10 +324,10 @@ _gcry_sha512_transform_amd64_avx2: lea TBL,[.LK512 ADD_RIP] /*; byte swap first 16 dwords */ - COPY_YMM_AND_BSWAP Y_0, [INP + 0*32], BYTE_FLIP_MASK - COPY_YMM_AND_BSWAP Y_1, [INP + 1*32], BYTE_FLIP_MASK - COPY_YMM_AND_BSWAP Y_2, [INP + 2*32], BYTE_FLIP_MASK - COPY_YMM_AND_BSWAP Y_3, [INP + 3*32], BYTE_FLIP_MASK + COPY_YMM_AND_BSWAP(Y_0, [INP + 0*32], BYTE_FLIP_MASK) + COPY_YMM_AND_BSWAP(Y_1, [INP + 1*32], BYTE_FLIP_MASK) + COPY_YMM_AND_BSWAP(Y_2, [INP + 2*32], BYTE_FLIP_MASK) + COPY_YMM_AND_BSWAP(Y_3, [INP + 3*32], BYTE_FLIP_MASK) add INP, 128 mov [rsp + frame_INP], INP @@ -408,20 +342,20 @@ _gcry_sha512_transform_amd64_avx2: vmovdqa [rsp + frame_XFER + 3*32], XFER /*; schedule 64 input dwords, by doing 12 rounds of 4 each */ - movq [rsp + frame_SRND],4 + mov qword ptr [rsp + frame_SRND], 4 .align 16 .Loop0: - FOUR_ROUNDS_AND_SCHED 0 - FOUR_ROUNDS_AND_SCHED 1 - FOUR_ROUNDS_AND_SCHED 2 - FOUR_ROUNDS_AND_SCHED 3 + FOUR_ROUNDS_AND_SCHED(0, Y_0, Y_1, Y_2, Y_3, a, b, c, d, e, f, g, h) + FOUR_ROUNDS_AND_SCHED(1, Y_1, Y_2, Y_3, Y_0, e, f, g, h, a, b, c, d) + FOUR_ROUNDS_AND_SCHED(2, Y_2, Y_3, Y_0, Y_1, a, b, c, d, e, f, g, h) + FOUR_ROUNDS_AND_SCHED(3, Y_3, Y_0, Y_1, Y_2, e, f, g, h, a, b, c, d) add TBL, 4*32 - subq [rsp + frame_SRND], 1 + sub qword ptr [rsp + frame_SRND], 1 jne .Loop0 - subq [rsp + frame_NBLKS], 1 + sub qword ptr [rsp + frame_NBLKS], 1 je .Ldone_hash mov INP, [rsp + frame_INP] @@ -429,62 +363,62 @@ _gcry_sha512_transform_amd64_avx2: lea TBL,[.LK512 ADD_RIP] /* load next block and byte swap */ - COPY_YMM_AND_BSWAP Y_0, [INP + 0*32], BYTE_FLIP_MASK - COPY_YMM_AND_BSWAP Y_1, [INP + 1*32], BYTE_FLIP_MASK - COPY_YMM_AND_BSWAP Y_2, [INP + 2*32], BYTE_FLIP_MASK - COPY_YMM_AND_BSWAP Y_3, [INP + 3*32], BYTE_FLIP_MASK + COPY_YMM_AND_BSWAP(Y_0, [INP + 0*32], BYTE_FLIP_MASK) + COPY_YMM_AND_BSWAP(Y_1, [INP + 1*32], BYTE_FLIP_MASK) + COPY_YMM_AND_BSWAP(Y_2, [INP + 2*32], BYTE_FLIP_MASK) + COPY_YMM_AND_BSWAP(Y_3, [INP + 3*32], BYTE_FLIP_MASK) add INP, 128 mov [rsp + frame_INP], INP - DO_4ROUNDS 0 + DO_4ROUNDS(0, a, b, c, d, e, f, g, h) vpaddq XFER, Y_0, [TBL + 0*32] vmovdqa [rsp + frame_XFER + 0*32], XFER - DO_4ROUNDS 1 + DO_4ROUNDS(1, e, f, g, h, a, b, c, d) vpaddq XFER, Y_1, [TBL + 1*32] vmovdqa [rsp + frame_XFER + 1*32], XFER - DO_4ROUNDS 2 + DO_4ROUNDS(2, a, b, c, d, e, f, g, h) vpaddq XFER, Y_2, [TBL + 2*32] vmovdqa [rsp + frame_XFER + 2*32], XFER - DO_4ROUNDS 3 + DO_4ROUNDS(3, e, f, g, h, a, b, c, d) vpaddq XFER, Y_3, [TBL + 3*32] vmovdqa [rsp + frame_XFER + 3*32], XFER - addm [8*0 + CTX],a - addm [8*1 + CTX],b - addm [8*2 + CTX],c - addm [8*3 + CTX],d - addm 
[8*4 + CTX],e - addm [8*5 + CTX],f - addm [8*6 + CTX],g - addm [8*7 + CTX],h + addm([8*0 + CTX],a) + addm([8*1 + CTX],b) + addm([8*2 + CTX],c) + addm([8*3 + CTX],d) + addm([8*4 + CTX],e) + addm([8*5 + CTX],f) + addm([8*6 + CTX],g) + addm([8*7 + CTX],h) /*; schedule 64 input dwords, by doing 12 rounds of 4 each */ - movq [rsp + frame_SRND],4 + mov qword ptr [rsp + frame_SRND],4 jmp .Loop0 .Ldone_hash: vzeroall - DO_4ROUNDS 0 + DO_4ROUNDS(0, a, b, c, d, e, f, g, h) vmovdqa [rsp + frame_XFER + 0*32], ymm0 /* burn stack */ - DO_4ROUNDS 1 + DO_4ROUNDS(1, e, f, g, h, a, b, c, d) vmovdqa [rsp + frame_XFER + 1*32], ymm0 /* burn stack */ - DO_4ROUNDS 2 + DO_4ROUNDS(2, a, b, c, d, e, f, g, h) vmovdqa [rsp + frame_XFER + 2*32], ymm0 /* burn stack */ - DO_4ROUNDS 3 + DO_4ROUNDS(3, e, f, g, h, a, b, c, d) vmovdqa [rsp + frame_XFER + 3*32], ymm0 /* burn stack */ - addm [8*0 + CTX],a + addm([8*0 + CTX],a) xor eax, eax /* burn stack */ - addm [8*1 + CTX],b - addm [8*2 + CTX],c - addm [8*3 + CTX],d - addm [8*4 + CTX],e - addm [8*5 + CTX],f - addm [8*6 + CTX],g - addm [8*7 + CTX],h + addm([8*1 + CTX],b) + addm([8*2 + CTX],c) + addm([8*3 + CTX],d) + addm([8*4 + CTX],e) + addm([8*5 + CTX],f) + addm([8*6 + CTX],g) + addm([8*7 + CTX],h) /* Restore GPRs */ mov rbp, [rsp + frame_GPRSAVE + 8 * 0] diff --git a/cipher/sha512-ssse3-amd64.S b/cipher/sha512-ssse3-amd64.S index 39bfe362..6a1328a6 100644 --- a/cipher/sha512-ssse3-amd64.S +++ b/cipher/sha512-ssse3-amd64.S @@ -56,32 +56,32 @@ .text /* Virtual Registers */ -msg = rdi /* ARG1 */ -digest = rsi /* ARG2 */ -msglen = rdx /* ARG3 */ -T1 = rcx -T2 = r8 -a_64 = r9 -b_64 = r10 -c_64 = r11 -d_64 = r12 -e_64 = r13 -f_64 = r14 -g_64 = r15 -h_64 = rbx -tmp0 = rax +#define msg rdi /* ARG1 */ +#define digest rsi /* ARG2 */ +#define msglen rdx /* ARG3 */ +#define T1 rcx +#define T2 r8 +#define a_64 r9 +#define b_64 r10 +#define c_64 r11 +#define d_64 r12 +#define e_64 r13 +#define f_64 r14 +#define g_64 r15 +#define h_64 rbx +#define tmp0 rax /* ; Local variables (stack frame) ; Note: frame_size must be an odd multiple of 8 bytes to XMM align RSP */ -frame_W = 0 /* Message Schedule */ -frame_W_size = (80 * 8) -frame_WK = ((frame_W) + (frame_W_size)) /* W[t] + K[t] | W[t+1] + K[t+1] */ -frame_WK_size = (2 * 8) -frame_GPRSAVE = ((frame_WK) + (frame_WK_size)) -frame_GPRSAVE_size = (5 * 8) -frame_size = ((frame_GPRSAVE) + (frame_GPRSAVE_size)) +#define frame_W 0 /* Message Schedule */ +#define frame_W_size (80 * 8) +#define frame_WK ((frame_W) + (frame_W_size)) /* W[t] + K[t] | W[t+1] + K[t+1] */ +#define frame_WK_size (2 * 8) +#define frame_GPRSAVE ((frame_WK) + (frame_WK_size)) +#define frame_GPRSAVE_size (5 * 8) +#define frame_size ((frame_GPRSAVE) + (frame_GPRSAVE_size)) /* Useful QWORD "arrays" for simpler memory references */ @@ -93,161 +93,151 @@ frame_size = ((frame_GPRSAVE) + (frame_GPRSAVE_size)) /* MSG, DIGEST, K_t, W_t are arrays */ /* WK_2(t) points to 1 of 2 qwords at frame.WK depdending on t being odd/even */ -.macro RotateState - /* Rotate symbles a..h right */ - __TMP = h_64 - h_64 = g_64 - g_64 = f_64 - f_64 = e_64 - e_64 = d_64 - d_64 = c_64 - c_64 = b_64 - b_64 = a_64 - a_64 = __TMP -.endm - -.macro SHA512_Round t - /* Compute Round %%t */ - mov T1, f_64 /* T1 = f */ - mov tmp0, e_64 /* tmp = e */ - xor T1, g_64 /* T1 = f ^ g */ - ror tmp0, 23 /* 41 ; tmp = e ror 23 */ - and T1, e_64 /* T1 = (f ^ g) & e */ - xor tmp0, e_64 /* tmp = (e ror 23) ^ e */ - xor T1, g_64 /* T1 = ((f ^ g) & e) ^ g = CH(e,f,g) */ - add T1, [WK_2(\t)] /* W[t] + K[t] from message 
scheduler */ - ror tmp0, 4 /* 18 ; tmp = ((e ror 23) ^ e) ror 4 */ - xor tmp0, e_64 /* tmp = (((e ror 23) ^ e) ror 4) ^ e */ - mov T2, a_64 /* T2 = a */ - add T1, h_64 /* T1 = CH(e,f,g) + W[t] + K[t] + h */ - ror tmp0, 14 /* 14 ; tmp = ((((e ror23)^e)ror4)^e)ror14 = S1(e) */ - add T1, tmp0 /* T1 = CH(e,f,g) + W[t] + K[t] + S1(e) */ - mov tmp0, a_64 /* tmp = a */ - xor T2, c_64 /* T2 = a ^ c */ - and tmp0, c_64 /* tmp = a & c */ - and T2, b_64 /* T2 = (a ^ c) & b */ - xor T2, tmp0 /* T2 = ((a ^ c) & b) ^ (a & c) = Maj(a,b,c) */ - mov tmp0, a_64 /* tmp = a */ - ror tmp0, 5 /* 39 ; tmp = a ror 5 */ - xor tmp0, a_64 /* tmp = (a ror 5) ^ a */ - add d_64, T1 /* e(next_state) = d + T1 */ - ror tmp0, 6 /* 34 ; tmp = ((a ror 5) ^ a) ror 6 */ - xor tmp0, a_64 /* tmp = (((a ror 5) ^ a) ror 6) ^ a */ - lea h_64, [T1 + T2] /* a(next_state) = T1 + Maj(a,b,c) */ - ror tmp0, 28 /* 28 ; tmp = ((((a ror5)^a)ror6)^a)ror28 = S0(a) */ - add h_64, tmp0 /* a(next_state) = T1 + Maj(a,b,c) S0(a) */ - RotateState -.endm - -.macro SHA512_2Sched_2Round_sse t -/* ; Compute rounds %%t-2 and %%t-1 - ; Compute message schedule QWORDS %%t and %%t+1 - - ; Two rounds are computed based on the values for K[t-2]+W[t-2] and - ; K[t-1]+W[t-1] which were previously stored at WK_2 by the message - ; scheduler. - ; The two new schedule QWORDS are stored at [W_t(%%t)] and [W_t(%%t+1)]. - ; They are then added to their respective SHA512 constants at - ; [K_t(%%t)] and [K_t(%%t+1)] and stored at dqword [WK_2(%%t)] - ; For brievity, the comments following vectored instructions only refer to - ; the first of a pair of QWORDS. - ; Eg. XMM2=W[t-2] really means XMM2={W[t-2]|W[t-1]} - ; The computation of the message schedule and the rounds are tightly - ; stitched to take advantage of instruction-level parallelism. - ; For clarity, integer instructions (for the rounds calculation) are indented - ; by one tab. Vectored instructions (for the message scheduler) are indented - ; by two tabs. 
*/ - - mov T1, f_64 - movdqa xmm2, [W_t(\t-2)] /* XMM2 = W[t-2] */ - xor T1, g_64 - and T1, e_64 - movdqa xmm0, xmm2 /* XMM0 = W[t-2] */ - xor T1, g_64 - add T1, [WK_2(\t)] - movdqu xmm5, [W_t(\t-15)] /* XMM5 = W[t-15] */ - mov tmp0, e_64 - ror tmp0, 23 /* 41 */ - movdqa xmm3, xmm5 /* XMM3 = W[t-15] */ - xor tmp0, e_64 - ror tmp0, 4 /* 18 */ - psrlq xmm0, 61 - 19 /* XMM0 = W[t-2] >> 42 */ - xor tmp0, e_64 - ror tmp0, 14 /* 14 */ - psrlq xmm3, (8 - 7) /* XMM3 = W[t-15] >> 1 */ - add T1, tmp0 - add T1, h_64 - pxor xmm0, xmm2 /* XMM0 = (W[t-2] >> 42) ^ W[t-2] */ - mov T2, a_64 - xor T2, c_64 - pxor xmm3, xmm5 /* XMM3 = (W[t-15] >> 1) ^ W[t-15] */ - and T2, b_64 - mov tmp0, a_64 - psrlq xmm0, 19 - 6 /* XMM0 = ((W[t-2]>>42)^W[t-2])>>13 */ - and tmp0, c_64 - xor T2, tmp0 - psrlq xmm3, (7 - 1) /* XMM3 = ((W[t-15]>>1)^W[t-15])>>6 */ - mov tmp0, a_64 - ror tmp0, 5 /* 39 */ - pxor xmm0, xmm2 /* XMM0 = (((W[t-2]>>42)^W[t-2])>>13)^W[t-2] */ - xor tmp0, a_64 - ror tmp0, 6 /* 34 */ - pxor xmm3, xmm5 /* XMM3 = (((W[t-15]>>1)^W[t-15])>>6)^W[t-15] */ - xor tmp0, a_64 - ror tmp0, 28 /* 28 */ - psrlq xmm0, 6 /* XMM0 = ((((W[t-2]>>42)^W[t-2])>>13)^W[t-2])>>6 */ - add T2, tmp0 - add d_64, T1 - psrlq xmm3, 1 /* XMM3 = (((W[t-15]>>1)^W[t-15])>>6)^W[t-15]>>1 */ - lea h_64, [T1 + T2] - RotateState - movdqa xmm1, xmm2 /* XMM1 = W[t-2] */ - mov T1, f_64 - xor T1, g_64 - movdqa xmm4, xmm5 /* XMM4 = W[t-15] */ - and T1, e_64 - xor T1, g_64 - psllq xmm1, (64 - 19) - (64 - 61) /* XMM1 = W[t-2] << 42 */ - add T1, [WK_2(\t+1)] - mov tmp0, e_64 - psllq xmm4, (64 - 1) - (64 - 8) /* XMM4 = W[t-15] << 7 */ - ror tmp0, 23 /* 41 */ - xor tmp0, e_64 - pxor xmm1, xmm2 /* XMM1 = (W[t-2] << 42)^W[t-2] */ - ror tmp0, 4 /* 18 */ - xor tmp0, e_64 - pxor xmm4, xmm5 /* XMM4 = (W[t-15]<<7)^W[t-15] */ - ror tmp0, 14 /* 14 */ - add T1, tmp0 - psllq xmm1, (64 - 61) /* XMM1 = ((W[t-2] << 42)^W[t-2])<<3 */ - add T1, h_64 - mov T2, a_64 - psllq xmm4, (64 - 8) /* XMM4 = ((W[t-15]<<7)^W[t-15])<<56 */ - xor T2, c_64 - and T2, b_64 - pxor xmm0, xmm1 /* XMM0 = s1(W[t-2]) */ - mov tmp0, a_64 - and tmp0, c_64 - movdqu xmm1, [W_t(\t- 7)] /* XMM1 = W[t-7] */ - xor T2, tmp0 - pxor xmm3, xmm4 /* XMM3 = s0(W[t-15]) */ - mov tmp0, a_64 - paddq xmm0, xmm3 /* XMM0 = s1(W[t-2]) + s0(W[t-15]) */ - ror tmp0, 5 /* 39 */ - paddq xmm0, [W_t(\t-16)] /* XMM0 = s1(W[t-2]) + s0(W[t-15]) + W[t-16] */ - xor tmp0, a_64 - paddq xmm0, xmm1 /* XMM0 = s1(W[t-2]) + W[t-7] + s0(W[t-15]) + W[t-16] */ - ror tmp0, 6 /* 34 */ - movdqa [W_t(\t)], xmm0 /* Store scheduled qwords */ - xor tmp0, a_64 - paddq xmm0, [K_t(t)] /* Compute W[t]+K[t] */ - ror tmp0, 28 /* 28 */ - movdqa [WK_2(t)], xmm0 /* Store W[t]+K[t] for next rounds */ - add T2, tmp0 - add d_64, T1 - lea h_64, [T1 + T2] - RotateState -.endm +#define SHA512_Round(t, a, b, c, d, e, f, g, h) \ + /* Compute Round %%t */; \ + mov T1, f /* T1 = f */; \ + mov tmp0, e /* tmp = e */; \ + xor T1, g /* T1 = f ^ g */; \ + ror tmp0, 23 /* 41 ; tmp = e ror 23 */; \ + and T1, e /* T1 = (f ^ g) & e */; \ + xor tmp0, e /* tmp = (e ror 23) ^ e */; \ + xor T1, g /* T1 = ((f ^ g) & e) ^ g = CH(e,f,g) */; \ + add T1, [WK_2(t)] /* W[t] + K[t] from message scheduler */; \ + ror tmp0, 4 /* 18 ; tmp = ((e ror 23) ^ e) ror 4 */; \ + xor tmp0, e /* tmp = (((e ror 23) ^ e) ror 4) ^ e */; \ + mov T2, a /* T2 = a */; \ + add T1, h /* T1 = CH(e,f,g) + W[t] + K[t] + h */; \ + ror tmp0, 14 /* 14 ; tmp = ((((e ror23)^e)ror4)^e)ror14 = S1(e) */; \ + add T1, tmp0 /* T1 = CH(e,f,g) + W[t] + K[t] + S1(e) */; \ + mov tmp0, a /* tmp = a */; \ + xor T2, c /* T2 = 
a ^ c */; \ + and tmp0, c /* tmp = a & c */; \ + and T2, b /* T2 = (a ^ c) & b */; \ + xor T2, tmp0 /* T2 = ((a ^ c) & b) ^ (a & c) = Maj(a,b,c) */; \ + mov tmp0, a /* tmp = a */; \ + ror tmp0, 5 /* 39 ; tmp = a ror 5 */; \ + xor tmp0, a /* tmp = (a ror 5) ^ a */; \ + add d, T1 /* e(next_state) = d + T1 */; \ + ror tmp0, 6 /* 34 ; tmp = ((a ror 5) ^ a) ror 6 */; \ + xor tmp0, a /* tmp = (((a ror 5) ^ a) ror 6) ^ a */; \ + lea h, [T1 + T2] /* a(next_state) = T1 + Maj(a,b,c) */; \ + ror tmp0, 28 /* 28 ; tmp = ((((a ror5)^a)ror6)^a)ror28 = S0(a) */; \ + add h, tmp0 /* a(next_state) = T1 + Maj(a,b,c) S0(a) */ + +#define SHA512_2Sched_2Round_sse_PART1(t, a, b, c, d, e, f, g, h) \ + /* \ + ; Compute rounds %%t-2 and %%t-1 \ + ; Compute message schedule QWORDS %%t and %%t+1 \ + ; \ + ; Two rounds are computed based on the values for K[t-2]+W[t-2] and \ + ; K[t-1]+W[t-1] which were previously stored at WK_2 by the message \ + ; scheduler. \ + ; The two new schedule QWORDS are stored at [W_t(%%t)] and [W_t(%%t+1)]. \ + ; They are then added to their respective SHA512 constants at \ + ; [K_t(%%t)] and [K_t(%%t+1)] and stored at dqword [WK_2(%%t)] \ + ; For brievity, the comments following vectored instructions only refer to \ + ; the first of a pair of QWORDS. \ + ; Eg. XMM2=W[t-2] really means XMM2={W[t-2]|W[t-1]} \ + ; The computation of the message schedule and the rounds are tightly \ + ; stitched to take advantage of instruction-level parallelism. \ + ; For clarity, integer instructions (for the rounds calculation) are indented \ + ; by one tab. Vectored instructions (for the message scheduler) are indented \ + ; by two tabs. \ + */ \ + \ + mov T1, f; \ + movdqa xmm2, [W_t(t-2)] /* XMM2 = W[t-2] */; \ + xor T1, g; \ + and T1, e; \ + movdqa xmm0, xmm2 /* XMM0 = W[t-2] */; \ + xor T1, g; \ + add T1, [WK_2(t)]; \ + movdqu xmm5, [W_t(t-15)] /* XMM5 = W[t-15] */; \ + mov tmp0, e; \ + ror tmp0, 23 /* 41 */; \ + movdqa xmm3, xmm5 /* XMM3 = W[t-15] */; \ + xor tmp0, e; \ + ror tmp0, 4 /* 18 */; \ + psrlq xmm0, 61 - 19 /* XMM0 = W[t-2] >> 42 */; \ + xor tmp0, e; \ + ror tmp0, 14 /* 14 */; \ + psrlq xmm3, (8 - 7) /* XMM3 = W[t-15] >> 1 */; \ + add T1, tmp0; \ + add T1, h; \ + pxor xmm0, xmm2 /* XMM0 = (W[t-2] >> 42) ^ W[t-2] */; \ + mov T2, a; \ + xor T2, c; \ + pxor xmm3, xmm5 /* XMM3 = (W[t-15] >> 1) ^ W[t-15] */; \ + and T2, b; \ + mov tmp0, a; \ + psrlq xmm0, 19 - 6 /* XMM0 = ((W[t-2]>>42)^W[t-2])>>13 */; \ + and tmp0, c; \ + xor T2, tmp0; \ + psrlq xmm3, (7 - 1) /* XMM3 = ((W[t-15]>>1)^W[t-15])>>6 */; \ + mov tmp0, a; \ + ror tmp0, 5 /* 39 */; \ + pxor xmm0, xmm2 /* XMM0 = (((W[t-2]>>42)^W[t-2])>>13)^W[t-2] */; \ + xor tmp0, a; \ + ror tmp0, 6 /* 34 */; \ + pxor xmm3, xmm5 /* XMM3 = (((W[t-15]>>1)^W[t-15])>>6)^W[t-15] */; \ + xor tmp0, a; \ + ror tmp0, 28 /* 28 */; \ + psrlq xmm0, 6 /* XMM0 = ((((W[t-2]>>42)^W[t-2])>>13)^W[t-2])>>6 */; \ + add T2, tmp0; \ + add d, T1; \ + psrlq xmm3, 1 /* XMM3 = (((W[t-15]>>1)^W[t-15])>>6)^W[t-15]>>1 */; \ + lea h, [T1 + T2] + +#define SHA512_2Sched_2Round_sse_PART2(t, a, b, c, d, e, f, g, h) \ + movdqa xmm1, xmm2 /* XMM1 = W[t-2] */; \ + mov T1, f; \ + xor T1, g; \ + movdqa xmm4, xmm5 /* XMM4 = W[t-15] */; \ + and T1, e; \ + xor T1, g; \ + psllq xmm1, (64 - 19) - (64 - 61) /* XMM1 = W[t-2] << 42 */; \ + add T1, [WK_2(t+1)]; \ + mov tmp0, e; \ + psllq xmm4, (64 - 1) - (64 - 8) /* XMM4 = W[t-15] << 7 */; \ + ror tmp0, 23 /* 41 */; \ + xor tmp0, e; \ + pxor xmm1, xmm2 /* XMM1 = (W[t-2] << 42)^W[t-2] */; \ + ror tmp0, 4 /* 18 */; \ + xor tmp0, e; \ + pxor xmm4, xmm5 /* 
XMM4 = (W[t-15]<<7)^W[t-15] */; \ + ror tmp0, 14 /* 14 */; \ + add T1, tmp0; \ + psllq xmm1, (64 - 61) /* XMM1 = ((W[t-2] << 42)^W[t-2])<<3 */; \ + add T1, h; \ + mov T2, a; \ + psllq xmm4, (64 - 8) /* XMM4 = ((W[t-15]<<7)^W[t-15])<<56 */; \ + xor T2, c; \ + and T2, b; \ + pxor xmm0, xmm1 /* XMM0 = s1(W[t-2]) */; \ + mov tmp0, a; \ + and tmp0, c; \ + movdqu xmm1, [W_t(t- 7)] /* XMM1 = W[t-7] */; \ + xor T2, tmp0; \ + pxor xmm3, xmm4 /* XMM3 = s0(W[t-15]) */; \ + mov tmp0, a; \ + paddq xmm0, xmm3 /* XMM0 = s1(W[t-2]) + s0(W[t-15]) */; \ + ror tmp0, 5 /* 39 */; \ + paddq xmm0, [W_t(t-16)] /* XMM0 = s1(W[t-2]) + s0(W[t-15]) + W[t-16] */; \ + xor tmp0, a; \ + paddq xmm0, xmm1 /* XMM0 = s1(W[t-2]) + W[t-7] + s0(W[t-15]) + W[t-16] */; \ + ror tmp0, 6 /* 34 */; \ + movdqa [W_t(t)], xmm0 /* Store scheduled qwords */; \ + xor tmp0, a; \ + paddq xmm0, [K_t(t)] /* Compute W[t]+K[t] */; \ + ror tmp0, 28 /* 28 */; \ + movdqa [WK_2(t)], xmm0 /* Store W[t]+K[t] for next rounds */; \ + add T2, tmp0; \ + add d, T1; \ + lea h, [T1 + T2] + +#define SHA512_2Sched_2Round_sse(t, a, b, c, d, e, f, g, h) \ + SHA512_2Sched_2Round_sse_PART1(t, a, b, c, d, e, f, g, h); \ + SHA512_2Sched_2Round_sse_PART2(t, h, a, b, c, d, e, f, g) /* ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; @@ -295,37 +285,77 @@ _gcry_sha512_transform_amd64_ssse3: mov g_64, [DIGEST(6)] mov h_64, [DIGEST(7)] - t = 0 - .rept 80/2 + 1 - /* (80 rounds) / (2 rounds/iteration) + (1 iteration) */ - /* +1 iteration because the scheduler leads hashing by 1 iteration */ - .if t < 2 - /* BSWAP 2 QWORDS */ - movdqa xmm1, [.LXMM_QWORD_BSWAP ADD_RIP] - movdqu xmm0, [MSG(t)] - pshufb xmm0, xmm1 /* BSWAP */ - movdqa [W_t(t)], xmm0 /* Store Scheduled Pair */ - paddq xmm0, [K_t(t)] /* Compute W[t]+K[t] */ - movdqa [WK_2(t)], xmm0 /* Store into WK for rounds */ - .elseif t < 16 - /* BSWAP 2 QWORDS; Compute 2 Rounds */ - movdqu xmm0, [MSG(t)] - pshufb xmm0, xmm1 /* BSWAP */ - SHA512_Round (t - 2) /* Round t-2 */ - movdqa [W_t(t)], xmm0 /* Store Scheduled Pair */ - paddq xmm0, [K_t(t)] /* Compute W[t]+K[t] */ - SHA512_Round (t - 1) /* Round t-1 */ - movdqa [WK_2(t)], xmm0 /* Store W[t]+K[t] into WK */ - .elseif t < 79 - /* Schedule 2 QWORDS; Compute 2 Rounds */ - SHA512_2Sched_2Round_sse t - .else - /* Compute 2 Rounds */ - SHA512_Round (t - 2) - SHA512_Round (t - 1) - .endif - t = (t)+2 - .endr + /* BSWAP 2 QWORDS */ + movdqa xmm1, [.LXMM_QWORD_BSWAP ADD_RIP] + movdqu xmm0, [MSG(0)] + pshufb xmm0, xmm1 /* BSWAP */ + movdqa [W_t(0)], xmm0 /* Store Scheduled Pair */ + paddq xmm0, [K_t(0)] /* Compute W[t]+K[t] */ + movdqa [WK_2(0)], xmm0 /* Store into WK for rounds */ + + #define T_2_14(t, a, b, c, d, e, f, g, h) \ + /* BSWAP 2 QWORDS; Compute 2 Rounds */; \ + movdqu xmm0, [MSG(t)]; \ + pshufb xmm0, xmm1 /* BSWAP */; \ + SHA512_Round(((t) - 2), a##_64, b##_64, c##_64, d##_64, \ + e##_64, f##_64, g##_64, h##_64); \ + movdqa [W_t(t)], xmm0 /* Store Scheduled Pair */; \ + paddq xmm0, [K_t(t)] /* Compute W[t]+K[t] */; \ + SHA512_Round(((t) - 1), h##_64, a##_64, b##_64, c##_64, \ + d##_64, e##_64, f##_64, g##_64); \ + movdqa [WK_2(t)], xmm0 /* Store W[t]+K[t] into WK */ + + #define T_16_78(t, a, b, c, d, e, f, g, h) \ + SHA512_2Sched_2Round_sse((t), a##_64, b##_64, c##_64, d##_64, \ + e##_64, f##_64, g##_64, h##_64) + + #define T_80(t, a, b, c, d, e, f, g, h) \ + /* Compute 2 Rounds */; \ + SHA512_Round((t - 2), a##_64, b##_64, c##_64, d##_64, \ + e##_64, f##_64, g##_64, h##_64); \ + SHA512_Round((t - 1), h##_64, a##_64, b##_64, c##_64, \ + 
d##_64, e##_64, f##_64, g##_64) + + T_2_14(2, a, b, c, d, e, f, g, h) + T_2_14(4, g, h, a, b, c, d, e, f) + T_2_14(6, e, f, g, h, a, b, c, d) + T_2_14(8, c, d, e, f, g, h, a, b) + T_2_14(10, a, b, c, d, e, f, g, h) + T_2_14(12, g, h, a, b, c, d, e, f) + T_2_14(14, e, f, g, h, a, b, c, d) + T_16_78(16, c, d, e, f, g, h, a, b) + T_16_78(18, a, b, c, d, e, f, g, h) + T_16_78(20, g, h, a, b, c, d, e, f) + T_16_78(22, e, f, g, h, a, b, c, d) + T_16_78(24, c, d, e, f, g, h, a, b) + T_16_78(26, a, b, c, d, e, f, g, h) + T_16_78(28, g, h, a, b, c, d, e, f) + T_16_78(30, e, f, g, h, a, b, c, d) + T_16_78(32, c, d, e, f, g, h, a, b) + T_16_78(34, a, b, c, d, e, f, g, h) + T_16_78(36, g, h, a, b, c, d, e, f) + T_16_78(38, e, f, g, h, a, b, c, d) + T_16_78(40, c, d, e, f, g, h, a, b) + T_16_78(42, a, b, c, d, e, f, g, h) + T_16_78(44, g, h, a, b, c, d, e, f) + T_16_78(46, e, f, g, h, a, b, c, d) + T_16_78(48, c, d, e, f, g, h, a, b) + T_16_78(50, a, b, c, d, e, f, g, h) + T_16_78(52, g, h, a, b, c, d, e, f) + T_16_78(54, e, f, g, h, a, b, c, d) + T_16_78(56, c, d, e, f, g, h, a, b) + T_16_78(58, a, b, c, d, e, f, g, h) + T_16_78(60, g, h, a, b, c, d, e, f) + T_16_78(62, e, f, g, h, a, b, c, d) + T_16_78(64, c, d, e, f, g, h, a, b) + T_16_78(66, a, b, c, d, e, f, g, h) + T_16_78(68, g, h, a, b, c, d, e, f) + T_16_78(70, e, f, g, h, a, b, c, d) + T_16_78(72, c, d, e, f, g, h, a, b) + T_16_78(74, a, b, c, d, e, f, g, h) + T_16_78(76, g, h, a, b, c, d, e, f) + T_16_78(78, e, f, g, h, a, b, c, d) + T_80(80, c, d, e, f, g, h, a, b) /* Update digest */ add [DIGEST(0)], a_64 @@ -362,11 +392,12 @@ _gcry_sha512_transform_amd64_ssse3: pxor xmm5, xmm5 /* Burn stack */ - t = 0 - .rept frame_W_size / 16 - movdqu [rsp + frame_W + (t) * 16], xmm0 - t = ((t)+1) - .endr + mov eax, 0 +.Lerase_stack: + movdqu [rsp + rax], xmm0 + add eax, 16 + cmp eax, frame_W_size + jne .Lerase_stack movdqu [rsp + frame_WK], xmm0 xor eax, eax diff --git a/configure.ac b/configure.ac index f7339a3e..e4a10b78 100644 --- a/configure.ac +++ b/configure.ac @@ -1741,21 +1741,11 @@ AC_CACHE_CHECK([whether GCC assembler is compatible for Intel syntax assembly im ".text\n\t" "actest:\n\t" "pxor xmm1, xmm7;\n\t" - /* Intel syntax implementation also use GAS macros, so check - * for them here. */ - "VAL_A = xmm4\n\t" - "VAL_B = xmm2\n\t" - ".macro SET_VAL_A p1\n\t" - " VAL_A = \\\\p1 \n\t" - ".endm\n\t" - ".macro SET_VAL_B p1\n\t" - " VAL_B = \\\\p1 \n\t" - ".endm\n\t" - "vmovdqa VAL_A, VAL_B;\n\t" - "SET_VAL_A eax\n\t" - "SET_VAL_B ebp\n\t" - "add VAL_A, VAL_B;\n\t" - "add VAL_B, 0b10101;\n\t" + "vperm2i128 ymm2, ymm3, ymm0, 1;\n\t" + "add eax, ebp;\n\t" + "rorx eax, ebp, 1;\n\t" + "sub eax, [esp + 4];\n\t" + "add dword ptr [esp + eax], 0b10101;\n\t" ".att_syntax prefix\n\t" );]], [ actest(); ])], [gcry_cv_gcc_platform_as_ok_for_intel_syntax=yes]) -- 2.27.0 From lomov.vl at yandex.ru Sat Jan 23 03:32:27 2021 From: lomov.vl at yandex.ru (Vladimir Lomov) Date: Sat, 23 Jan 2021 10:32:27 +0800 Subject: gpg-agent 'crashes' with libgcrypt 1.9.0 In-Reply-To: <87v9bo3oci.fsf@wheatstone.g10code.de> References: <87v9bo3oci.fsf@wheatstone.g10code.de> Message-ID: Hello, ** Werner Koch [2021-01-22 17:53:17 +0100]: > On Fri, 22 Jan 2021 11:38, Vladimir Lomov said: >> Jan 21 10:13:33 smoon4.bkoty.ru gpg-agent[25312]: free(): invalid pointer > https://dev.gnupg.org/T5254 > has a fix. I am going to release 1.9.1 next week. 
I built libgcrypt 1.9.0 with this patch (I took it from git.gnupg.org) and
tried to run

$ gpg -s --clearsign -b PKGBUILD

but it ended with the same error for both the distribution package and the
locally rebuilt one.

Is there a way to debug this? I experimented with running gpg-agent under
gdb and strace but, obviously due to security reasons, couldn't get it to
work.

> Shalom-Salam,
>    Werner

---
WBR, Vladimir Lomov

--
Nuclear powered vacuum cleaners will probably be a reality within 10 years.
	-- Alex Lewyt (President of the Lewyt Corporation, manufacturers of
	   vacuum cleaners), quoted in The New York Times, June 10, 1955.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 228 bytes
Desc: not available
URL: 

From ametzler at bebt.de  Sun Jan 24 08:01:02 2021
From: ametzler at bebt.de (Andreas Metzler)
Date: Sun, 24 Jan 2021 08:01:02 +0100
Subject: [PATCH] cipher/sha512: Fix non-NEON ARM assembly implementation
In-Reply-To: <87o8hh5iev.fsf@wheatstone.g10code.de>
References: <87o8hidsa5.fsf@gmail.com> <87o8hh5iev.fsf@wheatstone.g10code.de>
Message-ID: 

On 2021-01-22 Werner Koch via Gnupg-devel wrote:
[...]
> We have a couple of fixes already in the repo, see also
> https://dev.gnupg.org/T5251 for the NEON thing.  I am not sure whether
> the Poly1305 bug has already been reported.

Hello Werner,

I only see 3 commits in the published repo after tag/libgcrypt-1.9.0:
* Post release updates
* doc: Fix wrong CVE id in NEWS
* Merge branch 'LIBGCRYPT-1.9-BRANCH'

cu Andreas

From gniibe at fsij.org  Mon Jan 25 06:17:50 2021
From: gniibe at fsij.org (NIIBE Yutaka)
Date: Mon, 25 Jan 2021 14:17:50 +0900
Subject: gpg-agent 'crashes' with libgcrypt 1.9.0
In-Reply-To: 
References: <87v9bo3oci.fsf@wheatstone.g10code.de>
Message-ID: <877do161dt.fsf@iwagami.gniibe.org>

Hello,

Sorry for the trouble.  Most likely, it's my fault.

I think that you are using an Ed25519 key.

In 1.9, we handle the private key as a fixed-size opaque string
consistently, while it was handled differently in 1.8.  We have support
for the non-fixed-size keys created by GnuPG 2.2, but it had a bug.

Please test the following patch.
--
-------------- next part --------------
A non-text attachment was scrubbed...
Name: libgcrypt-1.9.0-fix-ed25519.patch
Type: text/x-diff
Size: 3521 bytes
Desc: not available
URL: 

From lomov.vl at yandex.ru  Mon Jan 25 07:12:24 2021
From: lomov.vl at yandex.ru (Vladimir Lomov)
Date: Mon, 25 Jan 2021 14:12:24 +0800
Subject: gpg-agent 'crashes' with libgcrypt 1.9.0
In-Reply-To: <877do161dt.fsf@iwagami.gniibe.org>
References: <87v9bo3oci.fsf@wheatstone.g10code.de>
 <877do161dt.fsf@iwagami.gniibe.org>
Message-ID: 

Hello

** NIIBE Yutaka [2021-01-25 14:17:50 +0900]:
> Hello,
> Sorry for the trouble.  Most likely, it's my fault.
> I think that you are using an Ed25519 key.

Yes, I didn't think it was important, but yes, in the pinentry dialog it
is identified as EDDSA (the public part is available via wkd for
vladimir at bkoty.ru).

> In 1.9, we handle the private key as a fixed-size opaque string
> consistently, while it was handled differently in 1.8.  We have support
> for the non-fixed-size keys created by GnuPG 2.2, but it had a bug.

> Please test the following patch.

After I rebuilt libgcrypt and gnupg (in that order) I can sign, encrypt
and decrypt messages as before, with no more failures or gpg-agent
"crashes". Thank you!

---
WBR, Vladimir Lomov

--
Drop in any mailbox.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 228 bytes Desc: not available URL: From konstantin at linuxfoundation.org Mon Jan 25 17:39:24 2021 From: konstantin at linuxfoundation.org (Konstantin Ryabitsev) Date: Mon, 25 Jan 2021 11:39:24 -0500 Subject: Error building 1.9.0 on CentOS-7 Message-ID: <20210125163924.cbg54ignmlu4rlsv@chatter.i7.local> Hello: I checked the known bugs for 1.9.0, but there doesn't appear to be a match for the one I'm seeing building gnupg-2.2.27 on CentOS-7: gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S: Assembler messages: gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:217: Error: junk `()' after expression gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:218: Error: junk `()' after expression gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:220: Error: junk `()' after expression gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:221: Error: junk `()' after expression gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:267: Error: junk `()' after expression gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:268: Error: junk `()' after expression make[4]: *** [blake2b-amd64-avx2.lo] Error 1 $ gcc --version gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) Best regards, -K From jussi.kivilinna at iki.fi Mon Jan 25 20:02:29 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Mon, 25 Jan 2021 21:02:29 +0200 Subject: Error building 1.9.0 on CentOS-7 In-Reply-To: <20210125163924.cbg54ignmlu4rlsv@chatter.i7.local> References: <20210125163924.cbg54ignmlu4rlsv@chatter.i7.local> Message-ID: On 25.1.2021 18.39, Konstantin Ryabitsev via Gcrypt-devel wrote: > Hello: > > I checked the known bugs for 1.9.0, but there doesn't appear to be a match for > the one I'm seeing building gnupg-2.2.27 on CentOS-7: > > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S: Assembler messages: > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:217: Error: junk `()' after expression > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:218: Error: junk `()' after expression > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:220: Error: junk `()' after expression > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:221: Error: junk `()' after expression > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:267: Error: junk `()' after expression > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:268: Error: junk `()' after expression > make[4]: *** [blake2b-amd64-avx2.lo] Error 1 > Thanks for report. Attached patch should fix the issue. -Jussi -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 0001-blake2-fix-RIP-register-access-for-AVX-AVX2-implemen.patch Type: text/x-patch Size: 2937 bytes Desc: not available URL: From konstantin at linuxfoundation.org Mon Jan 25 20:54:34 2021 From: konstantin at linuxfoundation.org (Konstantin Ryabitsev) Date: Mon, 25 Jan 2021 14:54:34 -0500 Subject: Error building 1.9.0 on CentOS-7 In-Reply-To: References: <20210125163924.cbg54ignmlu4rlsv@chatter.i7.local> Message-ID: <20210125195434.qixz63vbg5n2icc4@chatter.i7.local> On Mon, Jan 25, 2021 at 09:02:29PM +0200, Jussi Kivilinna wrote: > > I checked the known bugs for 1.9.0, but there doesn't appear to be a match for > > the one I'm seeing building gnupg-2.2.27 on CentOS-7: > > > > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S: Assembler messages: > > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:217: Error: junk `()' after expression > > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:218: Error: junk `()' after expression > > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:220: Error: junk `()' after expression > > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:221: Error: junk `()' after expression > > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:267: Error: junk `()' after expression > > gnupg-2.2.27/PLAY/src/libgcrypt/cipher/blake2b-amd64-avx2.S:268: Error: junk `()' after expression > > make[4]: *** [blake2b-amd64-avx2.lo] Error 1 > > > > Thanks for report. Attached patch should fix the issue. I can confirm that the problem is fixed with the patch. Tested-by: Konstantin Ryabitsev Thanks, -K From jussi.kivilinna at iki.fi Mon Jan 25 19:47:08 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Mon, 25 Jan 2021 20:47:08 +0200 Subject: [PATCH 2/2] Add VAES/AVX2 AMD64 accelerated implementation of AES In-Reply-To: <20210125184708.1415112-1-jussi.kivilinna@iki.fi> References: <20210125184708.1415112-1-jussi.kivilinna@iki.fi> Message-ID: <20210125184708.1415112-2-jussi.kivilinna@iki.fi> * cipher/Makefile.am: Add 'rijndael-vaes.c' and 'rijndael-vaes-avx2-amd64.S'. * cipher/rijndael-internal.h (USE_VAES): New. * cipher/rijndael-vaes-avx2-amd64.S: New. * cipher/rijndael-vaes.c: New. * cipher/rijndael.c (_gcry_aes_vaes_cfb_dec, _gcry_aes_vaes_cbc_dec) (_gcry_aes_vaes_ctr_enc, _gcry_aes_vaes_ocb_crypt): New. (do_setkey) [USE_AESNI] [USE_VAES]: Add detection for VAES. * configure.ac: Add 'rijndael-vaes.lo' and 'rijndael-vaes-avx2-amd64.lo'. -- Patch adds VAES/AVX2 accelerated implementation for CBC-decryption, CFB-decryption, CTR-encryption, OCB-encryption and OCB-decryption. 
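Note that none of this changes the public interface: the VAES/AVX2 code is an
internal implementation selected at run time, so existing callers of the
regular gcry_cipher functions pick it up automatically. As a rough,
illustrative sketch (not part of the patch; the key, counter and buffer
contents below are arbitrary placeholders), an application doing AES256-CTR
keeps using the documented API:

#include <stdio.h>
#include <gcrypt.h>

int
main (void)
{
  gcry_cipher_hd_t hd = NULL;
  unsigned char key[32] = { 0 };    /* placeholder 256-bit key             */
  unsigned char ctr[16] = { 0 };    /* placeholder initial counter block   */
  unsigned char buf[4096] = { 0 };  /* data, encrypted in place            */
  gcry_error_t err;

  /* Usual one-time library initialisation. */
  if (!gcry_check_version (GCRYPT_VERSION))
    return 1;
  gcry_control (GCRYCTL_DISABLE_SECMEM, 0);
  gcry_control (GCRYCTL_INITIALIZATION_FINISHED, 0);

  err = gcry_cipher_open (&hd, GCRY_CIPHER_AES256, GCRY_CIPHER_MODE_CTR, 0);
  if (!err)
    err = gcry_cipher_setkey (hd, key, sizeof key);  /* HW detection happens
                                                        during key setup    */
  if (!err)
    err = gcry_cipher_setctr (hd, ctr, sizeof ctr);
  if (!err)
    /* In-place CTR encryption of the whole buffer. */
    err = gcry_cipher_encrypt (hd, buf, sizeof buf, NULL, 0);

  if (err)
    fprintf (stderr, "gcrypt error: %s\n", gcry_strerror (err));
  if (hd)
    gcry_cipher_close (hd);
  return err ? 1 : 0;
}

Which implementation (generic, AES-NI or the new VAES/AVX2 path) actually runs
is decided inside the library when the key is set, so no application changes
are needed to get the speed-up shown below.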
Benchmarks on AMD Ryzen 5800X: Before: AES | nanosecs/byte mebibytes/sec cycles/byte auto Mhz CBC dec | 0.066 ns/B 14344 MiB/s 0.322 c/B 4850 CFB dec | 0.067 ns/B 14321 MiB/s 0.323 c/B 4850 CTR enc | 0.066 ns/B 14458 MiB/s 0.320 c/B 4850 OCB enc | 0.070 ns/B 13597 MiB/s 0.340 c/B 4850 OCB dec | 0.068 ns/B 14081 MiB/s 0.328 c/B 4850 After (~1.9x faster): AES | nanosecs/byte mebibytes/sec cycles/byte auto Mhz CBC dec | 0.034 ns/B 28080 MiB/s 0.165 c/B 4850 CFB dec | 0.034 ns/B 27957 MiB/s 0.165 c/B 4850 CTR enc | 0.034 ns/B 28411 MiB/s 0.163 c/B 4850 OCB enc | 0.037 ns/B 25642 MiB/s 0.180 c/B 4850 OCB dec | 0.036 ns/B 26827 MiB/s 0.172 c/B 4850 Signed-off-by: Jussi Kivilinna --- cipher/Makefile.am | 1 + cipher/rijndael-internal.h | 10 + cipher/rijndael-vaes-avx2-amd64.S | 2180 +++++++++++++++++++++++++++++ cipher/rijndael-vaes.c | 146 ++ cipher/rijndael.c | 40 + configure.ac | 4 + 6 files changed, 2381 insertions(+) create mode 100644 cipher/rijndael-vaes-avx2-amd64.S create mode 100644 cipher/rijndael-vaes.c diff --git a/cipher/Makefile.am b/cipher/Makefile.am index 62f7ec3e..8fb0a8b5 100644 --- a/cipher/Makefile.am +++ b/cipher/Makefile.am @@ -100,6 +100,7 @@ EXTRA_libcipher_la_SOURCES = \ rijndael-aesni.c rijndael-padlock.c \ rijndael-amd64.S rijndael-arm.S \ rijndael-ssse3-amd64.c rijndael-ssse3-amd64-asm.S \ + rijndael-vaes.c rijndael-vaes-avx2-amd64.S \ rijndael-armv8-ce.c rijndael-armv8-aarch32-ce.S \ rijndael-armv8-aarch64-ce.S rijndael-aarch64.S \ rijndael-ppc.c rijndael-ppc9le.c \ diff --git a/cipher/rijndael-internal.h b/cipher/rijndael-internal.h index 7e01f6b0..75ecb74c 100644 --- a/cipher/rijndael-internal.h +++ b/cipher/rijndael-internal.h @@ -89,6 +89,16 @@ # endif #endif /* ENABLE_AESNI_SUPPORT */ +/* USE_VAES inidicates whether to compile with Intel VAES code. */ +#undef USE_VAES +#if (defined(HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS) || \ + defined(HAVE_COMPATIBLE_GCC_WIN64_PLATFORM_AS)) && \ + defined(__x86_64__) && defined(ENABLE_AVX2_SUPPORT) && \ + defined(HAVE_GCC_INLINE_ASM_VAES) && \ + defined(USE_AESNI) +# define USE_VAES 1 +#endif + /* USE_ARM_CE indicates whether to enable ARMv8 Crypto Extension assembly * code. */ #undef USE_ARM_CE diff --git a/cipher/rijndael-vaes-avx2-amd64.S b/cipher/rijndael-vaes-avx2-amd64.S new file mode 100644 index 00000000..80db87df --- /dev/null +++ b/cipher/rijndael-vaes-avx2-amd64.S @@ -0,0 +1,2180 @@ +/* VAES/AVX2 AMD64 accelerated AES for Libgcrypt + * Copyright (C) 2021 Jussi Kivilinna + * + * This file is part of Libgcrypt. + * + * Libgcrypt is free software; you can redistribute it and/or modify + * it under the terms of the GNU Lesser General Public License as + * published by the Free Software Foundation; either version 2.1 of + * the License, or (at your option) any later version. + * + * Libgcrypt is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . 
+ */ + +#if defined(__x86_64__) +#include +#if (defined(HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS) || \ + defined(HAVE_COMPATIBLE_GCC_WIN64_PLATFORM_AS)) && \ + defined(ENABLE_AESNI_SUPPORT) && defined(ENABLE_AVX2_SUPPORT) && \ + defined(HAVE_GCC_INLINE_ASM_VAES) + +#include "asm-common-amd64.h" + +.text + +/********************************************************************** + helper macros + **********************************************************************/ +#define inc_le128(x, minus_one, tmp) \ + vpcmpeqq minus_one, x, tmp; \ + vpsubq minus_one, x, x; \ + vpslldq $8, tmp, tmp; \ + vpsubq tmp, x, x; + +#define add2_le128(x, minus_one, minus_two, tmp1, tmp2) \ + vpcmpeqq minus_one, x, tmp1; \ + vpcmpeqq minus_two, x, tmp2; \ + vpor tmp1, tmp2, tmp2; \ + vpsubq minus_two, x, x; \ + vpslldq $8, tmp2, tmp2; \ + vpsubq tmp2, x, x; + +#define AES_OP8(op, key, b0, b1, b2, b3, b4, b5, b6, b7) \ + op key, b0, b0; \ + op key, b1, b1; \ + op key, b2, b2; \ + op key, b3, b3; \ + op key, b4, b4; \ + op key, b5, b5; \ + op key, b6, b6; \ + op key, b7, b7; + +#define VAESENC8(key, b0, b1, b2, b3, b4, b5, b6, b7) \ + AES_OP8(vaesenc, key, b0, b1, b2, b3, b4, b5, b6, b7) + +#define VAESDEC8(key, b0, b1, b2, b3, b4, b5, b6, b7) \ + AES_OP8(vaesdec, key, b0, b1, b2, b3, b4, b5, b6, b7) + +#define XOR8(key, b0, b1, b2, b3, b4, b5, b6, b7) \ + AES_OP8(vpxor, key, b0, b1, b2, b3, b4, b5, b6, b7) + +#define AES_OP4(op, key, b0, b1, b2, b3) \ + op key, b0, b0; \ + op key, b1, b1; \ + op key, b2, b2; \ + op key, b3, b3; + +#define VAESENC4(key, b0, b1, b2, b3) \ + AES_OP4(vaesenc, key, b0, b1, b2, b3) + +#define VAESDEC4(key, b0, b1, b2, b3) \ + AES_OP4(vaesdec, key, b0, b1, b2, b3) + +#define XOR4(key, b0, b1, b2, b3) \ + AES_OP4(vpxor, key, b0, b1, b2, b3) + +/********************************************************************** + CBC-mode decryption + **********************************************************************/ +ELF(.type _gcry_vaes_avx2_cbc_dec_amd64, at function) +.globl _gcry_vaes_avx2_cbc_dec_amd64 +_gcry_vaes_avx2_cbc_dec_amd64: + /* input: + * %rdi: round keys + * %rsi: iv + * %rdx: dst + * %rcx: src + * %r8: nblocks + * %r9: nrounds + */ + CFI_STARTPROC(); + + /* Load IV. */ + vmovdqu (%rsi), %xmm15; + + /* Process 16 blocks per loop. */ +.align 8 +.Lcbc_dec_blk16: + cmpq $16, %r8; + jb .Lcbc_dec_blk8; + + leaq -16(%r8), %r8; + + /* Load input and xor first key. Update IV. 
*/ + vbroadcasti128 (0 * 16)(%rdi), %ymm8; + vmovdqu (0 * 16)(%rcx), %ymm0; + vmovdqu (2 * 16)(%rcx), %ymm1; + vmovdqu (4 * 16)(%rcx), %ymm2; + vmovdqu (6 * 16)(%rcx), %ymm3; + vmovdqu (8 * 16)(%rcx), %ymm4; + vmovdqu (10 * 16)(%rcx), %ymm5; + vmovdqu (12 * 16)(%rcx), %ymm6; + vmovdqu (14 * 16)(%rcx), %ymm7; + vpxor %ymm8, %ymm0, %ymm0; + vpxor %ymm8, %ymm1, %ymm1; + vpxor %ymm8, %ymm2, %ymm2; + vpxor %ymm8, %ymm3, %ymm3; + vpxor %ymm8, %ymm4, %ymm4; + vpxor %ymm8, %ymm5, %ymm5; + vpxor %ymm8, %ymm6, %ymm6; + vpxor %ymm8, %ymm7, %ymm7; + vbroadcasti128 (1 * 16)(%rdi), %ymm8; + vinserti128 $1, (0 * 16)(%rcx), %ymm15, %ymm9; + vmovdqu (1 * 16)(%rcx), %ymm10; + vmovdqu (3 * 16)(%rcx), %ymm11; + vmovdqu (5 * 16)(%rcx), %ymm12; + vmovdqu (7 * 16)(%rcx), %ymm13; + vmovdqu (9 * 16)(%rcx), %ymm14; + vmovdqu (15 * 16)(%rcx), %xmm15; + leaq (16 * 16)(%rcx), %rcx; + + /* AES rounds */ + VAESDEC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (2 * 16)(%rdi), %ymm8; + VAESDEC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (3 * 16)(%rdi), %ymm8; + VAESDEC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (4 * 16)(%rdi), %ymm8; + VAESDEC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (5 * 16)(%rdi), %ymm8; + VAESDEC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (6 * 16)(%rdi), %ymm8; + VAESDEC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (7 * 16)(%rdi), %ymm8; + VAESDEC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (8 * 16)(%rdi), %ymm8; + VAESDEC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (9 * 16)(%rdi), %ymm8; + VAESDEC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (10 * 16)(%rdi), %ymm8; + cmpl $12, %r9d; + jb .Lcbc_dec_blk16_last; + VAESDEC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (11 * 16)(%rdi), %ymm8; + VAESDEC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (12 * 16)(%rdi), %ymm8; + jz .Lcbc_dec_blk16_last; + VAESDEC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (13 * 16)(%rdi), %ymm8; + VAESDEC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (14 * 16)(%rdi), %ymm8; + + /* Last round and output handling. */ + .Lcbc_dec_blk16_last: + vpxor %ymm8, %ymm9, %ymm9; + vpxor %ymm8, %ymm10, %ymm10; + vpxor %ymm8, %ymm11, %ymm11; + vpxor %ymm8, %ymm12, %ymm12; + vpxor %ymm8, %ymm13, %ymm13; + vpxor %ymm8, %ymm14, %ymm14; + vaesdeclast %ymm9, %ymm0, %ymm0; + vaesdeclast %ymm10, %ymm1, %ymm1; + vpxor (-5 * 16)(%rcx), %ymm8, %ymm9; + vpxor (-3 * 16)(%rcx), %ymm8, %ymm10; + vaesdeclast %ymm11, %ymm2, %ymm2; + vaesdeclast %ymm12, %ymm3, %ymm3; + vaesdeclast %ymm13, %ymm4, %ymm4; + vaesdeclast %ymm14, %ymm5, %ymm5; + vaesdeclast %ymm9, %ymm6, %ymm6; + vaesdeclast %ymm10, %ymm7, %ymm7; + vmovdqu %ymm0, (0 * 16)(%rdx); + vmovdqu %ymm1, (2 * 16)(%rdx); + vmovdqu %ymm2, (4 * 16)(%rdx); + vmovdqu %ymm3, (6 * 16)(%rdx); + vmovdqu %ymm4, (8 * 16)(%rdx); + vmovdqu %ymm5, (10 * 16)(%rdx); + vmovdqu %ymm6, (12 * 16)(%rdx); + vmovdqu %ymm7, (14 * 16)(%rdx); + leaq (16 * 16)(%rdx), %rdx; + + jmp .Lcbc_dec_blk16; + + /* Handle trailing eight blocks. 
*/ +.align 8 +.Lcbc_dec_blk8: + cmpq $8, %r8; + jb .Lcbc_dec_blk4; + + leaq -8(%r8), %r8; + + /* Load input and xor first key. Update IV. */ + vbroadcasti128 (0 * 16)(%rdi), %ymm4; + vmovdqu (0 * 16)(%rcx), %ymm0; + vmovdqu (2 * 16)(%rcx), %ymm1; + vmovdqu (4 * 16)(%rcx), %ymm2; + vmovdqu (6 * 16)(%rcx), %ymm3; + vpxor %ymm4, %ymm0, %ymm0; + vpxor %ymm4, %ymm1, %ymm1; + vpxor %ymm4, %ymm2, %ymm2; + vpxor %ymm4, %ymm3, %ymm3; + vbroadcasti128 (1 * 16)(%rdi), %ymm4; + vinserti128 $1, (0 * 16)(%rcx), %ymm15, %ymm10; + vmovdqu (1 * 16)(%rcx), %ymm11; + vmovdqu (3 * 16)(%rcx), %ymm12; + vmovdqu (5 * 16)(%rcx), %ymm13; + vmovdqu (7 * 16)(%rcx), %xmm15; + leaq (8 * 16)(%rcx), %rcx; + + /* AES rounds */ + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (2 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (3 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (4 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (5 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (6 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (7 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (8 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (9 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (10 * 16)(%rdi), %ymm4; + cmpl $12, %r9d; + jb .Lcbc_dec_blk8_last; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (11 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (12 * 16)(%rdi), %ymm4; + jz .Lcbc_dec_blk8_last; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (13 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (14 * 16)(%rdi), %ymm4; + + /* Last round and output handling. */ + .Lcbc_dec_blk8_last: + vpxor %ymm4, %ymm10, %ymm10; + vpxor %ymm4, %ymm11, %ymm11; + vpxor %ymm4, %ymm12, %ymm12; + vpxor %ymm4, %ymm13, %ymm13; + vaesdeclast %ymm10, %ymm0, %ymm0; + vaesdeclast %ymm11, %ymm1, %ymm1; + vaesdeclast %ymm12, %ymm2, %ymm2; + vaesdeclast %ymm13, %ymm3, %ymm3; + vmovdqu %ymm0, (0 * 16)(%rdx); + vmovdqu %ymm1, (2 * 16)(%rdx); + vmovdqu %ymm2, (4 * 16)(%rdx); + vmovdqu %ymm3, (6 * 16)(%rdx); + leaq (8 * 16)(%rdx), %rdx; + + /* Handle trailing four blocks. */ +.align 8 +.Lcbc_dec_blk4: + cmpq $4, %r8; + jb .Lcbc_dec_blk1; + + leaq -4(%r8), %r8; + + /* Load input and xor first key. 
*/ + vmovdqa (0 * 16)(%rdi), %xmm4; + vmovdqu (0 * 16)(%rcx), %xmm10; + vmovdqu (1 * 16)(%rcx), %xmm11; + vmovdqu (2 * 16)(%rcx), %xmm12; + vmovdqu (3 * 16)(%rcx), %xmm13; + leaq (4 * 16)(%rcx), %rcx; + vpxor %xmm10, %xmm4, %xmm0; + vpxor %xmm11, %xmm4, %xmm1; + vpxor %xmm12, %xmm4, %xmm2; + vpxor %xmm13, %xmm4, %xmm3; + + /* AES rounds */ + vmovdqa (1 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (2 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (3 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (4 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (5 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (6 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (7 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (8 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (9 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (10 * 16)(%rdi), %xmm4; + cmpl $12, %r9d; + jb .Lcbc_dec_blk4_last; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (11 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (12 * 16)(%rdi), %xmm4; + jz .Lcbc_dec_blk4_last; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (13 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (14 * 16)(%rdi), %xmm4; + + /* Last round and output handling. */ + .Lcbc_dec_blk4_last: + vpxor %xmm4, %xmm15, %xmm15; + vpxor %xmm4, %xmm10, %xmm10; + vpxor %xmm4, %xmm11, %xmm11; + vpxor %xmm4, %xmm12, %xmm12; + vaesdeclast %xmm15, %xmm0, %xmm0; + vmovdqa %xmm13, %xmm15; + vaesdeclast %xmm10, %xmm1, %xmm1; + vaesdeclast %xmm11, %xmm2, %xmm2; + vaesdeclast %xmm12, %xmm3, %xmm3; + vmovdqu %xmm0, (0 * 16)(%rdx); + vmovdqu %xmm1, (1 * 16)(%rdx); + vmovdqu %xmm2, (2 * 16)(%rdx); + vmovdqu %xmm3, (3 * 16)(%rdx); + leaq (4 * 16)(%rdx), %rdx; + + /* Process trailing one to three blocks, one per loop. */ +.align 8 +.Lcbc_dec_blk1: + cmpq $1, %r8; + jb .Ldone_cbc_dec; + + leaq -1(%r8), %r8; + + /* Load input. */ + vmovdqu (%rcx), %xmm2; + leaq 16(%rcx), %rcx; + + /* Xor first key. */ + vpxor (0 * 16)(%rdi), %xmm2, %xmm0; + + /* AES rounds. */ + vaesdec (1 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (2 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (3 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (4 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (5 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (6 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (7 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (8 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (9 * 16)(%rdi), %xmm0, %xmm0; + vmovdqa (10 * 16)(%rdi), %xmm1; + cmpl $12, %r9d; + jb .Lcbc_dec_blk1_last; + vaesdec %xmm1, %xmm0, %xmm0; + vaesdec (11 * 16)(%rdi), %xmm0, %xmm0; + vmovdqa (12 * 16)(%rdi), %xmm1; + jz .Lcbc_dec_blk1_last; + vaesdec %xmm1, %xmm0, %xmm0; + vaesdec (13 * 16)(%rdi), %xmm0, %xmm0; + vmovdqa (14 * 16)(%rdi), %xmm1; + + /* Last round and output handling. */ + .Lcbc_dec_blk1_last: + vpxor %xmm1, %xmm15, %xmm15; + vaesdeclast %xmm15, %xmm0, %xmm0; + vmovdqa %xmm2, %xmm15; + vmovdqu %xmm0, (%rdx); + leaq 16(%rdx), %rdx; + + jmp .Lcbc_dec_blk1; + +.align 8 +.Ldone_cbc_dec: + /* Store IV. 
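The cmpl $12, %r9d / jb / jz pattern repeated in every block path above is the key-size dispatch: %r9d holds the round count, so below 12 means AES-128 (10 rounds), exactly 12 means AES-192, and above means AES-256 (14 rounds), and the two conditional jumps skip zero, two or four extra round-key applications before the final vaesdeclast. As a sketch, the choice of the final round-key slot is simply (illustrative C, not code from the patch):

    /* Which round-key slot feeds vaesdeclast/vaesenclast, as selected
       by the cmpl $12 / jb / jz pattern.  Illustrative only. */
    static int
    last_round_key_index (int nrounds)
    {
      if (nrounds < 12)
        return 10;          /* AES-128 */
      if (nrounds == 12)
        return 12;          /* AES-192 */
      return 14;            /* AES-256 */
    }
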
*/ + vmovdqu %xmm15, (%rsi); + + vzeroall; + ret + CFI_ENDPROC(); +ELF(.size _gcry_vaes_avx2_cbc_dec_amd64,.-_gcry_vaes_avx2_cbc_dec_amd64) + +/********************************************************************** + CFB-mode decryption + **********************************************************************/ +ELF(.type _gcry_vaes_avx2_cfb_dec_amd64, at function) +.globl _gcry_vaes_avx2_cfb_dec_amd64 +_gcry_vaes_avx2_cfb_dec_amd64: + /* input: + * %rdi: round keys + * %rsi: iv + * %rdx: dst + * %rcx: src + * %r8: nblocks + * %r9: nrounds + */ + CFI_STARTPROC(); + + /* Load IV. */ + vmovdqu (%rsi), %xmm15; + + /* Process 16 blocks per loop. */ +.align 8 +.Lcfb_dec_blk16: + cmpq $16, %r8; + jb .Lcfb_dec_blk8; + + leaq -16(%r8), %r8; + + /* Load input and xor first key. Update IV. */ + vbroadcasti128 (0 * 16)(%rdi), %ymm8; + vinserti128 $1, (0 * 16)(%rcx), %ymm15, %ymm0; + vmovdqu (1 * 16)(%rcx), %ymm1; + vmovdqu (3 * 16)(%rcx), %ymm2; + vmovdqu (5 * 16)(%rcx), %ymm3; + vmovdqu (7 * 16)(%rcx), %ymm4; + vmovdqu (9 * 16)(%rcx), %ymm5; + vmovdqu (11 * 16)(%rcx), %ymm6; + vmovdqu (13 * 16)(%rcx), %ymm7; + vmovdqu (15 * 16)(%rcx), %xmm15; + vpxor %ymm8, %ymm0, %ymm0; + vpxor %ymm8, %ymm1, %ymm1; + vpxor %ymm8, %ymm2, %ymm2; + vpxor %ymm8, %ymm3, %ymm3; + vpxor %ymm8, %ymm4, %ymm4; + vpxor %ymm8, %ymm5, %ymm5; + vpxor %ymm8, %ymm6, %ymm6; + vpxor %ymm8, %ymm7, %ymm7; + vbroadcasti128 (1 * 16)(%rdi), %ymm8; + vmovdqu (0 * 16)(%rcx), %ymm9; + vmovdqu (2 * 16)(%rcx), %ymm10; + vmovdqu (4 * 16)(%rcx), %ymm11; + vmovdqu (6 * 16)(%rcx), %ymm12; + vmovdqu (8 * 16)(%rcx), %ymm13; + vmovdqu (10 * 16)(%rcx), %ymm14; + + leaq (16 * 16)(%rcx), %rcx; + + /* AES rounds */ + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (2 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (3 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (4 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (5 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (6 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (7 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (8 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (9 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (10 * 16)(%rdi), %ymm8; + cmpl $12, %r9d; + jb .Lcfb_dec_blk16_last; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (11 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (12 * 16)(%rdi), %ymm8; + jz .Lcfb_dec_blk16_last; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (13 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (14 * 16)(%rdi), %ymm8; + + /* Last round and output handling. 
*/ + .Lcfb_dec_blk16_last: + vpxor %ymm8, %ymm9, %ymm9; + vpxor %ymm8, %ymm10, %ymm10; + vpxor %ymm8, %ymm11, %ymm11; + vpxor %ymm8, %ymm12, %ymm12; + vpxor %ymm8, %ymm13, %ymm13; + vpxor %ymm8, %ymm14, %ymm14; + vaesenclast %ymm9, %ymm0, %ymm0; + vaesenclast %ymm10, %ymm1, %ymm1; + vpxor (-4 * 16)(%rcx), %ymm8, %ymm9; + vpxor (-2 * 16)(%rcx), %ymm8, %ymm10; + vaesenclast %ymm11, %ymm2, %ymm2; + vaesenclast %ymm12, %ymm3, %ymm3; + vaesenclast %ymm13, %ymm4, %ymm4; + vaesenclast %ymm14, %ymm5, %ymm5; + vaesenclast %ymm9, %ymm6, %ymm6; + vaesenclast %ymm10, %ymm7, %ymm7; + vmovdqu %ymm0, (0 * 16)(%rdx); + vmovdqu %ymm1, (2 * 16)(%rdx); + vmovdqu %ymm2, (4 * 16)(%rdx); + vmovdqu %ymm3, (6 * 16)(%rdx); + vmovdqu %ymm4, (8 * 16)(%rdx); + vmovdqu %ymm5, (10 * 16)(%rdx); + vmovdqu %ymm6, (12 * 16)(%rdx); + vmovdqu %ymm7, (14 * 16)(%rdx); + leaq (16 * 16)(%rdx), %rdx; + + jmp .Lcfb_dec_blk16; + + /* Handle trailing eight blocks. */ +.align 8 +.Lcfb_dec_blk8: + cmpq $8, %r8; + jb .Lcfb_dec_blk4; + + leaq -8(%r8), %r8; + + /* Load input and xor first key. Update IV. */ + vbroadcasti128 (0 * 16)(%rdi), %ymm4; + vinserti128 $1, (0 * 16)(%rcx), %ymm15, %ymm0; + vmovdqu (1 * 16)(%rcx), %ymm1; + vmovdqu (3 * 16)(%rcx), %ymm2; + vmovdqu (5 * 16)(%rcx), %ymm3; + vmovdqu (7 * 16)(%rcx), %xmm15; + vpxor %ymm4, %ymm0, %ymm0; + vpxor %ymm4, %ymm1, %ymm1; + vpxor %ymm4, %ymm2, %ymm2; + vpxor %ymm4, %ymm3, %ymm3; + vbroadcasti128 (1 * 16)(%rdi), %ymm4; + vmovdqu (0 * 16)(%rcx), %ymm10; + vmovdqu (2 * 16)(%rcx), %ymm11; + vmovdqu (4 * 16)(%rcx), %ymm12; + vmovdqu (6 * 16)(%rcx), %ymm13; + + leaq (8 * 16)(%rcx), %rcx; + + /* AES rounds */ + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (2 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (3 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (4 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (5 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (6 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (7 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (8 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (9 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (10 * 16)(%rdi), %ymm4; + cmpl $12, %r9d; + jb .Lcfb_dec_blk8_last; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (11 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (12 * 16)(%rdi), %ymm4; + jz .Lcfb_dec_blk8_last; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (13 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (14 * 16)(%rdi), %ymm4; + + /* Last round and output handling. */ + .Lcfb_dec_blk8_last: + vpxor %ymm4, %ymm10, %ymm10; + vpxor %ymm4, %ymm11, %ymm11; + vpxor %ymm4, %ymm12, %ymm12; + vpxor %ymm4, %ymm13, %ymm13; + vaesenclast %ymm10, %ymm0, %ymm0; + vaesenclast %ymm11, %ymm1, %ymm1; + vaesenclast %ymm12, %ymm2, %ymm2; + vaesenclast %ymm13, %ymm3, %ymm3; + vmovdqu %ymm0, (0 * 16)(%rdx); + vmovdqu %ymm1, (2 * 16)(%rdx); + vmovdqu %ymm2, (4 * 16)(%rdx); + vmovdqu %ymm3, (6 * 16)(%rdx); + leaq (8 * 16)(%rdx), %rdx; + + /* Handle trailing four blocks. */ +.align 8 +.Lcfb_dec_blk4: + cmpq $4, %r8; + jb .Lcfb_dec_blk1; + + leaq -4(%r8), %r8; + + /* Load input and xor first key. 
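Note that CFB decryption runs the AES encryption rounds (vaesenc/vaesenclast): each plaintext block is P[i] = E_K(C[i-1]) ^ C[i], with the IV taking the place of C[-1]. The loops above therefore feed the previous ciphertext (or IV) into the cipher while keeping the raw ciphertext blocks in spare registers, both to XOR in at the last round and to carry forward as the next IV. A compact C model of that relation, with encrypt_block() as a placeholder (not from the patch):

    #include <stdint.h>
    #include <string.h>

    /* CFB decryption data flow only; encrypt_block() stands in for the
       AES round sequence and is not part of the patch. */
    static void
    cfb_dec_model (void (*encrypt_block) (uint8_t blk[16]),
                   uint8_t *dst, const uint8_t *src,
                   size_t nblocks, uint8_t iv[16])
    {
      uint8_t keystream[16];

      for (size_t i = 0; i < nblocks; i++)
        {
          memcpy (keystream, iv, 16);
          encrypt_block (keystream);          /* E_K(C[i-1] or IV) */
          memcpy (iv, src + 16 * i, 16);      /* ciphertext is the next IV */
          for (int j = 0; j < 16; j++)
            dst[16 * i + j] = keystream[j] ^ iv[j];
        }
    }
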
*/ + vmovdqa (0 * 16)(%rdi), %xmm4; + vpxor %xmm15, %xmm4, %xmm0; + vmovdqu (0 * 16)(%rcx), %xmm12; + vmovdqu (1 * 16)(%rcx), %xmm13; + vmovdqu (2 * 16)(%rcx), %xmm14; + vpxor %xmm12, %xmm4, %xmm1; + vpxor %xmm13, %xmm4, %xmm2; + vpxor %xmm14, %xmm4, %xmm3; + vmovdqa (1 * 16)(%rdi), %xmm4; + + /* Load input as next IV. */ + vmovdqu (3 * 16)(%rcx), %xmm15; + leaq (4 * 16)(%rcx), %rcx; + + /* AES rounds */ + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (2 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (3 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (4 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (5 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (6 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (7 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (8 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (9 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (10 * 16)(%rdi), %xmm4; + cmpl $12, %r9d; + jb .Lcfb_dec_blk4_last; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (11 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (12 * 16)(%rdi), %xmm4; + jz .Lcfb_dec_blk4_last; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (13 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (14 * 16)(%rdi), %xmm4; + + /* Last round and output handling. */ + .Lcfb_dec_blk4_last: + vpxor %xmm4, %xmm12, %xmm12; + vpxor %xmm4, %xmm13, %xmm13; + vpxor %xmm4, %xmm14, %xmm14; + vpxor %xmm4, %xmm15, %xmm4; + vaesenclast %xmm12, %xmm0, %xmm0; + vaesenclast %xmm13, %xmm1, %xmm1; + vaesenclast %xmm14, %xmm2, %xmm2; + vaesenclast %xmm4, %xmm3, %xmm3; + vmovdqu %xmm0, (0 * 16)(%rdx); + vmovdqu %xmm1, (1 * 16)(%rdx); + vmovdqu %xmm2, (2 * 16)(%rdx); + vmovdqu %xmm3, (3 * 16)(%rdx); + leaq (4 * 16)(%rdx), %rdx; + + /* Process trailing one to three blocks, one per loop. */ +.align 8 +.Lcfb_dec_blk1: + cmpq $1, %r8; + jb .Ldone_cfb_dec; + + leaq -1(%r8), %r8; + + /* Xor first key. */ + vpxor (0 * 16)(%rdi), %xmm15, %xmm0; + + /* Load input as next IV. */ + vmovdqu (%rcx), %xmm15; + leaq 16(%rcx), %rcx; + + /* AES rounds. */ + vaesenc (1 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (2 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (3 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (4 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (5 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (6 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (7 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (8 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (9 * 16)(%rdi), %xmm0, %xmm0; + vmovdqa (10 * 16)(%rdi), %xmm1; + cmpl $12, %r9d; + jb .Lcfb_dec_blk1_last; + vaesenc %xmm1, %xmm0, %xmm0; + vaesenc (11 * 16)(%rdi), %xmm0, %xmm0; + vmovdqa (12 * 16)(%rdi), %xmm1; + jz .Lcfb_dec_blk1_last; + vaesenc %xmm1, %xmm0, %xmm0; + vaesenc (13 * 16)(%rdi), %xmm0, %xmm0; + vmovdqa (14 * 16)(%rdi), %xmm1; + + /* Last round and output handling. */ + .Lcfb_dec_blk1_last: + vpxor %xmm15, %xmm1, %xmm1; + vaesenclast %xmm1, %xmm0, %xmm0; + vmovdqu %xmm0, (%rdx); + leaq 16(%rdx), %rdx; + + jmp .Lcfb_dec_blk1; + +.align 8 +.Ldone_cfb_dec: + /* Store IV. 
*/ + vmovdqu %xmm15, (%rsi); + + vzeroall; + ret + CFI_ENDPROC(); +ELF(.size _gcry_vaes_avx2_cfb_dec_amd64,.-_gcry_vaes_avx2_cfb_dec_amd64) + +/********************************************************************** + CTR-mode encryption + **********************************************************************/ +ELF(.type _gcry_vaes_avx2_ctr_enc_amd64, at function) +.globl _gcry_vaes_avx2_ctr_enc_amd64 +_gcry_vaes_avx2_ctr_enc_amd64: + /* input: + * %rdi: round keys + * %rsi: counter + * %rdx: dst + * %rcx: src + * %r8: nblocks + * %r9: nrounds + */ + CFI_STARTPROC(); + + movq 8(%rsi), %r10; + movq 0(%rsi), %r11; + bswapq %r10; + bswapq %r11; + + vpcmpeqd %ymm15, %ymm15, %ymm15; + vpsrldq $8, %ymm15, %ymm15; // 0:-1 + vpaddq %ymm15, %ymm15, %ymm14; // 0:-2 + vbroadcasti128 .Lbswap128_mask rRIP, %ymm13; + + /* Process 16 blocks per loop. */ +.align 8 +.Lctr_enc_blk16: + cmpq $16, %r8; + jb .Lctr_enc_blk8; + + leaq -16(%r8), %r8; + + vbroadcasti128 (%rsi), %ymm7; + vbroadcasti128 (0 * 16)(%rdi), %ymm8; + + /* detect if carry handling is needed */ + addb $16, 15(%rsi); + jc .Lctr_enc_blk16_handle_carry; + + /* Increment counters. */ + vpaddb .Lbige_addb_0 rRIP, %ymm7, %ymm0; + vpaddb .Lbige_addb_2 rRIP, %ymm7, %ymm1; + vpaddb .Lbige_addb_4 rRIP, %ymm7, %ymm2; + vpaddb .Lbige_addb_6 rRIP, %ymm7, %ymm3; + vpaddb .Lbige_addb_8 rRIP, %ymm7, %ymm4; + vpaddb .Lbige_addb_10 rRIP, %ymm7, %ymm5; + vpaddb .Lbige_addb_12 rRIP, %ymm7, %ymm6; + vpaddb .Lbige_addb_14 rRIP, %ymm7, %ymm7; + leaq 16(%r10), %r10; + + .Lctr_enc_blk16_rounds: + /* AES rounds */ + XOR8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (1 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (2 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (3 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (4 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (5 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (6 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (7 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (8 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (9 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (10 * 16)(%rdi), %ymm8; + cmpl $12, %r9d; + jb .Lctr_enc_blk16_last; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (11 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (12 * 16)(%rdi), %ymm8; + jz .Lctr_enc_blk16_last; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (13 * 16)(%rdi), %ymm8; + VAESENC8(%ymm8, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (14 * 16)(%rdi), %ymm8; + + /* Last round and output handling. */ + .Lctr_enc_blk16_last: + vpxor (0 * 16)(%rcx), %ymm8, %ymm9; /* Xor src to last round key. 
*/ + vpxor (2 * 16)(%rcx), %ymm8, %ymm10; + vpxor (4 * 16)(%rcx), %ymm8, %ymm11; + vpxor (6 * 16)(%rcx), %ymm8, %ymm12; + vaesenclast %ymm9, %ymm0, %ymm0; + vaesenclast %ymm10, %ymm1, %ymm1; + vaesenclast %ymm11, %ymm2, %ymm2; + vaesenclast %ymm12, %ymm3, %ymm3; + vpxor (8 * 16)(%rcx), %ymm8, %ymm9; + vpxor (10 * 16)(%rcx), %ymm8, %ymm10; + vpxor (12 * 16)(%rcx), %ymm8, %ymm11; + vpxor (14 * 16)(%rcx), %ymm8, %ymm8; + leaq (16 * 16)(%rcx), %rcx; + vaesenclast %ymm9, %ymm4, %ymm4; + vaesenclast %ymm10, %ymm5, %ymm5; + vaesenclast %ymm11, %ymm6, %ymm6; + vaesenclast %ymm8, %ymm7, %ymm7; + vmovdqu %ymm0, (0 * 16)(%rdx); + vmovdqu %ymm1, (2 * 16)(%rdx); + vmovdqu %ymm2, (4 * 16)(%rdx); + vmovdqu %ymm3, (6 * 16)(%rdx); + vmovdqu %ymm4, (8 * 16)(%rdx); + vmovdqu %ymm5, (10 * 16)(%rdx); + vmovdqu %ymm6, (12 * 16)(%rdx); + vmovdqu %ymm7, (14 * 16)(%rdx); + leaq (16 * 16)(%rdx), %rdx; + + jmp .Lctr_enc_blk16; + + .align 8 + .Lctr_enc_blk16_handle_carry: + /* Increment counters (handle carry). */ + vpshufb %xmm13, %xmm7, %xmm1; /* be => le */ + vmovdqa %xmm1, %xmm0; + inc_le128(%xmm1, %xmm15, %xmm5); + vinserti128 $1, %xmm1, %ymm0, %ymm7; /* ctr: +1:+0 */ + vpshufb %ymm13, %ymm7, %ymm0; + addq $16, %r10; + adcq $0, %r11; + bswapq %r10; + bswapq %r11; + movq %r10, 8(%rsi); + movq %r11, 0(%rsi); + bswapq %r10; + bswapq %r11; + add2_le128(%ymm7, %ymm15, %ymm14, %ymm9, %ymm10); /* ctr: +3:+2 */ + vpshufb %ymm13, %ymm7, %ymm1; + add2_le128(%ymm7, %ymm15, %ymm14, %ymm9, %ymm10); /* ctr: +5:+4 */ + vpshufb %ymm13, %ymm7, %ymm2; + add2_le128(%ymm7, %ymm15, %ymm14, %ymm9, %ymm10); /* ctr: +7:+6 */ + vpshufb %ymm13, %ymm7, %ymm3; + add2_le128(%ymm7, %ymm15, %ymm14, %ymm9, %ymm10); /* ctr: +9:+8 */ + vpshufb %ymm13, %ymm7, %ymm4; + add2_le128(%ymm7, %ymm15, %ymm14, %ymm9, %ymm10); /* ctr: +11:+10 */ + vpshufb %ymm13, %ymm7, %ymm5; + add2_le128(%ymm7, %ymm15, %ymm14, %ymm9, %ymm10); /* ctr: +13:+12 */ + vpshufb %ymm13, %ymm7, %ymm6; + add2_le128(%ymm7, %ymm15, %ymm14, %ymm9, %ymm10); /* ctr: +15:+14 */ + vpshufb %ymm13, %ymm7, %ymm7; + + jmp .Lctr_enc_blk16_rounds; + + /* Handle trailing eight blocks. */ +.align 8 +.Lctr_enc_blk8: + cmpq $8, %r8; + jb .Lctr_enc_blk4; + + leaq -8(%r8), %r8; + + vbroadcasti128 (%rsi), %ymm3; + vbroadcasti128 (0 * 16)(%rdi), %ymm4; + + /* detect if carry handling is needed */ + addb $8, 15(%rsi); + jc .Lctr_enc_blk8_handle_carry; + + /* Increment counters. 
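The addb $16/$8/$4, 15(%rsi) / jc tests above are a cheap filter for the counter-carry case: the counter at %rsi is stored big-endian, so if adding the chunk size to its last byte does not carry, all counters in the chunk differ from the current value only in that byte and can be produced with the precomputed byte-add constants (.Lbige_addb_0 and friends); only on carry does the code take the slow path that does a full 128-bit big-endian add on the bswapped halves in %r10:%r11. Roughly, in C (illustrative names, __builtin_bswap64 is a GCC builtin, none of this is code from the patch):

    #include <stdint.h>
    #include <string.h>

    /* Fast-path test: no carry out of the counter's low byte means the
       next nblks counter values only differ in byte 15. */
    static int
    can_use_byte_add (const uint8_t ctr[16], unsigned int nblks)
    {
      return (unsigned int)ctr[15] + nblks <= 0xff;
    }

    /* Slow path: full-width big-endian add, as done via bswapq/adcq. */
    static void
    ctr_add_be128 (uint8_t ctr[16], uint64_t n)
    {
      uint64_t hi, lo;

      memcpy (&hi, ctr + 0, 8);
      memcpy (&lo, ctr + 8, 8);
      hi = __builtin_bswap64 (hi);
      lo = __builtin_bswap64 (lo);
      hi += (lo + n < lo);                  /* carry out of low half */
      lo += n;
      hi = __builtin_bswap64 (hi);
      lo = __builtin_bswap64 (lo);
      memcpy (ctr + 0, &hi, 8);
      memcpy (ctr + 8, &lo, 8);
    }
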
*/ + vpaddb .Lbige_addb_0 rRIP, %ymm3, %ymm0; + vpaddb .Lbige_addb_2 rRIP, %ymm3, %ymm1; + vpaddb .Lbige_addb_4 rRIP, %ymm3, %ymm2; + vpaddb .Lbige_addb_6 rRIP, %ymm3, %ymm3; + leaq 8(%r10), %r10; + + .Lctr_enc_blk8_rounds: + /* AES rounds */ + XOR4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (1 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (2 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (3 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (4 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (5 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (6 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (7 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (8 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (9 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (10 * 16)(%rdi), %ymm4; + cmpl $12, %r9d; + jb .Lctr_enc_blk8_last; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (11 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (12 * 16)(%rdi), %ymm4; + jz .Lctr_enc_blk8_last; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (13 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (14 * 16)(%rdi), %ymm4; + + /* Last round and output handling. */ + .Lctr_enc_blk8_last: + vpxor (0 * 16)(%rcx), %ymm4, %ymm5; /* Xor src to last round key. */ + vpxor (2 * 16)(%rcx), %ymm4, %ymm6; + vpxor (4 * 16)(%rcx), %ymm4, %ymm7; + vpxor (6 * 16)(%rcx), %ymm4, %ymm4; + leaq (8 * 16)(%rcx), %rcx; + vaesenclast %ymm5, %ymm0, %ymm0; + vaesenclast %ymm6, %ymm1, %ymm1; + vaesenclast %ymm7, %ymm2, %ymm2; + vaesenclast %ymm4, %ymm3, %ymm3; + vmovdqu %ymm0, (0 * 16)(%rdx); + vmovdqu %ymm1, (2 * 16)(%rdx); + vmovdqu %ymm2, (4 * 16)(%rdx); + vmovdqu %ymm3, (6 * 16)(%rdx); + leaq (8 * 16)(%rdx), %rdx; + + jmp .Lctr_enc_blk4; + + .align 8 + .Lctr_enc_blk8_handle_carry: + /* Increment counters (handle carry). */ + vpshufb %xmm13, %xmm3, %xmm1; /* be => le */ + vmovdqa %xmm1, %xmm0; + inc_le128(%xmm1, %xmm15, %xmm5); + vinserti128 $1, %xmm1, %ymm0, %ymm3; /* ctr: +1:+0 */ + vpshufb %ymm13, %ymm3, %ymm0; + addq $8, %r10; + adcq $0, %r11; + bswapq %r10; + bswapq %r11; + movq %r10, 8(%rsi); + movq %r11, 0(%rsi); + bswapq %r10; + bswapq %r11; + add2_le128(%ymm3, %ymm15, %ymm14, %ymm5, %ymm6); /* ctr: +3:+2 */ + vpshufb %ymm13, %ymm3, %ymm1; + add2_le128(%ymm3, %ymm15, %ymm14, %ymm5, %ymm6); /* ctr: +5:+4 */ + vpshufb %ymm13, %ymm3, %ymm2; + add2_le128(%ymm3, %ymm15, %ymm14, %ymm5, %ymm6); /* ctr: +7:+6 */ + vpshufb %ymm13, %ymm3, %ymm3; + + jmp .Lctr_enc_blk8_rounds; + + /* Handle trailing four blocks. */ +.align 8 +.Lctr_enc_blk4: + cmpq $4, %r8; + jb .Lctr_enc_blk1; + + leaq -4(%r8), %r8; + + vmovdqu (%rsi), %xmm0; + vmovdqa (0 * 16)(%rdi), %xmm4; + + /* detect if carry handling is needed */ + addb $4, 15(%rsi); + jc .Lctr_enc_blk4_handle_carry; + + /* Increment counters. 
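The carry paths above build the per-register counter values with the inc_le128/add2_le128 macros from the top of the file. Those operate on a counter that has already been byte-swapped to little-endian inside an xmm/ymm lane: the low quadword is compared against its wrap value(s) to form a carry mask, the vpsubq with the 0:-1 (or 0:-2) constant bumps only the low quadword, and the shifted mask is subtracted to propagate the carry into the high quadword, all without a branch. A rough scalar C model (illustrative, not code from the patch):

    #include <stdint.h>

    /* Scalar model of inc_le128()/add2_le128(): 128-bit counter as two
       64-bit halves, carry propagated branchlessly. */
    static void
    inc_le128_model (uint64_t ctr[2])
    {
      uint64_t carry = (ctr[0] == UINT64_MAX);      /* vpcmpeqq + vpslldq */
      ctr[0] += 1;                                  /* vpsubq with 0:-1 */
      ctr[1] += carry;                              /* vpsubq with mask */
    }

    static void
    add2_le128_model (uint64_t ctr[2])
    {
      uint64_t carry = (ctr[0] >= UINT64_MAX - 1);  /* low qword is -1 or -2 */
      ctr[0] += 2;
      ctr[1] += carry;
    }
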
*/ + vpaddb .Lbige_addb_1 rRIP, %xmm0, %xmm1; + vpaddb .Lbige_addb_2 rRIP, %xmm0, %xmm2; + vpaddb .Lbige_addb_3 rRIP, %xmm0, %xmm3; + leaq 4(%r10), %r10; + + .Lctr_enc_blk4_rounds: + /* AES rounds */ + XOR4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (1 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (2 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (3 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (4 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (5 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (6 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (7 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (8 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (9 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (10 * 16)(%rdi), %xmm4; + cmpl $12, %r9d; + jb .Lctr_enc_blk4_last; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (11 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (12 * 16)(%rdi), %xmm4; + jz .Lctr_enc_blk4_last; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (13 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (14 * 16)(%rdi), %xmm4; + + /* Last round and output handling. */ + .Lctr_enc_blk4_last: + vpxor (0 * 16)(%rcx), %xmm4, %xmm5; /* Xor src to last round key. */ + vpxor (1 * 16)(%rcx), %xmm4, %xmm6; + vpxor (2 * 16)(%rcx), %xmm4, %xmm7; + vpxor (3 * 16)(%rcx), %xmm4, %xmm4; + leaq (4 * 16)(%rcx), %rcx; + vaesenclast %xmm5, %xmm0, %xmm0; + vaesenclast %xmm6, %xmm1, %xmm1; + vaesenclast %xmm7, %xmm2, %xmm2; + vaesenclast %xmm4, %xmm3, %xmm3; + vmovdqu %xmm0, (0 * 16)(%rdx); + vmovdqu %xmm1, (1 * 16)(%rdx); + vmovdqu %xmm2, (2 * 16)(%rdx); + vmovdqu %xmm3, (3 * 16)(%rdx); + leaq (4 * 16)(%rdx), %rdx; + + jmp .Lctr_enc_blk1; + + .align 8 + .Lctr_enc_blk4_handle_carry: + /* Increment counters (handle carry). */ + vpshufb %xmm13, %xmm0, %xmm3; /* be => le */ + addq $4, %r10; + adcq $0, %r11; + bswapq %r10; + bswapq %r11; + movq %r10, 8(%rsi); + movq %r11, 0(%rsi); + bswapq %r10; + bswapq %r11; + inc_le128(%xmm3, %xmm15, %xmm5); + vpshufb %xmm13, %xmm3, %xmm1; + inc_le128(%xmm3, %xmm15, %xmm5); + vpshufb %xmm13, %xmm3, %xmm2; + inc_le128(%xmm3, %xmm15, %xmm5); + vpshufb %xmm13, %xmm3, %xmm3; + + jmp .Lctr_enc_blk4_rounds; + + /* Process trailing one to three blocks, one per loop. */ +.align 8 +.Lctr_enc_blk1: + cmpq $1, %r8; + jb .Ldone_ctr_enc; + + leaq -1(%r8), %r8; + + /* Load and increament counter. */ + vmovdqu (%rsi), %xmm0; + addq $1, %r10; + adcq $0, %r11; + bswapq %r10; + bswapq %r11; + movq %r10, 8(%rsi); + movq %r11, 0(%rsi); + bswapq %r10; + bswapq %r11; + + /* AES rounds. 
*/ + vpxor (0 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (1 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (2 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (3 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (4 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (5 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (6 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (7 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (8 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (9 * 16)(%rdi), %xmm0, %xmm0; + vmovdqa (10 * 16)(%rdi), %xmm1; + cmpl $12, %r9d; + jb .Lctr_enc_blk1_last; + vaesenc %xmm1, %xmm0, %xmm0; + vaesenc (11 * 16)(%rdi), %xmm0, %xmm0; + vmovdqa (12 * 16)(%rdi), %xmm1; + jz .Lctr_enc_blk1_last; + vaesenc %xmm1, %xmm0, %xmm0; + vaesenc (13 * 16)(%rdi), %xmm0, %xmm0; + vmovdqa (14 * 16)(%rdi), %xmm1; + + /* Last round and output handling. */ + .Lctr_enc_blk1_last: + vpxor (%rcx), %xmm1, %xmm1; /* Xor src to last round key. */ + leaq 16(%rcx), %rcx; + vaesenclast %xmm1, %xmm0, %xmm0; /* Last round and xor with xmm1. */ + vmovdqu %xmm0, (%rdx); + leaq 16(%rdx), %rdx; + + jmp .Lctr_enc_blk1; + +.align 8 +.Ldone_ctr_enc: + vzeroall; + xorl %r10d, %r10d; + xorl %r11d, %r11d; + ret + CFI_ENDPROC(); +ELF(.size _gcry_vaes_avx2_ctr_enc_amd64,.-_gcry_vaes_avx2_ctr_enc_amd64) + +/********************************************************************** + OCB-mode encryption/decryption + **********************************************************************/ +ELF(.type _gcry_vaes_avx2_ocb_checksum, at function) +_gcry_vaes_avx2_ocb_checksum: + /* input: + * %rax: offset pointer + * %r10: plaintext pointer + * %r11: nblocks + */ + CFI_STARTPROC(); + + vpxor %xmm0, %xmm0, %xmm0; + cmpq $4, %r11; + jb .Locb_checksum_blk1; + vpxor %xmm1, %xmm1, %xmm1; + vpxor %xmm2, %xmm2, %xmm2; + vpxor %xmm3, %xmm3, %xmm3; + cmpq $16, %r11; + jb .Locb_checksum_blk4; + vpxor %xmm4, %xmm4, %xmm4; + vpxor %xmm5, %xmm5, %xmm5; + vpxor %xmm6, %xmm6, %xmm6; + vpxor %xmm7, %xmm7, %xmm7; + cmpq $32, %r11; + jb .Locb_checksum_blk16; + vpxor %xmm8, %xmm8, %xmm8; + vpxor %xmm9, %xmm9, %xmm9; + vpxor %xmm10, %xmm10, %xmm10; + vpxor %xmm11, %xmm11, %xmm11; + vpxor %xmm12, %xmm12, %xmm12; + vpxor %xmm13, %xmm13, %xmm13; + vpxor %xmm14, %xmm14, %xmm14; + vpxor %xmm15, %xmm15, %xmm15; + +.align 8 +.Locb_checksum_blk32: + cmpq $32, %r11; + jb .Locb_checksum_blk32_done; + + leaq -32(%r11), %r11; + + vpxor (0 * 16)(%r10), %ymm0, %ymm0; + vpxor (2 * 16)(%r10), %ymm1, %ymm1; + vpxor (4 * 16)(%r10), %ymm2, %ymm2; + vpxor (6 * 16)(%r10), %ymm3, %ymm3; + vpxor (8 * 16)(%r10), %ymm4, %ymm4; + vpxor (10 * 16)(%r10), %ymm5, %ymm5; + vpxor (12 * 16)(%r10), %ymm6, %ymm6; + vpxor (14 * 16)(%r10), %ymm7, %ymm7; + vpxor (16 * 16)(%r10), %ymm8, %ymm8; + vpxor (18 * 16)(%r10), %ymm9, %ymm9; + vpxor (20 * 16)(%r10), %ymm10, %ymm10; + vpxor (22 * 16)(%r10), %ymm11, %ymm11; + vpxor (24 * 16)(%r10), %ymm12, %ymm12; + vpxor (26 * 16)(%r10), %ymm13, %ymm13; + vpxor (28 * 16)(%r10), %ymm14, %ymm14; + vpxor (30 * 16)(%r10), %ymm15, %ymm15; + leaq (32 * 16)(%r10), %r10; + + jmp .Locb_checksum_blk32; + +.align 8 +.Locb_checksum_blk32_done: + vpxor %ymm8, %ymm0, %ymm0; + vpxor %ymm9, %ymm1, %ymm1; + vpxor %ymm10, %ymm2, %ymm2; + vpxor %ymm11, %ymm3, %ymm3; + vpxor %ymm12, %ymm4, %ymm4; + vpxor %ymm13, %ymm5, %ymm5; + vpxor %ymm14, %ymm6, %ymm6; + vpxor %ymm15, %ymm7, %ymm7; + +.align 8 +.Locb_checksum_blk16: + cmpq $16, %r11; + jb .Locb_checksum_blk16_done; + + leaq -16(%r11), %r11; + + vpxor (0 * 16)(%r10), %ymm0, %ymm0; + vpxor (2 * 16)(%r10), %ymm1, %ymm1; + vpxor (4 * 16)(%r10), %ymm2, %ymm2; + vpxor (6 * 16)(%r10), %ymm3, %ymm3; + vpxor (8 * 16)(%r10), 
%ymm4, %ymm4; + vpxor (10 * 16)(%r10), %ymm5, %ymm5; + vpxor (12 * 16)(%r10), %ymm6, %ymm6; + vpxor (14 * 16)(%r10), %ymm7, %ymm7; + leaq (16 * 16)(%r10), %r10; + + jmp .Locb_checksum_blk16; + +.align 8 +.Locb_checksum_blk16_done: + vpxor %ymm4, %ymm0, %ymm0; + vpxor %ymm5, %ymm1, %ymm1; + vpxor %ymm6, %ymm2, %ymm2; + vpxor %ymm7, %ymm3, %ymm3; + vextracti128 $1, %ymm0, %xmm4; + vextracti128 $1, %ymm1, %xmm5; + vextracti128 $1, %ymm2, %xmm6; + vextracti128 $1, %ymm3, %xmm7; + vpxor %xmm4, %xmm0, %xmm0; + vpxor %xmm5, %xmm1, %xmm1; + vpxor %xmm6, %xmm2, %xmm2; + vpxor %xmm7, %xmm3, %xmm3; + +.align 8 +.Locb_checksum_blk4: + cmpq $4, %r11; + jb .Locb_checksum_blk4_done; + + leaq -4(%r11), %r11; + + vpxor (0 * 16)(%r10), %xmm0, %xmm0; + vpxor (1 * 16)(%r10), %xmm1, %xmm1; + vpxor (2 * 16)(%r10), %xmm2, %xmm2; + vpxor (3 * 16)(%r10), %xmm3, %xmm3; + leaq (4 * 16)(%r10), %r10; + + jmp .Locb_checksum_blk4; + +.align 8 +.Locb_checksum_blk4_done: + vpxor %xmm1, %xmm0, %xmm0; + vpxor %xmm3, %xmm2, %xmm2; + vpxor %xmm2, %xmm0, %xmm0; + +.align 8 +.Locb_checksum_blk1: + cmpq $1, %r11; + jb .Locb_checksum_done; + + leaq -1(%r11), %r11; + + vpxor (%r10), %xmm0, %xmm0; + leaq 16(%r10), %r10; + + jmp .Locb_checksum_blk1; + +.align 8 +.Locb_checksum_done: + vpxor (%rax), %xmm0, %xmm0; + vmovdqu %xmm0, (%rax); + ret; + CFI_ENDPROC(); +ELF(.size _gcry_vaes_avx2_ocb_checksum,.-_gcry_vaes_avx2_ocb_checksum) + +#define STACK_REGS_POS (16 * 16 + 4 * 16) +#define STACK_ALLOC (STACK_REGS_POS + 6 * 8) + +ELF(.type _gcry_vaes_avx2_ocb_crypt_amd64, at function) +.globl _gcry_vaes_avx2_ocb_crypt_amd64 +_gcry_vaes_avx2_ocb_crypt_amd64: + /* input: + * %rdi: round keys + * %esi: nblk + * %rdx: dst + * %rcx: src + * %r8: nblocks + * %r9: nrounds + * 16(%rbp): offset + * 24(%rbp): checksum + * 32(%rbp): L-array + * 40(%rbp): encrypt (%r15d) + */ + CFI_STARTPROC(); + + pushq %rbp; + CFI_PUSH(%rbp); + movq %rsp, %rbp; + CFI_DEF_CFA_REGISTER(%rbp); + + subq $STACK_ALLOC, %rsp; + andq $~63, %rsp; + + movq %r12, (STACK_REGS_POS + 0 * 8)(%rsp); + CFI_REG_ON_STACK(r12, STACK_REGS_POS + 0 * 8); + movq %r13, (STACK_REGS_POS + 1 * 8)(%rsp); + CFI_REG_ON_STACK(r13, STACK_REGS_POS + 1 * 8); + movq %r14, (STACK_REGS_POS + 2 * 8)(%rsp); + CFI_REG_ON_STACK(r14, STACK_REGS_POS + 2 * 8); + movq %r15, (STACK_REGS_POS + 3 * 8)(%rsp); + CFI_REG_ON_STACK(r15, STACK_REGS_POS + 3 * 8); + + movl 40(%rbp), %r15d; /* encrypt-flag. */ + movq 16(%rbp), %r14; /* offset ptr. */ + + /* Handle encryption checksumming. */ + testl %r15d, %r15d; + jz .Locb_dec_checksum_prepare; + movq 24(%rbp), %rax; /* checksum ptr. */ + movq %rcx, %r10; + movq %r8, %r11; + call _gcry_vaes_avx2_ocb_checksum; + jmp .Locb_enc_checksum_done; +.Locb_dec_checksum_prepare: + /* Store plaintext address and number of blocks for decryption + * checksumming. */ + movq %rdx, (STACK_REGS_POS + 4 * 8)(%rsp); + movq %r8, (STACK_REGS_POS + 5 * 8)(%rsp); +.Locb_enc_checksum_done: + + vmovdqu (%r14), %xmm15; /* Load offset. */ + movq 32(%rbp), %r14; /* L-array ptr. */ + vmovdqa (0 * 16)(%rdi), %xmm0; /* first key */ + movl $(10 * 16), %eax; + cmpl $12, %r9d; + jb .Llast_key_ptr; + movl $(12 * 16), %eax; + je .Llast_key_ptr; + movl $(14 * 16), %eax; + .align 8 + .Llast_key_ptr: + vpxor (%rdi, %rax), %xmm0, %xmm0; /* first key ^ last key */ + vpxor (0 * 16)(%rdi), %xmm15, %xmm15; /* offset ^ first key */ + vmovdqa %xmm0, (14 * 16)(%rsp); + vmovdqa %xmm0, (15 * 16)(%rsp); + +.align 8 +.Lhandle_unaligned_ocb: + /* Get number of blocks to align nblk to 16 (and L-array optimization). 
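For the OCB paths that follow, the per-block mask is Offset_i = Offset_{i-1} ^ L[ntz(i)], where ntz(i) is the number of trailing zero bits of the 1-based block index and the L[] values come from the caller's L-array at 32(%rbp); the tzcnt/shll/vpxor sequences in the unaligned loops compute exactly that chain. The code also folds the first round key into the stored offset and caches first key ^ last key at (14 * 16)(%rsp), so the offset XOR doubles as the AES pre- and post-whitening. A small C model of the offset chain (__builtin_ctzll is a GCC builtin; names are illustrative, not the patch's API):

    #include <stdint.h>

    /* Offset chaining for blocks blk_index+1 .. blk_index+n;
       l_table[j] plays the role of the caller's L-array entries. */
    static void
    ocb_offsets_model (uint8_t offset[16], const uint8_t l_table[][16],
                       uint64_t blk_index, unsigned int n)
    {
      for (unsigned int i = 1; i <= n; i++)
        {
          unsigned int ntz = __builtin_ctzll (blk_index + i);  /* tzcnt */
          for (int j = 0; j < 16; j++)
            offset[j] ^= l_table[ntz][j];                      /* vpxor */
        }
    }
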
*/ + movl %esi, %r10d; + negl %r10d; + andl $15, %r10d; + cmpq %r8, %r10; + cmovaq %r8, %r10; + cmpq $1, %r10; + jb .Lunaligned_ocb_done; + + /* Unaligned: Process eight blocks per loop. */ +.align 8 +.Locb_unaligned_blk8: + cmpq $8, %r10; + jb .Locb_unaligned_blk4; + + leaq -8(%r8), %r8; + leaq -8(%r10), %r10; + + leal 1(%esi), %r11d; + leal 2(%esi), %r12d; + leal 3(%esi), %r13d; + leal 4(%esi), %eax; + tzcntl %r11d, %r11d; + tzcntl %r12d, %r12d; + tzcntl %r13d, %r13d; + tzcntl %eax, %eax; + shll $4, %r11d; + shll $4, %r12d; + shll $4, %r13d; + shll $4, %eax; + vpxor (%r14, %r11), %xmm15, %xmm5; + vpxor (%r14, %r12), %xmm5, %xmm6; + vpxor (%r14, %r13), %xmm6, %xmm7; + vpxor (%r14, %rax), %xmm7, %xmm8; + + leal 5(%esi), %r11d; + leal 6(%esi), %r12d; + leal 7(%esi), %r13d; + leal 8(%esi), %esi; + tzcntl %r11d, %r11d; + tzcntl %r12d, %r12d; + tzcntl %r13d, %r13d; + tzcntl %esi, %eax; + shll $4, %r11d; + shll $4, %r12d; + shll $4, %r13d; + shll $4, %eax; + vpxor (%r14, %r11), %xmm8, %xmm9; + vpxor (%r14, %r12), %xmm9, %xmm10; + vpxor (%r14, %r13), %xmm10, %xmm11; + vpxor (%r14, %rax), %xmm11, %xmm15; + + vinserti128 $1, %xmm6, %ymm5, %ymm5; + vinserti128 $1, %xmm8, %ymm7, %ymm6; + vinserti128 $1, %xmm10, %ymm9, %ymm7; + vinserti128 $1, %xmm15, %ymm11, %ymm8; + + vpxor (0 * 16)(%rcx), %ymm5, %ymm0; + vpxor (2 * 16)(%rcx), %ymm6, %ymm1; + vpxor (4 * 16)(%rcx), %ymm7, %ymm2; + vpxor (6 * 16)(%rcx), %ymm8, %ymm3; + leaq (8 * 16)(%rcx), %rcx; + + vmovdqa (14 * 16)(%rsp), %ymm9; + + testl %r15d, %r15d; + jz .Locb_unaligned_blk8_dec; + + /* AES rounds */ + vbroadcasti128 (1 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (2 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (3 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (4 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (5 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (6 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (7 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (8 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (9 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + cmpl $12, %r9d; + jb .Locb_unaligned_blk8_enc_last; + vbroadcasti128 (10 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (11 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + jz .Locb_unaligned_blk8_enc_last; + vbroadcasti128 (12 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (13 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + + /* Last round and output handling. */ + .Locb_unaligned_blk8_enc_last: + vpxor %ymm5, %ymm9, %ymm5; /* Xor src to last round key. 
*/ + vpxor %ymm6, %ymm9, %ymm6; + vpxor %ymm7, %ymm9, %ymm7; + vpxor %ymm8, %ymm9, %ymm4; + vaesenclast %ymm5, %ymm0, %ymm0; + vaesenclast %ymm6, %ymm1, %ymm1; + vaesenclast %ymm7, %ymm2, %ymm2; + vaesenclast %ymm4, %ymm3, %ymm3; + vmovdqu %ymm0, (0 * 16)(%rdx); + vmovdqu %ymm1, (2 * 16)(%rdx); + vmovdqu %ymm2, (4 * 16)(%rdx); + vmovdqu %ymm3, (6 * 16)(%rdx); + leaq (8 * 16)(%rdx), %rdx; + + jmp .Locb_unaligned_blk8; + + .align 8 + .Locb_unaligned_blk8_dec: + /* AES rounds */ + vbroadcasti128 (1 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (2 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (3 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (4 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (5 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (6 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (7 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (8 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (9 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + cmpl $12, %r9d; + jb .Locb_unaligned_blk8_dec_last; + vbroadcasti128 (10 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (11 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + jz .Locb_unaligned_blk8_dec_last; + vbroadcasti128 (12 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (13 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + + /* Last round and output handling. */ + .Locb_unaligned_blk8_dec_last: + vpxor %ymm5, %ymm9, %ymm5; /* Xor src to last round key. */ + vpxor %ymm6, %ymm9, %ymm6; + vpxor %ymm7, %ymm9, %ymm7; + vpxor %ymm8, %ymm9, %ymm4; + vaesdeclast %ymm5, %ymm0, %ymm0; + vaesdeclast %ymm6, %ymm1, %ymm1; + vaesdeclast %ymm7, %ymm2, %ymm2; + vaesdeclast %ymm4, %ymm3, %ymm3; + vmovdqu %ymm0, (0 * 16)(%rdx); + vmovdqu %ymm1, (2 * 16)(%rdx); + vmovdqu %ymm2, (4 * 16)(%rdx); + vmovdqu %ymm3, (6 * 16)(%rdx); + leaq (8 * 16)(%rdx), %rdx; + + jmp .Locb_unaligned_blk8; + + /* Unaligned: Process four blocks per loop. 
*/ +.align 8 +.Locb_unaligned_blk4: + cmpq $4, %r10; + jb .Locb_unaligned_blk1; + + leaq -4(%r8), %r8; + leaq -4(%r10), %r10; + + leal 1(%esi), %r11d; + leal 2(%esi), %r12d; + leal 3(%esi), %r13d; + leal 4(%esi), %esi; + tzcntl %r11d, %r11d; + tzcntl %r12d, %r12d; + tzcntl %r13d, %r13d; + tzcntl %esi, %eax; + shll $4, %r11d; + shll $4, %r12d; + shll $4, %r13d; + shll $4, %eax; + + vpxor (%r14, %r11), %xmm15, %xmm5; + vpxor (%r14, %r12), %xmm5, %xmm6; + vpxor (%r14, %r13), %xmm6, %xmm7; + vpxor (%r14, %rax), %xmm7, %xmm15; + + vpxor (0 * 16)(%rcx), %xmm5, %xmm0; + vpxor (1 * 16)(%rcx), %xmm6, %xmm1; + vpxor (2 * 16)(%rcx), %xmm7, %xmm2; + vpxor (3 * 16)(%rcx), %xmm15, %xmm3; + leaq (4 * 16)(%rcx), %rcx; + + vmovdqa (14 * 16)(%rsp), %xmm8; + + testl %r15d, %r15d; + jz .Locb_unaligned_blk4_dec; + + /* AES rounds */ + vmovdqa (1 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (2 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (3 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (4 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (5 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (6 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (7 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (8 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (9 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + cmpl $12, %r9d; + jb .Locb_unaligned_blk4_enc_last; + vmovdqa (10 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (11 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + jz .Locb_unaligned_blk4_enc_last; + vmovdqa (12 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (13 * 16)(%rdi), %xmm4; + VAESENC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + + /* Last round and output handling. */ + .Locb_unaligned_blk4_enc_last: + vpxor %xmm5, %xmm8, %xmm5; /* Xor src to last round key. 
*/ + vpxor %xmm6, %xmm8, %xmm6; + vpxor %xmm7, %xmm8, %xmm7; + vpxor %xmm15, %xmm8, %xmm4; + vaesenclast %xmm5, %xmm0, %xmm0; + vaesenclast %xmm6, %xmm1, %xmm1; + vaesenclast %xmm7, %xmm2, %xmm2; + vaesenclast %xmm4, %xmm3, %xmm3; + vmovdqu %xmm0, (0 * 16)(%rdx); + vmovdqu %xmm1, (1 * 16)(%rdx); + vmovdqu %xmm2, (2 * 16)(%rdx); + vmovdqu %xmm3, (3 * 16)(%rdx); + leaq (4 * 16)(%rdx), %rdx; + + jmp .Locb_unaligned_blk4; + + .align 8 + .Locb_unaligned_blk4_dec: + /* AES rounds */ + vmovdqa (1 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (2 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (3 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (4 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (5 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (6 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (7 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (8 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (9 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + cmpl $12, %r9d; + jb .Locb_unaligned_blk4_dec_last; + vmovdqa (10 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (11 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + jz .Locb_unaligned_blk4_dec_last; + vmovdqa (12 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + vmovdqa (13 * 16)(%rdi), %xmm4; + VAESDEC4(%xmm4, %xmm0, %xmm1, %xmm2, %xmm3); + + /* Last round and output handling. */ + .Locb_unaligned_blk4_dec_last: + vpxor %xmm5, %xmm8, %xmm5; /* Xor src to last round key. */ + vpxor %xmm6, %xmm8, %xmm6; + vpxor %xmm7, %xmm8, %xmm7; + vpxor %xmm15, %xmm8, %xmm4; + vaesdeclast %xmm5, %xmm0, %xmm0; + vaesdeclast %xmm6, %xmm1, %xmm1; + vaesdeclast %xmm7, %xmm2, %xmm2; + vaesdeclast %xmm4, %xmm3, %xmm3; + vmovdqu %xmm0, (0 * 16)(%rdx); + vmovdqu %xmm1, (1 * 16)(%rdx); + vmovdqu %xmm2, (2 * 16)(%rdx); + vmovdqu %xmm3, (3 * 16)(%rdx); + leaq (4 * 16)(%rdx), %rdx; + + jmp .Locb_unaligned_blk4; + + /* Unaligned: Process one block per loop. */ +.align 8 +.Locb_unaligned_blk1: + cmpq $1, %r10; + jb .Lunaligned_ocb_done; + + leaq -1(%r8), %r8; + leaq -1(%r10), %r10; + + leal 1(%esi), %esi; + tzcntl %esi, %r11d; + shll $4, %r11d; + vpxor (%r14, %r11), %xmm15, %xmm15; + vpxor (%rcx), %xmm15, %xmm0; + leaq 16(%rcx), %rcx; + + testl %r15d, %r15d; + jz .Locb_unaligned_blk1_dec; + /* AES rounds. */ + vaesenc (1 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (2 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (3 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (4 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (5 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (6 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (7 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (8 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (9 * 16)(%rdi), %xmm0, %xmm0; + cmpl $12, %r9d; + jb .Locb_unaligned_blk1_enc_last; + vaesenc (10 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (11 * 16)(%rdi), %xmm0, %xmm0; + jz .Locb_unaligned_blk1_enc_last; + vaesenc (12 * 16)(%rdi), %xmm0, %xmm0; + vaesenc (13 * 16)(%rdi), %xmm0, %xmm0; + + /* Last round and output handling. */ + .Locb_unaligned_blk1_enc_last: + vpxor (14 * 16)(%rsp), %xmm15, %xmm1; + vaesenclast %xmm1, %xmm0, %xmm0; + vmovdqu %xmm0, (%rdx); + leaq 16(%rdx), %rdx; + + jmp .Locb_unaligned_blk1; + + .align 8 + .Locb_unaligned_blk1_dec: + /* AES rounds. 
*/ + vaesdec (1 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (2 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (3 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (4 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (5 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (6 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (7 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (8 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (9 * 16)(%rdi), %xmm0, %xmm0; + cmpl $12, %r9d; + jb .Locb_unaligned_blk1_dec_last; + vaesdec (10 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (11 * 16)(%rdi), %xmm0, %xmm0; + jz .Locb_unaligned_blk1_dec_last; + vaesdec (12 * 16)(%rdi), %xmm0, %xmm0; + vaesdec (13 * 16)(%rdi), %xmm0, %xmm0; + + /* Last round and output handling. */ + .Locb_unaligned_blk1_dec_last: + vpxor (14 * 16)(%rsp), %xmm15, %xmm1; + vaesdeclast %xmm1, %xmm0, %xmm0; + vmovdqu %xmm0, (%rdx); + leaq 16(%rdx), %rdx; + + jmp .Locb_unaligned_blk1; + +.align 8 +.Lunaligned_ocb_done: + cmpq $1, %r8; + jb .Ldone_ocb; + + /* Short buffers do not benefit from L-array optimization. */ + movq %r8, %r10; + cmpq $16, %r8; + jb .Locb_unaligned_blk8; + + vinserti128 $1, %xmm15, %ymm15, %ymm15; + + /* Prepare L-array optimization. + * Since nblk is aligned to 16, offsets will have following + * construction: + * - block1 = ntz{0} = offset ^ L[0] + * - block2 = ntz{1} = offset ^ L[0] ^ L[1] + * - block3 = ntz{0} = offset ^ L[1] + * - block4 = ntz{2} = offset ^ L[1] ^ L[2] + * - block5 = ntz{0} = offset ^ L[0] ^ L[1] ^ L[2] + * - block6 = ntz{1} = offset ^ L[0] ^ L[2] + * - block7 = ntz{0} = offset ^ L[2] + * - block8 = ntz{3} = offset ^ L[2] ^ L[3] + * - block9 = ntz{0} = offset ^ L[0] ^ L[2] ^ L[3] + * - block10 = ntz{1} = offset ^ L[0] ^ L[1] ^ L[2] ^ L[3] + * - block11 = ntz{0} = offset ^ L[1] ^ L[2] ^ L[3] + * - block12 = ntz{2} = offset ^ L[1] ^ L[3] + * - block13 = ntz{0} = offset ^ L[0] ^ L[1] ^ L[3] + * - block14 = ntz{1} = offset ^ L[0] ^ L[3] + * - block15 = ntz{0} = offset ^ L[3] + * - block16 = ntz{x} = offset ^ L[3] ^ L[ntz{x}] + */ + vmovdqu (0 * 16)(%r14), %xmm0; + vmovdqu (1 * 16)(%r14), %xmm1; + vmovdqu (2 * 16)(%r14), %xmm2; + vmovdqu (3 * 16)(%r14), %xmm3; + vpxor %xmm0, %xmm1, %xmm4; /* L[0] ^ L[1] */ + vpxor %xmm0, %xmm2, %xmm5; /* L[0] ^ L[2] */ + vpxor %xmm0, %xmm3, %xmm6; /* L[0] ^ L[3] */ + vpxor %xmm1, %xmm2, %xmm7; /* L[1] ^ L[2] */ + vpxor %xmm1, %xmm3, %xmm8; /* L[1] ^ L[3] */ + vpxor %xmm2, %xmm3, %xmm9; /* L[2] ^ L[3] */ + vpxor %xmm4, %xmm2, %xmm10; /* L[0] ^ L[1] ^ L[2] */ + vpxor %xmm5, %xmm3, %xmm11; /* L[0] ^ L[2] ^ L[3] */ + vpxor %xmm7, %xmm3, %xmm12; /* L[1] ^ L[2] ^ L[3] */ + vpxor %xmm0, %xmm8, %xmm13; /* L[0] ^ L[1] ^ L[3] */ + vpxor %xmm4, %xmm9, %xmm14; /* L[0] ^ L[1] ^ L[2] ^ L[3] */ + vinserti128 $1, %xmm4, %ymm0, %ymm0; + vinserti128 $1, %xmm7, %ymm1, %ymm1; + vinserti128 $1, %xmm5, %ymm10, %ymm10; + vinserti128 $1, %xmm9, %ymm2, %ymm2; + vinserti128 $1, %xmm14, %ymm11, %ymm11; + vinserti128 $1, %xmm8, %ymm12, %ymm12; + vinserti128 $1, %xmm6, %ymm13, %ymm13; + vmovdqa %ymm0, (0 * 16)(%rsp); + vmovdqa %ymm1, (2 * 16)(%rsp); + vmovdqa %ymm10, (4 * 16)(%rsp); + vmovdqa %ymm2, (6 * 16)(%rsp); + vmovdqa %ymm11, (8 * 16)(%rsp); + vmovdqa %ymm12, (10 * 16)(%rsp); + vmovdqa %ymm13, (12 * 16)(%rsp); + + /* Aligned: Process 16 blocks per loop. 
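Because the aligned path only starts at block indices that are multiples of 16, the ntz values inside each 16-block group follow the fixed pattern 0,1,0,2,0,1,0,3,0,1,0,2,0,1,0 for the first fifteen blocks, with only the sixteenth needing a run-time tzcnt; that is why the fifteen offset deltas can be precomputed from L[0]..L[3] alone and cached on the stack as done just above. A sketch of the same precomputation in C (illustrative only, not code from the patch):

    #include <stdint.h>
    #include <string.h>

    /* delta[i] = L[ntz(1)] ^ ... ^ L[ntz(i+1)] for i = 0..14, i.e. the
       cumulative offset change for blocks 1..15 of an aligned group. */
    static void
    ocb_aligned_deltas_model (uint8_t delta[15][16], const uint8_t L[4][16])
    {
      static const unsigned char ntz_in_group[15] =
        { 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0 };
      uint8_t acc[16];

      memset (acc, 0, sizeof acc);
      for (int i = 0; i < 15; i++)
        {
          for (int j = 0; j < 16; j++)
            acc[j] ^= L[ntz_in_group[i]][j];
          memcpy (delta[i], acc, 16);
        }
      /* Block 16 additionally XORs L[ntz(group end)], computed with
         tzcnt at run time. */
    }
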
*/ +.align 8 +.Locb_aligned_blk16: + cmpq $16, %r8; + jb .Locb_aligned_blk8; + + leaq -16(%r8), %r8; + + leal 16(%esi), %esi; + tzcntl %esi, %eax; + shll $4, %eax; + + vpxor (0 * 16)(%rsp), %ymm15, %ymm8; + vpxor (2 * 16)(%rsp), %ymm15, %ymm9; + vpxor (4 * 16)(%rsp), %ymm15, %ymm10; + vpxor (6 * 16)(%rsp), %ymm15, %ymm11; + vpxor (8 * 16)(%rsp), %ymm15, %ymm12; + + vpxor (3 * 16)(%r14), %xmm15, %xmm13; /* offset ^ first key ^ L[3] */ + vpxor (%r14, %rax), %xmm13, %xmm14; /* offset ^ first key ^ L[3] ^ L[ntz{nblk+16}] */ + vinserti128 $1, %xmm14, %ymm13, %ymm14; + + vpxor (10 * 16)(%rsp), %ymm15, %ymm13; + vpxor (14 * 16)(%rcx), %ymm14, %ymm7; + + vpxor (0 * 16)(%rcx), %ymm8, %ymm0; + vpxor (2 * 16)(%rcx), %ymm9, %ymm1; + vpxor (4 * 16)(%rcx), %ymm10, %ymm2; + vpxor (6 * 16)(%rcx), %ymm11, %ymm3; + vpxor (8 * 16)(%rcx), %ymm12, %ymm4; + vpxor (10 * 16)(%rcx), %ymm13, %ymm5; + vmovdqa %ymm13, (16 * 16)(%rsp); + vpxor (12 * 16)(%rsp), %ymm15, %ymm13; + vpxor (12 * 16)(%rcx), %ymm13, %ymm6; + vmovdqa %ymm13, (18 * 16)(%rsp); + + leaq (16 * 16)(%rcx), %rcx; + + vperm2i128 $0x11, %ymm14, %ymm14, %ymm15; + + testl %r15d, %r15d; + jz .Locb_aligned_blk16_dec; + /* AES rounds */ + vbroadcasti128 (1 * 16)(%rdi), %ymm13; + VAESENC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (2 * 16)(%rdi), %ymm13; + VAESENC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (3 * 16)(%rdi), %ymm13; + VAESENC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (4 * 16)(%rdi), %ymm13; + VAESENC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (5 * 16)(%rdi), %ymm13; + VAESENC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (6 * 16)(%rdi), %ymm13; + VAESENC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (7 * 16)(%rdi), %ymm13; + VAESENC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (8 * 16)(%rdi), %ymm13; + VAESENC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (9 * 16)(%rdi), %ymm13; + VAESENC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + cmpl $12, %r9d; + jb .Locb_aligned_blk16_enc_last; + vbroadcasti128 (10 * 16)(%rdi), %ymm13; + VAESENC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (11 * 16)(%rdi), %ymm13; + VAESENC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + jz .Locb_aligned_blk16_enc_last; + vbroadcasti128 (12 * 16)(%rdi), %ymm13; + VAESENC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (13 * 16)(%rdi), %ymm13; + VAESENC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + + /* Last round and output handling. 
*/ + .Locb_aligned_blk16_enc_last: + vmovdqa (14 * 16)(%rsp), %ymm13; + vpxor %ymm8, %ymm13, %ymm8; + vpxor %ymm9, %ymm13, %ymm9; + vpxor %ymm10, %ymm13, %ymm10; + vpxor %ymm11, %ymm13, %ymm11; + vaesenclast %ymm8, %ymm0, %ymm0; + vaesenclast %ymm9, %ymm1, %ymm1; + vaesenclast %ymm10, %ymm2, %ymm2; + vaesenclast %ymm11, %ymm3, %ymm3; + vpxor %ymm12, %ymm13, %ymm12; + vpxor (16 * 16)(%rsp), %ymm13, %ymm8; + vpxor (18 * 16)(%rsp), %ymm13, %ymm9; + vpxor %ymm14, %ymm13, %ymm13; + vaesenclast %ymm12, %ymm4, %ymm4; + vaesenclast %ymm8, %ymm5, %ymm5; + vaesenclast %ymm9, %ymm6, %ymm6; + vaesenclast %ymm13, %ymm7, %ymm7; + vmovdqu %ymm0, (0 * 16)(%rdx); + vmovdqu %ymm1, (2 * 16)(%rdx); + vmovdqu %ymm2, (4 * 16)(%rdx); + vmovdqu %ymm3, (6 * 16)(%rdx); + vmovdqu %ymm4, (8 * 16)(%rdx); + vmovdqu %ymm5, (10 * 16)(%rdx); + vmovdqu %ymm6, (12 * 16)(%rdx); + vmovdqu %ymm7, (14 * 16)(%rdx); + leaq (16 * 16)(%rdx), %rdx; + + jmp .Locb_aligned_blk16; + + .align 8 + .Locb_aligned_blk16_dec: + /* AES rounds */ + vbroadcasti128 (1 * 16)(%rdi), %ymm13; + VAESDEC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (2 * 16)(%rdi), %ymm13; + VAESDEC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (3 * 16)(%rdi), %ymm13; + VAESDEC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (4 * 16)(%rdi), %ymm13; + VAESDEC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (5 * 16)(%rdi), %ymm13; + VAESDEC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (6 * 16)(%rdi), %ymm13; + VAESDEC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (7 * 16)(%rdi), %ymm13; + VAESDEC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (8 * 16)(%rdi), %ymm13; + VAESDEC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (9 * 16)(%rdi), %ymm13; + VAESDEC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + cmpl $12, %r9d; + jb .Locb_aligned_blk16_dec_last; + vbroadcasti128 (10 * 16)(%rdi), %ymm13; + VAESDEC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (11 * 16)(%rdi), %ymm13; + VAESDEC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + jz .Locb_aligned_blk16_dec_last; + vbroadcasti128 (12 * 16)(%rdi), %ymm13; + VAESDEC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + vbroadcasti128 (13 * 16)(%rdi), %ymm13; + VAESDEC8(%ymm13, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7); + + /* Last round and output handling. 
*/ + .Locb_aligned_blk16_dec_last: + vmovdqa (14 * 16)(%rsp), %ymm13; + vpxor %ymm8, %ymm13, %ymm8; + vpxor %ymm9, %ymm13, %ymm9; + vpxor %ymm10, %ymm13, %ymm10; + vpxor %ymm11, %ymm13, %ymm11; + vaesdeclast %ymm8, %ymm0, %ymm0; + vaesdeclast %ymm9, %ymm1, %ymm1; + vaesdeclast %ymm10, %ymm2, %ymm2; + vaesdeclast %ymm11, %ymm3, %ymm3; + vpxor %ymm12, %ymm13, %ymm12; + vpxor (16 * 16)(%rsp), %ymm13, %ymm8; + vpxor (18 * 16)(%rsp), %ymm13, %ymm9; + vpxor %ymm14, %ymm13, %ymm13; + vaesdeclast %ymm12, %ymm4, %ymm4; + vaesdeclast %ymm8, %ymm5, %ymm5; + vaesdeclast %ymm9, %ymm6, %ymm6; + vaesdeclast %ymm13, %ymm7, %ymm7; + vmovdqu %ymm0, (0 * 16)(%rdx); + vmovdqu %ymm1, (2 * 16)(%rdx); + vmovdqu %ymm2, (4 * 16)(%rdx); + vmovdqu %ymm3, (6 * 16)(%rdx); + vmovdqu %ymm4, (8 * 16)(%rdx); + vmovdqu %ymm5, (10 * 16)(%rdx); + vmovdqu %ymm6, (12 * 16)(%rdx); + vmovdqu %ymm7, (14 * 16)(%rdx); + leaq (16 * 16)(%rdx), %rdx; + + jmp .Locb_aligned_blk16; + + /* Aligned: Process trailing eight blocks. */ +.align 8 +.Locb_aligned_blk8: + cmpq $8, %r8; + jb .Locb_aligned_done; + + leaq -8(%r8), %r8; + + leal 8(%esi), %esi; + tzcntl %esi, %eax; + shll $4, %eax; + + vpxor (0 * 16)(%rsp), %ymm15, %ymm5; + vpxor (2 * 16)(%rsp), %ymm15, %ymm6; + vpxor (4 * 16)(%rsp), %ymm15, %ymm7; + + vpxor (2 * 16)(%r14), %xmm15, %xmm13; /* offset ^ first key ^ L[2] */ + vpxor (%r14, %rax), %xmm13, %xmm14; /* offset ^ first key ^ L[2] ^ L[ntz{nblk+8}] */ + vinserti128 $1, %xmm14, %ymm13, %ymm14; + + vpxor (0 * 16)(%rcx), %ymm5, %ymm0; + vpxor (2 * 16)(%rcx), %ymm6, %ymm1; + vpxor (4 * 16)(%rcx), %ymm7, %ymm2; + vpxor (6 * 16)(%rcx), %ymm14, %ymm3; + leaq (8 * 16)(%rcx), %rcx; + + vperm2i128 $0x11, %ymm14, %ymm14, %ymm15; + + vmovdqa (14 * 16)(%rsp), %ymm8; + + testl %r15d, %r15d; + jz .Locb_aligned_blk8_dec; + + /* AES rounds */ + vbroadcasti128 (1 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (2 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (3 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (4 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (5 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (6 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (7 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (8 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (9 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + cmpl $12, %r9d; + jb .Locb_aligned_blk8_enc_last; + vbroadcasti128 (10 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (11 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + jz .Locb_aligned_blk8_enc_last; + vbroadcasti128 (12 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (13 * 16)(%rdi), %ymm4; + VAESENC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + + /* Last round and output handling. 
*/ + .Locb_aligned_blk8_enc_last: + vpxor %ymm5, %ymm8, %ymm5; + vpxor %ymm6, %ymm8, %ymm6; + vpxor %ymm7, %ymm8, %ymm7; + vpxor %ymm14, %ymm8, %ymm4; + vaesenclast %ymm5, %ymm0, %ymm0; + vaesenclast %ymm6, %ymm1, %ymm1; + vaesenclast %ymm7, %ymm2, %ymm2; + vaesenclast %ymm4, %ymm3, %ymm3; + vmovdqu %ymm0, (0 * 16)(%rdx); + vmovdqu %ymm1, (2 * 16)(%rdx); + vmovdqu %ymm2, (4 * 16)(%rdx); + vmovdqu %ymm3, (6 * 16)(%rdx); + leaq (8 * 16)(%rdx), %rdx; + + jmp .Locb_aligned_done; + + .align 8 + .Locb_aligned_blk8_dec: + /* AES rounds */ + vbroadcasti128 (1 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (2 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (3 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (4 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (5 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (6 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (7 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (8 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (9 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + cmpl $12, %r9d; + jb .Locb_aligned_blk8_dec_last; + vbroadcasti128 (10 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (11 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + jz .Locb_aligned_blk8_dec_last; + vbroadcasti128 (12 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (13 * 16)(%rdi), %ymm4; + VAESDEC4(%ymm4, %ymm0, %ymm1, %ymm2, %ymm3); + vbroadcasti128 (14 * 16)(%rdi), %ymm4; + + /* Last round and output handling. */ + .Locb_aligned_blk8_dec_last: + vpxor %ymm5, %ymm8, %ymm5; + vpxor %ymm6, %ymm8, %ymm6; + vpxor %ymm7, %ymm8, %ymm7; + vpxor %ymm14, %ymm8, %ymm4; + vaesdeclast %ymm5, %ymm0, %ymm0; + vaesdeclast %ymm6, %ymm1, %ymm1; + vaesdeclast %ymm7, %ymm2, %ymm2; + vaesdeclast %ymm4, %ymm3, %ymm3; + vmovdqu %ymm0, (0 * 16)(%rdx); + vmovdqu %ymm1, (2 * 16)(%rdx); + vmovdqu %ymm2, (4 * 16)(%rdx); + vmovdqu %ymm3, (6 * 16)(%rdx); + leaq (8 * 16)(%rdx), %rdx; + + /*jmp .Locb_aligned_done;*/ + +.align 8 +.Locb_aligned_done: + /* Burn stack. */ + vpxor %ymm0, %ymm0, %ymm0; + vmovdqa %ymm0, (0 * 16)(%rsp); + vmovdqa %ymm0, (2 * 16)(%rsp); + vmovdqa %ymm0, (4 * 16)(%rsp); + vmovdqa %ymm0, (6 * 16)(%rsp); + vmovdqa %ymm0, (8 * 16)(%rsp); + vmovdqa %ymm0, (10 * 16)(%rsp); + vmovdqa %ymm0, (12 * 16)(%rsp); + vmovdqa %ymm0, (16 * 16)(%rsp); + vmovdqa %ymm0, (18 * 16)(%rsp); + + /* Handle tailing 1?7 blocks in nblk-unaligned loop. */ + movq %r8, %r10; + cmpq $1, %r8; + jnb .Locb_unaligned_blk8; + +.align 8 +.Ldone_ocb: + movq 16(%rbp), %r14; /* offset ptr. */ + vpxor (0 * 16)(%rdi), %xmm15, %xmm15; /* offset ^ first key ^ first key */ + vmovdqu %xmm15, (%r14); /* Store offset. */ + + /* Handle decryption checksumming. */ + + testl %r15d, %r15d; + jnz .Locb_dec_checksum_done; + movq 24(%rbp), %rax; /* checksum ptr. */ + movq (STACK_REGS_POS + 4 * 8)(%rsp), %r10; + movq (STACK_REGS_POS + 5 * 8)(%rsp), %r11; + call _gcry_vaes_avx2_ocb_checksum; +.Locb_dec_checksum_done: + + /* Burn stack. 
*/ + vpxor %ymm0, %ymm0, %ymm0; + vmovdqa %ymm0, (0 * 16)(%rsp); + + vzeroall; + + movq (STACK_REGS_POS + 0 * 8)(%rsp), %r12; + CFI_RESTORE(%r12); + movq (STACK_REGS_POS + 1 * 8)(%rsp), %r13; + CFI_RESTORE(%r13); + movq (STACK_REGS_POS + 2 * 8)(%rsp), %r14; + CFI_RESTORE(%r14); + movq (STACK_REGS_POS + 3 * 8)(%rsp), %r15; + CFI_RESTORE(%r15); + + leave; + CFI_LEAVE(); + ret + CFI_ENDPROC(); +ELF(.size _gcry_vaes_avx2_ctr_enc_amd64,.-_gcry_vaes_avx2_ctr_enc_amd64) + +/********************************************************************** + constants + **********************************************************************/ +ELF(.type _gcry_vaes_consts, at object) +_gcry_vaes_consts: +.align 32 +.Lbige_addb_0: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 +.Lbige_addb_1: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 +.Lbige_addb_2: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2 +.Lbige_addb_3: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3 +.Lbige_addb_4: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4 +.Lbige_addb_5: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5 +.Lbige_addb_6: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6 +.Lbige_addb_7: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7 +.Lbige_addb_8: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8 +.Lbige_addb_9: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9 +.Lbige_addb_10: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10 +.Lbige_addb_11: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 11 +.Lbige_addb_12: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12 +.Lbige_addb_13: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 13 +.Lbige_addb_14: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 14 +.Lbige_addb_15: + .byte 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 15 +.Lbswap128_mask: + .byte 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 + +#endif /* HAVE_GCC_INLINE_ASM_VAES */ +#endif /* __x86_64__ */ diff --git a/cipher/rijndael-vaes.c b/cipher/rijndael-vaes.c new file mode 100644 index 00000000..636e3e95 --- /dev/null +++ b/cipher/rijndael-vaes.c @@ -0,0 +1,146 @@ +/* VAES/AVX2 accelerated AES for Libgcrypt + * Copyright (C) 2021 Jussi Kivilinna + * + * This file is part of Libgcrypt. + * + * Libgcrypt is free software; you can redistribute it and/or modify + * it under the terms of the GNU Lesser General Public License as + * published by the Free Software Foundation; either version 2.1 of + * the License, or (at your option) any later version. + * + * Libgcrypt is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . 
+ *
+ */
+
+#include <config.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "types.h" /* for byte and u32 typedefs */
+#include "g10lib.h"
+#include "cipher.h"
+#include "bufhelp.h"
+#include "cipher-selftest.h"
+#include "rijndael-internal.h"
+#include "./cipher-internal.h"
+
+
+#ifdef USE_VAES
+
+
+# ifdef HAVE_COMPATIBLE_GCC_WIN64_PLATFORM_AS
+# define ASM_FUNC_ABI __attribute__((sysv_abi))
+# else
+# define ASM_FUNC_ABI
+# endif
+
+
+extern void _gcry_aes_aesni_prepare_decryption(RIJNDAEL_context *ctx);
+
+
+extern void _gcry_vaes_avx2_cbc_dec_amd64 (const void *keysched,
+                                           unsigned char *iv,
+                                           void *outbuf_arg,
+                                           const void *inbuf_arg,
+                                           size_t nblocks,
+                                           unsigned int nrounds) ASM_FUNC_ABI;
+
+extern void _gcry_vaes_avx2_cfb_dec_amd64 (const void *keysched,
+                                           unsigned char *iv,
+                                           void *outbuf_arg,
+                                           const void *inbuf_arg,
+                                           size_t nblocks,
+                                           unsigned int nrounds) ASM_FUNC_ABI;
+
+extern void _gcry_vaes_avx2_ctr_enc_amd64 (const void *keysched,
+                                           unsigned char *ctr,
+                                           void *outbuf_arg,
+                                           const void *inbuf_arg,
+                                           size_t nblocks,
+                                           unsigned int nrounds) ASM_FUNC_ABI;
+
+extern void _gcry_vaes_avx2_ocb_crypt_amd64 (const void *keysched,
+                                             unsigned int blkn,
+                                             void *outbuf_arg,
+                                             const void *inbuf_arg,
+                                             size_t nblocks,
+                                             unsigned int nrounds,
+                                             unsigned char *offset,
+                                             unsigned char *checksum,
+                                             unsigned char *L_table,
+                                             int encrypt) ASM_FUNC_ABI;
+
+
+void
+_gcry_aes_vaes_cbc_dec (RIJNDAEL_context *ctx, unsigned char *iv,
+                        unsigned char *outbuf, const unsigned char *inbuf,
+                        size_t nblocks)
+{
+  const void *keysched = ctx->keyschdec32;
+  unsigned int nrounds = ctx->rounds;
+
+  if (!ctx->decryption_prepared)
+    {
+      _gcry_aes_aesni_prepare_decryption (ctx);
+      ctx->decryption_prepared = 1;
+    }
+
+  _gcry_vaes_avx2_cbc_dec_amd64 (keysched, iv, outbuf, inbuf, nblocks, nrounds);
+}
+
+void
+_gcry_aes_vaes_cfb_dec (RIJNDAEL_context *ctx, unsigned char *iv,
+                        unsigned char *outbuf, const unsigned char *inbuf,
+                        size_t nblocks)
+{
+  const void *keysched = ctx->keyschenc32;
+  unsigned int nrounds = ctx->rounds;
+
+  _gcry_vaes_avx2_cfb_dec_amd64 (keysched, iv, outbuf, inbuf, nblocks, nrounds);
+}
+
+void
+_gcry_aes_vaes_ctr_enc (RIJNDAEL_context *ctx, unsigned char *iv,
+                        unsigned char *outbuf, const unsigned char *inbuf,
+                        size_t nblocks)
+{
+  const void *keysched = ctx->keyschenc32;
+  unsigned int nrounds = ctx->rounds;
+
+  _gcry_vaes_avx2_ctr_enc_amd64 (keysched, iv, outbuf, inbuf, nblocks, nrounds);
+}
+
+size_t
+_gcry_aes_vaes_ocb_crypt (gcry_cipher_hd_t c, void *outbuf_arg,
+                          const void *inbuf_arg, size_t nblocks,
+                          int encrypt)
+{
+  RIJNDAEL_context *ctx = (void *)&c->context.c;
+  const void *keysched = encrypt ?
ctx->keyschenc32 : ctx->keyschdec32; + unsigned char *outbuf = outbuf_arg; + const unsigned char *inbuf = inbuf_arg; + unsigned int nrounds = ctx->rounds; + u64 blkn = c->u_mode.ocb.data_nblocks; + + if (!encrypt && !ctx->decryption_prepared) + { + _gcry_aes_aesni_prepare_decryption (ctx); + ctx->decryption_prepared = 1; + } + + c->u_mode.ocb.data_nblocks = blkn + nblocks; + + _gcry_vaes_avx2_ocb_crypt_amd64 (keysched, (unsigned int)blkn, outbuf, inbuf, + nblocks, nrounds, c->u_iv.iv, c->u_ctr.ctr, + c->u_mode.ocb.L[0], encrypt); + + return 0; +} + +#endif /* USE_VAES */ diff --git a/cipher/rijndael.c b/cipher/rijndael.c index fe137327..dd03bd6a 100644 --- a/cipher/rijndael.c +++ b/cipher/rijndael.c @@ -102,6 +102,23 @@ extern void _gcry_aes_aesni_xts_crypt (void *context, unsigned char *tweak, size_t nblocks, int encrypt); #endif +#ifdef USE_VAES +/* VAES (AMD64) accelerated implementation of AES */ + +extern void _gcry_aes_vaes_cfb_dec (void *context, unsigned char *iv, + void *outbuf_arg, const void *inbuf_arg, + size_t nblocks); +extern void _gcry_aes_vaes_cbc_dec (void *context, unsigned char *iv, + void *outbuf_arg, const void *inbuf_arg, + size_t nblocks); +extern void _gcry_aes_vaes_ctr_enc (void *context, unsigned char *ctr, + void *outbuf_arg, const void *inbuf_arg, + size_t nblocks); +extern size_t _gcry_aes_vaes_ocb_crypt (gcry_cipher_hd_t c, void *outbuf_arg, + const void *inbuf_arg, size_t nblocks, + int encrypt); +#endif + #ifdef USE_SSSE3 /* SSSE3 (AMD64) vector permutation implementation of AES */ extern void _gcry_aes_ssse3_do_setkey(RIJNDAEL_context *ctx, const byte *key); @@ -480,6 +497,17 @@ do_setkey (RIJNDAEL_context *ctx, const byte *key, const unsigned keylen, bulk_ops->ocb_crypt = _gcry_aes_aesni_ocb_crypt; bulk_ops->ocb_auth = _gcry_aes_aesni_ocb_auth; bulk_ops->xts_crypt = _gcry_aes_aesni_xts_crypt; + +#ifdef USE_VAES + if ((hwfeatures & HWF_INTEL_VAES) && (hwfeatures & HWF_INTEL_AVX2)) + { + /* Setup VAES bulk encryption routines. 
*/ + bulk_ops->cfb_dec = _gcry_aes_vaes_cfb_dec; + bulk_ops->cbc_dec = _gcry_aes_vaes_cbc_dec; + bulk_ops->ctr_enc = _gcry_aes_vaes_ctr_enc; + bulk_ops->ocb_crypt = _gcry_aes_vaes_ocb_crypt; + } +#endif } #endif #ifdef USE_PADLOCK @@ -1644,7 +1672,11 @@ selftest_basic_256 (void) static const char* selftest_ctr_128 (void) { +#ifdef USE_VAES + const int nblocks = 16+1; +#else const int nblocks = 8+1; +#endif const int blocksize = BLOCKSIZE; const int context_size = sizeof(RIJNDAEL_context); @@ -1658,7 +1690,11 @@ selftest_ctr_128 (void) static const char* selftest_cbc_128 (void) { +#ifdef USE_VAES + const int nblocks = 16+2; +#else const int nblocks = 8+2; +#endif const int blocksize = BLOCKSIZE; const int context_size = sizeof(RIJNDAEL_context); @@ -1672,7 +1708,11 @@ selftest_cbc_128 (void) static const char* selftest_cfb_128 (void) { +#ifdef USE_VAES + const int nblocks = 16+2; +#else const int nblocks = 8+2; +#endif const int blocksize = BLOCKSIZE; const int context_size = sizeof(RIJNDAEL_context); diff --git a/configure.ac b/configure.ac index f74056f2..6e38c27c 100644 --- a/configure.ac +++ b/configure.ac @@ -2563,6 +2563,10 @@ if test "$found" = "1" ; then # Build with the SSSE3 implementation GCRYPT_CIPHERS="$GCRYPT_CIPHERS rijndael-ssse3-amd64.lo" GCRYPT_CIPHERS="$GCRYPT_CIPHERS rijndael-ssse3-amd64-asm.lo" + + # Build with the VAES/AVX2 implementation + GCRYPT_CIPHERS="$GCRYPT_CIPHERS rijndael-vaes.lo" + GCRYPT_CIPHERS="$GCRYPT_CIPHERS rijndael-vaes-avx2-amd64.lo" ;; arm*-*-*) # Build with the assembly implementation -- 2.27.0 From jussi.kivilinna at iki.fi Mon Jan 25 19:47:07 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Mon, 25 Jan 2021 20:47:07 +0200 Subject: [PATCH 1/2] camellia: add x86_64 VAES/AVX2 accelerated implementation Message-ID: <20210125184708.1415112-1-jussi.kivilinna@iki.fi> * cipher/Makefile.am: Add 'camellia-aesni-avx2-amd64.h' and 'camellia-vaes-avx2-amd64.S'. * cipher/camellia-aesni-avx2-amd64.S: New, old content moved to... * cipher/camellia-aesni-avx2-amd64.h: ...here. (IF_AESNI, IF_VAES, FUNC_NAME): New. * cipher/camellia-vaes-avx2-amd64.S: New. * cipher/camellia-glue.c (USE_VAES_AVX2): New. (CAMELLIA_context): New member 'use_vaes_avx2'. (_gcry_camellia_vaes_avx2_ctr_enc, _gcry_camellia_vaes_avx2_cbc_dec) (_gcry_camellia_vaes_avx2_cfb_dec, _gcry_camellia_vaes_avx2_ocb_enc) (_gcry_camellia_vaes_avx2_ocb_dec) (_gcry_camellia_vaes_avx2_ocb_auth): New. (camellia_setkey): Check for HWF_INTEL_VAES. (_gcry_camellia_ctr_enc, _gcry_camellia_cbc_dec) (_gcry_camellia_cfb_dec, _gcry_camellia_ocb_crypt) (_gcry_camellia_ocb_auth): Add USE_VAES_AVX2 code. * configure.ac: Add 'camellia-vaes-avx2-amd64.lo'. (gcry_cv_gcc_inline_asm_vaes): New check. * src/g10lib.h (HWF_INTEL_VAES): New. * src/hwf-x86.c (detect_x86_gnuc): Check for VAES. * src/hwfeatures.c (hwflist): Add "intel-vaes". -- New AMD Ryzen 5000 processors support VAES instructions with 256-bit AVX2 registers. Camellia AES-NI/AVX2 implementation had to split 256-bit vector for AES processing, but now we can use those 256-bit registers directly with VAES. 
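To make that concrete: plain AES-NI provides AESENCLAST only for 128-bit XMM registers, so a 256-bit YMM register holding two blocks has to be split into its halves and re-joined around every AES round instruction, whereas VAES applies the instruction to both 128-bit lanes at once. Below is a minimal intrinsics-level sketch of the difference; it is not part of the patch, and the function names and build flags (e.g. gcc -O2 -maes -mavx2 -mvaes) are illustrative only.

/* Minimal sketch, not from the patch; illustrative names only. */
#include <immintrin.h>

/* AES-NI only: AESENCLAST works on 128-bit registers, so the two blocks
   held in one YMM register must be split and re-joined around the
   instruction. */
static __m256i
last_round_aesni (__m256i state, __m256i roundkey)
{
  __m128i lo = _mm256_castsi256_si128 (state);
  __m128i hi = _mm256_extracti128_si256 (state, 1);
  lo = _mm_aesenclast_si128 (lo, _mm256_castsi256_si128 (roundkey));
  hi = _mm_aesenclast_si128 (hi, _mm256_extracti128_si256 (roundkey, 1));
  return _mm256_inserti128_si256 (_mm256_castsi128_si256 (lo), hi, 1);
}

/* VAES: the same last round on both 128-bit lanes of the YMM register with
   a single instruction (vaesenclast with 256-bit operands). */
static __m256i
last_round_vaes (__m256i state, __m256i roundkey)
{
  return _mm256_aesenclast_epi128 (state, roundkey);
}

The same split/merge pattern (vextracti128 / vaesenclast / vinserti128) can be seen in the removed roundsm32 macro further down in this patch, and it is presumably what the VAES build of the new shared header avoids.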
Benchmarks on AMD Ryzen 5800X: Before (AES-NI/AVX2): CAMELLIA128 | nanosecs/byte mebibytes/sec cycles/byte auto Mhz CBC dec | 0.539 ns/B 1769 MiB/s 2.62 c/B 4852 CFB dec | 0.528 ns/B 1806 MiB/s 2.56 c/B 4852?1 CTR enc | 0.552 ns/B 1728 MiB/s 2.68 c/B 4850 OCB enc | 0.550 ns/B 1734 MiB/s 2.65 c/B 4825 OCB dec | 0.577 ns/B 1653 MiB/s 2.78 c/B 4825 OCB auth | 0.546 ns/B 1747 MiB/s 2.63 c/B 4825 After (VAES/AVX2, CBC-dec ~13%, CFB-dec/CTR/OCB ~20% faster): CAMELLIA128 | nanosecs/byte mebibytes/sec cycles/byte auto Mhz CBC dec | 0.477 ns/B 1999 MiB/s 2.31 c/B 4850 CFB dec | 0.433 ns/B 2201 MiB/s 2.10 c/B 4850 CTR enc | 0.438 ns/B 2176 MiB/s 2.13 c/B 4851 OCB enc | 0.449 ns/B 2122 MiB/s 2.18 c/B 4850 OCB dec | 0.468 ns/B 2038 MiB/s 2.27 c/B 4850 OCB auth | 0.447 ns/B 2131 MiB/s 2.17 c/B 4850 Signed-off-by: Jussi Kivilinna --- cipher/Makefile.am | 1 + cipher/camellia-aesni-avx2-amd64.S | 1762 +-------------------------- cipher/camellia-aesni-avx2-amd64.h | 1794 ++++++++++++++++++++++++++++ cipher/camellia-glue.c | 114 +- cipher/camellia-vaes-avx2-amd64.S | 35 + configure.ac | 26 + src/g10lib.h | 1 + src/hwf-x86.c | 10 +- src/hwfeatures.c | 1 + 9 files changed, 1979 insertions(+), 1765 deletions(-) create mode 100644 cipher/camellia-aesni-avx2-amd64.h create mode 100644 cipher/camellia-vaes-avx2-amd64.S diff --git a/cipher/Makefile.am b/cipher/Makefile.am index 6d3ec35e..62f7ec3e 100644 --- a/cipher/Makefile.am +++ b/cipher/Makefile.am @@ -133,6 +133,7 @@ EXTRA_libcipher_la_SOURCES = \ twofish-avx2-amd64.S \ rfc2268.c \ camellia.c camellia.h camellia-glue.c camellia-aesni-avx-amd64.S \ + camellia-aesni-avx2-amd64.h camellia-vaes-avx2-amd64.S \ camellia-aesni-avx2-amd64.S camellia-arm.S camellia-aarch64.S \ blake2.c \ blake2b-amd64-avx2.S blake2s-amd64-avx.S diff --git a/cipher/camellia-aesni-avx2-amd64.S b/cipher/camellia-aesni-avx2-amd64.S index f620f040..5102d191 100644 --- a/cipher/camellia-aesni-avx2-amd64.S +++ b/cipher/camellia-aesni-avx2-amd64.S @@ -1,6 +1,6 @@ -/* camellia-avx2-aesni-amd64.S - AES-NI/AVX2 implementation of Camellia cipher +/* camellia-aesni-avx2-amd64.S - AES-NI/AVX2 implementation of Camellia cipher * - * Copyright (C) 2013-2015,2020 Jussi Kivilinna + * Copyright (C) 2021 Jussi Kivilinna * * This file is part of Libgcrypt. 
* @@ -25,1758 +25,10 @@ defined(HAVE_COMPATIBLE_GCC_WIN64_PLATFORM_AS)) && \ defined(ENABLE_AESNI_SUPPORT) && defined(ENABLE_AVX2_SUPPORT) -#include "asm-common-amd64.h" +#undef CAMELLIA_VAES_BUILD +#define FUNC_NAME(func) _gcry_camellia_aesni_avx2_ ## func -#define CAMELLIA_TABLE_BYTE_LEN 272 +#include "camellia-aesni-avx2-amd64.h" -/* struct CAMELLIA_context: */ -#define key_table 0 -#define key_bitlength CAMELLIA_TABLE_BYTE_LEN - -/* register macros */ -#define CTX %rdi -#define RIO %r8 - -/********************************************************************** - helper macros - **********************************************************************/ -#define filter_8bit(x, lo_t, hi_t, mask4bit, tmp0) \ - vpand x, mask4bit, tmp0; \ - vpandn x, mask4bit, x; \ - vpsrld $4, x, x; \ - \ - vpshufb tmp0, lo_t, tmp0; \ - vpshufb x, hi_t, x; \ - vpxor tmp0, x, x; - -#define ymm0_x xmm0 -#define ymm1_x xmm1 -#define ymm2_x xmm2 -#define ymm3_x xmm3 -#define ymm4_x xmm4 -#define ymm5_x xmm5 -#define ymm6_x xmm6 -#define ymm7_x xmm7 -#define ymm8_x xmm8 -#define ymm9_x xmm9 -#define ymm10_x xmm10 -#define ymm11_x xmm11 -#define ymm12_x xmm12 -#define ymm13_x xmm13 -#define ymm14_x xmm14 -#define ymm15_x xmm15 - -/********************************************************************** - 32-way camellia - **********************************************************************/ - -/* - * IN: - * x0..x7: byte-sliced AB state - * mem_cd: register pointer storing CD state - * key: index for key material - * OUT: - * x0..x7: new byte-sliced CD state - */ -#define roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, t0, t1, t2, t3, t4, t5, t6, \ - t7, mem_cd, key) \ - /* \ - * S-function with AES subbytes \ - */ \ - vbroadcasti128 .Linv_shift_row rRIP, t4; \ - vpbroadcastd .L0f0f0f0f rRIP, t7; \ - vbroadcasti128 .Lpre_tf_lo_s1 rRIP, t5; \ - vbroadcasti128 .Lpre_tf_hi_s1 rRIP, t6; \ - vbroadcasti128 .Lpre_tf_lo_s4 rRIP, t2; \ - vbroadcasti128 .Lpre_tf_hi_s4 rRIP, t3; \ - \ - /* AES inverse shift rows */ \ - vpshufb t4, x0, x0; \ - vpshufb t4, x7, x7; \ - vpshufb t4, x3, x3; \ - vpshufb t4, x6, x6; \ - vpshufb t4, x2, x2; \ - vpshufb t4, x5, x5; \ - vpshufb t4, x1, x1; \ - vpshufb t4, x4, x4; \ - \ - /* prefilter sboxes 1, 2 and 3 */ \ - /* prefilter sbox 4 */ \ - filter_8bit(x0, t5, t6, t7, t4); \ - filter_8bit(x7, t5, t6, t7, t4); \ - vextracti128 $1, x0, t0##_x; \ - vextracti128 $1, x7, t1##_x; \ - filter_8bit(x3, t2, t3, t7, t4); \ - filter_8bit(x6, t2, t3, t7, t4); \ - vextracti128 $1, x3, t3##_x; \ - vextracti128 $1, x6, t2##_x; \ - filter_8bit(x2, t5, t6, t7, t4); \ - filter_8bit(x5, t5, t6, t7, t4); \ - filter_8bit(x1, t5, t6, t7, t4); \ - filter_8bit(x4, t5, t6, t7, t4); \ - \ - vpxor t4##_x, t4##_x, t4##_x; \ - \ - /* AES subbytes + AES shift rows */ \ - vextracti128 $1, x2, t6##_x; \ - vextracti128 $1, x5, t5##_x; \ - vaesenclast t4##_x, x0##_x, x0##_x; \ - vaesenclast t4##_x, t0##_x, t0##_x; \ - vaesenclast t4##_x, x7##_x, x7##_x; \ - vaesenclast t4##_x, t1##_x, t1##_x; \ - vaesenclast t4##_x, x3##_x, x3##_x; \ - vaesenclast t4##_x, t3##_x, t3##_x; \ - vaesenclast t4##_x, x6##_x, x6##_x; \ - vaesenclast t4##_x, t2##_x, t2##_x; \ - vinserti128 $1, t0##_x, x0, x0; \ - vinserti128 $1, t1##_x, x7, x7; \ - vinserti128 $1, t3##_x, x3, x3; \ - vinserti128 $1, t2##_x, x6, x6; \ - vextracti128 $1, x1, t3##_x; \ - vextracti128 $1, x4, t2##_x; \ - vbroadcasti128 .Lpost_tf_lo_s1 rRIP, t0; \ - vbroadcasti128 .Lpost_tf_hi_s1 rRIP, t1; \ - vaesenclast t4##_x, x2##_x, x2##_x; \ - vaesenclast t4##_x, t6##_x, t6##_x; \ - 
vaesenclast t4##_x, x5##_x, x5##_x; \ - vaesenclast t4##_x, t5##_x, t5##_x; \ - vaesenclast t4##_x, x1##_x, x1##_x; \ - vaesenclast t4##_x, t3##_x, t3##_x; \ - vaesenclast t4##_x, x4##_x, x4##_x; \ - vaesenclast t4##_x, t2##_x, t2##_x; \ - vinserti128 $1, t6##_x, x2, x2; \ - vinserti128 $1, t5##_x, x5, x5; \ - vinserti128 $1, t3##_x, x1, x1; \ - vinserti128 $1, t2##_x, x4, x4; \ - \ - /* postfilter sboxes 1 and 4 */ \ - vbroadcasti128 .Lpost_tf_lo_s3 rRIP, t2; \ - vbroadcasti128 .Lpost_tf_hi_s3 rRIP, t3; \ - filter_8bit(x0, t0, t1, t7, t4); \ - filter_8bit(x7, t0, t1, t7, t4); \ - filter_8bit(x3, t0, t1, t7, t6); \ - filter_8bit(x6, t0, t1, t7, t6); \ - \ - /* postfilter sbox 3 */ \ - vbroadcasti128 .Lpost_tf_lo_s2 rRIP, t4; \ - vbroadcasti128 .Lpost_tf_hi_s2 rRIP, t5; \ - filter_8bit(x2, t2, t3, t7, t6); \ - filter_8bit(x5, t2, t3, t7, t6); \ - \ - vpbroadcastq key, t0; /* higher 64-bit duplicate ignored */ \ - \ - /* postfilter sbox 2 */ \ - filter_8bit(x1, t4, t5, t7, t2); \ - filter_8bit(x4, t4, t5, t7, t2); \ - vpxor t7, t7, t7; \ - \ - vpsrldq $1, t0, t1; \ - vpsrldq $2, t0, t2; \ - vpshufb t7, t1, t1; \ - vpsrldq $3, t0, t3; \ - \ - /* P-function */ \ - vpxor x5, x0, x0; \ - vpxor x6, x1, x1; \ - vpxor x7, x2, x2; \ - vpxor x4, x3, x3; \ - \ - vpshufb t7, t2, t2; \ - vpsrldq $4, t0, t4; \ - vpshufb t7, t3, t3; \ - vpsrldq $5, t0, t5; \ - vpshufb t7, t4, t4; \ - \ - vpxor x2, x4, x4; \ - vpxor x3, x5, x5; \ - vpxor x0, x6, x6; \ - vpxor x1, x7, x7; \ - \ - vpsrldq $6, t0, t6; \ - vpshufb t7, t5, t5; \ - vpshufb t7, t6, t6; \ - \ - vpxor x7, x0, x0; \ - vpxor x4, x1, x1; \ - vpxor x5, x2, x2; \ - vpxor x6, x3, x3; \ - \ - vpxor x3, x4, x4; \ - vpxor x0, x5, x5; \ - vpxor x1, x6, x6; \ - vpxor x2, x7, x7; /* note: high and low parts swapped */ \ - \ - /* Add key material and result to CD (x becomes new CD) */ \ - \ - vpxor t6, x1, x1; \ - vpxor 5 * 32(mem_cd), x1, x1; \ - \ - vpsrldq $7, t0, t6; \ - vpshufb t7, t0, t0; \ - vpshufb t7, t6, t7; \ - \ - vpxor t7, x0, x0; \ - vpxor 4 * 32(mem_cd), x0, x0; \ - \ - vpxor t5, x2, x2; \ - vpxor 6 * 32(mem_cd), x2, x2; \ - \ - vpxor t4, x3, x3; \ - vpxor 7 * 32(mem_cd), x3, x3; \ - \ - vpxor t3, x4, x4; \ - vpxor 0 * 32(mem_cd), x4, x4; \ - \ - vpxor t2, x5, x5; \ - vpxor 1 * 32(mem_cd), x5, x5; \ - \ - vpxor t1, x6, x6; \ - vpxor 2 * 32(mem_cd), x6, x6; \ - \ - vpxor t0, x7, x7; \ - vpxor 3 * 32(mem_cd), x7, x7; - -/* - * IN/OUT: - * x0..x7: byte-sliced AB state preloaded - * mem_ab: byte-sliced AB state in memory - * mem_cb: byte-sliced CD state in memory - */ -#define two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ - y6, y7, mem_ab, mem_cd, i, dir, store_ab) \ - roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ - y6, y7, mem_cd, (key_table + (i) * 8)(CTX)); \ - \ - vmovdqu x0, 4 * 32(mem_cd); \ - vmovdqu x1, 5 * 32(mem_cd); \ - vmovdqu x2, 6 * 32(mem_cd); \ - vmovdqu x3, 7 * 32(mem_cd); \ - vmovdqu x4, 0 * 32(mem_cd); \ - vmovdqu x5, 1 * 32(mem_cd); \ - vmovdqu x6, 2 * 32(mem_cd); \ - vmovdqu x7, 3 * 32(mem_cd); \ - \ - roundsm32(x4, x5, x6, x7, x0, x1, x2, x3, y0, y1, y2, y3, y4, y5, \ - y6, y7, mem_ab, (key_table + ((i) + (dir)) * 8)(CTX)); \ - \ - store_ab(x0, x1, x2, x3, x4, x5, x6, x7, mem_ab); - -#define dummy_store(x0, x1, x2, x3, x4, x5, x6, x7, mem_ab) /* do nothing */ - -#define store_ab_state(x0, x1, x2, x3, x4, x5, x6, x7, mem_ab) \ - /* Store new AB state */ \ - vmovdqu x4, 4 * 32(mem_ab); \ - vmovdqu x5, 5 * 32(mem_ab); \ - vmovdqu x6, 6 * 32(mem_ab); \ - vmovdqu x7, 7 * 32(mem_ab); \ - 
vmovdqu x0, 0 * 32(mem_ab); \ - vmovdqu x1, 1 * 32(mem_ab); \ - vmovdqu x2, 2 * 32(mem_ab); \ - vmovdqu x3, 3 * 32(mem_ab); - -#define enc_rounds32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ - y6, y7, mem_ab, mem_cd, i) \ - two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ - y6, y7, mem_ab, mem_cd, (i) + 2, 1, store_ab_state); \ - two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ - y6, y7, mem_ab, mem_cd, (i) + 4, 1, store_ab_state); \ - two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ - y6, y7, mem_ab, mem_cd, (i) + 6, 1, dummy_store); - -#define dec_rounds32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ - y6, y7, mem_ab, mem_cd, i) \ - two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ - y6, y7, mem_ab, mem_cd, (i) + 7, -1, store_ab_state); \ - two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ - y6, y7, mem_ab, mem_cd, (i) + 5, -1, store_ab_state); \ - two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ - y6, y7, mem_ab, mem_cd, (i) + 3, -1, dummy_store); - -/* - * IN: - * v0..3: byte-sliced 32-bit integers - * OUT: - * v0..3: (IN <<< 1) - */ -#define rol32_1_32(v0, v1, v2, v3, t0, t1, t2, zero) \ - vpcmpgtb v0, zero, t0; \ - vpaddb v0, v0, v0; \ - vpabsb t0, t0; \ - \ - vpcmpgtb v1, zero, t1; \ - vpaddb v1, v1, v1; \ - vpabsb t1, t1; \ - \ - vpcmpgtb v2, zero, t2; \ - vpaddb v2, v2, v2; \ - vpabsb t2, t2; \ - \ - vpor t0, v1, v1; \ - \ - vpcmpgtb v3, zero, t0; \ - vpaddb v3, v3, v3; \ - vpabsb t0, t0; \ - \ - vpor t1, v2, v2; \ - vpor t2, v3, v3; \ - vpor t0, v0, v0; - -/* - * IN: - * r: byte-sliced AB state in memory - * l: byte-sliced CD state in memory - * OUT: - * x0..x7: new byte-sliced CD state - */ -#define fls32(l, l0, l1, l2, l3, l4, l5, l6, l7, r, t0, t1, t2, t3, tt0, \ - tt1, tt2, tt3, kll, klr, krl, krr) \ - /* \ - * t0 = kll; \ - * t0 &= ll; \ - * lr ^= rol32(t0, 1); \ - */ \ - vpbroadcastd kll, t0; /* only lowest 32-bit used */ \ - vpxor tt0, tt0, tt0; \ - vpshufb tt0, t0, t3; \ - vpsrldq $1, t0, t0; \ - vpshufb tt0, t0, t2; \ - vpsrldq $1, t0, t0; \ - vpshufb tt0, t0, t1; \ - vpsrldq $1, t0, t0; \ - vpshufb tt0, t0, t0; \ - \ - vpand l0, t0, t0; \ - vpand l1, t1, t1; \ - vpand l2, t2, t2; \ - vpand l3, t3, t3; \ - \ - rol32_1_32(t3, t2, t1, t0, tt1, tt2, tt3, tt0); \ - \ - vpxor l4, t0, l4; \ - vpbroadcastd krr, t0; /* only lowest 32-bit used */ \ - vmovdqu l4, 4 * 32(l); \ - vpxor l5, t1, l5; \ - vmovdqu l5, 5 * 32(l); \ - vpxor l6, t2, l6; \ - vmovdqu l6, 6 * 32(l); \ - vpxor l7, t3, l7; \ - vmovdqu l7, 7 * 32(l); \ - \ - /* \ - * t2 = krr; \ - * t2 |= rr; \ - * rl ^= t2; \ - */ \ - \ - vpshufb tt0, t0, t3; \ - vpsrldq $1, t0, t0; \ - vpshufb tt0, t0, t2; \ - vpsrldq $1, t0, t0; \ - vpshufb tt0, t0, t1; \ - vpsrldq $1, t0, t0; \ - vpshufb tt0, t0, t0; \ - \ - vpor 4 * 32(r), t0, t0; \ - vpor 5 * 32(r), t1, t1; \ - vpor 6 * 32(r), t2, t2; \ - vpor 7 * 32(r), t3, t3; \ - \ - vpxor 0 * 32(r), t0, t0; \ - vpxor 1 * 32(r), t1, t1; \ - vpxor 2 * 32(r), t2, t2; \ - vpxor 3 * 32(r), t3, t3; \ - vmovdqu t0, 0 * 32(r); \ - vpbroadcastd krl, t0; /* only lowest 32-bit used */ \ - vmovdqu t1, 1 * 32(r); \ - vmovdqu t2, 2 * 32(r); \ - vmovdqu t3, 3 * 32(r); \ - \ - /* \ - * t2 = krl; \ - * t2 &= rl; \ - * rr ^= rol32(t2, 1); \ - */ \ - vpshufb tt0, t0, t3; \ - vpsrldq $1, t0, t0; \ - vpshufb tt0, t0, t2; \ - vpsrldq $1, t0, t0; \ - vpshufb tt0, t0, t1; \ - vpsrldq $1, t0, t0; \ - vpshufb tt0, t0, t0; \ - \ - vpand 0 * 32(r), t0, 
t0; \ - vpand 1 * 32(r), t1, t1; \ - vpand 2 * 32(r), t2, t2; \ - vpand 3 * 32(r), t3, t3; \ - \ - rol32_1_32(t3, t2, t1, t0, tt1, tt2, tt3, tt0); \ - \ - vpxor 4 * 32(r), t0, t0; \ - vpxor 5 * 32(r), t1, t1; \ - vpxor 6 * 32(r), t2, t2; \ - vpxor 7 * 32(r), t3, t3; \ - vmovdqu t0, 4 * 32(r); \ - vpbroadcastd klr, t0; /* only lowest 32-bit used */ \ - vmovdqu t1, 5 * 32(r); \ - vmovdqu t2, 6 * 32(r); \ - vmovdqu t3, 7 * 32(r); \ - \ - /* \ - * t0 = klr; \ - * t0 |= lr; \ - * ll ^= t0; \ - */ \ - \ - vpshufb tt0, t0, t3; \ - vpsrldq $1, t0, t0; \ - vpshufb tt0, t0, t2; \ - vpsrldq $1, t0, t0; \ - vpshufb tt0, t0, t1; \ - vpsrldq $1, t0, t0; \ - vpshufb tt0, t0, t0; \ - \ - vpor l4, t0, t0; \ - vpor l5, t1, t1; \ - vpor l6, t2, t2; \ - vpor l7, t3, t3; \ - \ - vpxor l0, t0, l0; \ - vmovdqu l0, 0 * 32(l); \ - vpxor l1, t1, l1; \ - vmovdqu l1, 1 * 32(l); \ - vpxor l2, t2, l2; \ - vmovdqu l2, 2 * 32(l); \ - vpxor l3, t3, l3; \ - vmovdqu l3, 3 * 32(l); - -#define transpose_4x4(x0, x1, x2, x3, t1, t2) \ - vpunpckhdq x1, x0, t2; \ - vpunpckldq x1, x0, x0; \ - \ - vpunpckldq x3, x2, t1; \ - vpunpckhdq x3, x2, x2; \ - \ - vpunpckhqdq t1, x0, x1; \ - vpunpcklqdq t1, x0, x0; \ - \ - vpunpckhqdq x2, t2, x3; \ - vpunpcklqdq x2, t2, x2; - -#define byteslice_16x16b_fast(a0, b0, c0, d0, a1, b1, c1, d1, a2, b2, c2, d2, \ - a3, b3, c3, d3, st0, st1) \ - vmovdqu d2, st0; \ - vmovdqu d3, st1; \ - transpose_4x4(a0, a1, a2, a3, d2, d3); \ - transpose_4x4(b0, b1, b2, b3, d2, d3); \ - vmovdqu st0, d2; \ - vmovdqu st1, d3; \ - \ - vmovdqu a0, st0; \ - vmovdqu a1, st1; \ - transpose_4x4(c0, c1, c2, c3, a0, a1); \ - transpose_4x4(d0, d1, d2, d3, a0, a1); \ - \ - vbroadcasti128 .Lshufb_16x16b rRIP, a0; \ - vmovdqu st1, a1; \ - vpshufb a0, a2, a2; \ - vpshufb a0, a3, a3; \ - vpshufb a0, b0, b0; \ - vpshufb a0, b1, b1; \ - vpshufb a0, b2, b2; \ - vpshufb a0, b3, b3; \ - vpshufb a0, a1, a1; \ - vpshufb a0, c0, c0; \ - vpshufb a0, c1, c1; \ - vpshufb a0, c2, c2; \ - vpshufb a0, c3, c3; \ - vpshufb a0, d0, d0; \ - vpshufb a0, d1, d1; \ - vpshufb a0, d2, d2; \ - vpshufb a0, d3, d3; \ - vmovdqu d3, st1; \ - vmovdqu st0, d3; \ - vpshufb a0, d3, a0; \ - vmovdqu d2, st0; \ - \ - transpose_4x4(a0, b0, c0, d0, d2, d3); \ - transpose_4x4(a1, b1, c1, d1, d2, d3); \ - vmovdqu st0, d2; \ - vmovdqu st1, d3; \ - \ - vmovdqu b0, st0; \ - vmovdqu b1, st1; \ - transpose_4x4(a2, b2, c2, d2, b0, b1); \ - transpose_4x4(a3, b3, c3, d3, b0, b1); \ - vmovdqu st0, b0; \ - vmovdqu st1, b1; \ - /* does not adjust output bytes inside vectors */ - -/* load blocks to registers and apply pre-whitening */ -#define inpack32_pre(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ - y6, y7, rio, key) \ - vpbroadcastq key, x0; \ - vpshufb .Lpack_bswap rRIP, x0, x0; \ - \ - vpxor 0 * 32(rio), x0, y7; \ - vpxor 1 * 32(rio), x0, y6; \ - vpxor 2 * 32(rio), x0, y5; \ - vpxor 3 * 32(rio), x0, y4; \ - vpxor 4 * 32(rio), x0, y3; \ - vpxor 5 * 32(rio), x0, y2; \ - vpxor 6 * 32(rio), x0, y1; \ - vpxor 7 * 32(rio), x0, y0; \ - vpxor 8 * 32(rio), x0, x7; \ - vpxor 9 * 32(rio), x0, x6; \ - vpxor 10 * 32(rio), x0, x5; \ - vpxor 11 * 32(rio), x0, x4; \ - vpxor 12 * 32(rio), x0, x3; \ - vpxor 13 * 32(rio), x0, x2; \ - vpxor 14 * 32(rio), x0, x1; \ - vpxor 15 * 32(rio), x0, x0; - -/* byteslice pre-whitened blocks and store to temporary memory */ -#define inpack32_post(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ - y6, y7, mem_ab, mem_cd) \ - byteslice_16x16b_fast(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, \ - y4, y5, y6, y7, (mem_ab), (mem_cd)); \ - 
\ - vmovdqu x0, 0 * 32(mem_ab); \ - vmovdqu x1, 1 * 32(mem_ab); \ - vmovdqu x2, 2 * 32(mem_ab); \ - vmovdqu x3, 3 * 32(mem_ab); \ - vmovdqu x4, 4 * 32(mem_ab); \ - vmovdqu x5, 5 * 32(mem_ab); \ - vmovdqu x6, 6 * 32(mem_ab); \ - vmovdqu x7, 7 * 32(mem_ab); \ - vmovdqu y0, 0 * 32(mem_cd); \ - vmovdqu y1, 1 * 32(mem_cd); \ - vmovdqu y2, 2 * 32(mem_cd); \ - vmovdqu y3, 3 * 32(mem_cd); \ - vmovdqu y4, 4 * 32(mem_cd); \ - vmovdqu y5, 5 * 32(mem_cd); \ - vmovdqu y6, 6 * 32(mem_cd); \ - vmovdqu y7, 7 * 32(mem_cd); - -/* de-byteslice, apply post-whitening and store blocks */ -#define outunpack32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, \ - y5, y6, y7, key, stack_tmp0, stack_tmp1) \ - byteslice_16x16b_fast(y0, y4, x0, x4, y1, y5, x1, x5, y2, y6, x2, x6, \ - y3, y7, x3, x7, stack_tmp0, stack_tmp1); \ - \ - vmovdqu x0, stack_tmp0; \ - \ - vpbroadcastq key, x0; \ - vpshufb .Lpack_bswap rRIP, x0, x0; \ - \ - vpxor x0, y7, y7; \ - vpxor x0, y6, y6; \ - vpxor x0, y5, y5; \ - vpxor x0, y4, y4; \ - vpxor x0, y3, y3; \ - vpxor x0, y2, y2; \ - vpxor x0, y1, y1; \ - vpxor x0, y0, y0; \ - vpxor x0, x7, x7; \ - vpxor x0, x6, x6; \ - vpxor x0, x5, x5; \ - vpxor x0, x4, x4; \ - vpxor x0, x3, x3; \ - vpxor x0, x2, x2; \ - vpxor x0, x1, x1; \ - vpxor stack_tmp0, x0, x0; - -#define write_output(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ - y6, y7, rio) \ - vmovdqu x0, 0 * 32(rio); \ - vmovdqu x1, 1 * 32(rio); \ - vmovdqu x2, 2 * 32(rio); \ - vmovdqu x3, 3 * 32(rio); \ - vmovdqu x4, 4 * 32(rio); \ - vmovdqu x5, 5 * 32(rio); \ - vmovdqu x6, 6 * 32(rio); \ - vmovdqu x7, 7 * 32(rio); \ - vmovdqu y0, 8 * 32(rio); \ - vmovdqu y1, 9 * 32(rio); \ - vmovdqu y2, 10 * 32(rio); \ - vmovdqu y3, 11 * 32(rio); \ - vmovdqu y4, 12 * 32(rio); \ - vmovdqu y5, 13 * 32(rio); \ - vmovdqu y6, 14 * 32(rio); \ - vmovdqu y7, 15 * 32(rio); - -.text -.align 32 - -#define SHUFB_BYTES(idx) \ - 0 + (idx), 4 + (idx), 8 + (idx), 12 + (idx) - -.Lshufb_16x16b: - .byte SHUFB_BYTES(0), SHUFB_BYTES(1), SHUFB_BYTES(2), SHUFB_BYTES(3) - .byte SHUFB_BYTES(0), SHUFB_BYTES(1), SHUFB_BYTES(2), SHUFB_BYTES(3) - -.Lpack_bswap: - .long 0x00010203, 0x04050607, 0x80808080, 0x80808080 - .long 0x00010203, 0x04050607, 0x80808080, 0x80808080 - -/* For CTR-mode IV byteswap */ -.Lbswap128_mask: - .byte 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 - -/* - * pre-SubByte transform - * - * pre-lookup for sbox1, sbox2, sbox3: - * swap_bitendianness( - * isom_map_camellia_to_aes( - * camellia_f( - * swap_bitendianess(in) - * ) - * ) - * ) - * - * (note: '? 0xc5' inside camellia_f()) - */ -.Lpre_tf_lo_s1: - .byte 0x45, 0xe8, 0x40, 0xed, 0x2e, 0x83, 0x2b, 0x86 - .byte 0x4b, 0xe6, 0x4e, 0xe3, 0x20, 0x8d, 0x25, 0x88 -.Lpre_tf_hi_s1: - .byte 0x00, 0x51, 0xf1, 0xa0, 0x8a, 0xdb, 0x7b, 0x2a - .byte 0x09, 0x58, 0xf8, 0xa9, 0x83, 0xd2, 0x72, 0x23 - -/* - * pre-SubByte transform - * - * pre-lookup for sbox4: - * swap_bitendianness( - * isom_map_camellia_to_aes( - * camellia_f( - * swap_bitendianess(in <<< 1) - * ) - * ) - * ) - * - * (note: '? 
0xc5' inside camellia_f()) - */ -.Lpre_tf_lo_s4: - .byte 0x45, 0x40, 0x2e, 0x2b, 0x4b, 0x4e, 0x20, 0x25 - .byte 0x14, 0x11, 0x7f, 0x7a, 0x1a, 0x1f, 0x71, 0x74 -.Lpre_tf_hi_s4: - .byte 0x00, 0xf1, 0x8a, 0x7b, 0x09, 0xf8, 0x83, 0x72 - .byte 0xad, 0x5c, 0x27, 0xd6, 0xa4, 0x55, 0x2e, 0xdf - -/* - * post-SubByte transform - * - * post-lookup for sbox1, sbox4: - * swap_bitendianness( - * camellia_h( - * isom_map_aes_to_camellia( - * swap_bitendianness( - * aes_inverse_affine_transform(in) - * ) - * ) - * ) - * ) - * - * (note: '? 0x6e' inside camellia_h()) - */ -.Lpost_tf_lo_s1: - .byte 0x3c, 0xcc, 0xcf, 0x3f, 0x32, 0xc2, 0xc1, 0x31 - .byte 0xdc, 0x2c, 0x2f, 0xdf, 0xd2, 0x22, 0x21, 0xd1 -.Lpost_tf_hi_s1: - .byte 0x00, 0xf9, 0x86, 0x7f, 0xd7, 0x2e, 0x51, 0xa8 - .byte 0xa4, 0x5d, 0x22, 0xdb, 0x73, 0x8a, 0xf5, 0x0c - -/* - * post-SubByte transform - * - * post-lookup for sbox2: - * swap_bitendianness( - * camellia_h( - * isom_map_aes_to_camellia( - * swap_bitendianness( - * aes_inverse_affine_transform(in) - * ) - * ) - * ) - * ) <<< 1 - * - * (note: '? 0x6e' inside camellia_h()) - */ -.Lpost_tf_lo_s2: - .byte 0x78, 0x99, 0x9f, 0x7e, 0x64, 0x85, 0x83, 0x62 - .byte 0xb9, 0x58, 0x5e, 0xbf, 0xa5, 0x44, 0x42, 0xa3 -.Lpost_tf_hi_s2: - .byte 0x00, 0xf3, 0x0d, 0xfe, 0xaf, 0x5c, 0xa2, 0x51 - .byte 0x49, 0xba, 0x44, 0xb7, 0xe6, 0x15, 0xeb, 0x18 - -/* - * post-SubByte transform - * - * post-lookup for sbox3: - * swap_bitendianness( - * camellia_h( - * isom_map_aes_to_camellia( - * swap_bitendianness( - * aes_inverse_affine_transform(in) - * ) - * ) - * ) - * ) >>> 1 - * - * (note: '? 0x6e' inside camellia_h()) - */ -.Lpost_tf_lo_s3: - .byte 0x1e, 0x66, 0xe7, 0x9f, 0x19, 0x61, 0xe0, 0x98 - .byte 0x6e, 0x16, 0x97, 0xef, 0x69, 0x11, 0x90, 0xe8 -.Lpost_tf_hi_s3: - .byte 0x00, 0xfc, 0x43, 0xbf, 0xeb, 0x17, 0xa8, 0x54 - .byte 0x52, 0xae, 0x11, 0xed, 0xb9, 0x45, 0xfa, 0x06 - -/* For isolating SubBytes from AESENCLAST, inverse shift row */ -.Linv_shift_row: - .byte 0x00, 0x0d, 0x0a, 0x07, 0x04, 0x01, 0x0e, 0x0b - .byte 0x08, 0x05, 0x02, 0x0f, 0x0c, 0x09, 0x06, 0x03 - -.align 4 -/* 4-bit mask */ -.L0f0f0f0f: - .long 0x0f0f0f0f - - -.align 8 -ELF(.type __camellia_enc_blk32, at function;) - -__camellia_enc_blk32: - /* input: - * %rdi: ctx, CTX - * %rax: temporary storage, 512 bytes - * %r8d: 24 for 16 byte key, 32 for larger - * %ymm0..%ymm15: 32 plaintext blocks - * output: - * %ymm0..%ymm15: 32 encrypted blocks, order swapped: - * 7, 8, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8 - */ - CFI_STARTPROC(); - - leaq 8 * 32(%rax), %rcx; - - leaq (-8 * 8)(CTX, %r8, 8), %r8; - - inpack32_post(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, - %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, - %ymm15, %rax, %rcx); - -.align 8 -.Lenc_loop: - enc_rounds32(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, - %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, - %ymm15, %rax, %rcx, 0); - - cmpq %r8, CTX; - je .Lenc_done; - leaq (8 * 8)(CTX), CTX; - - fls32(%rax, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, - %rcx, %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, - %ymm15, - ((key_table) + 0)(CTX), - ((key_table) + 4)(CTX), - ((key_table) + 8)(CTX), - ((key_table) + 12)(CTX)); - jmp .Lenc_loop; - -.align 8 -.Lenc_done: - /* load CD for output */ - vmovdqu 0 * 32(%rcx), %ymm8; - vmovdqu 1 * 32(%rcx), %ymm9; - vmovdqu 2 * 32(%rcx), %ymm10; - vmovdqu 3 * 32(%rcx), %ymm11; - vmovdqu 4 * 32(%rcx), %ymm12; - vmovdqu 5 * 32(%rcx), %ymm13; - vmovdqu 6 * 32(%rcx), %ymm14; - vmovdqu 7 * 32(%rcx), 
%ymm15; - - outunpack32(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, - %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, - %ymm15, ((key_table) + 8 * 8)(%r8), (%rax), 1 * 32(%rax)); - - ret; - CFI_ENDPROC(); -ELF(.size __camellia_enc_blk32,.-__camellia_enc_blk32;) - -.align 8 -ELF(.type __camellia_dec_blk32, at function;) - -__camellia_dec_blk32: - /* input: - * %rdi: ctx, CTX - * %rax: temporary storage, 512 bytes - * %r8d: 24 for 16 byte key, 32 for larger - * %ymm0..%ymm15: 16 encrypted blocks - * output: - * %ymm0..%ymm15: 16 plaintext blocks, order swapped: - * 7, 8, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8 - */ - CFI_STARTPROC(); - - movq %r8, %rcx; - movq CTX, %r8 - leaq (-8 * 8)(CTX, %rcx, 8), CTX; - - leaq 8 * 32(%rax), %rcx; - - inpack32_post(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, - %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, - %ymm15, %rax, %rcx); - -.align 8 -.Ldec_loop: - dec_rounds32(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, - %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, - %ymm15, %rax, %rcx, 0); - - cmpq %r8, CTX; - je .Ldec_done; - - fls32(%rax, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, - %rcx, %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, - %ymm15, - ((key_table) + 8)(CTX), - ((key_table) + 12)(CTX), - ((key_table) + 0)(CTX), - ((key_table) + 4)(CTX)); - - leaq (-8 * 8)(CTX), CTX; - jmp .Ldec_loop; - -.align 8 -.Ldec_done: - /* load CD for output */ - vmovdqu 0 * 32(%rcx), %ymm8; - vmovdqu 1 * 32(%rcx), %ymm9; - vmovdqu 2 * 32(%rcx), %ymm10; - vmovdqu 3 * 32(%rcx), %ymm11; - vmovdqu 4 * 32(%rcx), %ymm12; - vmovdqu 5 * 32(%rcx), %ymm13; - vmovdqu 6 * 32(%rcx), %ymm14; - vmovdqu 7 * 32(%rcx), %ymm15; - - outunpack32(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, - %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, - %ymm15, (key_table)(CTX), (%rax), 1 * 32(%rax)); - - ret; - CFI_ENDPROC(); -ELF(.size __camellia_dec_blk32,.-__camellia_dec_blk32;) - -#define inc_le128(x, minus_one, tmp) \ - vpcmpeqq minus_one, x, tmp; \ - vpsubq minus_one, x, x; \ - vpslldq $8, tmp, tmp; \ - vpsubq tmp, x, x; - -.align 8 -.globl _gcry_camellia_aesni_avx2_ctr_enc -ELF(.type _gcry_camellia_aesni_avx2_ctr_enc, at function;) - -_gcry_camellia_aesni_avx2_ctr_enc: - /* input: - * %rdi: ctx, CTX - * %rsi: dst (32 blocks) - * %rdx: src (32 blocks) - * %rcx: iv (big endian, 128bit) - */ - CFI_STARTPROC(); - - pushq %rbp; - CFI_PUSH(%rbp); - movq %rsp, %rbp; - CFI_DEF_CFA_REGISTER(%rbp); - - movq 8(%rcx), %r11; - bswapq %r11; - - vzeroupper; - - cmpl $128, key_bitlength(CTX); - movl $32, %r8d; - movl $24, %eax; - cmovel %eax, %r8d; /* max */ - - subq $(16 * 32), %rsp; - andq $~63, %rsp; - movq %rsp, %rax; - - vpcmpeqd %ymm15, %ymm15, %ymm15; - vpsrldq $8, %ymm15, %ymm15; /* ab: -1:0 ; cd: -1:0 */ - - /* load IV and byteswap */ - vmovdqu (%rcx), %xmm0; - vpshufb .Lbswap128_mask rRIP, %xmm0, %xmm0; - vmovdqa %xmm0, %xmm1; - inc_le128(%xmm0, %xmm15, %xmm14); - vbroadcasti128 .Lbswap128_mask rRIP, %ymm14; - vinserti128 $1, %xmm0, %ymm1, %ymm0; - vpshufb %ymm14, %ymm0, %ymm13; - vmovdqu %ymm13, 15 * 32(%rax); - - /* check need for handling 64-bit overflow and carry */ - cmpq $(0xffffffffffffffff - 32), %r11; - ja .Lload_ctr_carry; - - /* construct IVs */ - vpaddq %ymm15, %ymm15, %ymm15; /* ab: -2:0 ; cd: -2:0 */ - vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm13; - vmovdqu %ymm13, 14 * 32(%rax); - vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm13; - vmovdqu %ymm13, 13 * 32(%rax); 
- vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm12; - vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm11; - vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm10; - vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm9; - vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm8; - vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm7; - vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm6; - vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm5; - vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm4; - vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm3; - vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm2; - vpsubq %ymm15, %ymm0, %ymm0; - vpshufb %ymm14, %ymm0, %ymm1; - vpsubq %ymm15, %ymm0, %ymm0; /* +30 ; +31 */ - vpsubq %xmm15, %xmm0, %xmm13; /* +32 */ - vpshufb %ymm14, %ymm0, %ymm0; - vpshufb %xmm14, %xmm13, %xmm13; - vmovdqu %xmm13, (%rcx); - - jmp .Lload_ctr_done; - -.align 4 -.Lload_ctr_carry: - /* construct IVs */ - inc_le128(%ymm0, %ymm15, %ymm13); /* ab: le1 ; cd: le2 */ - inc_le128(%ymm0, %ymm15, %ymm13); /* ab: le2 ; cd: le3 */ - vpshufb %ymm14, %ymm0, %ymm13; - vmovdqu %ymm13, 14 * 32(%rax); - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vpshufb %ymm14, %ymm0, %ymm13; - vmovdqu %ymm13, 13 * 32(%rax); - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vpshufb %ymm14, %ymm0, %ymm12; - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vpshufb %ymm14, %ymm0, %ymm11; - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vpshufb %ymm14, %ymm0, %ymm10; - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vpshufb %ymm14, %ymm0, %ymm9; - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vpshufb %ymm14, %ymm0, %ymm8; - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vpshufb %ymm14, %ymm0, %ymm7; - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vpshufb %ymm14, %ymm0, %ymm6; - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vpshufb %ymm14, %ymm0, %ymm5; - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vpshufb %ymm14, %ymm0, %ymm4; - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vpshufb %ymm14, %ymm0, %ymm3; - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vpshufb %ymm14, %ymm0, %ymm2; - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vpshufb %ymm14, %ymm0, %ymm1; - inc_le128(%ymm0, %ymm15, %ymm13); - inc_le128(%ymm0, %ymm15, %ymm13); - vextracti128 $1, %ymm0, %xmm13; - vpshufb %ymm14, %ymm0, %ymm0; - inc_le128(%xmm13, %xmm15, %xmm14); - vpshufb .Lbswap128_mask rRIP, %xmm13, %xmm13; - vmovdqu %xmm13, (%rcx); - -.align 4 -.Lload_ctr_done: - /* inpack16_pre: */ - vpbroadcastq (key_table)(CTX), %ymm15; - vpshufb .Lpack_bswap rRIP, %ymm15, %ymm15; - vpxor %ymm0, %ymm15, %ymm0; - vpxor %ymm1, %ymm15, %ymm1; - vpxor %ymm2, %ymm15, %ymm2; - vpxor %ymm3, %ymm15, %ymm3; - vpxor %ymm4, %ymm15, %ymm4; - vpxor %ymm5, %ymm15, %ymm5; - vpxor %ymm6, %ymm15, %ymm6; - vpxor %ymm7, %ymm15, %ymm7; - vpxor %ymm8, %ymm15, %ymm8; - vpxor %ymm9, %ymm15, %ymm9; - vpxor %ymm10, %ymm15, %ymm10; - vpxor %ymm11, %ymm15, %ymm11; - vpxor %ymm12, %ymm15, %ymm12; - vpxor 13 * 32(%rax), %ymm15, %ymm13; - vpxor 14 * 32(%rax), %ymm15, %ymm14; - vpxor 15 * 32(%rax), %ymm15, %ymm15; - - call __camellia_enc_blk32; - - vpxor 0 * 32(%rdx), %ymm7, 
%ymm7; - vpxor 1 * 32(%rdx), %ymm6, %ymm6; - vpxor 2 * 32(%rdx), %ymm5, %ymm5; - vpxor 3 * 32(%rdx), %ymm4, %ymm4; - vpxor 4 * 32(%rdx), %ymm3, %ymm3; - vpxor 5 * 32(%rdx), %ymm2, %ymm2; - vpxor 6 * 32(%rdx), %ymm1, %ymm1; - vpxor 7 * 32(%rdx), %ymm0, %ymm0; - vpxor 8 * 32(%rdx), %ymm15, %ymm15; - vpxor 9 * 32(%rdx), %ymm14, %ymm14; - vpxor 10 * 32(%rdx), %ymm13, %ymm13; - vpxor 11 * 32(%rdx), %ymm12, %ymm12; - vpxor 12 * 32(%rdx), %ymm11, %ymm11; - vpxor 13 * 32(%rdx), %ymm10, %ymm10; - vpxor 14 * 32(%rdx), %ymm9, %ymm9; - vpxor 15 * 32(%rdx), %ymm8, %ymm8; - leaq 32 * 16(%rdx), %rdx; - - write_output(%ymm7, %ymm6, %ymm5, %ymm4, %ymm3, %ymm2, %ymm1, %ymm0, - %ymm15, %ymm14, %ymm13, %ymm12, %ymm11, %ymm10, %ymm9, - %ymm8, %rsi); - - vzeroall; - - leave; - CFI_LEAVE(); - ret; - CFI_ENDPROC(); -ELF(.size _gcry_camellia_aesni_avx2_ctr_enc,.-_gcry_camellia_aesni_avx2_ctr_enc;) - -.align 8 -.globl _gcry_camellia_aesni_avx2_cbc_dec -ELF(.type _gcry_camellia_aesni_avx2_cbc_dec, at function;) - -_gcry_camellia_aesni_avx2_cbc_dec: - /* input: - * %rdi: ctx, CTX - * %rsi: dst (32 blocks) - * %rdx: src (32 blocks) - * %rcx: iv - */ - CFI_STARTPROC(); - - pushq %rbp; - CFI_PUSH(%rbp); - movq %rsp, %rbp; - CFI_DEF_CFA_REGISTER(%rbp); - - vzeroupper; - - movq %rcx, %r9; - - cmpl $128, key_bitlength(CTX); - movl $32, %r8d; - movl $24, %eax; - cmovel %eax, %r8d; /* max */ - - subq $(16 * 32), %rsp; - andq $~63, %rsp; - movq %rsp, %rax; - - inpack32_pre(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, - %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, - %ymm15, %rdx, (key_table)(CTX, %r8, 8)); - - call __camellia_dec_blk32; - - /* XOR output with IV */ - vmovdqu %ymm8, (%rax); - vmovdqu (%r9), %xmm8; - vinserti128 $1, (%rdx), %ymm8, %ymm8; - vpxor %ymm8, %ymm7, %ymm7; - vmovdqu (%rax), %ymm8; - vpxor (0 * 32 + 16)(%rdx), %ymm6, %ymm6; - vpxor (1 * 32 + 16)(%rdx), %ymm5, %ymm5; - vpxor (2 * 32 + 16)(%rdx), %ymm4, %ymm4; - vpxor (3 * 32 + 16)(%rdx), %ymm3, %ymm3; - vpxor (4 * 32 + 16)(%rdx), %ymm2, %ymm2; - vpxor (5 * 32 + 16)(%rdx), %ymm1, %ymm1; - vpxor (6 * 32 + 16)(%rdx), %ymm0, %ymm0; - vpxor (7 * 32 + 16)(%rdx), %ymm15, %ymm15; - vpxor (8 * 32 + 16)(%rdx), %ymm14, %ymm14; - vpxor (9 * 32 + 16)(%rdx), %ymm13, %ymm13; - vpxor (10 * 32 + 16)(%rdx), %ymm12, %ymm12; - vpxor (11 * 32 + 16)(%rdx), %ymm11, %ymm11; - vpxor (12 * 32 + 16)(%rdx), %ymm10, %ymm10; - vpxor (13 * 32 + 16)(%rdx), %ymm9, %ymm9; - vpxor (14 * 32 + 16)(%rdx), %ymm8, %ymm8; - movq (15 * 32 + 16 + 0)(%rdx), %rax; - movq (15 * 32 + 16 + 8)(%rdx), %rcx; - - write_output(%ymm7, %ymm6, %ymm5, %ymm4, %ymm3, %ymm2, %ymm1, %ymm0, - %ymm15, %ymm14, %ymm13, %ymm12, %ymm11, %ymm10, %ymm9, - %ymm8, %rsi); - - /* store new IV */ - movq %rax, (0)(%r9); - movq %rcx, (8)(%r9); - - vzeroall; - - leave; - CFI_LEAVE(); - ret; - CFI_ENDPROC(); -ELF(.size _gcry_camellia_aesni_avx2_cbc_dec,.-_gcry_camellia_aesni_avx2_cbc_dec;) - -.align 8 -.globl _gcry_camellia_aesni_avx2_cfb_dec -ELF(.type _gcry_camellia_aesni_avx2_cfb_dec, at function;) - -_gcry_camellia_aesni_avx2_cfb_dec: - /* input: - * %rdi: ctx, CTX - * %rsi: dst (32 blocks) - * %rdx: src (32 blocks) - * %rcx: iv - */ - CFI_STARTPROC(); - - pushq %rbp; - CFI_PUSH(%rbp); - movq %rsp, %rbp; - CFI_DEF_CFA_REGISTER(%rbp); - - vzeroupper; - - cmpl $128, key_bitlength(CTX); - movl $32, %r8d; - movl $24, %eax; - cmovel %eax, %r8d; /* max */ - - subq $(16 * 32), %rsp; - andq $~63, %rsp; - movq %rsp, %rax; - - /* inpack16_pre: */ - vpbroadcastq (key_table)(CTX), %ymm0; - vpshufb .Lpack_bswap rRIP, 
%ymm0, %ymm0; - vmovdqu (%rcx), %xmm15; - vinserti128 $1, (%rdx), %ymm15, %ymm15; - vpxor %ymm15, %ymm0, %ymm15; - vmovdqu (15 * 32 + 16)(%rdx), %xmm1; - vmovdqu %xmm1, (%rcx); /* store new IV */ - vpxor (0 * 32 + 16)(%rdx), %ymm0, %ymm14; - vpxor (1 * 32 + 16)(%rdx), %ymm0, %ymm13; - vpxor (2 * 32 + 16)(%rdx), %ymm0, %ymm12; - vpxor (3 * 32 + 16)(%rdx), %ymm0, %ymm11; - vpxor (4 * 32 + 16)(%rdx), %ymm0, %ymm10; - vpxor (5 * 32 + 16)(%rdx), %ymm0, %ymm9; - vpxor (6 * 32 + 16)(%rdx), %ymm0, %ymm8; - vpxor (7 * 32 + 16)(%rdx), %ymm0, %ymm7; - vpxor (8 * 32 + 16)(%rdx), %ymm0, %ymm6; - vpxor (9 * 32 + 16)(%rdx), %ymm0, %ymm5; - vpxor (10 * 32 + 16)(%rdx), %ymm0, %ymm4; - vpxor (11 * 32 + 16)(%rdx), %ymm0, %ymm3; - vpxor (12 * 32 + 16)(%rdx), %ymm0, %ymm2; - vpxor (13 * 32 + 16)(%rdx), %ymm0, %ymm1; - vpxor (14 * 32 + 16)(%rdx), %ymm0, %ymm0; - - call __camellia_enc_blk32; - - vpxor 0 * 32(%rdx), %ymm7, %ymm7; - vpxor 1 * 32(%rdx), %ymm6, %ymm6; - vpxor 2 * 32(%rdx), %ymm5, %ymm5; - vpxor 3 * 32(%rdx), %ymm4, %ymm4; - vpxor 4 * 32(%rdx), %ymm3, %ymm3; - vpxor 5 * 32(%rdx), %ymm2, %ymm2; - vpxor 6 * 32(%rdx), %ymm1, %ymm1; - vpxor 7 * 32(%rdx), %ymm0, %ymm0; - vpxor 8 * 32(%rdx), %ymm15, %ymm15; - vpxor 9 * 32(%rdx), %ymm14, %ymm14; - vpxor 10 * 32(%rdx), %ymm13, %ymm13; - vpxor 11 * 32(%rdx), %ymm12, %ymm12; - vpxor 12 * 32(%rdx), %ymm11, %ymm11; - vpxor 13 * 32(%rdx), %ymm10, %ymm10; - vpxor 14 * 32(%rdx), %ymm9, %ymm9; - vpxor 15 * 32(%rdx), %ymm8, %ymm8; - - write_output(%ymm7, %ymm6, %ymm5, %ymm4, %ymm3, %ymm2, %ymm1, %ymm0, - %ymm15, %ymm14, %ymm13, %ymm12, %ymm11, %ymm10, %ymm9, - %ymm8, %rsi); - - vzeroall; - - leave; - CFI_LEAVE(); - ret; - CFI_ENDPROC(); -ELF(.size _gcry_camellia_aesni_avx2_cfb_dec,.-_gcry_camellia_aesni_avx2_cfb_dec;) - -.align 8 -.globl _gcry_camellia_aesni_avx2_ocb_enc -ELF(.type _gcry_camellia_aesni_avx2_ocb_enc, at function;) - -_gcry_camellia_aesni_avx2_ocb_enc: - /* input: - * %rdi: ctx, CTX - * %rsi: dst (32 blocks) - * %rdx: src (32 blocks) - * %rcx: offset - * %r8 : checksum - * %r9 : L pointers (void *L[32]) - */ - CFI_STARTPROC(); - - pushq %rbp; - CFI_PUSH(%rbp); - movq %rsp, %rbp; - CFI_DEF_CFA_REGISTER(%rbp); - - vzeroupper; - - subq $(16 * 32 + 4 * 8), %rsp; - andq $~63, %rsp; - movq %rsp, %rax; - - movq %r10, (16 * 32 + 0 * 8)(%rsp); - movq %r11, (16 * 32 + 1 * 8)(%rsp); - movq %r12, (16 * 32 + 2 * 8)(%rsp); - movq %r13, (16 * 32 + 3 * 8)(%rsp); - CFI_REG_ON_STACK(r10, 16 * 32 + 0 * 8); - CFI_REG_ON_STACK(r11, 16 * 32 + 1 * 8); - CFI_REG_ON_STACK(r12, 16 * 32 + 2 * 8); - CFI_REG_ON_STACK(r13, 16 * 32 + 3 * 8); - - vmovdqu (%rcx), %xmm14; - vmovdqu (%r8), %xmm13; - - /* Offset_i = Offset_{i-1} xor L_{ntz(i)} */ - /* Checksum_i = Checksum_{i-1} xor P_i */ - /* C_i = Offset_i xor ENCIPHER(K, P_i xor Offset_i) */ - -#define OCB_INPUT(n, l0reg, l1reg, yreg) \ - vmovdqu (n * 32)(%rdx), yreg; \ - vpxor (l0reg), %xmm14, %xmm15; \ - vpxor (l1reg), %xmm15, %xmm14; \ - vinserti128 $1, %xmm14, %ymm15, %ymm15; \ - vpxor yreg, %ymm13, %ymm13; \ - vpxor yreg, %ymm15, yreg; \ - vmovdqu %ymm15, (n * 32)(%rsi); - - movq (0 * 8)(%r9), %r10; - movq (1 * 8)(%r9), %r11; - movq (2 * 8)(%r9), %r12; - movq (3 * 8)(%r9), %r13; - OCB_INPUT(0, %r10, %r11, %ymm0); - vmovdqu %ymm0, (15 * 32)(%rax); - OCB_INPUT(1, %r12, %r13, %ymm0); - vmovdqu %ymm0, (14 * 32)(%rax); - movq (4 * 8)(%r9), %r10; - movq (5 * 8)(%r9), %r11; - movq (6 * 8)(%r9), %r12; - movq (7 * 8)(%r9), %r13; - OCB_INPUT(2, %r10, %r11, %ymm0); - vmovdqu %ymm0, (13 * 32)(%rax); - OCB_INPUT(3, %r12, %r13, %ymm12); - 
movq (8 * 8)(%r9), %r10; - movq (9 * 8)(%r9), %r11; - movq (10 * 8)(%r9), %r12; - movq (11 * 8)(%r9), %r13; - OCB_INPUT(4, %r10, %r11, %ymm11); - OCB_INPUT(5, %r12, %r13, %ymm10); - movq (12 * 8)(%r9), %r10; - movq (13 * 8)(%r9), %r11; - movq (14 * 8)(%r9), %r12; - movq (15 * 8)(%r9), %r13; - OCB_INPUT(6, %r10, %r11, %ymm9); - OCB_INPUT(7, %r12, %r13, %ymm8); - movq (16 * 8)(%r9), %r10; - movq (17 * 8)(%r9), %r11; - movq (18 * 8)(%r9), %r12; - movq (19 * 8)(%r9), %r13; - OCB_INPUT(8, %r10, %r11, %ymm7); - OCB_INPUT(9, %r12, %r13, %ymm6); - movq (20 * 8)(%r9), %r10; - movq (21 * 8)(%r9), %r11; - movq (22 * 8)(%r9), %r12; - movq (23 * 8)(%r9), %r13; - OCB_INPUT(10, %r10, %r11, %ymm5); - OCB_INPUT(11, %r12, %r13, %ymm4); - movq (24 * 8)(%r9), %r10; - movq (25 * 8)(%r9), %r11; - movq (26 * 8)(%r9), %r12; - movq (27 * 8)(%r9), %r13; - OCB_INPUT(12, %r10, %r11, %ymm3); - OCB_INPUT(13, %r12, %r13, %ymm2); - movq (28 * 8)(%r9), %r10; - movq (29 * 8)(%r9), %r11; - movq (30 * 8)(%r9), %r12; - movq (31 * 8)(%r9), %r13; - OCB_INPUT(14, %r10, %r11, %ymm1); - OCB_INPUT(15, %r12, %r13, %ymm0); -#undef OCB_INPUT - - vextracti128 $1, %ymm13, %xmm15; - vmovdqu %xmm14, (%rcx); - vpxor %xmm13, %xmm15, %xmm15; - vmovdqu %xmm15, (%r8); - - cmpl $128, key_bitlength(CTX); - movl $32, %r8d; - movl $24, %r10d; - cmovel %r10d, %r8d; /* max */ - - /* inpack16_pre: */ - vpbroadcastq (key_table)(CTX), %ymm15; - vpshufb .Lpack_bswap rRIP, %ymm15, %ymm15; - vpxor %ymm0, %ymm15, %ymm0; - vpxor %ymm1, %ymm15, %ymm1; - vpxor %ymm2, %ymm15, %ymm2; - vpxor %ymm3, %ymm15, %ymm3; - vpxor %ymm4, %ymm15, %ymm4; - vpxor %ymm5, %ymm15, %ymm5; - vpxor %ymm6, %ymm15, %ymm6; - vpxor %ymm7, %ymm15, %ymm7; - vpxor %ymm8, %ymm15, %ymm8; - vpxor %ymm9, %ymm15, %ymm9; - vpxor %ymm10, %ymm15, %ymm10; - vpxor %ymm11, %ymm15, %ymm11; - vpxor %ymm12, %ymm15, %ymm12; - vpxor 13 * 32(%rax), %ymm15, %ymm13; - vpxor 14 * 32(%rax), %ymm15, %ymm14; - vpxor 15 * 32(%rax), %ymm15, %ymm15; - - call __camellia_enc_blk32; - - vpxor 0 * 32(%rsi), %ymm7, %ymm7; - vpxor 1 * 32(%rsi), %ymm6, %ymm6; - vpxor 2 * 32(%rsi), %ymm5, %ymm5; - vpxor 3 * 32(%rsi), %ymm4, %ymm4; - vpxor 4 * 32(%rsi), %ymm3, %ymm3; - vpxor 5 * 32(%rsi), %ymm2, %ymm2; - vpxor 6 * 32(%rsi), %ymm1, %ymm1; - vpxor 7 * 32(%rsi), %ymm0, %ymm0; - vpxor 8 * 32(%rsi), %ymm15, %ymm15; - vpxor 9 * 32(%rsi), %ymm14, %ymm14; - vpxor 10 * 32(%rsi), %ymm13, %ymm13; - vpxor 11 * 32(%rsi), %ymm12, %ymm12; - vpxor 12 * 32(%rsi), %ymm11, %ymm11; - vpxor 13 * 32(%rsi), %ymm10, %ymm10; - vpxor 14 * 32(%rsi), %ymm9, %ymm9; - vpxor 15 * 32(%rsi), %ymm8, %ymm8; - - write_output(%ymm7, %ymm6, %ymm5, %ymm4, %ymm3, %ymm2, %ymm1, %ymm0, - %ymm15, %ymm14, %ymm13, %ymm12, %ymm11, %ymm10, %ymm9, - %ymm8, %rsi); - - vzeroall; - - movq (16 * 32 + 0 * 8)(%rsp), %r10; - movq (16 * 32 + 1 * 8)(%rsp), %r11; - movq (16 * 32 + 2 * 8)(%rsp), %r12; - movq (16 * 32 + 3 * 8)(%rsp), %r13; - CFI_RESTORE(%r10); - CFI_RESTORE(%r11); - CFI_RESTORE(%r12); - CFI_RESTORE(%r13); - - leave; - CFI_LEAVE(); - ret; - CFI_ENDPROC(); -ELF(.size _gcry_camellia_aesni_avx2_ocb_enc,.-_gcry_camellia_aesni_avx2_ocb_enc;) - -.align 8 -.globl _gcry_camellia_aesni_avx2_ocb_dec -ELF(.type _gcry_camellia_aesni_avx2_ocb_dec, at function;) - -_gcry_camellia_aesni_avx2_ocb_dec: - /* input: - * %rdi: ctx, CTX - * %rsi: dst (32 blocks) - * %rdx: src (32 blocks) - * %rcx: offset - * %r8 : checksum - * %r9 : L pointers (void *L[32]) - */ - CFI_STARTPROC(); - - pushq %rbp; - CFI_PUSH(%rbp); - movq %rsp, %rbp; - CFI_DEF_CFA_REGISTER(%rbp); - - vzeroupper; - - 
subq $(16 * 32 + 4 * 8), %rsp; - andq $~63, %rsp; - movq %rsp, %rax; - - movq %r10, (16 * 32 + 0 * 8)(%rsp); - movq %r11, (16 * 32 + 1 * 8)(%rsp); - movq %r12, (16 * 32 + 2 * 8)(%rsp); - movq %r13, (16 * 32 + 3 * 8)(%rsp); - CFI_REG_ON_STACK(r10, 16 * 32 + 0 * 8); - CFI_REG_ON_STACK(r11, 16 * 32 + 1 * 8); - CFI_REG_ON_STACK(r12, 16 * 32 + 2 * 8); - CFI_REG_ON_STACK(r13, 16 * 32 + 3 * 8); - - vmovdqu (%rcx), %xmm14; - - /* Offset_i = Offset_{i-1} xor L_{ntz(i)} */ - /* P_i = Offset_i xor DECIPHER(K, C_i xor Offset_i) */ - -#define OCB_INPUT(n, l0reg, l1reg, yreg) \ - vmovdqu (n * 32)(%rdx), yreg; \ - vpxor (l0reg), %xmm14, %xmm15; \ - vpxor (l1reg), %xmm15, %xmm14; \ - vinserti128 $1, %xmm14, %ymm15, %ymm15; \ - vpxor yreg, %ymm15, yreg; \ - vmovdqu %ymm15, (n * 32)(%rsi); - - movq (0 * 8)(%r9), %r10; - movq (1 * 8)(%r9), %r11; - movq (2 * 8)(%r9), %r12; - movq (3 * 8)(%r9), %r13; - OCB_INPUT(0, %r10, %r11, %ymm0); - vmovdqu %ymm0, (15 * 32)(%rax); - OCB_INPUT(1, %r12, %r13, %ymm0); - vmovdqu %ymm0, (14 * 32)(%rax); - movq (4 * 8)(%r9), %r10; - movq (5 * 8)(%r9), %r11; - movq (6 * 8)(%r9), %r12; - movq (7 * 8)(%r9), %r13; - OCB_INPUT(2, %r10, %r11, %ymm13); - OCB_INPUT(3, %r12, %r13, %ymm12); - movq (8 * 8)(%r9), %r10; - movq (9 * 8)(%r9), %r11; - movq (10 * 8)(%r9), %r12; - movq (11 * 8)(%r9), %r13; - OCB_INPUT(4, %r10, %r11, %ymm11); - OCB_INPUT(5, %r12, %r13, %ymm10); - movq (12 * 8)(%r9), %r10; - movq (13 * 8)(%r9), %r11; - movq (14 * 8)(%r9), %r12; - movq (15 * 8)(%r9), %r13; - OCB_INPUT(6, %r10, %r11, %ymm9); - OCB_INPUT(7, %r12, %r13, %ymm8); - movq (16 * 8)(%r9), %r10; - movq (17 * 8)(%r9), %r11; - movq (18 * 8)(%r9), %r12; - movq (19 * 8)(%r9), %r13; - OCB_INPUT(8, %r10, %r11, %ymm7); - OCB_INPUT(9, %r12, %r13, %ymm6); - movq (20 * 8)(%r9), %r10; - movq (21 * 8)(%r9), %r11; - movq (22 * 8)(%r9), %r12; - movq (23 * 8)(%r9), %r13; - OCB_INPUT(10, %r10, %r11, %ymm5); - OCB_INPUT(11, %r12, %r13, %ymm4); - movq (24 * 8)(%r9), %r10; - movq (25 * 8)(%r9), %r11; - movq (26 * 8)(%r9), %r12; - movq (27 * 8)(%r9), %r13; - OCB_INPUT(12, %r10, %r11, %ymm3); - OCB_INPUT(13, %r12, %r13, %ymm2); - movq (28 * 8)(%r9), %r10; - movq (29 * 8)(%r9), %r11; - movq (30 * 8)(%r9), %r12; - movq (31 * 8)(%r9), %r13; - OCB_INPUT(14, %r10, %r11, %ymm1); - OCB_INPUT(15, %r12, %r13, %ymm0); -#undef OCB_INPUT - - vmovdqu %xmm14, (%rcx); - - movq %r8, %r10; - - cmpl $128, key_bitlength(CTX); - movl $32, %r8d; - movl $24, %r9d; - cmovel %r9d, %r8d; /* max */ - - /* inpack16_pre: */ - vpbroadcastq (key_table)(CTX, %r8, 8), %ymm15; - vpshufb .Lpack_bswap rRIP, %ymm15, %ymm15; - vpxor %ymm0, %ymm15, %ymm0; - vpxor %ymm1, %ymm15, %ymm1; - vpxor %ymm2, %ymm15, %ymm2; - vpxor %ymm3, %ymm15, %ymm3; - vpxor %ymm4, %ymm15, %ymm4; - vpxor %ymm5, %ymm15, %ymm5; - vpxor %ymm6, %ymm15, %ymm6; - vpxor %ymm7, %ymm15, %ymm7; - vpxor %ymm8, %ymm15, %ymm8; - vpxor %ymm9, %ymm15, %ymm9; - vpxor %ymm10, %ymm15, %ymm10; - vpxor %ymm11, %ymm15, %ymm11; - vpxor %ymm12, %ymm15, %ymm12; - vpxor %ymm13, %ymm15, %ymm13; - vpxor 14 * 32(%rax), %ymm15, %ymm14; - vpxor 15 * 32(%rax), %ymm15, %ymm15; - - call __camellia_dec_blk32; - - vpxor 0 * 32(%rsi), %ymm7, %ymm7; - vpxor 1 * 32(%rsi), %ymm6, %ymm6; - vpxor 2 * 32(%rsi), %ymm5, %ymm5; - vpxor 3 * 32(%rsi), %ymm4, %ymm4; - vpxor 4 * 32(%rsi), %ymm3, %ymm3; - vpxor 5 * 32(%rsi), %ymm2, %ymm2; - vpxor 6 * 32(%rsi), %ymm1, %ymm1; - vpxor 7 * 32(%rsi), %ymm0, %ymm0; - vmovdqu %ymm7, (7 * 32)(%rax); - vmovdqu %ymm6, (6 * 32)(%rax); - vpxor 8 * 32(%rsi), %ymm15, %ymm15; - vpxor 9 * 32(%rsi), 
%ymm14, %ymm14; - vpxor 10 * 32(%rsi), %ymm13, %ymm13; - vpxor 11 * 32(%rsi), %ymm12, %ymm12; - vpxor 12 * 32(%rsi), %ymm11, %ymm11; - vpxor 13 * 32(%rsi), %ymm10, %ymm10; - vpxor 14 * 32(%rsi), %ymm9, %ymm9; - vpxor 15 * 32(%rsi), %ymm8, %ymm8; - - /* Checksum_i = Checksum_{i-1} xor P_i */ - - vpxor %ymm5, %ymm7, %ymm7; - vpxor %ymm4, %ymm6, %ymm6; - vpxor %ymm3, %ymm7, %ymm7; - vpxor %ymm2, %ymm6, %ymm6; - vpxor %ymm1, %ymm7, %ymm7; - vpxor %ymm0, %ymm6, %ymm6; - vpxor %ymm15, %ymm7, %ymm7; - vpxor %ymm14, %ymm6, %ymm6; - vpxor %ymm13, %ymm7, %ymm7; - vpxor %ymm12, %ymm6, %ymm6; - vpxor %ymm11, %ymm7, %ymm7; - vpxor %ymm10, %ymm6, %ymm6; - vpxor %ymm9, %ymm7, %ymm7; - vpxor %ymm8, %ymm6, %ymm6; - vpxor %ymm7, %ymm6, %ymm7; - - vextracti128 $1, %ymm7, %xmm6; - vpxor %xmm6, %xmm7, %xmm7; - vpxor (%r10), %xmm7, %xmm7; - vmovdqu %xmm7, (%r10); - - vmovdqu 7 * 32(%rax), %ymm7; - vmovdqu 6 * 32(%rax), %ymm6; - - write_output(%ymm7, %ymm6, %ymm5, %ymm4, %ymm3, %ymm2, %ymm1, %ymm0, - %ymm15, %ymm14, %ymm13, %ymm12, %ymm11, %ymm10, %ymm9, - %ymm8, %rsi); - - vzeroall; - - movq (16 * 32 + 0 * 8)(%rsp), %r10; - movq (16 * 32 + 1 * 8)(%rsp), %r11; - movq (16 * 32 + 2 * 8)(%rsp), %r12; - movq (16 * 32 + 3 * 8)(%rsp), %r13; - CFI_RESTORE(%r10); - CFI_RESTORE(%r11); - CFI_RESTORE(%r12); - CFI_RESTORE(%r13); - - leave; - CFI_LEAVE(); - ret; - CFI_ENDPROC(); -ELF(.size _gcry_camellia_aesni_avx2_ocb_dec,.-_gcry_camellia_aesni_avx2_ocb_dec;) - -.align 8 -.globl _gcry_camellia_aesni_avx2_ocb_auth -ELF(.type _gcry_camellia_aesni_avx2_ocb_auth, at function;) - -_gcry_camellia_aesni_avx2_ocb_auth: - /* input: - * %rdi: ctx, CTX - * %rsi: abuf (16 blocks) - * %rdx: offset - * %rcx: checksum - * %r8 : L pointers (void *L[16]) - */ - CFI_STARTPROC(); - - pushq %rbp; - CFI_PUSH(%rbp); - movq %rsp, %rbp; - CFI_DEF_CFA_REGISTER(%rbp); - - vzeroupper; - - subq $(16 * 32 + 4 * 8), %rsp; - andq $~63, %rsp; - movq %rsp, %rax; - - movq %r10, (16 * 32 + 0 * 8)(%rsp); - movq %r11, (16 * 32 + 1 * 8)(%rsp); - movq %r12, (16 * 32 + 2 * 8)(%rsp); - movq %r13, (16 * 32 + 3 * 8)(%rsp); - CFI_REG_ON_STACK(r10, 16 * 32 + 0 * 8); - CFI_REG_ON_STACK(r11, 16 * 32 + 1 * 8); - CFI_REG_ON_STACK(r12, 16 * 32 + 2 * 8); - CFI_REG_ON_STACK(r13, 16 * 32 + 3 * 8); - - vmovdqu (%rdx), %xmm14; - - /* Offset_i = Offset_{i-1} xor L_{ntz(i)} */ - /* Checksum_i = Checksum_{i-1} xor P_i */ - /* C_i = Offset_i xor ENCIPHER(K, P_i xor Offset_i) */ - -#define OCB_INPUT(n, l0reg, l1reg, yreg) \ - vmovdqu (n * 32)(%rsi), yreg; \ - vpxor (l0reg), %xmm14, %xmm15; \ - vpxor (l1reg), %xmm15, %xmm14; \ - vinserti128 $1, %xmm14, %ymm15, %ymm15; \ - vpxor yreg, %ymm15, yreg; - - movq (0 * 8)(%r8), %r10; - movq (1 * 8)(%r8), %r11; - movq (2 * 8)(%r8), %r12; - movq (3 * 8)(%r8), %r13; - OCB_INPUT(0, %r10, %r11, %ymm0); - vmovdqu %ymm0, (15 * 32)(%rax); - OCB_INPUT(1, %r12, %r13, %ymm0); - vmovdqu %ymm0, (14 * 32)(%rax); - movq (4 * 8)(%r8), %r10; - movq (5 * 8)(%r8), %r11; - movq (6 * 8)(%r8), %r12; - movq (7 * 8)(%r8), %r13; - OCB_INPUT(2, %r10, %r11, %ymm13); - OCB_INPUT(3, %r12, %r13, %ymm12); - movq (8 * 8)(%r8), %r10; - movq (9 * 8)(%r8), %r11; - movq (10 * 8)(%r8), %r12; - movq (11 * 8)(%r8), %r13; - OCB_INPUT(4, %r10, %r11, %ymm11); - OCB_INPUT(5, %r12, %r13, %ymm10); - movq (12 * 8)(%r8), %r10; - movq (13 * 8)(%r8), %r11; - movq (14 * 8)(%r8), %r12; - movq (15 * 8)(%r8), %r13; - OCB_INPUT(6, %r10, %r11, %ymm9); - OCB_INPUT(7, %r12, %r13, %ymm8); - movq (16 * 8)(%r8), %r10; - movq (17 * 8)(%r8), %r11; - movq (18 * 8)(%r8), %r12; - movq (19 * 8)(%r8), %r13; 
- OCB_INPUT(8, %r10, %r11, %ymm7); - OCB_INPUT(9, %r12, %r13, %ymm6); - movq (20 * 8)(%r8), %r10; - movq (21 * 8)(%r8), %r11; - movq (22 * 8)(%r8), %r12; - movq (23 * 8)(%r8), %r13; - OCB_INPUT(10, %r10, %r11, %ymm5); - OCB_INPUT(11, %r12, %r13, %ymm4); - movq (24 * 8)(%r8), %r10; - movq (25 * 8)(%r8), %r11; - movq (26 * 8)(%r8), %r12; - movq (27 * 8)(%r8), %r13; - OCB_INPUT(12, %r10, %r11, %ymm3); - OCB_INPUT(13, %r12, %r13, %ymm2); - movq (28 * 8)(%r8), %r10; - movq (29 * 8)(%r8), %r11; - movq (30 * 8)(%r8), %r12; - movq (31 * 8)(%r8), %r13; - OCB_INPUT(14, %r10, %r11, %ymm1); - OCB_INPUT(15, %r12, %r13, %ymm0); -#undef OCB_INPUT - - vmovdqu %xmm14, (%rdx); - - cmpl $128, key_bitlength(CTX); - movl $32, %r8d; - movl $24, %r10d; - cmovel %r10d, %r8d; /* max */ - - movq %rcx, %r10; - - /* inpack16_pre: */ - vpbroadcastq (key_table)(CTX), %ymm15; - vpshufb .Lpack_bswap rRIP, %ymm15, %ymm15; - vpxor %ymm0, %ymm15, %ymm0; - vpxor %ymm1, %ymm15, %ymm1; - vpxor %ymm2, %ymm15, %ymm2; - vpxor %ymm3, %ymm15, %ymm3; - vpxor %ymm4, %ymm15, %ymm4; - vpxor %ymm5, %ymm15, %ymm5; - vpxor %ymm6, %ymm15, %ymm6; - vpxor %ymm7, %ymm15, %ymm7; - vpxor %ymm8, %ymm15, %ymm8; - vpxor %ymm9, %ymm15, %ymm9; - vpxor %ymm10, %ymm15, %ymm10; - vpxor %ymm11, %ymm15, %ymm11; - vpxor %ymm12, %ymm15, %ymm12; - vpxor %ymm13, %ymm15, %ymm13; - vpxor 14 * 32(%rax), %ymm15, %ymm14; - vpxor 15 * 32(%rax), %ymm15, %ymm15; - - call __camellia_enc_blk32; - - vpxor %ymm7, %ymm6, %ymm6; - vpxor %ymm5, %ymm4, %ymm4; - vpxor %ymm3, %ymm2, %ymm2; - vpxor %ymm1, %ymm0, %ymm0; - vpxor %ymm15, %ymm14, %ymm14; - vpxor %ymm13, %ymm12, %ymm12; - vpxor %ymm11, %ymm10, %ymm10; - vpxor %ymm9, %ymm8, %ymm8; - - vpxor %ymm6, %ymm4, %ymm4; - vpxor %ymm2, %ymm0, %ymm0; - vpxor %ymm14, %ymm12, %ymm12; - vpxor %ymm10, %ymm8, %ymm8; - - vpxor %ymm4, %ymm0, %ymm0; - vpxor %ymm12, %ymm8, %ymm8; - - vpxor %ymm0, %ymm8, %ymm0; - - vextracti128 $1, %ymm0, %xmm1; - vpxor (%r10), %xmm0, %xmm0; - vpxor %xmm0, %xmm1, %xmm0; - vmovdqu %xmm0, (%r10); - - vzeroall; - - movq (16 * 32 + 0 * 8)(%rsp), %r10; - movq (16 * 32 + 1 * 8)(%rsp), %r11; - movq (16 * 32 + 2 * 8)(%rsp), %r12; - movq (16 * 32 + 3 * 8)(%rsp), %r13; - CFI_RESTORE(%r10); - CFI_RESTORE(%r11); - CFI_RESTORE(%r12); - CFI_RESTORE(%r13); - - leave; - CFI_LEAVE(); - ret; - CFI_ENDPROC(); -ELF(.size _gcry_camellia_aesni_avx2_ocb_auth,.-_gcry_camellia_aesni_avx2_ocb_auth;) - -#endif /*defined(ENABLE_AESNI_SUPPORT) && defined(ENABLE_AVX2_SUPPORT)*/ -#endif /*__x86_64*/ +#endif /* defined(ENABLE_AESNI_SUPPORT) && defined(ENABLE_AVX2_SUPPORT) */ +#endif /* __x86_64 */ diff --git a/cipher/camellia-aesni-avx2-amd64.h b/cipher/camellia-aesni-avx2-amd64.h new file mode 100644 index 00000000..be7bb0aa --- /dev/null +++ b/cipher/camellia-aesni-avx2-amd64.h @@ -0,0 +1,1794 @@ +/* camellia-aesni-avx2-amd64.h - AES-NI/VAES/AVX2 implementation of Camellia + * + * Copyright (C) 2013-2015,2020-2021 Jussi Kivilinna + * + * This file is part of Libgcrypt. + * + * Libgcrypt is free software; you can redistribute it and/or modify + * it under the terms of the GNU Lesser General Public License as + * published by the Free Software Foundation; either version 2.1 of + * the License, or (at your option) any later version. + * + * Libgcrypt is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . + */ + +#ifndef GCRY_CAMELLIA_AESNI_AVX2_AMD64_H +#define GCRY_CAMELLIA_AESNI_AVX2_AMD64_H + +#include "asm-common-amd64.h" + +#define CAMELLIA_TABLE_BYTE_LEN 272 + +/* struct CAMELLIA_context: */ +#define key_table 0 +#define key_bitlength CAMELLIA_TABLE_BYTE_LEN + +/* register macros */ +#define CTX %rdi +#define RIO %r8 + +/********************************************************************** + helper macros + **********************************************************************/ +#define filter_8bit(x, lo_t, hi_t, mask4bit, tmp0) \ + vpand x, mask4bit, tmp0; \ + vpandn x, mask4bit, x; \ + vpsrld $4, x, x; \ + \ + vpshufb tmp0, lo_t, tmp0; \ + vpshufb x, hi_t, x; \ + vpxor tmp0, x, x; + +#define ymm0_x xmm0 +#define ymm1_x xmm1 +#define ymm2_x xmm2 +#define ymm3_x xmm3 +#define ymm4_x xmm4 +#define ymm5_x xmm5 +#define ymm6_x xmm6 +#define ymm7_x xmm7 +#define ymm8_x xmm8 +#define ymm9_x xmm9 +#define ymm10_x xmm10 +#define ymm11_x xmm11 +#define ymm12_x xmm12 +#define ymm13_x xmm13 +#define ymm14_x xmm14 +#define ymm15_x xmm15 + +#ifdef CAMELLIA_VAES_BUILD +# define IF_AESNI(...) +# define IF_VAES(...) __VA_ARGS__ +#else +# define IF_AESNI(...) __VA_ARGS__ +# define IF_VAES(...) +#endif + +/********************************************************************** + 32-way camellia + **********************************************************************/ + +/* + * IN: + * x0..x7: byte-sliced AB state + * mem_cd: register pointer storing CD state + * key: index for key material + * OUT: + * x0..x7: new byte-sliced CD state + */ + +#define roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, t0, t1, t2, t3, t4, t5, \ + t6, t7, mem_cd, key) \ + /* \ + * S-function with AES subbytes \ + */ \ + vbroadcasti128 .Linv_shift_row rRIP, t4; \ + vpbroadcastd .L0f0f0f0f rRIP, t7; \ + vbroadcasti128 .Lpre_tf_lo_s1 rRIP, t5; \ + vbroadcasti128 .Lpre_tf_hi_s1 rRIP, t6; \ + vbroadcasti128 .Lpre_tf_lo_s4 rRIP, t2; \ + vbroadcasti128 .Lpre_tf_hi_s4 rRIP, t3; \ + \ + /* AES inverse shift rows */ \ + vpshufb t4, x0, x0; \ + vpshufb t4, x7, x7; \ + vpshufb t4, x3, x3; \ + vpshufb t4, x6, x6; \ + vpshufb t4, x2, x2; \ + vpshufb t4, x5, x5; \ + vpshufb t4, x1, x1; \ + vpshufb t4, x4, x4; \ + \ + /* prefilter sboxes 1, 2 and 3 */ \ + /* prefilter sbox 4 */ \ + filter_8bit(x0, t5, t6, t7, t4); \ + filter_8bit(x7, t5, t6, t7, t4); \ + IF_AESNI(vextracti128 $1, x0, t0##_x); \ + IF_AESNI(vextracti128 $1, x7, t1##_x); \ + filter_8bit(x3, t2, t3, t7, t4); \ + filter_8bit(x6, t2, t3, t7, t4); \ + IF_AESNI(vextracti128 $1, x3, t3##_x); \ + IF_AESNI(vextracti128 $1, x6, t2##_x); \ + filter_8bit(x2, t5, t6, t7, t4); \ + filter_8bit(x5, t5, t6, t7, t4); \ + filter_8bit(x1, t5, t6, t7, t4); \ + filter_8bit(x4, t5, t6, t7, t4); \ + \ + vpxor t4##_x, t4##_x, t4##_x; \ + \ + /* AES subbytes + AES shift rows */ \ + IF_AESNI(vextracti128 $1, x2, t6##_x; \ + vextracti128 $1, x5, t5##_x; \ + vaesenclast t4##_x, x0##_x, x0##_x; \ + vaesenclast t4##_x, t0##_x, t0##_x; \ + vaesenclast t4##_x, x7##_x, x7##_x; \ + vaesenclast t4##_x, t1##_x, t1##_x; \ + vaesenclast t4##_x, x3##_x, x3##_x; \ + vaesenclast t4##_x, t3##_x, t3##_x; \ + vaesenclast t4##_x, x6##_x, x6##_x; \ + vaesenclast t4##_x, t2##_x, t2##_x; \ + vinserti128 $1, t0##_x, x0, x0; \ + vinserti128 $1, t1##_x, x7, x7; \ + vinserti128 $1, t3##_x, x3, x3; \ + vinserti128 $1, t2##_x, x6, x6; \ + vextracti128 $1, x1, t3##_x; \ + vextracti128 $1, x4, t2##_x); \ + 
vbroadcasti128 .Lpost_tf_lo_s1 rRIP, t0; \ + vbroadcasti128 .Lpost_tf_hi_s1 rRIP, t1; \ + IF_AESNI(vaesenclast t4##_x, x2##_x, x2##_x; \ + vaesenclast t4##_x, t6##_x, t6##_x; \ + vaesenclast t4##_x, x5##_x, x5##_x; \ + vaesenclast t4##_x, t5##_x, t5##_x; \ + vaesenclast t4##_x, x1##_x, x1##_x; \ + vaesenclast t4##_x, t3##_x, t3##_x; \ + vaesenclast t4##_x, x4##_x, x4##_x; \ + vaesenclast t4##_x, t2##_x, t2##_x; \ + vinserti128 $1, t6##_x, x2, x2; \ + vinserti128 $1, t5##_x, x5, x5; \ + vinserti128 $1, t3##_x, x1, x1; \ + vinserti128 $1, t2##_x, x4, x4); \ + IF_VAES(vaesenclast t4, x0, x0; \ + vaesenclast t4, x7, x7; \ + vaesenclast t4, x3, x3; \ + vaesenclast t4, x6, x6; \ + vaesenclast t4, x2, x2; \ + vaesenclast t4, x5, x5; \ + vaesenclast t4, x1, x1; \ + vaesenclast t4, x4, x4); \ + \ + /* postfilter sboxes 1 and 4 */ \ + vbroadcasti128 .Lpost_tf_lo_s3 rRIP, t2; \ + vbroadcasti128 .Lpost_tf_hi_s3 rRIP, t3; \ + filter_8bit(x0, t0, t1, t7, t4); \ + filter_8bit(x7, t0, t1, t7, t4); \ + filter_8bit(x3, t0, t1, t7, t6); \ + filter_8bit(x6, t0, t1, t7, t6); \ + \ + /* postfilter sbox 3 */ \ + vbroadcasti128 .Lpost_tf_lo_s2 rRIP, t4; \ + vbroadcasti128 .Lpost_tf_hi_s2 rRIP, t5; \ + filter_8bit(x2, t2, t3, t7, t6); \ + filter_8bit(x5, t2, t3, t7, t6); \ + \ + vpbroadcastq key, t0; /* higher 64-bit duplicate ignored */ \ + \ + /* postfilter sbox 2 */ \ + filter_8bit(x1, t4, t5, t7, t2); \ + filter_8bit(x4, t4, t5, t7, t2); \ + vpxor t7, t7, t7; \ + \ + vpsrldq $1, t0, t1; \ + vpsrldq $2, t0, t2; \ + vpshufb t7, t1, t1; \ + vpsrldq $3, t0, t3; \ + \ + /* P-function */ \ + vpxor x5, x0, x0; \ + vpxor x6, x1, x1; \ + vpxor x7, x2, x2; \ + vpxor x4, x3, x3; \ + \ + vpshufb t7, t2, t2; \ + vpsrldq $4, t0, t4; \ + vpshufb t7, t3, t3; \ + vpsrldq $5, t0, t5; \ + vpshufb t7, t4, t4; \ + \ + vpxor x2, x4, x4; \ + vpxor x3, x5, x5; \ + vpxor x0, x6, x6; \ + vpxor x1, x7, x7; \ + \ + vpsrldq $6, t0, t6; \ + vpshufb t7, t5, t5; \ + vpshufb t7, t6, t6; \ + \ + vpxor x7, x0, x0; \ + vpxor x4, x1, x1; \ + vpxor x5, x2, x2; \ + vpxor x6, x3, x3; \ + \ + vpxor x3, x4, x4; \ + vpxor x0, x5, x5; \ + vpxor x1, x6, x6; \ + vpxor x2, x7, x7; /* note: high and low parts swapped */ \ + \ + /* Add key material and result to CD (x becomes new CD) */ \ + \ + vpxor t6, x1, x1; \ + vpxor 5 * 32(mem_cd), x1, x1; \ + \ + vpsrldq $7, t0, t6; \ + vpshufb t7, t0, t0; \ + vpshufb t7, t6, t7; \ + \ + vpxor t7, x0, x0; \ + vpxor 4 * 32(mem_cd), x0, x0; \ + \ + vpxor t5, x2, x2; \ + vpxor 6 * 32(mem_cd), x2, x2; \ + \ + vpxor t4, x3, x3; \ + vpxor 7 * 32(mem_cd), x3, x3; \ + \ + vpxor t3, x4, x4; \ + vpxor 0 * 32(mem_cd), x4, x4; \ + \ + vpxor t2, x5, x5; \ + vpxor 1 * 32(mem_cd), x5, x5; \ + \ + vpxor t1, x6, x6; \ + vpxor 2 * 32(mem_cd), x6, x6; \ + \ + vpxor t0, x7, x7; \ + vpxor 3 * 32(mem_cd), x7, x7; + +/* + * IN/OUT: + * x0..x7: byte-sliced AB state preloaded + * mem_ab: byte-sliced AB state in memory + * mem_cb: byte-sliced CD state in memory + */ +#define two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ + y6, y7, mem_ab, mem_cd, i, dir, store_ab) \ + roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ + y6, y7, mem_cd, (key_table + (i) * 8)(CTX)); \ + \ + vmovdqu x0, 4 * 32(mem_cd); \ + vmovdqu x1, 5 * 32(mem_cd); \ + vmovdqu x2, 6 * 32(mem_cd); \ + vmovdqu x3, 7 * 32(mem_cd); \ + vmovdqu x4, 0 * 32(mem_cd); \ + vmovdqu x5, 1 * 32(mem_cd); \ + vmovdqu x6, 2 * 32(mem_cd); \ + vmovdqu x7, 3 * 32(mem_cd); \ + \ + roundsm32(x4, x5, x6, x7, x0, x1, x2, x3, y0, y1, y2, y3, y4, y5, \ + y6, 
y7, mem_ab, (key_table + ((i) + (dir)) * 8)(CTX)); \ + \ + store_ab(x0, x1, x2, x3, x4, x5, x6, x7, mem_ab); + +#define dummy_store(x0, x1, x2, x3, x4, x5, x6, x7, mem_ab) /* do nothing */ + +#define store_ab_state(x0, x1, x2, x3, x4, x5, x6, x7, mem_ab) \ + /* Store new AB state */ \ + vmovdqu x4, 4 * 32(mem_ab); \ + vmovdqu x5, 5 * 32(mem_ab); \ + vmovdqu x6, 6 * 32(mem_ab); \ + vmovdqu x7, 7 * 32(mem_ab); \ + vmovdqu x0, 0 * 32(mem_ab); \ + vmovdqu x1, 1 * 32(mem_ab); \ + vmovdqu x2, 2 * 32(mem_ab); \ + vmovdqu x3, 3 * 32(mem_ab); + +#define enc_rounds32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ + y6, y7, mem_ab, mem_cd, i) \ + two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ + y6, y7, mem_ab, mem_cd, (i) + 2, 1, store_ab_state); \ + two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ + y6, y7, mem_ab, mem_cd, (i) + 4, 1, store_ab_state); \ + two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ + y6, y7, mem_ab, mem_cd, (i) + 6, 1, dummy_store); + +#define dec_rounds32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ + y6, y7, mem_ab, mem_cd, i) \ + two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ + y6, y7, mem_ab, mem_cd, (i) + 7, -1, store_ab_state); \ + two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ + y6, y7, mem_ab, mem_cd, (i) + 5, -1, store_ab_state); \ + two_roundsm32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ + y6, y7, mem_ab, mem_cd, (i) + 3, -1, dummy_store); + +/* + * IN: + * v0..3: byte-sliced 32-bit integers + * OUT: + * v0..3: (IN <<< 1) + */ +#define rol32_1_32(v0, v1, v2, v3, t0, t1, t2, zero) \ + vpcmpgtb v0, zero, t0; \ + vpaddb v0, v0, v0; \ + vpabsb t0, t0; \ + \ + vpcmpgtb v1, zero, t1; \ + vpaddb v1, v1, v1; \ + vpabsb t1, t1; \ + \ + vpcmpgtb v2, zero, t2; \ + vpaddb v2, v2, v2; \ + vpabsb t2, t2; \ + \ + vpor t0, v1, v1; \ + \ + vpcmpgtb v3, zero, t0; \ + vpaddb v3, v3, v3; \ + vpabsb t0, t0; \ + \ + vpor t1, v2, v2; \ + vpor t2, v3, v3; \ + vpor t0, v0, v0; + +/* + * IN: + * r: byte-sliced AB state in memory + * l: byte-sliced CD state in memory + * OUT: + * x0..x7: new byte-sliced CD state + */ +#define fls32(l, l0, l1, l2, l3, l4, l5, l6, l7, r, t0, t1, t2, t3, tt0, \ + tt1, tt2, tt3, kll, klr, krl, krr) \ + /* \ + * t0 = kll; \ + * t0 &= ll; \ + * lr ^= rol32(t0, 1); \ + */ \ + vpbroadcastd kll, t0; /* only lowest 32-bit used */ \ + vpxor tt0, tt0, tt0; \ + vpshufb tt0, t0, t3; \ + vpsrldq $1, t0, t0; \ + vpshufb tt0, t0, t2; \ + vpsrldq $1, t0, t0; \ + vpshufb tt0, t0, t1; \ + vpsrldq $1, t0, t0; \ + vpshufb tt0, t0, t0; \ + \ + vpand l0, t0, t0; \ + vpand l1, t1, t1; \ + vpand l2, t2, t2; \ + vpand l3, t3, t3; \ + \ + rol32_1_32(t3, t2, t1, t0, tt1, tt2, tt3, tt0); \ + \ + vpxor l4, t0, l4; \ + vpbroadcastd krr, t0; /* only lowest 32-bit used */ \ + vmovdqu l4, 4 * 32(l); \ + vpxor l5, t1, l5; \ + vmovdqu l5, 5 * 32(l); \ + vpxor l6, t2, l6; \ + vmovdqu l6, 6 * 32(l); \ + vpxor l7, t3, l7; \ + vmovdqu l7, 7 * 32(l); \ + \ + /* \ + * t2 = krr; \ + * t2 |= rr; \ + * rl ^= t2; \ + */ \ + \ + vpshufb tt0, t0, t3; \ + vpsrldq $1, t0, t0; \ + vpshufb tt0, t0, t2; \ + vpsrldq $1, t0, t0; \ + vpshufb tt0, t0, t1; \ + vpsrldq $1, t0, t0; \ + vpshufb tt0, t0, t0; \ + \ + vpor 4 * 32(r), t0, t0; \ + vpor 5 * 32(r), t1, t1; \ + vpor 6 * 32(r), t2, t2; \ + vpor 7 * 32(r), t3, t3; \ + \ + vpxor 0 * 32(r), t0, t0; \ + vpxor 1 * 32(r), t1, t1; \ + vpxor 2 * 32(r), t2, t2; \ + vpxor 3 * 32(r), t3, t3; \ + vmovdqu t0, 0 * 
32(r); \ + vpbroadcastd krl, t0; /* only lowest 32-bit used */ \ + vmovdqu t1, 1 * 32(r); \ + vmovdqu t2, 2 * 32(r); \ + vmovdqu t3, 3 * 32(r); \ + \ + /* \ + * t2 = krl; \ + * t2 &= rl; \ + * rr ^= rol32(t2, 1); \ + */ \ + vpshufb tt0, t0, t3; \ + vpsrldq $1, t0, t0; \ + vpshufb tt0, t0, t2; \ + vpsrldq $1, t0, t0; \ + vpshufb tt0, t0, t1; \ + vpsrldq $1, t0, t0; \ + vpshufb tt0, t0, t0; \ + \ + vpand 0 * 32(r), t0, t0; \ + vpand 1 * 32(r), t1, t1; \ + vpand 2 * 32(r), t2, t2; \ + vpand 3 * 32(r), t3, t3; \ + \ + rol32_1_32(t3, t2, t1, t0, tt1, tt2, tt3, tt0); \ + \ + vpxor 4 * 32(r), t0, t0; \ + vpxor 5 * 32(r), t1, t1; \ + vpxor 6 * 32(r), t2, t2; \ + vpxor 7 * 32(r), t3, t3; \ + vmovdqu t0, 4 * 32(r); \ + vpbroadcastd klr, t0; /* only lowest 32-bit used */ \ + vmovdqu t1, 5 * 32(r); \ + vmovdqu t2, 6 * 32(r); \ + vmovdqu t3, 7 * 32(r); \ + \ + /* \ + * t0 = klr; \ + * t0 |= lr; \ + * ll ^= t0; \ + */ \ + \ + vpshufb tt0, t0, t3; \ + vpsrldq $1, t0, t0; \ + vpshufb tt0, t0, t2; \ + vpsrldq $1, t0, t0; \ + vpshufb tt0, t0, t1; \ + vpsrldq $1, t0, t0; \ + vpshufb tt0, t0, t0; \ + \ + vpor l4, t0, t0; \ + vpor l5, t1, t1; \ + vpor l6, t2, t2; \ + vpor l7, t3, t3; \ + \ + vpxor l0, t0, l0; \ + vmovdqu l0, 0 * 32(l); \ + vpxor l1, t1, l1; \ + vmovdqu l1, 1 * 32(l); \ + vpxor l2, t2, l2; \ + vmovdqu l2, 2 * 32(l); \ + vpxor l3, t3, l3; \ + vmovdqu l3, 3 * 32(l); + +#define transpose_4x4(x0, x1, x2, x3, t1, t2) \ + vpunpckhdq x1, x0, t2; \ + vpunpckldq x1, x0, x0; \ + \ + vpunpckldq x3, x2, t1; \ + vpunpckhdq x3, x2, x2; \ + \ + vpunpckhqdq t1, x0, x1; \ + vpunpcklqdq t1, x0, x0; \ + \ + vpunpckhqdq x2, t2, x3; \ + vpunpcklqdq x2, t2, x2; + +#define byteslice_16x16b_fast(a0, b0, c0, d0, a1, b1, c1, d1, a2, b2, c2, d2, \ + a3, b3, c3, d3, st0, st1) \ + vmovdqu d2, st0; \ + vmovdqu d3, st1; \ + transpose_4x4(a0, a1, a2, a3, d2, d3); \ + transpose_4x4(b0, b1, b2, b3, d2, d3); \ + vmovdqu st0, d2; \ + vmovdqu st1, d3; \ + \ + vmovdqu a0, st0; \ + vmovdqu a1, st1; \ + transpose_4x4(c0, c1, c2, c3, a0, a1); \ + transpose_4x4(d0, d1, d2, d3, a0, a1); \ + \ + vbroadcasti128 .Lshufb_16x16b rRIP, a0; \ + vmovdqu st1, a1; \ + vpshufb a0, a2, a2; \ + vpshufb a0, a3, a3; \ + vpshufb a0, b0, b0; \ + vpshufb a0, b1, b1; \ + vpshufb a0, b2, b2; \ + vpshufb a0, b3, b3; \ + vpshufb a0, a1, a1; \ + vpshufb a0, c0, c0; \ + vpshufb a0, c1, c1; \ + vpshufb a0, c2, c2; \ + vpshufb a0, c3, c3; \ + vpshufb a0, d0, d0; \ + vpshufb a0, d1, d1; \ + vpshufb a0, d2, d2; \ + vpshufb a0, d3, d3; \ + vmovdqu d3, st1; \ + vmovdqu st0, d3; \ + vpshufb a0, d3, a0; \ + vmovdqu d2, st0; \ + \ + transpose_4x4(a0, b0, c0, d0, d2, d3); \ + transpose_4x4(a1, b1, c1, d1, d2, d3); \ + vmovdqu st0, d2; \ + vmovdqu st1, d3; \ + \ + vmovdqu b0, st0; \ + vmovdqu b1, st1; \ + transpose_4x4(a2, b2, c2, d2, b0, b1); \ + transpose_4x4(a3, b3, c3, d3, b0, b1); \ + vmovdqu st0, b0; \ + vmovdqu st1, b1; \ + /* does not adjust output bytes inside vectors */ + +/* load blocks to registers and apply pre-whitening */ +#define inpack32_pre(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ + y6, y7, rio, key) \ + vpbroadcastq key, x0; \ + vpshufb .Lpack_bswap rRIP, x0, x0; \ + \ + vpxor 0 * 32(rio), x0, y7; \ + vpxor 1 * 32(rio), x0, y6; \ + vpxor 2 * 32(rio), x0, y5; \ + vpxor 3 * 32(rio), x0, y4; \ + vpxor 4 * 32(rio), x0, y3; \ + vpxor 5 * 32(rio), x0, y2; \ + vpxor 6 * 32(rio), x0, y1; \ + vpxor 7 * 32(rio), x0, y0; \ + vpxor 8 * 32(rio), x0, x7; \ + vpxor 9 * 32(rio), x0, x6; \ + vpxor 10 * 32(rio), x0, x5; \ + vpxor 11 * 32(rio), x0, x4; \ 
+ vpxor 12 * 32(rio), x0, x3; \ + vpxor 13 * 32(rio), x0, x2; \ + vpxor 14 * 32(rio), x0, x1; \ + vpxor 15 * 32(rio), x0, x0; + +/* byteslice pre-whitened blocks and store to temporary memory */ +#define inpack32_post(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ + y6, y7, mem_ab, mem_cd) \ + byteslice_16x16b_fast(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, \ + y4, y5, y6, y7, (mem_ab), (mem_cd)); \ + \ + vmovdqu x0, 0 * 32(mem_ab); \ + vmovdqu x1, 1 * 32(mem_ab); \ + vmovdqu x2, 2 * 32(mem_ab); \ + vmovdqu x3, 3 * 32(mem_ab); \ + vmovdqu x4, 4 * 32(mem_ab); \ + vmovdqu x5, 5 * 32(mem_ab); \ + vmovdqu x6, 6 * 32(mem_ab); \ + vmovdqu x7, 7 * 32(mem_ab); \ + vmovdqu y0, 0 * 32(mem_cd); \ + vmovdqu y1, 1 * 32(mem_cd); \ + vmovdqu y2, 2 * 32(mem_cd); \ + vmovdqu y3, 3 * 32(mem_cd); \ + vmovdqu y4, 4 * 32(mem_cd); \ + vmovdqu y5, 5 * 32(mem_cd); \ + vmovdqu y6, 6 * 32(mem_cd); \ + vmovdqu y7, 7 * 32(mem_cd); + +/* de-byteslice, apply post-whitening and store blocks */ +#define outunpack32(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, \ + y5, y6, y7, key, stack_tmp0, stack_tmp1) \ + byteslice_16x16b_fast(y0, y4, x0, x4, y1, y5, x1, x5, y2, y6, x2, x6, \ + y3, y7, x3, x7, stack_tmp0, stack_tmp1); \ + \ + vmovdqu x0, stack_tmp0; \ + \ + vpbroadcastq key, x0; \ + vpshufb .Lpack_bswap rRIP, x0, x0; \ + \ + vpxor x0, y7, y7; \ + vpxor x0, y6, y6; \ + vpxor x0, y5, y5; \ + vpxor x0, y4, y4; \ + vpxor x0, y3, y3; \ + vpxor x0, y2, y2; \ + vpxor x0, y1, y1; \ + vpxor x0, y0, y0; \ + vpxor x0, x7, x7; \ + vpxor x0, x6, x6; \ + vpxor x0, x5, x5; \ + vpxor x0, x4, x4; \ + vpxor x0, x3, x3; \ + vpxor x0, x2, x2; \ + vpxor x0, x1, x1; \ + vpxor stack_tmp0, x0, x0; + +#define write_output(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ + y6, y7, rio) \ + vmovdqu x0, 0 * 32(rio); \ + vmovdqu x1, 1 * 32(rio); \ + vmovdqu x2, 2 * 32(rio); \ + vmovdqu x3, 3 * 32(rio); \ + vmovdqu x4, 4 * 32(rio); \ + vmovdqu x5, 5 * 32(rio); \ + vmovdqu x6, 6 * 32(rio); \ + vmovdqu x7, 7 * 32(rio); \ + vmovdqu y0, 8 * 32(rio); \ + vmovdqu y1, 9 * 32(rio); \ + vmovdqu y2, 10 * 32(rio); \ + vmovdqu y3, 11 * 32(rio); \ + vmovdqu y4, 12 * 32(rio); \ + vmovdqu y5, 13 * 32(rio); \ + vmovdqu y6, 14 * 32(rio); \ + vmovdqu y7, 15 * 32(rio); + +.text +.align 32 + +#define SHUFB_BYTES(idx) \ + 0 + (idx), 4 + (idx), 8 + (idx), 12 + (idx) + +.Lshufb_16x16b: + .byte SHUFB_BYTES(0), SHUFB_BYTES(1), SHUFB_BYTES(2), SHUFB_BYTES(3) + .byte SHUFB_BYTES(0), SHUFB_BYTES(1), SHUFB_BYTES(2), SHUFB_BYTES(3) + +.Lpack_bswap: + .long 0x00010203, 0x04050607, 0x80808080, 0x80808080 + .long 0x00010203, 0x04050607, 0x80808080, 0x80808080 + +/* For CTR-mode IV byteswap */ +.Lbswap128_mask: + .byte 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 + +/* + * pre-SubByte transform + * + * pre-lookup for sbox1, sbox2, sbox3: + * swap_bitendianness( + * isom_map_camellia_to_aes( + * camellia_f( + * swap_bitendianess(in) + * ) + * ) + * ) + * + * (note: '? 0xc5' inside camellia_f()) + */ +.Lpre_tf_lo_s1: + .byte 0x45, 0xe8, 0x40, 0xed, 0x2e, 0x83, 0x2b, 0x86 + .byte 0x4b, 0xe6, 0x4e, 0xe3, 0x20, 0x8d, 0x25, 0x88 +.Lpre_tf_hi_s1: + .byte 0x00, 0x51, 0xf1, 0xa0, 0x8a, 0xdb, 0x7b, 0x2a + .byte 0x09, 0x58, 0xf8, 0xa9, 0x83, 0xd2, 0x72, 0x23 + +/* + * pre-SubByte transform + * + * pre-lookup for sbox4: + * swap_bitendianness( + * isom_map_camellia_to_aes( + * camellia_f( + * swap_bitendianess(in <<< 1) + * ) + * ) + * ) + * + * (note: '? 
0xc5' inside camellia_f()) + */ +.Lpre_tf_lo_s4: + .byte 0x45, 0x40, 0x2e, 0x2b, 0x4b, 0x4e, 0x20, 0x25 + .byte 0x14, 0x11, 0x7f, 0x7a, 0x1a, 0x1f, 0x71, 0x74 +.Lpre_tf_hi_s4: + .byte 0x00, 0xf1, 0x8a, 0x7b, 0x09, 0xf8, 0x83, 0x72 + .byte 0xad, 0x5c, 0x27, 0xd6, 0xa4, 0x55, 0x2e, 0xdf + +/* + * post-SubByte transform + * + * post-lookup for sbox1, sbox4: + * swap_bitendianness( + * camellia_h( + * isom_map_aes_to_camellia( + * swap_bitendianness( + * aes_inverse_affine_transform(in) + * ) + * ) + * ) + * ) + * + * (note: '? 0x6e' inside camellia_h()) + */ +.Lpost_tf_lo_s1: + .byte 0x3c, 0xcc, 0xcf, 0x3f, 0x32, 0xc2, 0xc1, 0x31 + .byte 0xdc, 0x2c, 0x2f, 0xdf, 0xd2, 0x22, 0x21, 0xd1 +.Lpost_tf_hi_s1: + .byte 0x00, 0xf9, 0x86, 0x7f, 0xd7, 0x2e, 0x51, 0xa8 + .byte 0xa4, 0x5d, 0x22, 0xdb, 0x73, 0x8a, 0xf5, 0x0c + +/* + * post-SubByte transform + * + * post-lookup for sbox2: + * swap_bitendianness( + * camellia_h( + * isom_map_aes_to_camellia( + * swap_bitendianness( + * aes_inverse_affine_transform(in) + * ) + * ) + * ) + * ) <<< 1 + * + * (note: '? 0x6e' inside camellia_h()) + */ +.Lpost_tf_lo_s2: + .byte 0x78, 0x99, 0x9f, 0x7e, 0x64, 0x85, 0x83, 0x62 + .byte 0xb9, 0x58, 0x5e, 0xbf, 0xa5, 0x44, 0x42, 0xa3 +.Lpost_tf_hi_s2: + .byte 0x00, 0xf3, 0x0d, 0xfe, 0xaf, 0x5c, 0xa2, 0x51 + .byte 0x49, 0xba, 0x44, 0xb7, 0xe6, 0x15, 0xeb, 0x18 + +/* + * post-SubByte transform + * + * post-lookup for sbox3: + * swap_bitendianness( + * camellia_h( + * isom_map_aes_to_camellia( + * swap_bitendianness( + * aes_inverse_affine_transform(in) + * ) + * ) + * ) + * ) >>> 1 + * + * (note: '? 0x6e' inside camellia_h()) + */ +.Lpost_tf_lo_s3: + .byte 0x1e, 0x66, 0xe7, 0x9f, 0x19, 0x61, 0xe0, 0x98 + .byte 0x6e, 0x16, 0x97, 0xef, 0x69, 0x11, 0x90, 0xe8 +.Lpost_tf_hi_s3: + .byte 0x00, 0xfc, 0x43, 0xbf, 0xeb, 0x17, 0xa8, 0x54 + .byte 0x52, 0xae, 0x11, 0xed, 0xb9, 0x45, 0xfa, 0x06 + +/* For isolating SubBytes from AESENCLAST, inverse shift row */ +.Linv_shift_row: + .byte 0x00, 0x0d, 0x0a, 0x07, 0x04, 0x01, 0x0e, 0x0b + .byte 0x08, 0x05, 0x02, 0x0f, 0x0c, 0x09, 0x06, 0x03 + +.align 4 +/* 4-bit mask */ +.L0f0f0f0f: + .long 0x0f0f0f0f + + +.align 8 +ELF(.type __camellia_enc_blk32, at function;) + +__camellia_enc_blk32: + /* input: + * %rdi: ctx, CTX + * %rax: temporary storage, 512 bytes + * %r8d: 24 for 16 byte key, 32 for larger + * %ymm0..%ymm15: 32 plaintext blocks + * output: + * %ymm0..%ymm15: 32 encrypted blocks, order swapped: + * 7, 8, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8 + */ + CFI_STARTPROC(); + + leaq 8 * 32(%rax), %rcx; + + leaq (-8 * 8)(CTX, %r8, 8), %r8; + + inpack32_post(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, + %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, + %ymm15, %rax, %rcx); + +.align 8 +.Lenc_loop: + enc_rounds32(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, + %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, + %ymm15, %rax, %rcx, 0); + + cmpq %r8, CTX; + je .Lenc_done; + leaq (8 * 8)(CTX), CTX; + + fls32(%rax, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, + %rcx, %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, + %ymm15, + ((key_table) + 0)(CTX), + ((key_table) + 4)(CTX), + ((key_table) + 8)(CTX), + ((key_table) + 12)(CTX)); + jmp .Lenc_loop; + +.align 8 +.Lenc_done: + /* load CD for output */ + vmovdqu 0 * 32(%rcx), %ymm8; + vmovdqu 1 * 32(%rcx), %ymm9; + vmovdqu 2 * 32(%rcx), %ymm10; + vmovdqu 3 * 32(%rcx), %ymm11; + vmovdqu 4 * 32(%rcx), %ymm12; + vmovdqu 5 * 32(%rcx), %ymm13; + vmovdqu 6 * 32(%rcx), %ymm14; + vmovdqu 7 * 32(%rcx), 
%ymm15; + + outunpack32(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, + %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, + %ymm15, ((key_table) + 8 * 8)(%r8), (%rax), 1 * 32(%rax)); + + ret; + CFI_ENDPROC(); +ELF(.size __camellia_enc_blk32,.-__camellia_enc_blk32;) + +.align 8 +ELF(.type __camellia_dec_blk32, at function;) + +__camellia_dec_blk32: + /* input: + * %rdi: ctx, CTX + * %rax: temporary storage, 512 bytes + * %r8d: 24 for 16 byte key, 32 for larger + * %ymm0..%ymm15: 16 encrypted blocks + * output: + * %ymm0..%ymm15: 16 plaintext blocks, order swapped: + * 7, 8, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8 + */ + CFI_STARTPROC(); + + movq %r8, %rcx; + movq CTX, %r8 + leaq (-8 * 8)(CTX, %rcx, 8), CTX; + + leaq 8 * 32(%rax), %rcx; + + inpack32_post(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, + %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, + %ymm15, %rax, %rcx); + +.align 8 +.Ldec_loop: + dec_rounds32(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, + %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, + %ymm15, %rax, %rcx, 0); + + cmpq %r8, CTX; + je .Ldec_done; + + fls32(%rax, %ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, + %rcx, %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, + %ymm15, + ((key_table) + 8)(CTX), + ((key_table) + 12)(CTX), + ((key_table) + 0)(CTX), + ((key_table) + 4)(CTX)); + + leaq (-8 * 8)(CTX), CTX; + jmp .Ldec_loop; + +.align 8 +.Ldec_done: + /* load CD for output */ + vmovdqu 0 * 32(%rcx), %ymm8; + vmovdqu 1 * 32(%rcx), %ymm9; + vmovdqu 2 * 32(%rcx), %ymm10; + vmovdqu 3 * 32(%rcx), %ymm11; + vmovdqu 4 * 32(%rcx), %ymm12; + vmovdqu 5 * 32(%rcx), %ymm13; + vmovdqu 6 * 32(%rcx), %ymm14; + vmovdqu 7 * 32(%rcx), %ymm15; + + outunpack32(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, + %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, + %ymm15, (key_table)(CTX), (%rax), 1 * 32(%rax)); + + ret; + CFI_ENDPROC(); +ELF(.size __camellia_dec_blk32,.-__camellia_dec_blk32;) + +#define inc_le128(x, minus_one, tmp) \ + vpcmpeqq minus_one, x, tmp; \ + vpsubq minus_one, x, x; \ + vpslldq $8, tmp, tmp; \ + vpsubq tmp, x, x; + +.align 8 +.globl FUNC_NAME(ctr_enc) +ELF(.type FUNC_NAME(ctr_enc), at function;) + +FUNC_NAME(ctr_enc): + /* input: + * %rdi: ctx, CTX + * %rsi: dst (32 blocks) + * %rdx: src (32 blocks) + * %rcx: iv (big endian, 128bit) + */ + CFI_STARTPROC(); + + pushq %rbp; + CFI_PUSH(%rbp); + movq %rsp, %rbp; + CFI_DEF_CFA_REGISTER(%rbp); + + movq 8(%rcx), %r11; + bswapq %r11; + + vzeroupper; + + cmpl $128, key_bitlength(CTX); + movl $32, %r8d; + movl $24, %eax; + cmovel %eax, %r8d; /* max */ + + subq $(16 * 32), %rsp; + andq $~63, %rsp; + movq %rsp, %rax; + + vpcmpeqd %ymm15, %ymm15, %ymm15; + vpsrldq $8, %ymm15, %ymm15; /* ab: -1:0 ; cd: -1:0 */ + + /* load IV and byteswap */ + vmovdqu (%rcx), %xmm0; + vpshufb .Lbswap128_mask rRIP, %xmm0, %xmm0; + vmovdqa %xmm0, %xmm1; + inc_le128(%xmm0, %xmm15, %xmm14); + vbroadcasti128 .Lbswap128_mask rRIP, %ymm14; + vinserti128 $1, %xmm0, %ymm1, %ymm0; + vpshufb %ymm14, %ymm0, %ymm13; + vmovdqu %ymm13, 15 * 32(%rax); + + /* check need for handling 64-bit overflow and carry */ + cmpq $(0xffffffffffffffff - 32), %r11; + ja .Lload_ctr_carry; + + /* construct IVs */ + vpaddq %ymm15, %ymm15, %ymm15; /* ab: -2:0 ; cd: -2:0 */ + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb %ymm14, %ymm0, %ymm13; + vmovdqu %ymm13, 14 * 32(%rax); + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb %ymm14, %ymm0, %ymm13; + vmovdqu %ymm13, 13 * 32(%rax); + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb 
%ymm14, %ymm0, %ymm12; + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb %ymm14, %ymm0, %ymm11; + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb %ymm14, %ymm0, %ymm10; + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb %ymm14, %ymm0, %ymm9; + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb %ymm14, %ymm0, %ymm8; + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb %ymm14, %ymm0, %ymm7; + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb %ymm14, %ymm0, %ymm6; + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb %ymm14, %ymm0, %ymm5; + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb %ymm14, %ymm0, %ymm4; + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb %ymm14, %ymm0, %ymm3; + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb %ymm14, %ymm0, %ymm2; + vpsubq %ymm15, %ymm0, %ymm0; + vpshufb %ymm14, %ymm0, %ymm1; + vpsubq %ymm15, %ymm0, %ymm0; /* +30 ; +31 */ + vpsubq %xmm15, %xmm0, %xmm13; /* +32 */ + vpshufb %ymm14, %ymm0, %ymm0; + vpshufb %xmm14, %xmm13, %xmm13; + vmovdqu %xmm13, (%rcx); + + jmp .Lload_ctr_done; + +.align 4 +.Lload_ctr_carry: + /* construct IVs */ + inc_le128(%ymm0, %ymm15, %ymm13); /* ab: le1 ; cd: le2 */ + inc_le128(%ymm0, %ymm15, %ymm13); /* ab: le2 ; cd: le3 */ + vpshufb %ymm14, %ymm0, %ymm13; + vmovdqu %ymm13, 14 * 32(%rax); + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vpshufb %ymm14, %ymm0, %ymm13; + vmovdqu %ymm13, 13 * 32(%rax); + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vpshufb %ymm14, %ymm0, %ymm12; + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vpshufb %ymm14, %ymm0, %ymm11; + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vpshufb %ymm14, %ymm0, %ymm10; + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vpshufb %ymm14, %ymm0, %ymm9; + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vpshufb %ymm14, %ymm0, %ymm8; + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vpshufb %ymm14, %ymm0, %ymm7; + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vpshufb %ymm14, %ymm0, %ymm6; + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vpshufb %ymm14, %ymm0, %ymm5; + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vpshufb %ymm14, %ymm0, %ymm4; + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vpshufb %ymm14, %ymm0, %ymm3; + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vpshufb %ymm14, %ymm0, %ymm2; + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vpshufb %ymm14, %ymm0, %ymm1; + inc_le128(%ymm0, %ymm15, %ymm13); + inc_le128(%ymm0, %ymm15, %ymm13); + vextracti128 $1, %ymm0, %xmm13; + vpshufb %ymm14, %ymm0, %ymm0; + inc_le128(%xmm13, %xmm15, %xmm14); + vpshufb .Lbswap128_mask rRIP, %xmm13, %xmm13; + vmovdqu %xmm13, (%rcx); + +.align 4 +.Lload_ctr_done: + /* inpack16_pre: */ + vpbroadcastq (key_table)(CTX), %ymm15; + vpshufb .Lpack_bswap rRIP, %ymm15, %ymm15; + vpxor %ymm0, %ymm15, %ymm0; + vpxor %ymm1, %ymm15, %ymm1; + vpxor %ymm2, %ymm15, %ymm2; + vpxor %ymm3, %ymm15, %ymm3; + vpxor %ymm4, %ymm15, %ymm4; + vpxor %ymm5, %ymm15, %ymm5; + vpxor %ymm6, %ymm15, %ymm6; + vpxor %ymm7, %ymm15, %ymm7; + vpxor %ymm8, %ymm15, %ymm8; + vpxor %ymm9, %ymm15, %ymm9; + vpxor %ymm10, %ymm15, %ymm10; + vpxor %ymm11, %ymm15, %ymm11; + vpxor %ymm12, %ymm15, %ymm12; + vpxor 13 * 32(%rax), %ymm15, %ymm13; + vpxor 14 * 32(%rax), %ymm15, %ymm14; + vpxor 15 * 32(%rax), %ymm15, %ymm15; + + call __camellia_enc_blk32; + + vpxor 0 * 32(%rdx), %ymm7, %ymm7; + vpxor 1 * 32(%rdx), %ymm6, 
%ymm6; + vpxor 2 * 32(%rdx), %ymm5, %ymm5; + vpxor 3 * 32(%rdx), %ymm4, %ymm4; + vpxor 4 * 32(%rdx), %ymm3, %ymm3; + vpxor 5 * 32(%rdx), %ymm2, %ymm2; + vpxor 6 * 32(%rdx), %ymm1, %ymm1; + vpxor 7 * 32(%rdx), %ymm0, %ymm0; + vpxor 8 * 32(%rdx), %ymm15, %ymm15; + vpxor 9 * 32(%rdx), %ymm14, %ymm14; + vpxor 10 * 32(%rdx), %ymm13, %ymm13; + vpxor 11 * 32(%rdx), %ymm12, %ymm12; + vpxor 12 * 32(%rdx), %ymm11, %ymm11; + vpxor 13 * 32(%rdx), %ymm10, %ymm10; + vpxor 14 * 32(%rdx), %ymm9, %ymm9; + vpxor 15 * 32(%rdx), %ymm8, %ymm8; + leaq 32 * 16(%rdx), %rdx; + + write_output(%ymm7, %ymm6, %ymm5, %ymm4, %ymm3, %ymm2, %ymm1, %ymm0, + %ymm15, %ymm14, %ymm13, %ymm12, %ymm11, %ymm10, %ymm9, + %ymm8, %rsi); + + vzeroall; + + leave; + CFI_LEAVE(); + ret; + CFI_ENDPROC(); +ELF(.size FUNC_NAME(ctr_enc),.-FUNC_NAME(ctr_enc);) + +.align 8 +.globl FUNC_NAME(cbc_dec) +ELF(.type FUNC_NAME(cbc_dec), at function;) + +FUNC_NAME(cbc_dec): + /* input: + * %rdi: ctx, CTX + * %rsi: dst (32 blocks) + * %rdx: src (32 blocks) + * %rcx: iv + */ + CFI_STARTPROC(); + + pushq %rbp; + CFI_PUSH(%rbp); + movq %rsp, %rbp; + CFI_DEF_CFA_REGISTER(%rbp); + + vzeroupper; + + movq %rcx, %r9; + + cmpl $128, key_bitlength(CTX); + movl $32, %r8d; + movl $24, %eax; + cmovel %eax, %r8d; /* max */ + + subq $(16 * 32), %rsp; + andq $~63, %rsp; + movq %rsp, %rax; + + inpack32_pre(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, + %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, + %ymm15, %rdx, (key_table)(CTX, %r8, 8)); + + call __camellia_dec_blk32; + + /* XOR output with IV */ + vmovdqu %ymm8, (%rax); + vmovdqu (%r9), %xmm8; + vinserti128 $1, (%rdx), %ymm8, %ymm8; + vpxor %ymm8, %ymm7, %ymm7; + vmovdqu (%rax), %ymm8; + vpxor (0 * 32 + 16)(%rdx), %ymm6, %ymm6; + vpxor (1 * 32 + 16)(%rdx), %ymm5, %ymm5; + vpxor (2 * 32 + 16)(%rdx), %ymm4, %ymm4; + vpxor (3 * 32 + 16)(%rdx), %ymm3, %ymm3; + vpxor (4 * 32 + 16)(%rdx), %ymm2, %ymm2; + vpxor (5 * 32 + 16)(%rdx), %ymm1, %ymm1; + vpxor (6 * 32 + 16)(%rdx), %ymm0, %ymm0; + vpxor (7 * 32 + 16)(%rdx), %ymm15, %ymm15; + vpxor (8 * 32 + 16)(%rdx), %ymm14, %ymm14; + vpxor (9 * 32 + 16)(%rdx), %ymm13, %ymm13; + vpxor (10 * 32 + 16)(%rdx), %ymm12, %ymm12; + vpxor (11 * 32 + 16)(%rdx), %ymm11, %ymm11; + vpxor (12 * 32 + 16)(%rdx), %ymm10, %ymm10; + vpxor (13 * 32 + 16)(%rdx), %ymm9, %ymm9; + vpxor (14 * 32 + 16)(%rdx), %ymm8, %ymm8; + movq (15 * 32 + 16 + 0)(%rdx), %rax; + movq (15 * 32 + 16 + 8)(%rdx), %rcx; + + write_output(%ymm7, %ymm6, %ymm5, %ymm4, %ymm3, %ymm2, %ymm1, %ymm0, + %ymm15, %ymm14, %ymm13, %ymm12, %ymm11, %ymm10, %ymm9, + %ymm8, %rsi); + + /* store new IV */ + movq %rax, (0)(%r9); + movq %rcx, (8)(%r9); + + vzeroall; + + leave; + CFI_LEAVE(); + ret; + CFI_ENDPROC(); +ELF(.size FUNC_NAME(cbc_dec),.-FUNC_NAME(cbc_dec);) + +.align 8 +.globl FUNC_NAME(cfb_dec) +ELF(.type FUNC_NAME(cfb_dec), at function;) + +FUNC_NAME(cfb_dec): + /* input: + * %rdi: ctx, CTX + * %rsi: dst (32 blocks) + * %rdx: src (32 blocks) + * %rcx: iv + */ + CFI_STARTPROC(); + + pushq %rbp; + CFI_PUSH(%rbp); + movq %rsp, %rbp; + CFI_DEF_CFA_REGISTER(%rbp); + + vzeroupper; + + cmpl $128, key_bitlength(CTX); + movl $32, %r8d; + movl $24, %eax; + cmovel %eax, %r8d; /* max */ + + subq $(16 * 32), %rsp; + andq $~63, %rsp; + movq %rsp, %rax; + + /* inpack16_pre: */ + vpbroadcastq (key_table)(CTX), %ymm0; + vpshufb .Lpack_bswap rRIP, %ymm0, %ymm0; + vmovdqu (%rcx), %xmm15; + vinserti128 $1, (%rdx), %ymm15, %ymm15; + vpxor %ymm15, %ymm0, %ymm15; + vmovdqu (15 * 32 + 16)(%rdx), %xmm1; + vmovdqu %xmm1, (%rcx); /* store 
new IV */ + vpxor (0 * 32 + 16)(%rdx), %ymm0, %ymm14; + vpxor (1 * 32 + 16)(%rdx), %ymm0, %ymm13; + vpxor (2 * 32 + 16)(%rdx), %ymm0, %ymm12; + vpxor (3 * 32 + 16)(%rdx), %ymm0, %ymm11; + vpxor (4 * 32 + 16)(%rdx), %ymm0, %ymm10; + vpxor (5 * 32 + 16)(%rdx), %ymm0, %ymm9; + vpxor (6 * 32 + 16)(%rdx), %ymm0, %ymm8; + vpxor (7 * 32 + 16)(%rdx), %ymm0, %ymm7; + vpxor (8 * 32 + 16)(%rdx), %ymm0, %ymm6; + vpxor (9 * 32 + 16)(%rdx), %ymm0, %ymm5; + vpxor (10 * 32 + 16)(%rdx), %ymm0, %ymm4; + vpxor (11 * 32 + 16)(%rdx), %ymm0, %ymm3; + vpxor (12 * 32 + 16)(%rdx), %ymm0, %ymm2; + vpxor (13 * 32 + 16)(%rdx), %ymm0, %ymm1; + vpxor (14 * 32 + 16)(%rdx), %ymm0, %ymm0; + + call __camellia_enc_blk32; + + vpxor 0 * 32(%rdx), %ymm7, %ymm7; + vpxor 1 * 32(%rdx), %ymm6, %ymm6; + vpxor 2 * 32(%rdx), %ymm5, %ymm5; + vpxor 3 * 32(%rdx), %ymm4, %ymm4; + vpxor 4 * 32(%rdx), %ymm3, %ymm3; + vpxor 5 * 32(%rdx), %ymm2, %ymm2; + vpxor 6 * 32(%rdx), %ymm1, %ymm1; + vpxor 7 * 32(%rdx), %ymm0, %ymm0; + vpxor 8 * 32(%rdx), %ymm15, %ymm15; + vpxor 9 * 32(%rdx), %ymm14, %ymm14; + vpxor 10 * 32(%rdx), %ymm13, %ymm13; + vpxor 11 * 32(%rdx), %ymm12, %ymm12; + vpxor 12 * 32(%rdx), %ymm11, %ymm11; + vpxor 13 * 32(%rdx), %ymm10, %ymm10; + vpxor 14 * 32(%rdx), %ymm9, %ymm9; + vpxor 15 * 32(%rdx), %ymm8, %ymm8; + + write_output(%ymm7, %ymm6, %ymm5, %ymm4, %ymm3, %ymm2, %ymm1, %ymm0, + %ymm15, %ymm14, %ymm13, %ymm12, %ymm11, %ymm10, %ymm9, + %ymm8, %rsi); + + vzeroall; + + leave; + CFI_LEAVE(); + ret; + CFI_ENDPROC(); +ELF(.size FUNC_NAME(cfb_dec),.-FUNC_NAME(cfb_dec);) + +.align 8 +.globl FUNC_NAME(ocb_enc) +ELF(.type FUNC_NAME(ocb_enc), at function;) + +FUNC_NAME(ocb_enc): + /* input: + * %rdi: ctx, CTX + * %rsi: dst (32 blocks) + * %rdx: src (32 blocks) + * %rcx: offset + * %r8 : checksum + * %r9 : L pointers (void *L[32]) + */ + CFI_STARTPROC(); + + pushq %rbp; + CFI_PUSH(%rbp); + movq %rsp, %rbp; + CFI_DEF_CFA_REGISTER(%rbp); + + vzeroupper; + + subq $(16 * 32 + 4 * 8), %rsp; + andq $~63, %rsp; + movq %rsp, %rax; + + movq %r10, (16 * 32 + 0 * 8)(%rsp); + movq %r11, (16 * 32 + 1 * 8)(%rsp); + movq %r12, (16 * 32 + 2 * 8)(%rsp); + movq %r13, (16 * 32 + 3 * 8)(%rsp); + CFI_REG_ON_STACK(r10, 16 * 32 + 0 * 8); + CFI_REG_ON_STACK(r11, 16 * 32 + 1 * 8); + CFI_REG_ON_STACK(r12, 16 * 32 + 2 * 8); + CFI_REG_ON_STACK(r13, 16 * 32 + 3 * 8); + + vmovdqu (%rcx), %xmm14; + vmovdqu (%r8), %xmm13; + + /* Offset_i = Offset_{i-1} xor L_{ntz(i)} */ + /* Checksum_i = Checksum_{i-1} xor P_i */ + /* C_i = Offset_i xor ENCIPHER(K, P_i xor Offset_i) */ + +#define OCB_INPUT(n, l0reg, l1reg, yreg) \ + vmovdqu (n * 32)(%rdx), yreg; \ + vpxor (l0reg), %xmm14, %xmm15; \ + vpxor (l1reg), %xmm15, %xmm14; \ + vinserti128 $1, %xmm14, %ymm15, %ymm15; \ + vpxor yreg, %ymm13, %ymm13; \ + vpxor yreg, %ymm15, yreg; \ + vmovdqu %ymm15, (n * 32)(%rsi); + + movq (0 * 8)(%r9), %r10; + movq (1 * 8)(%r9), %r11; + movq (2 * 8)(%r9), %r12; + movq (3 * 8)(%r9), %r13; + OCB_INPUT(0, %r10, %r11, %ymm0); + vmovdqu %ymm0, (15 * 32)(%rax); + OCB_INPUT(1, %r12, %r13, %ymm0); + vmovdqu %ymm0, (14 * 32)(%rax); + movq (4 * 8)(%r9), %r10; + movq (5 * 8)(%r9), %r11; + movq (6 * 8)(%r9), %r12; + movq (7 * 8)(%r9), %r13; + OCB_INPUT(2, %r10, %r11, %ymm0); + vmovdqu %ymm0, (13 * 32)(%rax); + OCB_INPUT(3, %r12, %r13, %ymm12); + movq (8 * 8)(%r9), %r10; + movq (9 * 8)(%r9), %r11; + movq (10 * 8)(%r9), %r12; + movq (11 * 8)(%r9), %r13; + OCB_INPUT(4, %r10, %r11, %ymm11); + OCB_INPUT(5, %r12, %r13, %ymm10); + movq (12 * 8)(%r9), %r10; + movq (13 * 8)(%r9), %r11; + movq (14 * 8)(%r9), %r12; 
+ movq (15 * 8)(%r9), %r13; + OCB_INPUT(6, %r10, %r11, %ymm9); + OCB_INPUT(7, %r12, %r13, %ymm8); + movq (16 * 8)(%r9), %r10; + movq (17 * 8)(%r9), %r11; + movq (18 * 8)(%r9), %r12; + movq (19 * 8)(%r9), %r13; + OCB_INPUT(8, %r10, %r11, %ymm7); + OCB_INPUT(9, %r12, %r13, %ymm6); + movq (20 * 8)(%r9), %r10; + movq (21 * 8)(%r9), %r11; + movq (22 * 8)(%r9), %r12; + movq (23 * 8)(%r9), %r13; + OCB_INPUT(10, %r10, %r11, %ymm5); + OCB_INPUT(11, %r12, %r13, %ymm4); + movq (24 * 8)(%r9), %r10; + movq (25 * 8)(%r9), %r11; + movq (26 * 8)(%r9), %r12; + movq (27 * 8)(%r9), %r13; + OCB_INPUT(12, %r10, %r11, %ymm3); + OCB_INPUT(13, %r12, %r13, %ymm2); + movq (28 * 8)(%r9), %r10; + movq (29 * 8)(%r9), %r11; + movq (30 * 8)(%r9), %r12; + movq (31 * 8)(%r9), %r13; + OCB_INPUT(14, %r10, %r11, %ymm1); + OCB_INPUT(15, %r12, %r13, %ymm0); +#undef OCB_INPUT + + vextracti128 $1, %ymm13, %xmm15; + vmovdqu %xmm14, (%rcx); + vpxor %xmm13, %xmm15, %xmm15; + vmovdqu %xmm15, (%r8); + + cmpl $128, key_bitlength(CTX); + movl $32, %r8d; + movl $24, %r10d; + cmovel %r10d, %r8d; /* max */ + + /* inpack16_pre: */ + vpbroadcastq (key_table)(CTX), %ymm15; + vpshufb .Lpack_bswap rRIP, %ymm15, %ymm15; + vpxor %ymm0, %ymm15, %ymm0; + vpxor %ymm1, %ymm15, %ymm1; + vpxor %ymm2, %ymm15, %ymm2; + vpxor %ymm3, %ymm15, %ymm3; + vpxor %ymm4, %ymm15, %ymm4; + vpxor %ymm5, %ymm15, %ymm5; + vpxor %ymm6, %ymm15, %ymm6; + vpxor %ymm7, %ymm15, %ymm7; + vpxor %ymm8, %ymm15, %ymm8; + vpxor %ymm9, %ymm15, %ymm9; + vpxor %ymm10, %ymm15, %ymm10; + vpxor %ymm11, %ymm15, %ymm11; + vpxor %ymm12, %ymm15, %ymm12; + vpxor 13 * 32(%rax), %ymm15, %ymm13; + vpxor 14 * 32(%rax), %ymm15, %ymm14; + vpxor 15 * 32(%rax), %ymm15, %ymm15; + + call __camellia_enc_blk32; + + vpxor 0 * 32(%rsi), %ymm7, %ymm7; + vpxor 1 * 32(%rsi), %ymm6, %ymm6; + vpxor 2 * 32(%rsi), %ymm5, %ymm5; + vpxor 3 * 32(%rsi), %ymm4, %ymm4; + vpxor 4 * 32(%rsi), %ymm3, %ymm3; + vpxor 5 * 32(%rsi), %ymm2, %ymm2; + vpxor 6 * 32(%rsi), %ymm1, %ymm1; + vpxor 7 * 32(%rsi), %ymm0, %ymm0; + vpxor 8 * 32(%rsi), %ymm15, %ymm15; + vpxor 9 * 32(%rsi), %ymm14, %ymm14; + vpxor 10 * 32(%rsi), %ymm13, %ymm13; + vpxor 11 * 32(%rsi), %ymm12, %ymm12; + vpxor 12 * 32(%rsi), %ymm11, %ymm11; + vpxor 13 * 32(%rsi), %ymm10, %ymm10; + vpxor 14 * 32(%rsi), %ymm9, %ymm9; + vpxor 15 * 32(%rsi), %ymm8, %ymm8; + + write_output(%ymm7, %ymm6, %ymm5, %ymm4, %ymm3, %ymm2, %ymm1, %ymm0, + %ymm15, %ymm14, %ymm13, %ymm12, %ymm11, %ymm10, %ymm9, + %ymm8, %rsi); + + vzeroall; + + movq (16 * 32 + 0 * 8)(%rsp), %r10; + movq (16 * 32 + 1 * 8)(%rsp), %r11; + movq (16 * 32 + 2 * 8)(%rsp), %r12; + movq (16 * 32 + 3 * 8)(%rsp), %r13; + CFI_RESTORE(%r10); + CFI_RESTORE(%r11); + CFI_RESTORE(%r12); + CFI_RESTORE(%r13); + + leave; + CFI_LEAVE(); + ret; + CFI_ENDPROC(); +ELF(.size FUNC_NAME(ocb_enc),.-FUNC_NAME(ocb_enc);) + +.align 8 +.globl FUNC_NAME(ocb_dec) +ELF(.type FUNC_NAME(ocb_dec), at function;) + +FUNC_NAME(ocb_dec): + /* input: + * %rdi: ctx, CTX + * %rsi: dst (32 blocks) + * %rdx: src (32 blocks) + * %rcx: offset + * %r8 : checksum + * %r9 : L pointers (void *L[32]) + */ + CFI_STARTPROC(); + + pushq %rbp; + CFI_PUSH(%rbp); + movq %rsp, %rbp; + CFI_DEF_CFA_REGISTER(%rbp); + + vzeroupper; + + subq $(16 * 32 + 4 * 8), %rsp; + andq $~63, %rsp; + movq %rsp, %rax; + + movq %r10, (16 * 32 + 0 * 8)(%rsp); + movq %r11, (16 * 32 + 1 * 8)(%rsp); + movq %r12, (16 * 32 + 2 * 8)(%rsp); + movq %r13, (16 * 32 + 3 * 8)(%rsp); + CFI_REG_ON_STACK(r10, 16 * 32 + 0 * 8); + CFI_REG_ON_STACK(r11, 16 * 32 + 1 * 8); + CFI_REG_ON_STACK(r12, 16 * 32 + 
2 * 8); + CFI_REG_ON_STACK(r13, 16 * 32 + 3 * 8); + + vmovdqu (%rcx), %xmm14; + + /* Offset_i = Offset_{i-1} xor L_{ntz(i)} */ + /* P_i = Offset_i xor DECIPHER(K, C_i xor Offset_i) */ + +#define OCB_INPUT(n, l0reg, l1reg, yreg) \ + vmovdqu (n * 32)(%rdx), yreg; \ + vpxor (l0reg), %xmm14, %xmm15; \ + vpxor (l1reg), %xmm15, %xmm14; \ + vinserti128 $1, %xmm14, %ymm15, %ymm15; \ + vpxor yreg, %ymm15, yreg; \ + vmovdqu %ymm15, (n * 32)(%rsi); + + movq (0 * 8)(%r9), %r10; + movq (1 * 8)(%r9), %r11; + movq (2 * 8)(%r9), %r12; + movq (3 * 8)(%r9), %r13; + OCB_INPUT(0, %r10, %r11, %ymm0); + vmovdqu %ymm0, (15 * 32)(%rax); + OCB_INPUT(1, %r12, %r13, %ymm0); + vmovdqu %ymm0, (14 * 32)(%rax); + movq (4 * 8)(%r9), %r10; + movq (5 * 8)(%r9), %r11; + movq (6 * 8)(%r9), %r12; + movq (7 * 8)(%r9), %r13; + OCB_INPUT(2, %r10, %r11, %ymm13); + OCB_INPUT(3, %r12, %r13, %ymm12); + movq (8 * 8)(%r9), %r10; + movq (9 * 8)(%r9), %r11; + movq (10 * 8)(%r9), %r12; + movq (11 * 8)(%r9), %r13; + OCB_INPUT(4, %r10, %r11, %ymm11); + OCB_INPUT(5, %r12, %r13, %ymm10); + movq (12 * 8)(%r9), %r10; + movq (13 * 8)(%r9), %r11; + movq (14 * 8)(%r9), %r12; + movq (15 * 8)(%r9), %r13; + OCB_INPUT(6, %r10, %r11, %ymm9); + OCB_INPUT(7, %r12, %r13, %ymm8); + movq (16 * 8)(%r9), %r10; + movq (17 * 8)(%r9), %r11; + movq (18 * 8)(%r9), %r12; + movq (19 * 8)(%r9), %r13; + OCB_INPUT(8, %r10, %r11, %ymm7); + OCB_INPUT(9, %r12, %r13, %ymm6); + movq (20 * 8)(%r9), %r10; + movq (21 * 8)(%r9), %r11; + movq (22 * 8)(%r9), %r12; + movq (23 * 8)(%r9), %r13; + OCB_INPUT(10, %r10, %r11, %ymm5); + OCB_INPUT(11, %r12, %r13, %ymm4); + movq (24 * 8)(%r9), %r10; + movq (25 * 8)(%r9), %r11; + movq (26 * 8)(%r9), %r12; + movq (27 * 8)(%r9), %r13; + OCB_INPUT(12, %r10, %r11, %ymm3); + OCB_INPUT(13, %r12, %r13, %ymm2); + movq (28 * 8)(%r9), %r10; + movq (29 * 8)(%r9), %r11; + movq (30 * 8)(%r9), %r12; + movq (31 * 8)(%r9), %r13; + OCB_INPUT(14, %r10, %r11, %ymm1); + OCB_INPUT(15, %r12, %r13, %ymm0); +#undef OCB_INPUT + + vmovdqu %xmm14, (%rcx); + + movq %r8, %r10; + + cmpl $128, key_bitlength(CTX); + movl $32, %r8d; + movl $24, %r9d; + cmovel %r9d, %r8d; /* max */ + + /* inpack16_pre: */ + vpbroadcastq (key_table)(CTX, %r8, 8), %ymm15; + vpshufb .Lpack_bswap rRIP, %ymm15, %ymm15; + vpxor %ymm0, %ymm15, %ymm0; + vpxor %ymm1, %ymm15, %ymm1; + vpxor %ymm2, %ymm15, %ymm2; + vpxor %ymm3, %ymm15, %ymm3; + vpxor %ymm4, %ymm15, %ymm4; + vpxor %ymm5, %ymm15, %ymm5; + vpxor %ymm6, %ymm15, %ymm6; + vpxor %ymm7, %ymm15, %ymm7; + vpxor %ymm8, %ymm15, %ymm8; + vpxor %ymm9, %ymm15, %ymm9; + vpxor %ymm10, %ymm15, %ymm10; + vpxor %ymm11, %ymm15, %ymm11; + vpxor %ymm12, %ymm15, %ymm12; + vpxor %ymm13, %ymm15, %ymm13; + vpxor 14 * 32(%rax), %ymm15, %ymm14; + vpxor 15 * 32(%rax), %ymm15, %ymm15; + + call __camellia_dec_blk32; + + vpxor 0 * 32(%rsi), %ymm7, %ymm7; + vpxor 1 * 32(%rsi), %ymm6, %ymm6; + vpxor 2 * 32(%rsi), %ymm5, %ymm5; + vpxor 3 * 32(%rsi), %ymm4, %ymm4; + vpxor 4 * 32(%rsi), %ymm3, %ymm3; + vpxor 5 * 32(%rsi), %ymm2, %ymm2; + vpxor 6 * 32(%rsi), %ymm1, %ymm1; + vpxor 7 * 32(%rsi), %ymm0, %ymm0; + vmovdqu %ymm7, (7 * 32)(%rax); + vmovdqu %ymm6, (6 * 32)(%rax); + vpxor 8 * 32(%rsi), %ymm15, %ymm15; + vpxor 9 * 32(%rsi), %ymm14, %ymm14; + vpxor 10 * 32(%rsi), %ymm13, %ymm13; + vpxor 11 * 32(%rsi), %ymm12, %ymm12; + vpxor 12 * 32(%rsi), %ymm11, %ymm11; + vpxor 13 * 32(%rsi), %ymm10, %ymm10; + vpxor 14 * 32(%rsi), %ymm9, %ymm9; + vpxor 15 * 32(%rsi), %ymm8, %ymm8; + + /* Checksum_i = Checksum_{i-1} xor P_i */ + + vpxor %ymm5, %ymm7, %ymm7; + vpxor %ymm4, %ymm6, 
%ymm6; + vpxor %ymm3, %ymm7, %ymm7; + vpxor %ymm2, %ymm6, %ymm6; + vpxor %ymm1, %ymm7, %ymm7; + vpxor %ymm0, %ymm6, %ymm6; + vpxor %ymm15, %ymm7, %ymm7; + vpxor %ymm14, %ymm6, %ymm6; + vpxor %ymm13, %ymm7, %ymm7; + vpxor %ymm12, %ymm6, %ymm6; + vpxor %ymm11, %ymm7, %ymm7; + vpxor %ymm10, %ymm6, %ymm6; + vpxor %ymm9, %ymm7, %ymm7; + vpxor %ymm8, %ymm6, %ymm6; + vpxor %ymm7, %ymm6, %ymm7; + + vextracti128 $1, %ymm7, %xmm6; + vpxor %xmm6, %xmm7, %xmm7; + vpxor (%r10), %xmm7, %xmm7; + vmovdqu %xmm7, (%r10); + + vmovdqu 7 * 32(%rax), %ymm7; + vmovdqu 6 * 32(%rax), %ymm6; + + write_output(%ymm7, %ymm6, %ymm5, %ymm4, %ymm3, %ymm2, %ymm1, %ymm0, + %ymm15, %ymm14, %ymm13, %ymm12, %ymm11, %ymm10, %ymm9, + %ymm8, %rsi); + + vzeroall; + + movq (16 * 32 + 0 * 8)(%rsp), %r10; + movq (16 * 32 + 1 * 8)(%rsp), %r11; + movq (16 * 32 + 2 * 8)(%rsp), %r12; + movq (16 * 32 + 3 * 8)(%rsp), %r13; + CFI_RESTORE(%r10); + CFI_RESTORE(%r11); + CFI_RESTORE(%r12); + CFI_RESTORE(%r13); + + leave; + CFI_LEAVE(); + ret; + CFI_ENDPROC(); +ELF(.size FUNC_NAME(ocb_dec),.-FUNC_NAME(ocb_dec);) + +.align 8 +.globl FUNC_NAME(ocb_auth) +ELF(.type FUNC_NAME(ocb_auth), at function;) + +FUNC_NAME(ocb_auth): + /* input: + * %rdi: ctx, CTX + * %rsi: abuf (16 blocks) + * %rdx: offset + * %rcx: checksum + * %r8 : L pointers (void *L[16]) + */ + CFI_STARTPROC(); + + pushq %rbp; + CFI_PUSH(%rbp); + movq %rsp, %rbp; + CFI_DEF_CFA_REGISTER(%rbp); + + vzeroupper; + + subq $(16 * 32 + 4 * 8), %rsp; + andq $~63, %rsp; + movq %rsp, %rax; + + movq %r10, (16 * 32 + 0 * 8)(%rsp); + movq %r11, (16 * 32 + 1 * 8)(%rsp); + movq %r12, (16 * 32 + 2 * 8)(%rsp); + movq %r13, (16 * 32 + 3 * 8)(%rsp); + CFI_REG_ON_STACK(r10, 16 * 32 + 0 * 8); + CFI_REG_ON_STACK(r11, 16 * 32 + 1 * 8); + CFI_REG_ON_STACK(r12, 16 * 32 + 2 * 8); + CFI_REG_ON_STACK(r13, 16 * 32 + 3 * 8); + + vmovdqu (%rdx), %xmm14; + + /* Offset_i = Offset_{i-1} xor L_{ntz(i)} */ + /* Checksum_i = Checksum_{i-1} xor P_i */ + /* C_i = Offset_i xor ENCIPHER(K, P_i xor Offset_i) */ + +#define OCB_INPUT(n, l0reg, l1reg, yreg) \ + vmovdqu (n * 32)(%rsi), yreg; \ + vpxor (l0reg), %xmm14, %xmm15; \ + vpxor (l1reg), %xmm15, %xmm14; \ + vinserti128 $1, %xmm14, %ymm15, %ymm15; \ + vpxor yreg, %ymm15, yreg; + + movq (0 * 8)(%r8), %r10; + movq (1 * 8)(%r8), %r11; + movq (2 * 8)(%r8), %r12; + movq (3 * 8)(%r8), %r13; + OCB_INPUT(0, %r10, %r11, %ymm0); + vmovdqu %ymm0, (15 * 32)(%rax); + OCB_INPUT(1, %r12, %r13, %ymm0); + vmovdqu %ymm0, (14 * 32)(%rax); + movq (4 * 8)(%r8), %r10; + movq (5 * 8)(%r8), %r11; + movq (6 * 8)(%r8), %r12; + movq (7 * 8)(%r8), %r13; + OCB_INPUT(2, %r10, %r11, %ymm13); + OCB_INPUT(3, %r12, %r13, %ymm12); + movq (8 * 8)(%r8), %r10; + movq (9 * 8)(%r8), %r11; + movq (10 * 8)(%r8), %r12; + movq (11 * 8)(%r8), %r13; + OCB_INPUT(4, %r10, %r11, %ymm11); + OCB_INPUT(5, %r12, %r13, %ymm10); + movq (12 * 8)(%r8), %r10; + movq (13 * 8)(%r8), %r11; + movq (14 * 8)(%r8), %r12; + movq (15 * 8)(%r8), %r13; + OCB_INPUT(6, %r10, %r11, %ymm9); + OCB_INPUT(7, %r12, %r13, %ymm8); + movq (16 * 8)(%r8), %r10; + movq (17 * 8)(%r8), %r11; + movq (18 * 8)(%r8), %r12; + movq (19 * 8)(%r8), %r13; + OCB_INPUT(8, %r10, %r11, %ymm7); + OCB_INPUT(9, %r12, %r13, %ymm6); + movq (20 * 8)(%r8), %r10; + movq (21 * 8)(%r8), %r11; + movq (22 * 8)(%r8), %r12; + movq (23 * 8)(%r8), %r13; + OCB_INPUT(10, %r10, %r11, %ymm5); + OCB_INPUT(11, %r12, %r13, %ymm4); + movq (24 * 8)(%r8), %r10; + movq (25 * 8)(%r8), %r11; + movq (26 * 8)(%r8), %r12; + movq (27 * 8)(%r8), %r13; + OCB_INPUT(12, %r10, %r11, %ymm3); + OCB_INPUT(13, 
%r12, %r13, %ymm2); + movq (28 * 8)(%r8), %r10; + movq (29 * 8)(%r8), %r11; + movq (30 * 8)(%r8), %r12; + movq (31 * 8)(%r8), %r13; + OCB_INPUT(14, %r10, %r11, %ymm1); + OCB_INPUT(15, %r12, %r13, %ymm0); +#undef OCB_INPUT + + vmovdqu %xmm14, (%rdx); + + cmpl $128, key_bitlength(CTX); + movl $32, %r8d; + movl $24, %r10d; + cmovel %r10d, %r8d; /* max */ + + movq %rcx, %r10; + + /* inpack16_pre: */ + vpbroadcastq (key_table)(CTX), %ymm15; + vpshufb .Lpack_bswap rRIP, %ymm15, %ymm15; + vpxor %ymm0, %ymm15, %ymm0; + vpxor %ymm1, %ymm15, %ymm1; + vpxor %ymm2, %ymm15, %ymm2; + vpxor %ymm3, %ymm15, %ymm3; + vpxor %ymm4, %ymm15, %ymm4; + vpxor %ymm5, %ymm15, %ymm5; + vpxor %ymm6, %ymm15, %ymm6; + vpxor %ymm7, %ymm15, %ymm7; + vpxor %ymm8, %ymm15, %ymm8; + vpxor %ymm9, %ymm15, %ymm9; + vpxor %ymm10, %ymm15, %ymm10; + vpxor %ymm11, %ymm15, %ymm11; + vpxor %ymm12, %ymm15, %ymm12; + vpxor %ymm13, %ymm15, %ymm13; + vpxor 14 * 32(%rax), %ymm15, %ymm14; + vpxor 15 * 32(%rax), %ymm15, %ymm15; + + call __camellia_enc_blk32; + + vpxor %ymm7, %ymm6, %ymm6; + vpxor %ymm5, %ymm4, %ymm4; + vpxor %ymm3, %ymm2, %ymm2; + vpxor %ymm1, %ymm0, %ymm0; + vpxor %ymm15, %ymm14, %ymm14; + vpxor %ymm13, %ymm12, %ymm12; + vpxor %ymm11, %ymm10, %ymm10; + vpxor %ymm9, %ymm8, %ymm8; + + vpxor %ymm6, %ymm4, %ymm4; + vpxor %ymm2, %ymm0, %ymm0; + vpxor %ymm14, %ymm12, %ymm12; + vpxor %ymm10, %ymm8, %ymm8; + + vpxor %ymm4, %ymm0, %ymm0; + vpxor %ymm12, %ymm8, %ymm8; + + vpxor %ymm0, %ymm8, %ymm0; + + vextracti128 $1, %ymm0, %xmm1; + vpxor (%r10), %xmm0, %xmm0; + vpxor %xmm0, %xmm1, %xmm0; + vmovdqu %xmm0, (%r10); + + vzeroall; + + movq (16 * 32 + 0 * 8)(%rsp), %r10; + movq (16 * 32 + 1 * 8)(%rsp), %r11; + movq (16 * 32 + 2 * 8)(%rsp), %r12; + movq (16 * 32 + 3 * 8)(%rsp), %r13; + CFI_RESTORE(%r10); + CFI_RESTORE(%r11); + CFI_RESTORE(%r12); + CFI_RESTORE(%r13); + + leave; + CFI_LEAVE(); + ret; + CFI_ENDPROC(); +ELF(.size FUNC_NAME(ocb_auth),.-FUNC_NAME(ocb_auth);) + +#endif /* GCRY_CAMELLIA_AESNI_AVX2_AMD64_H */ diff --git a/cipher/camellia-glue.c b/cipher/camellia-glue.c index 6577b651..cfafdfc5 100644 --- a/cipher/camellia-glue.c +++ b/cipher/camellia-glue.c @@ -91,6 +91,12 @@ # endif #endif +/* USE_VAES_AVX2 inidicates whether to compile with Intel VAES/AVX2 code. */ +#undef USE_VAES_AVX2 +#if defined(USE_AESNI_AVX2) && defined(HAVE_GCC_INLINE_ASM_VAES) +# define USE_VAES_AVX2 1 +#endif + typedef struct { KEY_TABLE_TYPE keytable; @@ -100,6 +106,7 @@ typedef struct #endif /*USE_AESNI_AVX*/ #ifdef USE_AESNI_AVX2 unsigned int use_aesni_avx2:1;/* AES-NI/AVX2 implementation shall be used. */ + unsigned int use_vaes_avx2:1; /* VAES/AVX2 implementation shall be used. */ #endif /*USE_AESNI_AVX2*/ } CAMELLIA_context; @@ -201,6 +208,46 @@ extern void _gcry_camellia_aesni_avx2_ocb_auth(CAMELLIA_context *ctx, const u64 Ls[32]) ASM_FUNC_ABI; #endif +#ifdef USE_VAES_AVX2 +/* Assembler implementations of Camellia using VAES and AVX2. Process data + in 32 block same time. 
+ */ +extern void _gcry_camellia_vaes_avx2_ctr_enc(CAMELLIA_context *ctx, + unsigned char *out, + const unsigned char *in, + unsigned char *ctr) ASM_FUNC_ABI; + +extern void _gcry_camellia_vaes_avx2_cbc_dec(CAMELLIA_context *ctx, + unsigned char *out, + const unsigned char *in, + unsigned char *iv) ASM_FUNC_ABI; + +extern void _gcry_camellia_vaes_avx2_cfb_dec(CAMELLIA_context *ctx, + unsigned char *out, + const unsigned char *in, + unsigned char *iv) ASM_FUNC_ABI; + +extern void _gcry_camellia_vaes_avx2_ocb_enc(CAMELLIA_context *ctx, + unsigned char *out, + const unsigned char *in, + unsigned char *offset, + unsigned char *checksum, + const u64 Ls[32]) ASM_FUNC_ABI; + +extern void _gcry_camellia_vaes_avx2_ocb_dec(CAMELLIA_context *ctx, + unsigned char *out, + const unsigned char *in, + unsigned char *offset, + unsigned char *checksum, + const u64 Ls[32]) ASM_FUNC_ABI; + +extern void _gcry_camellia_vaes_avx2_ocb_auth(CAMELLIA_context *ctx, + const unsigned char *abuf, + unsigned char *offset, + unsigned char *checksum, + const u64 Ls[32]) ASM_FUNC_ABI; +#endif + static const char *selftest(void); static void _gcry_camellia_ctr_enc (void *context, unsigned char *ctr, @@ -225,7 +272,7 @@ camellia_setkey(void *c, const byte *key, unsigned keylen, CAMELLIA_context *ctx=c; static int initialized=0; static const char *selftest_failed=NULL; -#if defined(USE_AESNI_AVX) || defined(USE_AESNI_AVX2) +#if defined(USE_AESNI_AVX) || defined(USE_AESNI_AVX2) || defined(USE_VAES_AVX2) unsigned int hwf = _gcry_get_hw_features (); #endif @@ -248,6 +295,10 @@ camellia_setkey(void *c, const byte *key, unsigned keylen, #endif #ifdef USE_AESNI_AVX2 ctx->use_aesni_avx2 = (hwf & HWF_INTEL_AESNI) && (hwf & HWF_INTEL_AVX2); + ctx->use_vaes_avx2 = 0; +#endif +#ifdef USE_VAES_AVX2 + ctx->use_vaes_avx2 = (hwf & HWF_INTEL_VAES) && (hwf & HWF_INTEL_AVX2); #endif ctx->keybitlength=keylen*8; @@ -389,11 +440,19 @@ _gcry_camellia_ctr_enc(void *context, unsigned char *ctr, if (ctx->use_aesni_avx2) { int did_use_aesni_avx2 = 0; +#ifdef USE_VAES_AVX2 + int use_vaes = ctx->use_vaes_avx2; +#endif /* Process data in 32 block chunks. */ while (nblocks >= 32) { - _gcry_camellia_aesni_avx2_ctr_enc(ctx, outbuf, inbuf, ctr); +#ifdef USE_VAES_AVX2 + if (use_vaes) + _gcry_camellia_vaes_avx2_ctr_enc(ctx, outbuf, inbuf, ctr); + else +#endif + _gcry_camellia_aesni_avx2_ctr_enc(ctx, outbuf, inbuf, ctr); nblocks -= 32; outbuf += 32 * CAMELLIA_BLOCK_SIZE; @@ -478,11 +537,19 @@ _gcry_camellia_cbc_dec(void *context, unsigned char *iv, if (ctx->use_aesni_avx2) { int did_use_aesni_avx2 = 0; +#ifdef USE_VAES_AVX2 + int use_vaes = ctx->use_vaes_avx2; +#endif /* Process data in 32 block chunks. */ while (nblocks >= 32) { - _gcry_camellia_aesni_avx2_cbc_dec(ctx, outbuf, inbuf, iv); +#ifdef USE_VAES_AVX2 + if (use_vaes) + _gcry_camellia_vaes_avx2_cbc_dec(ctx, outbuf, inbuf, iv); + else +#endif + _gcry_camellia_aesni_avx2_cbc_dec(ctx, outbuf, inbuf, iv); nblocks -= 32; outbuf += 32 * CAMELLIA_BLOCK_SIZE; @@ -564,11 +631,19 @@ _gcry_camellia_cfb_dec(void *context, unsigned char *iv, if (ctx->use_aesni_avx2) { int did_use_aesni_avx2 = 0; +#ifdef USE_VAES_AVX2 + int use_vaes = ctx->use_vaes_avx2; +#endif /* Process data in 32 block chunks. 
*/ while (nblocks >= 32) { - _gcry_camellia_aesni_avx2_cfb_dec(ctx, outbuf, inbuf, iv); +#ifdef USE_VAES_AVX2 + if (use_vaes) + _gcry_camellia_vaes_avx2_cfb_dec(ctx, outbuf, inbuf, iv); + else +#endif + _gcry_camellia_aesni_avx2_cfb_dec(ctx, outbuf, inbuf, iv); nblocks -= 32; outbuf += 32 * CAMELLIA_BLOCK_SIZE; @@ -654,6 +729,10 @@ _gcry_camellia_ocb_crypt (gcry_cipher_hd_t c, void *outbuf_arg, if (ctx->use_aesni_avx2) { int did_use_aesni_avx2 = 0; +#ifdef USE_VAES_AVX2 + int encrypt_use_vaes = encrypt && ctx->use_vaes_avx2; + int decrypt_use_vaes = !encrypt && ctx->use_vaes_avx2; +#endif u64 Ls[32]; unsigned int n = 32 - (blkn % 32); u64 *l; @@ -685,7 +764,16 @@ _gcry_camellia_ocb_crypt (gcry_cipher_hd_t c, void *outbuf_arg, blkn += 32; *l = (uintptr_t)(void *)ocb_get_l(c, blkn - blkn % 32); - if (encrypt) + if (0) {} +#ifdef USE_VAES_AVX2 + if (encrypt_use_vaes) + _gcry_camellia_vaes_avx2_ocb_enc(ctx, outbuf, inbuf, c->u_iv.iv, + c->u_ctr.ctr, Ls); + else if (decrypt_use_vaes) + _gcry_camellia_vaes_avx2_ocb_dec(ctx, outbuf, inbuf, c->u_iv.iv, + c->u_ctr.ctr, Ls); +#endif + else if (encrypt) _gcry_camellia_aesni_avx2_ocb_enc(ctx, outbuf, inbuf, c->u_iv.iv, c->u_ctr.ctr, Ls); else @@ -803,6 +891,9 @@ _gcry_camellia_ocb_auth (gcry_cipher_hd_t c, const void *abuf_arg, if (ctx->use_aesni_avx2) { int did_use_aesni_avx2 = 0; +#ifdef USE_VAES_AVX2 + int use_vaes = ctx->use_vaes_avx2; +#endif u64 Ls[32]; unsigned int n = 32 - (blkn % 32); u64 *l; @@ -834,9 +925,16 @@ _gcry_camellia_ocb_auth (gcry_cipher_hd_t c, const void *abuf_arg, blkn += 32; *l = (uintptr_t)(void *)ocb_get_l(c, blkn - blkn % 32); - _gcry_camellia_aesni_avx2_ocb_auth(ctx, abuf, - c->u_mode.ocb.aad_offset, - c->u_mode.ocb.aad_sum, Ls); +#ifdef USE_VAES_AVX2 + if (use_vaes) + _gcry_camellia_vaes_avx2_ocb_auth(ctx, abuf, + c->u_mode.ocb.aad_offset, + c->u_mode.ocb.aad_sum, Ls); + else +#endif + _gcry_camellia_aesni_avx2_ocb_auth(ctx, abuf, + c->u_mode.ocb.aad_offset, + c->u_mode.ocb.aad_sum, Ls); nblocks -= 32; abuf += 32 * CAMELLIA_BLOCK_SIZE; diff --git a/cipher/camellia-vaes-avx2-amd64.S b/cipher/camellia-vaes-avx2-amd64.S new file mode 100644 index 00000000..542a59e8 --- /dev/null +++ b/cipher/camellia-vaes-avx2-amd64.S @@ -0,0 +1,35 @@ +/* camellia-vaes-avx2-amd64.S - VAES/AVX2 implementation of Camellia cipher + * + * Copyright (C) 2021 Jussi Kivilinna + * + * This file is part of Libgcrypt. + * + * Libgcrypt is free software; you can redistribute it and/or modify + * it under the terms of the GNU Lesser General Public License as + * published by the Free Software Foundation; either version 2.1 of + * the License, or (at your option) any later version. + * + * Libgcrypt is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . 
+ */ + +#include + +#ifdef __x86_64 +#if (defined(HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS) || \ + defined(HAVE_COMPATIBLE_GCC_WIN64_PLATFORM_AS)) && \ + defined(ENABLE_AESNI_SUPPORT) && defined(ENABLE_AVX2_SUPPORT) && \ + defined(HAVE_GCC_INLINE_ASM_VAES) + +#define CAMELLIA_VAES_BUILD 1 +#define FUNC_NAME(func) _gcry_camellia_vaes_avx2_ ## func + +#include "camellia-aesni-avx2-amd64.h" + +#endif /* defined(ENABLE_AESNI_SUPPORT) && defined(ENABLE_AVX2_SUPPORT) */ +#endif /* __x86_64 */ diff --git a/configure.ac b/configure.ac index e4a10b78..f74056f2 100644 --- a/configure.ac +++ b/configure.ac @@ -1608,6 +1608,29 @@ if test "$gcry_cv_gcc_inline_asm_avx2" = "yes" ; then fi +# +# Check whether GCC inline assembler supports VAES instructions +# +AC_CACHE_CHECK([whether GCC inline assembler supports VAES instructions], + [gcry_cv_gcc_inline_asm_vaes], + [if test "$mpi_cpu_arch" != "x86" || + test "$try_asm_modules" != "yes" ; then + gcry_cv_gcc_inline_asm_vaes="n/a" + else + gcry_cv_gcc_inline_asm_vaes=no + AC_LINK_IFELSE([AC_LANG_PROGRAM( + [[void a(void) { + __asm__("vaesenclast %%ymm7,%%ymm7,%%ymm1\n\t":::"cc");/*256-bit*/ + __asm__("vaesenclast %%zmm7,%%zmm7,%%zmm1\n\t":::"cc");/*512-bit*/ + }]], [ a(); ] )], + [gcry_cv_gcc_inline_asm_vaes=yes]) + fi]) +if test "$gcry_cv_gcc_inline_asm_vaes" = "yes" ; then + AC_DEFINE(HAVE_GCC_INLINE_ASM_VAES,1, + [Defined if inline assembler supports VAES instructions]) +fi + + # # Check whether GCC inline assembler supports BMI2 instructions # @@ -2678,6 +2701,9 @@ if test "$found" = "1" ; then if test x"$aesnisupport" = xyes ; then # Build with the AES-NI/AVX2 implementation GCRYPT_CIPHERS="$GCRYPT_CIPHERS camellia-aesni-avx2-amd64.lo" + + # Build with the VAES/AVX2 implementation + GCRYPT_CIPHERS="$GCRYPT_CIPHERS camellia-vaes-avx2-amd64.lo" fi fi fi diff --git a/src/g10lib.h b/src/g10lib.h index 243997eb..55fd7515 100644 --- a/src/g10lib.h +++ b/src/g10lib.h @@ -237,6 +237,7 @@ char **_gcry_strtokenize (const char *string, const char *delim); #define HWF_INTEL_FAST_VPGATHER (1 << 14) #define HWF_INTEL_RDTSC (1 << 15) #define HWF_INTEL_SHAEXT (1 << 16) +#define HWF_INTEL_VAES (1 << 17) #elif defined(HAVE_CPU_ARCH_ARM) diff --git a/src/hwf-x86.c b/src/hwf-x86.c index 9a9ed6d3..a5538502 100644 --- a/src/hwf-x86.c +++ b/src/hwf-x86.c @@ -372,7 +372,7 @@ detect_x86_gnuc (void) if (max_cpuid_level >= 7 && (features & 0x00000001)) { /* Get CPUID:7 contains further Intel feature flags. */ - get_cpuid(7, NULL, &features, NULL, NULL); + get_cpuid(7, NULL, &features, &features2, NULL); /* Test bit 8 for BMI2. */ if (features & 0x00000100) @@ -390,7 +390,13 @@ detect_x86_gnuc (void) /* Test bit 29 for SHA Extensions. 
*/ if (features & (1 << 29)) - result |= HWF_INTEL_SHAEXT; + result |= HWF_INTEL_SHAEXT; + +#if defined(ENABLE_AVX2_SUPPORT) && defined(ENABLE_AESNI_SUPPORT) + /* Test bit 9 for VAES */ + if (features2 & 0x00000200) + result |= HWF_INTEL_VAES; +#endif } return result; diff --git a/src/hwfeatures.c b/src/hwfeatures.c index db58d2a3..de027404 100644 --- a/src/hwfeatures.c +++ b/src/hwfeatures.c @@ -60,6 +60,7 @@ static struct { HWF_INTEL_FAST_VPGATHER, "intel-fast-vpgather" }, { HWF_INTEL_RDTSC, "intel-rdtsc" }, { HWF_INTEL_SHAEXT, "intel-shaext" }, + { HWF_INTEL_VAES, "intel-vaes" }, #elif defined(HAVE_CPU_ARCH_ARM) { HWF_ARM_NEON, "arm-neon" }, { HWF_ARM_AES, "arm-aes" }, -- 2.27.0 From wk at gnupg.org Tue Jan 26 08:27:23 2021 From: wk at gnupg.org (Werner Koch) Date: Tue, 26 Jan 2021 08:27:23 +0100 Subject: [PATCH] cipher/sha512: Fix non-NEON ARM assembly implementation In-Reply-To: (Andreas Metzler's message of "Sun, 24 Jan 2021 08:01:02 +0100") References: <87o8hidsa5.fsf@gmail.com> <87o8hh5iev.fsf@wheatstone.g10code.de> Message-ID: <87zh0wyx7o.fsf@wheatstone.g10code.de> On Sun, 24 Jan 2021 08:01, Andreas Metzler said: > I only see 3 commits in the published repo after tag/ibgcrypt-1.9.0: > * Post release updates > * doc: Fix wrong CVE id in NEWS > * Merge branch 'LIBGCRYPT-1.9-BRANCH' We are currently working on master and it is likely that we merge everything for 1.9.1 Shalom-Salam, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From guidovranken at gmail.com Tue Jan 26 08:40:53 2021 From: guidovranken at gmail.com (Guido Vranken) Date: Tue, 26 Jan 2021 08:40:53 +0100 Subject: gcry_mpi_invm succeeds if the inverse does not exist In-Reply-To: References: <87zhbeh921.fsf@iwagami.gniibe.org> <51fc5b09-7b2f-ce2e-bb8c-f653ba907446@iki.fi> <87tv0k9vwj.fsf@jumper.gniibe.org> <6f38bc98-85fe-12b1-9cce-8e96e699e378@iki.fi> <87d06jmdzh.fsf@iwagami.gniibe.org> Message-ID: Reminder that invmod is still broken and has been for a long time. On Thu, Sep 3, 2020 at 2:19 PM Guido Vranken wrote: > The following inputs to gcry_mpi_invm(): > > > 36fb5bdb5daa9864113ad8a49a41722fc7003a40b02a13daca6997859c2d8534192ff6c02447 > 25c88352cfa171fc728503df037c355a6d5588b22e3510b08f10848ad7c0980b400 > > produces the number: > > 66CAF1A9A03478A288760C2E05E237F11432BA70BECEE56D942ACCD337470E5D77 > > But this is incorrect (another library reports the modular inverse does > not exist). > > ---------- > > The following inputs to gcry_mpi_invm(): > > 12cf3a8ca3d97bea2f080362600cee355 > 1c3fddf62aee0be2f6dc2ef8471f1be2e > > produces the number: > > 60A6520F494E6EE6EE436283FB34B945 > > but it should produce: > > 1339462644931fd624528ea6b3fb1f985 > > On Mon, Jun 1, 2020 at 9:39 AM NIIBE Yutaka wrote: > >> Jussi Kivilinna wrote: >> > Cryptofuzz is reporting another heap-buffer-overflow issue in >> > _gcry_mpi_invm. I've attached reproducer, original from Guido and >> > as patch applied to tests/basic.c. >> >> My fix of 69b55f87053ce2494cd4b38dc600f867bc4355be was not enough. >> I just push another change: >> >> 6f8b1d4cb798375e6d830fd6b73c71da93ee5f3f >> >> Thank you for your report. >> -- >> >> _______________________________________________ >> Gcrypt-devel mailing list >> Gcrypt-devel at gnupg.org >> http://lists.gnupg.org/mailman/listinfo/gcrypt-devel >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gniibe at fsij.org Wed Jan 27 04:17:40 2021 From: gniibe at fsij.org (NIIBE Yutaka) Date: Wed, 27 Jan 2021 12:17:40 +0900 Subject: gcry_mpi_invm succeeds if the inverse does not exist In-Reply-To: References: <87zhbeh921.fsf@iwagami.gniibe.org> <51fc5b09-7b2f-ce2e-bb8c-f653ba907446@iki.fi> <87tv0k9vwj.fsf@jumper.gniibe.org> <6f38bc98-85fe-12b1-9cce-8e96e699e378@iki.fi> <87d06jmdzh.fsf@iwagami.gniibe.org> Message-ID: <87zh0v9igb.fsf@iwagami.gniibe.org> Guido Vranken writes: > Reminder that invmod is still broken and has been for a long time. Thanks a lot. I overlooked your email on September. I created the task: https://dev.gnupg.org/T5269 And push a fix commit: https://dev.gnupg.org/rCf06ff4e31c8e162f4a59986241c7ab43d5085927 -- From wk at gnupg.org Fri Jan 29 09:01:44 2021 From: wk at gnupg.org (Werner Koch) Date: Fri, 29 Jan 2021 09:01:44 +0100 Subject: [urgent] Stop using Libgcrypt 1.9.0 ! Message-ID: <87v9bgw4rb.fsf@wheatstone.g10code.de> Hi! A severe bug was reported yesterday evening against Libgcrypt 1.9.0 which we released last week. A new version to fix this as weel as a couple of build problems will be released today. In the meantime please stop using 1.9.0. It seems that Fedora 34 and Gentoo are already using 1.9.0 . Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From ametzler at bebt.de Fri Jan 29 20:23:13 2021 From: ametzler at bebt.de (Andreas Metzler) Date: Fri, 29 Jan 2021 20:23:13 +0100 Subject: 1.9.1 build error (cross-build for Windows) Message-ID: Hello, 1.9.1 does not cross-build for Windows (on Linux): libtool: compile: i686-w64-mingw32-gcc -DHAVE_CONFIG_H -I. -I../../random -I.. -I../src -I../../src -I/usr/i686-w64-mingw32/include -g -O0 -fno-delete-null-pointer-checks -Wall -c ../../random/rndjent.c -DDLL_EXPORT -DPIC -o .libs/rndjent.o ../../random/rndjent.c: In function 'is_rng_available': ../../random/rndjent.c:240:40: error: 'HWF_INTEL_RDTSC' undeclared (first use in this function) libgcrypt is configureed with: LDFLAGS="-Xlinker --no-insert-timestamp" CFLAGS="-g -Os" CPPFLAGS= ../configure \ --prefix=/usr/i686-w64-mingw32 \ --with-libgpg-error-prefix=/usr/i686-w64-mingw32 \ --disable-padlock-support --disable-asm \ --enable-static \ --host i686-w64-mingw32 [...] configure:16746: checking architecture and mpi assembler functions configure:16753: result: disabled [...] cu Andreas -- `What a good friend you are to him, Dr. Maturin. His other friends are so grateful to you.' `I sew his ears on from time to time, sure' From jussi.kivilinna at iki.fi Sat Jan 30 12:09:29 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sat, 30 Jan 2021 13:09:29 +0200 Subject: 1.9.1 build error (cross-build for Windows) In-Reply-To: References: Message-ID: Hello, On 29.1.2021 21.23, Andreas Metzler wrote: > Hello, > > 1.9.1 does not cross-build for Windows (on Linux): > > libtool: compile: i686-w64-mingw32-gcc -DHAVE_CONFIG_H -I. -I../../random -I.. 
-I../src -I../../src -I/usr/i686-w64-mingw32/include -g -O0 -fno-delete-null-pointer-checks -Wall -c ../../random/rndjent.c -DDLL_EXPORT -DPIC -o .libs/rndjent.o > ../../random/rndjent.c: In function 'is_rng_available': > ../../random/rndjent.c:240:40: error: 'HWF_INTEL_RDTSC' undeclared (first use in this function) > > libgcrypt is configureed with: > LDFLAGS="-Xlinker --no-insert-timestamp" CFLAGS="-g -Os" CPPFLAGS= ../configure \ > --prefix=/usr/i686-w64-mingw32 \ > --with-libgpg-error-prefix=/usr/i686-w64-mingw32 \ > --disable-padlock-support --disable-asm \ > --enable-static \ > --host i686-w64-mingw32 > [...] > configure:16746: checking architecture and mpi assembler functions > configure:16753: result: disabled > [...] > Building with --disable-asm is broken in 1.9.1. You can try building without --disable-asm or trying attached patch. Sorry for the inconvenience. -Jussi > > cu Andreas > -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-Revert-Define-HW-feature-flags-per-architecture.patch Type: text/x-patch Size: 4977 bytes Desc: not available URL: From wk at gnupg.org Sat Jan 30 12:24:21 2021 From: wk at gnupg.org (Werner Koch) Date: Sat, 30 Jan 2021 12:24:21 +0100 Subject: 1.9.1 build error (cross-build for Windows) In-Reply-To: (Andreas Metzler's message of "Fri, 29 Jan 2021 20:23:13 +0100") References: Message-ID: <8735yivfa2.fsf@wheatstone.g10code.de> On Fri, 29 Jan 2021 20:23, Andreas Metzler said: > 1.9.1 does not cross-build for Windows (on Linux): Actually Windows is our mayor test platform and cross compiling is mandatory for this. Please use the standard options and don't invent your own: ./autogen.sh --build-w32 or ./autogen.sh --build-w64 is the correct way to run configure. Salam-Shalom, Werner -- Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From ametzler at bebt.de Sat Jan 30 12:40:42 2021 From: ametzler at bebt.de (Andreas Metzler) Date: Sat, 30 Jan 2021 12:40:42 +0100 Subject: 1.9.1 build error (cross-build for Windows) In-Reply-To: References: Message-ID: On 2021-01-30 Jussi Kivilinna wrote: > On 29.1.2021 21.23, Andreas Metzler wrote: > > 1.9.1 does not cross-build for Windows (on Linux): [...] > Building with --disable-asm is broken in 1.9.1. You can try building without --disable-asm or trying attached patch. [...] Thank you for providing a workaround immediately. Works for me. cu Andreas From guidovranken at gmail.com Sun Jan 31 10:52:53 2021 From: guidovranken at gmail.com (Guido Vranken) Date: Sun, 31 Jan 2021 10:52:53 +0100 Subject: ECDSA verification succeeds when it shouldn't Message-ID: My fuzzer found this: ecc curve: secp256r1 public key X: 4534198767316794591643245143622298809742628679895448054572722918996032022405 public key Y: 107839128084157537346759045080774377135290251058561962283882310383644151460337 cleartext: {0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xbc, 0xe6, 0xfa, 0xad, 0xa7, 0x17, 0x9e, 0x84, 0xf3, 0xb9, 0xca, 0xc2, 0xfc, 0x63, 0x25, 0x51} (32 bytes) signature R: 4534198767316794591643245143622298809742628679895448054572722918996032022405 signature S: 4534198767316794591643245143622298809742628679895448054572722918996032022405 where 'cleartext' is the data passed as-is (unhashed) to the verification function. 
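For anyone who wants to reproduce this, a minimal harness would look roughly like the sketch below. It is only an illustration: the S-expression layouts are the usual libgcrypt ones for an ECC public key, raw data and an ECDSA signature value, and the <...-hex> tokens are placeholders that have to be replaced by the hex encodings of the numbers above (the sketch is not valid input as written).

  #include <stdio.h>
  #include <gcrypt.h>

  int main (void)
  {
    gcry_sexp_t pkey, data, sig;
    gcry_error_t err;

    /* Public key given as an uncompressed point 04 || X || Y.  */
    gcry_sexp_new (&pkey,
                   "(public-key (ecc (curve \"NIST P-256\")"
                   " (q #04<X-hex><Y-hex>#)))", 0, 1);
    /* The 32 cleartext bytes, passed unhashed.  */
    gcry_sexp_new (&data,
                   "(data (flags raw) (value #<cleartext-hex>#))", 0, 1);
    gcry_sexp_new (&sig,
                   "(sig-val (ecdsa (r #<R-hex>#) (s #<S-hex>#)))", 0, 1);

    err = gcry_pk_verify (sig, data, pkey);
    printf ("verify: %s\n", gcry_strerror (err));
    return 0;
  }
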
gcry_pk_verify() returns GPG_ERR_NO_ERROR for these parameters but other libraries return failure. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ametzler at bebt.de Sun Jan 31 14:24:52 2021 From: ametzler at bebt.de (Andreas Metzler) Date: Sun, 31 Jan 2021 14:24:52 +0100 Subject: 1.9.1 build error (cross-build for Windows) In-Reply-To: <8735yivfa2.fsf@wheatstone.g10code.de> References: <8735yivfa2.fsf@wheatstone.g10code.de> Message-ID: On 2021-01-30 Werner Koch wrote: > On Fri, 29 Jan 2021 20:23, Andreas Metzler said: > > 1.9.1 does not cross-build for Windows (on Linux): > Actually Windows is our mayor test platform and cross compiling is > mandatory for this. Please use the standard options and don't invent > your own: > ./autogen.sh --build-w32 > or > ./autogen.sh --build-w64 > is the correct way to run configure. [...] Hello, I cannot use that since I need an out-of-tree-build. Looking at how ./configure is invoked by the abovementioned command --enable-maintainer-mode --prefix=/home/ametzler/w32root --host=i686-w64-mingw32 --build=x86_64-pc-linux-gnu SYSROOT=/home/ametzler/w32root PKG_CONFIG_LIBDIR=/home/ametzler/w32root/lib/pkgconfig I will test with simpler ./configure flags (host/build/prefix), letting upstream choose optimization and assembly enablement. cu Andreas -- `What a good friend you are to him, Dr. Maturin. His other friends are so grateful to you.' `I sew his ears on from time to time, sure' From jussi.kivilinna at iki.fi Sun Jan 31 17:01:33 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 31 Jan 2021 18:01:33 +0200 Subject: [PATCH 1/8] md: clear bctx.count at final function Message-ID: <20210131160140.618273-1-jussi.kivilinna@iki.fi> * cipher/md4.c (md4_final): Set bctx.count zero after finalizing. * cipher/md5.c (md5_final): Ditto. * cipher/rmd160.c (rmd160_final): Ditto. * cipher/sha1.c (sha1_final): Ditto. * cipher/sha256.c (sha256_final): Ditto. * cipher/sha512.c (sha512_final): Ditto. * cipher/sm3.c (sm3_final): Ditto. * cipher/stribog.c (stribog_final): Ditto. * cipher/tiger.c (tiger_final): Ditto. -- Final functions used to use _gcry_md_block_write for passing final blocks to transform function and thus set bctx.count to zero in _gcry_md_block_write. Final functions were then changed to use transform functions directly, but bctx.count was not set zero after this change. Then later optimization to final functions to pass two blocks to transform functions in one call also changed values set to bctx.count, causing bctx.count getting value larger than block-size of digest algorithm. 
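Put differently: these final functions now write the MD-style padding into bctx.buf by hand and hand one or two whole blocks to the transform function, so they also have to clear bctx.count themselves once the last transform has run. A reduced sketch of the two-block path (illustrative only, not the actual libgcrypt code; block size and length encoding differ per algorithm):

  #include <stdint.h>
  #include <string.h>

  struct blockctx { unsigned char buf[64 * 2]; unsigned int count; };

  static void put_le32 (unsigned char *p, uint32_t v)
  {
    p[0] = v; p[1] = v >> 8; p[2] = v >> 16; p[3] = v >> 24;
  }

  /* Padding does not fit into the current 64-byte block, so a second
   * block is used and both are handed to TRANSFORM in one call.  */
  static void
  sketch_final (struct blockctx *b, uint32_t bits_lo, uint32_t bits_hi,
                void (*transform)(struct blockctx *, const unsigned char *,
                                  size_t nblks))
  {
    b->buf[b->count++] = 0x80;                          /* pad byte */
    memset (&b->buf[b->count], 0, 64 - b->count + 56);  /* zero up to the length field */
    put_le32 (b->buf + 64 + 56, bits_lo);               /* 64-bit bit count */
    put_le32 (b->buf + 64 + 60, bits_hi);
    transform (b, b->buf, 2);                           /* both blocks in one call */
    b->count = 0;  /* the fix: reset instead of leaving a stale value >= block size */
  }
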
Signed-off-by: Jussi Kivilinna --- cipher/md4.c | 4 ++-- cipher/md5.c | 4 ++-- cipher/rmd160.c | 4 ++-- cipher/sha1.c | 4 ++-- cipher/sha256.c | 4 ++-- cipher/sha512.c | 3 ++- cipher/sm3.c | 4 ++-- cipher/stribog.c | 2 ++ cipher/tiger.c | 5 +++-- 9 files changed, 19 insertions(+), 15 deletions(-) diff --git a/cipher/md4.c b/cipher/md4.c index 24986c27..b55443a8 100644 --- a/cipher/md4.c +++ b/cipher/md4.c @@ -237,7 +237,6 @@ md4_final( void *context ) hd->bctx.buf[hd->bctx.count++] = 0x80; /* pad */ if (hd->bctx.count < 56) memset (&hd->bctx.buf[hd->bctx.count], 0, 56 - hd->bctx.count); - hd->bctx.count = 56; /* append the 64 bit count */ buf_put_le32(hd->bctx.buf + 56, lsb); @@ -249,7 +248,6 @@ md4_final( void *context ) hd->bctx.buf[hd->bctx.count++] = 0x80; /* pad character */ /* fill pad and next block with zeroes */ memset (&hd->bctx.buf[hd->bctx.count], 0, 64 - hd->bctx.count + 56); - hd->bctx.count = 64 + 56; /* append the 64 bit count */ buf_put_le32(hd->bctx.buf + 64 + 56, lsb); @@ -265,6 +263,8 @@ md4_final( void *context ) X(D); #undef X + hd->bctx.count = 0; + _gcry_burn_stack (burn); } diff --git a/cipher/md5.c b/cipher/md5.c index 6859d566..32cb535a 100644 --- a/cipher/md5.c +++ b/cipher/md5.c @@ -261,7 +261,6 @@ md5_final( void *context) hd->bctx.buf[hd->bctx.count++] = 0x80; /* pad */ if (hd->bctx.count < 56) memset (&hd->bctx.buf[hd->bctx.count], 0, 56 - hd->bctx.count); - hd->bctx.count = 56; /* append the 64 bit count */ buf_put_le32(hd->bctx.buf + 56, lsb); @@ -273,7 +272,6 @@ md5_final( void *context) hd->bctx.buf[hd->bctx.count++] = 0x80; /* pad character */ /* fill pad and next block with zeroes */ memset (&hd->bctx.buf[hd->bctx.count], 0, 64 - hd->bctx.count + 56); - hd->bctx.count = 64 + 56; /* append the 64 bit count */ buf_put_le32(hd->bctx.buf + 64 + 56, lsb); @@ -289,6 +287,8 @@ md5_final( void *context) X(D); #undef X + hd->bctx.count = 0; + _gcry_burn_stack (burn); } diff --git a/cipher/rmd160.c b/cipher/rmd160.c index 0608f74c..e12ff017 100644 --- a/cipher/rmd160.c +++ b/cipher/rmd160.c @@ -434,7 +434,6 @@ rmd160_final( void *context ) hd->bctx.buf[hd->bctx.count++] = 0x80; /* pad */ if (hd->bctx.count < 56) memset (&hd->bctx.buf[hd->bctx.count], 0, 56 - hd->bctx.count); - hd->bctx.count = 56; /* append the 64 bit count */ buf_put_le32(hd->bctx.buf + 56, lsb); @@ -446,7 +445,6 @@ rmd160_final( void *context ) hd->bctx.buf[hd->bctx.count++] = 0x80; /* pad character */ /* fill pad and next block with zeroes */ memset (&hd->bctx.buf[hd->bctx.count], 0, 64 - hd->bctx.count + 56); - hd->bctx.count = 64 + 56; /* append the 64 bit count */ buf_put_le32(hd->bctx.buf + 64 + 56, lsb); @@ -463,6 +461,8 @@ rmd160_final( void *context ) X(4); #undef X + hd->bctx.count = 0; + _gcry_burn_stack (burn); } diff --git a/cipher/sha1.c b/cipher/sha1.c index 287bd826..35f7376c 100644 --- a/cipher/sha1.c +++ b/cipher/sha1.c @@ -591,7 +591,6 @@ sha1_final(void *context) hd->bctx.buf[hd->bctx.count++] = 0x80; /* pad */ if (hd->bctx.count < 56) memset (&hd->bctx.buf[hd->bctx.count], 0, 56 - hd->bctx.count); - hd->bctx.count = 56; /* append the 64 bit count */ buf_put_be32(hd->bctx.buf + 56, msb); @@ -603,7 +602,6 @@ sha1_final(void *context) hd->bctx.buf[hd->bctx.count++] = 0x80; /* pad character */ /* fill pad and next block with zeroes */ memset (&hd->bctx.buf[hd->bctx.count], 0, 64 - hd->bctx.count + 56); - hd->bctx.count = 64 + 56; /* append the 64 bit count */ buf_put_be32(hd->bctx.buf + 64 + 56, msb); @@ -620,6 +618,8 @@ sha1_final(void *context) X(4); #undef X + hd->bctx.count 
= 0; + _gcry_burn_stack (burn); } diff --git a/cipher/sha256.c b/cipher/sha256.c index 5c761b20..93505891 100644 --- a/cipher/sha256.c +++ b/cipher/sha256.c @@ -584,7 +584,6 @@ sha256_final(void *context) hd->bctx.buf[hd->bctx.count++] = 0x80; /* pad */ if (hd->bctx.count < 56) memset (&hd->bctx.buf[hd->bctx.count], 0, 56 - hd->bctx.count); - hd->bctx.count = 56; /* append the 64 bit count */ buf_put_be32(hd->bctx.buf + 56, msb); @@ -596,7 +595,6 @@ sha256_final(void *context) hd->bctx.buf[hd->bctx.count++] = 0x80; /* pad character */ /* fill pad and next block with zeroes */ memset (&hd->bctx.buf[hd->bctx.count], 0, 64 - hd->bctx.count + 56); - hd->bctx.count = 64 + 56; /* append the 64 bit count */ buf_put_be32(hd->bctx.buf + 64 + 56, msb); @@ -616,6 +614,8 @@ sha256_final(void *context) X(7); #undef X + hd->bctx.count = 0; + _gcry_burn_stack (burn); } diff --git a/cipher/sha512.c b/cipher/sha512.c index 0f4c304f..bc4657a8 100644 --- a/cipher/sha512.c +++ b/cipher/sha512.c @@ -818,7 +818,6 @@ sha512_final (void *context) hd->bctx.buf[hd->bctx.count++] = 0x80; /* pad */ if (hd->bctx.count < 112) memset (&hd->bctx.buf[hd->bctx.count], 0, 112 - hd->bctx.count); - hd->bctx.count = 112; } else { @@ -850,6 +849,8 @@ sha512_final (void *context) X (7); #undef X + hd->bctx.count = 0; + _gcry_burn_stack (burn); } diff --git a/cipher/sm3.c b/cipher/sm3.c index aee94987..0f9bae3b 100644 --- a/cipher/sm3.c +++ b/cipher/sm3.c @@ -294,7 +294,6 @@ sm3_final(void *context) hd->bctx.buf[hd->bctx.count++] = 0x80; /* pad */ if (hd->bctx.count < 56) memset (&hd->bctx.buf[hd->bctx.count], 0, 56 - hd->bctx.count); - hd->bctx.count = 56; /* append the 64 bit count */ buf_put_be32(hd->bctx.buf + 56, msb); @@ -306,7 +305,6 @@ sm3_final(void *context) hd->bctx.buf[hd->bctx.count++] = 0x80; /* pad character */ /* fill pad and next block with zeroes */ memset (&hd->bctx.buf[hd->bctx.count], 0, 64 - hd->bctx.count + 56); - hd->bctx.count = 64 + 56; /* append the 64 bit count */ buf_put_be32(hd->bctx.buf + 64 + 56, msb); @@ -326,6 +324,8 @@ sm3_final(void *context) X(7); #undef X + hd->bctx.count = 0; + _gcry_burn_stack (burn); } diff --git a/cipher/stribog.c b/cipher/stribog.c index c919182a..f8776a3e 100644 --- a/cipher/stribog.c +++ b/cipher/stribog.c @@ -1304,6 +1304,8 @@ stribog_final (void *context) for (i = 0; i < 8; i++) hd->h[i] = le_bswap64(hd->h[i]); + hd->bctx.count = 0; + _gcry_burn_stack (768); } diff --git a/cipher/tiger.c b/cipher/tiger.c index b2f16677..4039b22b 100644 --- a/cipher/tiger.c +++ b/cipher/tiger.c @@ -760,7 +760,7 @@ tiger_final( void *context ) hd->bctx.buf[hd->bctx.count++] = pad; if (hd->bctx.count < 56) memset (&hd->bctx.buf[hd->bctx.count], 0, 56 - hd->bctx.count); - hd->bctx.count = 56; + /* append the 64 bit count */ buf_put_le32(hd->bctx.buf + 56, lsb); buf_put_le32(hd->bctx.buf + 60, msb); @@ -771,7 +771,6 @@ tiger_final( void *context ) hd->bctx.buf[hd->bctx.count++] = pad; /* pad character */ /* fill pad and next block with zeroes */ memset (&hd->bctx.buf[hd->bctx.count], 0, 64 - hd->bctx.count + 56); - hd->bctx.count = 64 + 56; /* append the 64 bit count */ buf_put_le32(hd->bctx.buf + 64 + 56, lsb); @@ -797,6 +796,8 @@ tiger_final( void *context ) #undef X #undef Y + hd->bctx.count = 0; + _gcry_burn_stack (burn); } -- 2.27.0 From jussi.kivilinna at iki.fi Sun Jan 31 17:01:36 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 31 Jan 2021 18:01:36 +0200 Subject: [PATCH 4/8] jent: silence ubsan warning about signed overflow In-Reply-To: 
<20210131160140.618273-1-jussi.kivilinna@iki.fi> References: <20210131160140.618273-1-jussi.kivilinna@iki.fi> Message-ID: <20210131160140.618273-4-jussi.kivilinna@iki.fi> * random/jitterentropy-base.c (jent_stuck): Cast 'delta2' values to 'uint64_t' for calculation. -- Signed-off-by: Jussi Kivilinna --- random/jitterentropy-base.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/random/jitterentropy-base.c b/random/jitterentropy-base.c index 32fdea46..ba435e1b 100644 --- a/random/jitterentropy-base.c +++ b/random/jitterentropy-base.c @@ -306,7 +306,7 @@ static unsigned int jent_memaccess(struct rand_data *ec, uint64_t loop_cnt) static int jent_stuck(struct rand_data *ec, uint64_t current_delta) { int64_t delta2 = ec->last_delta - current_delta; - int64_t delta3 = delta2 - ec->last_delta2; + int64_t delta3 = (uint64_t)delta2 - (uint64_t)ec->last_delta2; ec->last_delta = current_delta; ec->last_delta2 = delta2; -- 2.27.0 From jussi.kivilinna at iki.fi Sun Jan 31 17:01:39 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 31 Jan 2021 18:01:39 +0200 Subject: [PATCH 7/8] tests: allow running 'make check' with ASAN In-Reply-To: <20210131160140.618273-1-jussi.kivilinna@iki.fi> References: <20210131160140.618273-1-jussi.kivilinna@iki.fi> Message-ID: <20210131160140.618273-7-jussi.kivilinna@iki.fi> * tests/t-secmem.c (main): Skip test if environment variable GCRYPT_IN_ASAN_TEST is defined. * tests/t-sexp.c (main): Do not initialize secmem if environment variable GCRYPT_IN_ASAN_TEST is defined. -- ASAN and mlock are incompatible, so add GCRYPT_IN_ASAN_TEST environment variant for skipping tests failing as result. This allows easier automation of ASAN checks. Signed-off-by: Jussi Kivilinna --- tests/t-secmem.c | 8 ++++++++ tests/t-sexp.c | 9 ++++++++- 2 files changed, 16 insertions(+), 1 deletion(-) diff --git a/tests/t-secmem.c b/tests/t-secmem.c index c4d8c66d..2b769134 100644 --- a/tests/t-secmem.c +++ b/tests/t-secmem.c @@ -120,6 +120,14 @@ main (int argc, char **argv) long int pgsize_val = -1; size_t pgsize; + if (getenv ("GCRYPT_IN_ASAN_TEST")) + { + /* 'mlock' is not available when build with address sanitizer, + * so skip test. */ + fputs ("Note: " PGM " skipped because running with ASAN.\n", stdout); + return 0; + } + #if HAVE_MMAP # if defined(HAVE_SYSCONF) && defined(_SC_PAGESIZE) pgsize_val = sysconf (_SC_PAGESIZE); diff --git a/tests/t-sexp.c b/tests/t-sexp.c index 4285ffd8..96d5f97e 100644 --- a/tests/t-sexp.c +++ b/tests/t-sexp.c @@ -1312,7 +1312,14 @@ main (int argc, char **argv) if (debug) xgcry_control ((GCRYCTL_SET_DEBUG_FLAGS, 1u, 0)); xgcry_control ((GCRYCTL_DISABLE_SECMEM_WARN)); - xgcry_control ((GCRYCTL_INIT_SECMEM, 16384, 0)); + if (getenv ("GCRYPT_IN_ASAN_TEST")) + { + fputs ("Note: " PGM " not using secmem as running with ASAN.\n", stdout); + } + else + { + xgcry_control ((GCRYCTL_INIT_SECMEM, 16384, 0)); + } if (!gcry_check_version (GCRYPT_VERSION)) die ("version mismatch"); /* #include "../src/gcrypt-int.h" indicates that internal interfaces -- 2.27.0 From jussi.kivilinna at iki.fi Sun Jan 31 17:01:34 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 31 Jan 2021 18:01:34 +0200 Subject: [PATCH 2/8] Fix building with --disable-asm on x86 In-Reply-To: <20210131160140.618273-1-jussi.kivilinna@iki.fi> References: <20210131160140.618273-1-jussi.kivilinna@iki.fi> Message-ID: <20210131160140.618273-2-jussi.kivilinna@iki.fi> * cipher/keccak.c (USE_64BIT_BMI2, USE_64BIT_SHLD) (USE_32BIT_BMI2): Depend also on HAVE_CPU_ARCH_X86. 
* random/rndjent.c [__i386__ || __x86_64__] (USE_JENT): Depend also on HAVE_CPU_ARCH_X86. -- Signed-off-by: Jussi Kivilinna --- cipher/keccak.c | 9 ++++++--- random/rndjent.c | 2 +- 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/cipher/keccak.c b/cipher/keccak.c index 87a47ac3..795a02e5 100644 --- a/cipher/keccak.c +++ b/cipher/keccak.c @@ -40,21 +40,24 @@ /* USE_64BIT_BMI2 indicates whether to compile with 64-bit Intel BMI2 code. */ #undef USE_64BIT_BMI2 -#if defined(USE_64BIT) && defined(HAVE_GCC_INLINE_ASM_BMI2) +#if defined(USE_64BIT) && defined(HAVE_GCC_INLINE_ASM_BMI2) && \ + defined(HAVE_CPU_ARCH_X86) # define USE_64BIT_BMI2 1 #endif /* USE_64BIT_SHLD indicates whether to compile with 64-bit Intel SHLD code. */ #undef USE_64BIT_SHLD -#if defined(USE_64BIT) && defined (__GNUC__) && defined(__x86_64__) +#if defined(USE_64BIT) && defined (__GNUC__) && defined(__x86_64__) && \ + defined(HAVE_CPU_ARCH_X86) # define USE_64BIT_SHLD 1 #endif /* USE_32BIT_BMI2 indicates whether to compile with 32-bit Intel BMI2 code. */ #undef USE_32BIT_BMI2 -#if defined(USE_32BIT) && defined(HAVE_GCC_INLINE_ASM_BMI2) +#if defined(USE_32BIT) && defined(HAVE_GCC_INLINE_ASM_BMI2) && \ + defined(HAVE_CPU_ARCH_X86) # define USE_32BIT_BMI2 1 #endif diff --git a/random/rndjent.c b/random/rndjent.c index 3d01290f..56648a87 100644 --- a/random/rndjent.c +++ b/random/rndjent.c @@ -57,7 +57,7 @@ #define JENT_USES_GETTIME 2 #define JENT_USES_READ_REAL_TIME 3 #ifdef ENABLE_JENT_SUPPORT -# if defined (__i386__) || defined(__x86_64__) +# if (defined (__i386__) || defined(__x86_64__)) && defined(HAVE_CPU_ARCH_X86) # define USE_JENT JENT_USES_RDTSC # elif defined (HAVE_CLOCK_GETTIME) # if _AIX -- 2.27.0 From jussi.kivilinna at iki.fi Sun Jan 31 17:01:38 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 31 Jan 2021 18:01:38 +0200 Subject: [PATCH 6/8] global: make sure that bulk config string is null-terminated In-Reply-To: <20210131160140.618273-1-jussi.kivilinna@iki.fi> References: <20210131160140.618273-1-jussi.kivilinna@iki.fi> Message-ID: <20210131160140.618273-6-jussi.kivilinna@iki.fi> * src/global.c (_gcry_get_config): Append null-terminator to output in the 'what == NULL' case. -- Config string was not being explicitly null-terminated which resulted garbage output from tests/version with ASAN enabled builds. Signed-off-by: Jussi Kivilinna --- src/global.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/src/global.c b/src/global.c index 57f7329b..8940cea0 100644 --- a/src/global.c +++ b/src/global.c @@ -434,6 +434,13 @@ _gcry_get_config (int mode, const char *what) return NULL; print_config (what, fp); + + if (!what) + { + /* Null-terminate bulk output. */ + gpgrt_fwrite ("\0", 1, 1, fp); + } + if (gpgrt_ferror (fp)) { save_errno = errno; -- 2.27.0 From jussi.kivilinna at iki.fi Sun Jan 31 17:01:40 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 31 Jan 2021 18:01:40 +0200 Subject: [PATCH 8/8] ecc-ecdh: fix memory leak In-Reply-To: <20210131160140.618273-1-jussi.kivilinna@iki.fi> References: <20210131160140.618273-1-jussi.kivilinna@iki.fi> Message-ID: <20210131160140.618273-8-jussi.kivilinna@iki.fi> * cipher/ecc-ecdh.c (_gcry_ecc_mul_point): Free 'ec' at function exit. 
-- Signed-off-by: Jussi Kivilinna --- cipher/ecc-ecdh.c | 1 + 1 file changed, 1 insertion(+) diff --git a/cipher/ecc-ecdh.c b/cipher/ecc-ecdh.c index 43eb731a..d6b8991a 100644 --- a/cipher/ecc-ecdh.c +++ b/cipher/ecc-ecdh.c @@ -122,5 +122,6 @@ _gcry_ecc_mul_point (int curveid, unsigned char *result, _gcry_mpi_release (x); point_free (&Q); _gcry_mpi_release (mpi_k); + _gcry_mpi_ec_free (ec); return err; } -- 2.27.0 From jussi.kivilinna at iki.fi Sun Jan 31 17:01:37 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 31 Jan 2021 18:01:37 +0200 Subject: [PATCH 5/8] Add handling for -Og with O-flag munging In-Reply-To: <20210131160140.618273-1-jussi.kivilinna@iki.fi> References: <20210131160140.618273-1-jussi.kivilinna@iki.fi> Message-ID: <20210131160140.618273-5-jussi.kivilinna@iki.fi> * cipher/Makefile.am (o_flag_munging): Add handling for '-Og'. * random/Makefile.am (o_flag_munging): Add handling for '-Og'. -- Signed-off-by: Jussi Kivilinna --- cipher/Makefile.am | 2 +- random/Makefile.am | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/cipher/Makefile.am b/cipher/Makefile.am index 6d3ec35e..d6440056 100644 --- a/cipher/Makefile.am +++ b/cipher/Makefile.am @@ -147,7 +147,7 @@ gost-s-box: gost-s-box.c if ENABLE_O_FLAG_MUNGING -o_flag_munging = sed -e 's/-O\([2-9s][2-9s]*\)/-O1/' -e 's/-Ofast/-O1/g' +o_flag_munging = sed -e 's/-O\([2-9sg][2-9sg]*\)/-O1/' -e 's/-Ofast/-O1/g' else o_flag_munging = cat endif diff --git a/random/Makefile.am b/random/Makefile.am index 60af5b4a..7e6e6f03 100644 --- a/random/Makefile.am +++ b/random/Makefile.am @@ -55,7 +55,7 @@ jitterentropy-base.c jitterentropy.h jitterentropy-base-user.h # The rndjent module needs to be compiled without optimization. */ if ENABLE_O_FLAG_MUNGING -o_flag_munging = sed -e 's/-O\([1-9s][1-9s]*\)/-O0/g' -e 's/-Ofast/-O0/g' +o_flag_munging = sed -e 's/-O\([1-9sg][1-9sg]*\)/-O0/g' -e 's/-Ofast/-O0/g' else o_flag_munging = cat endif -- 2.27.0 From jussi.kivilinna at iki.fi Sun Jan 31 17:01:35 2021 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 31 Jan 2021 18:01:35 +0200 Subject: [PATCH 3/8] Fix ubsan warnings for i386 build In-Reply-To: <20210131160140.618273-1-jussi.kivilinna@iki.fi> References: <20210131160140.618273-1-jussi.kivilinna@iki.fi> Message-ID: <20210131160140.618273-3-jussi.kivilinna@iki.fi> * mpi/mpicoder.c (_gcry_mpi_set_buffer) [BYTES_PER_MPI_LIMB == 4]: Cast "*p--" values to mpi_limb_t before left shifting. * tests/t-lock.c (main): Cast 'time(NULL)' to unsigned type. 
-- Signed-off-by: Jussi Kivilinna --- mpi/mpicoder.c | 16 ++++++++-------- tests/t-lock.c | 2 +- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/mpi/mpicoder.c b/mpi/mpicoder.c index a133421e..f61f777f 100644 --- a/mpi/mpicoder.c +++ b/mpi/mpicoder.c @@ -354,10 +354,10 @@ _gcry_mpi_set_buffer (gcry_mpi_t a, const void *buffer_arg, for (i=0, p = buffer+nbytes-1; p >= buffer+BYTES_PER_MPI_LIMB; ) { #if BYTES_PER_MPI_LIMB == 4 - alimb = *p-- ; - alimb |= *p-- << 8 ; - alimb |= *p-- << 16 ; - alimb |= *p-- << 24 ; + alimb = (mpi_limb_t)*p-- ; + alimb |= (mpi_limb_t)*p-- << 8 ; + alimb |= (mpi_limb_t)*p-- << 16 ; + alimb |= (mpi_limb_t)*p-- << 24 ; #elif BYTES_PER_MPI_LIMB == 8 alimb = (mpi_limb_t)*p-- ; alimb |= (mpi_limb_t)*p-- << 8 ; @@ -375,13 +375,13 @@ _gcry_mpi_set_buffer (gcry_mpi_t a, const void *buffer_arg, if ( p >= buffer ) { #if BYTES_PER_MPI_LIMB == 4 - alimb = *p--; + alimb = (mpi_limb_t)*p--; if (p >= buffer) - alimb |= *p-- << 8; + alimb |= (mpi_limb_t)*p-- << 8; if (p >= buffer) - alimb |= *p-- << 16; + alimb |= (mpi_limb_t)*p-- << 16; if (p >= buffer) - alimb |= *p-- << 24; + alimb |= (mpi_limb_t)*p-- << 24; #elif BYTES_PER_MPI_LIMB == 8 alimb = (mpi_limb_t)*p--; if (p >= buffer) diff --git a/tests/t-lock.c b/tests/t-lock.c index e263aff2..cacc3835 100644 --- a/tests/t-lock.c +++ b/tests/t-lock.c @@ -433,7 +433,7 @@ main (int argc, char **argv) } } - srand (time(NULL)*getpid()); + srand ((unsigned int)time(NULL)*getpid()); if (debug) xgcry_control ((GCRYCTL_SET_DEBUG_FLAGS, 1u, 0)); -- 2.27.0
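To make the mpicoder.c hunk of [PATCH 3/8] a bit more concrete: with 4-byte limbs, '*p--' is an unsigned char that gets promoted to (signed) int, so shifting a byte with the top bit set left by 24 moves a one into the sign bit, which is undefined behaviour in C and is the kind of shift ubsan flags on i386 builds. Casting to mpi_limb_t first keeps the whole computation in unsigned arithmetic. A stand-alone illustration (not libgcrypt code):

  #include <stdint.h>
  #include <inttypes.h>
  #include <stdio.h>

  typedef uint32_t mpi_limb_t;   /* what BYTES_PER_MPI_LIMB == 4 boils down to */

  int main (void)
  {
    unsigned char byte = 0x80;
    mpi_limb_t alimb;

    /* Undefined: 'byte' is promoted to int, and 0x80 << 24 is not
     * representable in the (signed) int result type.  */
    /* alimb = byte << 24; */

    /* Well defined: widen to the unsigned limb type before shifting.  */
    alimb = (mpi_limb_t)byte << 24;

    printf ("%08" PRIx32 "\n", alimb);   /* prints 80000000 */
    return 0;
  }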