From yumkam at gmail.com Sun Mar 10 09:38:37 2019
From: yumkam at gmail.com (Yuriy M. Kaminskiy)
Date: Sun, 10 Mar 2019 11:38:37 +0300
Subject: FYI: fast gcm/ghash for arm neon
Message-ID: <80b05ff1-079f-a6e5-0e01-5961e5dd1a2b@gmail.com>

Currently ghash/gcm performance on arm in both gcrypt and nettle is a bit
abysmal:

=== bench-slopes-nettle ===
 GCM auth | 28.43 ns/B 33.54 MiB/s 39.81 c/B 1400.2
=== bench-slopes-gcrypt ===
 GCM auth | 21.86 ns/B 43.62 MiB/s 30.52 c/B 1396.0
=== bench-slopes-openssl [1.1.1a] ===
 GCM auth | 5.99 ns/B 159.3 MiB/s 8.38 c/B 1399.6
=== cut ===

The current openssl/cryptogams code is based on ideas from
https://hal.inria.fr/hal-01506572 (licensed CC BY 4.0), and there is a
linked implementation at
https://conradoplg.cryptoland.net/software/ecc-and-ae-for-arm-neon/
(licensed LGPL 2.1+), which I guess should be acceptable to borrow.

A very preliminary patch for nettle will be posted as a reply (it passes the
nettle regression tests, but needs more extensive testing):

=== bench-slopes-nettle [w/ patched nettle 3.3] ===
 aes128 | nanosecs/byte mebibytes/sec cycles/byte
 GCM auth | 7.07 ns/B 134.9 MiB/s 9.90 c/B
=== cut ===

(And not only is it notably faster, it should also be completely free of
cache/timing leaks.)

From jussi.kivilinna at iki.fi Mon Mar 11 18:05:06 2019
From: jussi.kivilinna at iki.fi (Jussi Kivilinna)
Date: Mon, 11 Mar 2019 19:05:06 +0200
Subject: FYI: fast gcm/ghash for arm neon
In-Reply-To: <80b05ff1-079f-a6e5-0e01-5961e5dd1a2b@gmail.com>
References: <80b05ff1-079f-a6e5-0e01-5961e5dd1a2b@gmail.com>
Message-ID: <4462d545-da9c-7ace-e0ba-7258af78fa49@iki.fi>

Hello,

On 10.3.2019 10.38, Yuriy M. Kaminskiy wrote:
> Currently ghash/gcm performance on arm in both gcrypt and nettle is a bit
> abysmal:
> === bench-slopes-nettle ===
> GCM auth | 28.43 ns/B 33.54 MiB/s 39.81 c/B 1400.2
> === bench-slopes-gcrypt ===
> GCM auth | 21.86 ns/B 43.62 MiB/s 30.52 c/B 1396.0
> === bench-slopes-openssl [1.1.1a] ===
> GCM auth | 5.99 ns/B 159.3 MiB/s 8.38 c/B 1399.6
> === cut ===
> The current openssl/cryptogams code is based on ideas from
> https://hal.inria.fr/hal-01506572 (licensed CC BY 4.0), and there is a
> linked implementation at
> https://conradoplg.cryptoland.net/software/ecc-and-ae-for-arm-neon/
> (licensed LGPL 2.1+), which I guess should be acceptable to borrow.

Thanks for providing the links to these. My focus for AES/GCM has been on
the ARM crypto extension instruction set, so I hadn't looked into an
ARM/NEON implementation.
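(For readers skimming the archive: the operation being benchmarked here,
GHASH, treats each 16-byte block as an element of GF(2^128) and folds it
into the tag as Y = (Y xor C_i) * H.  The snippet below is only a textbook,
bit-by-bit sketch of that field multiplication, not the code used by nettle,
gcrypt or openssl, but it shows what the table-driven and vmull.p8/PMULL
implementations discussed in this thread are accelerating:

  #include <stdint.h>
  #include <string.h>

  /* GF(2^128) multiply in the GHASH bit ordering (NIST SP 800-38D):
     bit 0 is the most significant bit of the first byte, multiplying by
     x is a right shift, and the reduction polynomial
     x^128 + x^7 + x^2 + x + 1 is represented by 0xE1 in the top byte. */
  static void
  gf128_mul (uint8_t r[16], const uint8_t x[16], const uint8_t y[16])
  {
    uint8_t z[16] = { 0 };
    uint8_t v[16];
    int i, j, carry;

    memcpy (v, y, 16);
    for (i = 0; i < 128; i++)
      {
        if ((x[i / 8] >> (7 - (i % 8))) & 1)    /* bit i of x set? */
          for (j = 0; j < 16; j++)
            z[j] ^= v[j];                       /* z ^= v */
        carry = v[15] & 1;                      /* v = v * x, then reduce */
        for (j = 15; j > 0; j--)
          v[j] = (uint8_t)((v[j] >> 1) | (v[j - 1] << 7));
        v[0] >>= 1;
        if (carry)
          v[0] ^= 0xE1;
      }
    memcpy (r, z, 16);
  }

One such multiplication per 16-byte block is essentially the entire cost of
GCM authentication, which is why this single primitive dominates the numbers
above, and why table-based variants raise cache-timing concerns while the
bitwise form above is merely slow.)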
When the CPU has support for crypto instructions, gcrypt performs
significantly better and gives results similar to openssl:

Cortex-A53, 32-bit:

bench-slope-gcrypt: libgcrypt: 1.8.3
 AES | nanosecs/byte mebibytes/sec cycles/byte auto Mhz
 GCM enc | 2.65 ns/B 360.3 MiB/s 2.16 c/B 816.0
 GCM dec | 2.65 ns/B 360.1 MiB/s 2.16 c/B 816.0
 GCM auth | 1.08 ns/B 885.9 MiB/s 0.878 c/B 816.0

bench-slope-openssl: OpenSSL 1.1.1 11 Sep 2018
 aes-128 | nanosecs/byte mebibytes/sec cycles/byte auto Mhz
 GCM enc | 3.05 ns/B 313.1 MiB/s 2.49 c/B 816.0
 GCM dec | 3.04 ns/B 313.3 MiB/s 2.48 c/B 816.0
 GCM auth | 1.23 ns/B 777.2 MiB/s 1.00 c/B 816.0

Cortex-A53, 64-bit:

bench-slope-gcrypt: libgcrypt: 1.8.3
 AES | nanosecs/byte mebibytes/sec cycles/byte auto Mhz
 GCM enc | 2.69 ns/B 354.4 MiB/s 2.20 c/B 816.0
 GCM dec | 2.70 ns/B 353.8 MiB/s 2.20 c/B 816.0
 GCM auth | 1.24 ns/B 771.1 MiB/s 1.01 c/B 816.0

bench-slope-openssl: OpenSSL 1.1.1 11 Sep 2018
 aes-128 | nanosecs/byte mebibytes/sec cycles/byte auto Mhz
 GCM enc | 2.86 ns/B 333.7 MiB/s 2.33 c/B 816.0
 GCM dec | 2.86 ns/B 333.6 MiB/s 2.33 c/B 816.0
 GCM auth | 1.06 ns/B 903.3 MiB/s 0.861 c/B 816.0

Adding an ARM/NEON implementation would make sense for low-end ARM CPUs,
since those do not provide these crypto instructions.

-Jussi

From jussi.kivilinna at iki.fi Mon Mar 11 18:13:20 2019
From: jussi.kivilinna at iki.fi (Jussi Kivilinna)
Date: Mon, 11 Mar 2019 19:13:20 +0200
Subject: FYI: fast gcm/ghash for arm neon
In-Reply-To: <4462d545-da9c-7ace-e0ba-7258af78fa49@iki.fi>
References: <80b05ff1-079f-a6e5-0e01-5961e5dd1a2b@gmail.com> <4462d545-da9c-7ace-e0ba-7258af78fa49@iki.fi>
Message-ID: <7901e562-bd34-7498-ecca-ff1b7507b753@iki.fi>

On 11.3.2019 19.05, Jussi Kivilinna wrote:
> Hello,
>
> On 10.3.2019 10.38, Yuriy M. Kaminskiy wrote:
>> Currently ghash/gcm performance on arm in both gcrypt and nettle is a bit
>> abysmal:
>> === bench-slopes-nettle ===
>> GCM auth | 28.43 ns/B 33.54 MiB/s 39.81 c/B 1400.2
>> === bench-slopes-gcrypt ===
>> GCM auth | 21.86 ns/B 43.62 MiB/s 30.52 c/B 1396.0
>> === bench-slopes-openssl [1.1.1a] ===
>> GCM auth | 5.99 ns/B 159.3 MiB/s 8.38 c/B 1399.6
>> === cut ===
>> The current openssl/cryptogams code is based on ideas from
>> https://hal.inria.fr/hal-01506572 (licensed CC BY 4.0), and there is a
>> linked implementation at
>> https://conradoplg.cryptoland.net/software/ecc-and-ae-for-arm-neon/
>> (licensed LGPL 2.1+), which I guess should be acceptable to borrow.
>
> Thanks for providing the links to these. My focus for AES/GCM has been on
> the ARM crypto extension instruction set, so I hadn't looked into an
> ARM/NEON implementation. When the CPU has support for crypto instructions,
> gcrypt performs significantly better and gives results similar to openssl:

Forgot to mention that the gcrypt ARM-CE/GCM implementation is based on the
paper "Gouvêa, C. P. L. & López, J. Implementing GCM on ARMv8. Topics in
Cryptology – CT-RSA 2015", https://conradoplg.cryptoland.net/publications/

-Jussi

From jussi.kivilinna at iki.fi Tue Mar 19 21:09:09 2019
From: jussi.kivilinna at iki.fi (Jussi Kivilinna)
Date: Tue, 19 Mar 2019 22:09:09 +0200
Subject: [PATCH] doc/gcrypt.texi: update HW feature list
Message-ID: <155302614921.25089.12345185173850937369.stgit@localhost.localdomain>

* doc/gcrypt.texi: Update HW feature list.
-- Signed-off-by: Jussi Kivilinna --- doc/gcrypt.texi | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/doc/gcrypt.texi b/doc/gcrypt.texi index 5f20a54d6..ea57dfb92 100644 --- a/doc/gcrypt.texi +++ b/doc/gcrypt.texi @@ -569,13 +569,20 @@ are @item intel-fast-shld @item intel-bmi2 @item intel-ssse3 + at item intel-sse4.1 @item intel-pclmul @item intel-aesni @item intel-rdrand @item intel-avx @item intel-avx2 + at item intel-fast-vpgather @item intel-rdtsc + at item intel-shaext @item arm-neon + at item arm-aes + at item arm-sha1 + at item arm-sha2 + at item arm-pmull @end table To disable a feature for all processes using Libgcrypt 1.6 or newer, From jussi.kivilinna at iki.fi Tue Mar 19 21:09:55 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Tue, 19 Mar 2019 22:09:55 +0200 Subject: [PATCH] sha1-avx: use vmovdqa instead of movdqa Message-ID: <155302619521.27467.3802810407765273448.stgit@localhost.localdomain> * cipher/sha1-avx-amd64.S: Replace 'movdqa' with 'vmovdqa'. * cipher/sha1-avx-bmi2-amd64.S: Replace 'movdqa' with 'vmovdqa'. -- Replace SSE instruction 'movdqa' with AVX instruction 'vmovdqa' as mixing SSE and AVX instructions can lead to bad performance. Signed-off-by: Jussi Kivilinna --- 0 files changed diff --git a/cipher/sha1-avx-amd64.S b/cipher/sha1-avx-amd64.S index b14603bf6..5f5b9c0e4 100644 --- a/cipher/sha1-avx-amd64.S +++ b/cipher/sha1-avx-amd64.S @@ -248,7 +248,7 @@ _gcry_sha1_transform_amd64_avx: movl state_h3(RSTATE), d; movl state_h4(RSTATE), e; - movdqa .Lbswap_shufb_ctl RIP, BSWAP_REG; + vmovdqa .Lbswap_shufb_ctl RIP, BSWAP_REG; /* Precalc 0-15. */ W_PRECALC_00_15_0(0, W0, Wtmp0); diff --git a/cipher/sha1-avx-bmi2-amd64.S b/cipher/sha1-avx-bmi2-amd64.S index b267693f4..8292c3afb 100644 --- a/cipher/sha1-avx-bmi2-amd64.S +++ b/cipher/sha1-avx-bmi2-amd64.S @@ -246,7 +246,7 @@ _gcry_sha1_transform_amd64_avx_bmi2: movl state_h3(RSTATE), d; movl state_h4(RSTATE), e; - movdqa .Lbswap_shufb_ctl RIP, BSWAP_REG; + vmovdqa .Lbswap_shufb_ctl RIP, BSWAP_REG; /* Precalc 0-15. */ W_PRECALC_00_15_0(0, W0, Wtmp0); From jussi.kivilinna at iki.fi Thu Mar 21 20:36:20 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Thu, 21 Mar 2019 21:36:20 +0200 Subject: [PATCH 1/3] Reduce overhead on generic hash write function Message-ID: <155319697993.29766.3011414883942029205.stgit@localhost.localdomain> * cipher/hash-common.c (_gcry_md_block_write): Remove recursive function call; Use buf_cpy for copying buffers; Burn stack only once. -- Signed-off-by: Jussi Kivilinna --- 0 files changed diff --git a/cipher/hash-common.c b/cipher/hash-common.c index a750d6443..74675d49f 100644 --- a/cipher/hash-common.c +++ b/cipher/hash-common.c @@ -26,6 +26,7 @@ #endif #include "g10lib.h" +#include "bufhelp.h" #include "hash-common.h" @@ -121,8 +122,10 @@ _gcry_md_block_write (void *context, const void *inbuf_arg, size_t inlen) const unsigned char *inbuf = inbuf_arg; gcry_md_block_ctx_t *hd = context; unsigned int stack_burn = 0; + unsigned int nburn; const unsigned int blocksize = hd->blocksize; size_t inblocks; + size_t copylen; if (sizeof(hd->buf) < blocksize) BUG(); @@ -130,38 +133,53 @@ _gcry_md_block_write (void *context, const void *inbuf_arg, size_t inlen) if (!hd->bwrite) return; - if (hd->count == blocksize) /* Flush the buffer. 
*/ + while (hd->count) { - stack_burn = hd->bwrite (hd, hd->buf, 1); - _gcry_burn_stack (stack_burn); - stack_burn = 0; - hd->count = 0; - if (!++hd->nblocks) - hd->nblocks_high++; - } - if (!inbuf) - return; + if (hd->count == blocksize) /* Flush the buffer. */ + { + nburn = hd->bwrite (hd, hd->buf, 1); + stack_burn = nburn > stack_burn ? nburn : stack_burn; + hd->count = 0; + if (!++hd->nblocks) + hd->nblocks_high++; + } + else + { + copylen = inlen; + if (copylen > blocksize - hd->count) + copylen = blocksize - hd->count; - if (hd->count) - { - for (; inlen && hd->count < blocksize; inlen--) - hd->buf[hd->count++] = *inbuf++; - _gcry_md_block_write (hd, NULL, 0); - if (!inlen) - return; + if (copylen == 0) + break; + + buf_cpy (&hd->buf[hd->count], inbuf, copylen); + hd->count += copylen; + inbuf += copylen; + inlen -= copylen; + } } + if (inlen == 0) + return; + if (inlen >= blocksize) { inblocks = inlen / blocksize; - stack_burn = hd->bwrite (hd, inbuf, inblocks); + nburn = hd->bwrite (hd, inbuf, inblocks); + stack_burn = nburn > stack_burn ? nburn : stack_burn; hd->count = 0; hd->nblocks_high += (hd->nblocks + inblocks < inblocks); hd->nblocks += inblocks; inlen -= inblocks * blocksize; inbuf += inblocks * blocksize; } - _gcry_burn_stack (stack_burn); - for (; inlen && hd->count < blocksize; inlen--) - hd->buf[hd->count++] = *inbuf++; + + if (inlen) + { + buf_cpy (hd->buf, inbuf, inlen); + hd->count = inlen; + } + + if (stack_burn > 0) + _gcry_burn_stack (stack_burn); } From jussi.kivilinna at iki.fi Thu Mar 21 20:36:25 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Thu, 21 Mar 2019 21:36:25 +0200 Subject: [PATCH 2/3] Use buf_cpy instead of copying buffers byte by byte In-Reply-To: <155319697993.29766.3011414883942029205.stgit@localhost.localdomain> References: <155319697993.29766.3011414883942029205.stgit@localhost.localdomain> Message-ID: <155319698513.29766.6641367928506356192.stgit@localhost.localdomain> * cipher/bufhelp.h (buf_cpy): Skip memcpy if length is zero. * cipher/cipher-ccm.c (do_cbc_mac): Replace buffer copy loops with buf_cpy call. * cipher/cipher-cmac.c (_gcry_cmac_write): Ditto. * cipher/cipher-ocb.c (_gcry_cipher_ocb_authenticate): Ditto. -- Signed-off-by: Jussi Kivilinna --- 0 files changed diff --git a/cipher/bufhelp.h b/cipher/bufhelp.h index 0e8f5991c..5043d8b04 100644 --- a/cipher/bufhelp.h +++ b/cipher/bufhelp.h @@ -207,6 +207,8 @@ buf_cpy(void *_dst, const void *_src, size_t len) #if __GNUC__ >= 4 if (!__builtin_constant_p (len)) { + if (len == 0) + return; memcpy(_dst, _src, len); return; } diff --git a/cipher/cipher-ccm.c b/cipher/cipher-ccm.c index fd284caa5..3bacb6b16 100644 --- a/cipher/cipher-ccm.c +++ b/cipher/cipher-ccm.c @@ -44,6 +44,7 @@ do_cbc_mac (gcry_cipher_hd_t c, const unsigned char *inbuf, size_t inlen, unsigned int burn = 0; unsigned int unused = c->u_mode.ccm.mac_unused; size_t nblocks; + size_t n; if (inlen == 0 && (unused == 0 || !do_padding)) return 0; @@ -52,8 +53,12 @@ do_cbc_mac (gcry_cipher_hd_t c, const unsigned char *inbuf, size_t inlen, { if (inlen + unused < blocksize || unused > 0) { - for (; inlen && unused < blocksize; inlen--) - c->u_mode.ccm.macbuf[unused++] = *inbuf++; + n = (inlen > blocksize - unused) ? 
blocksize - unused : inlen; + + buf_cpy (&c->u_mode.ccm.macbuf[unused], inbuf, n); + unused += n; + inlen -= n; + inbuf += n; } if (!inlen) { diff --git a/cipher/cipher-cmac.c b/cipher/cipher-cmac.c index da550c372..4efd1e19b 100644 --- a/cipher/cipher-cmac.c +++ b/cipher/cipher-cmac.c @@ -43,6 +43,7 @@ _gcry_cmac_write (gcry_cipher_hd_t c, gcry_cmac_context_t *ctx, byte outbuf[MAX_BLOCKSIZE]; unsigned int burn = 0; unsigned int nblocks; + size_t n; if (ctx->tag) return GPG_ERR_INV_STATE; @@ -56,15 +57,24 @@ _gcry_cmac_write (gcry_cipher_hd_t c, gcry_cmac_context_t *ctx, /* Last block is needed for cmac_final. */ if (ctx->mac_unused + inlen <= blocksize) { - for (; inlen && ctx->mac_unused < blocksize; inlen--) - ctx->macbuf[ctx->mac_unused++] = *inbuf++; + buf_cpy (&ctx->macbuf[ctx->mac_unused], inbuf, inlen); + ctx->mac_unused += inlen; + inbuf += inlen; + inlen -= inlen; + return 0; } if (ctx->mac_unused) { - for (; inlen && ctx->mac_unused < blocksize; inlen--) - ctx->macbuf[ctx->mac_unused++] = *inbuf++; + n = inlen; + if (n > blocksize - ctx->mac_unused) + n = blocksize - ctx->mac_unused; + + buf_cpy (&ctx->macbuf[ctx->mac_unused], inbuf, n); + ctx->mac_unused += n; + inbuf += n; + inlen -= n; cipher_block_xor (ctx->u_iv.iv, ctx->u_iv.iv, ctx->macbuf, blocksize); set_burn (burn, enc_fn (&c->context.c, ctx->u_iv.iv, ctx->u_iv.iv)); @@ -96,8 +106,14 @@ _gcry_cmac_write (gcry_cipher_hd_t c, gcry_cmac_context_t *ctx, if (inlen == 0) BUG (); - for (; inlen && ctx->mac_unused < blocksize; inlen--) - ctx->macbuf[ctx->mac_unused++] = *inbuf++; + n = inlen; + if (n > blocksize - ctx->mac_unused) + n = blocksize - ctx->mac_unused; + + buf_cpy (&ctx->macbuf[ctx->mac_unused], inbuf, n); + ctx->mac_unused += n; + inbuf += n; + inlen -= n; if (burn) _gcry_burn_stack (burn + 4 * sizeof (void *)); diff --git a/cipher/cipher-ocb.c b/cipher/cipher-ocb.c index 308b04952..f1c94db0c 100644 --- a/cipher/cipher-ocb.c +++ b/cipher/cipher-ocb.c @@ -254,6 +254,7 @@ _gcry_cipher_ocb_authenticate (gcry_cipher_hd_t c, const unsigned char *abuf, unsigned char l_tmp[OCB_BLOCK_LEN]; unsigned int burn = 0; unsigned int nburn; + size_t n; /* Check that a nonce and thus a key has been set and that we have not yet computed the tag. We also return an error if the aad has @@ -268,9 +269,15 @@ _gcry_cipher_ocb_authenticate (gcry_cipher_hd_t c, const unsigned char *abuf, /* Process remaining data from the last call first. */ if (c->u_mode.ocb.aad_nleftover) { - for (; abuflen && c->u_mode.ocb.aad_nleftover < OCB_BLOCK_LEN; - abuf++, abuflen--) - c->u_mode.ocb.aad_leftover[c->u_mode.ocb.aad_nleftover++] = *abuf; + n = abuflen; + if (n > OCB_BLOCK_LEN - c->u_mode.ocb.aad_nleftover) + n = OCB_BLOCK_LEN - c->u_mode.ocb.aad_nleftover; + + buf_cpy (&c->u_mode.ocb.aad_leftover[c->u_mode.ocb.aad_nleftover], + abuf, n); + c->u_mode.ocb.aad_nleftover += n; + abuf += n; + abuflen -= n; if (c->u_mode.ocb.aad_nleftover == OCB_BLOCK_LEN) { @@ -383,9 +390,19 @@ _gcry_cipher_ocb_authenticate (gcry_cipher_hd_t c, const unsigned char *abuf, } /* Store away the remaining data. 
*/ - for (; abuflen && c->u_mode.ocb.aad_nleftover < OCB_BLOCK_LEN; - abuf++, abuflen--) - c->u_mode.ocb.aad_leftover[c->u_mode.ocb.aad_nleftover++] = *abuf; + if (abuflen) + { + n = abuflen; + if (n > OCB_BLOCK_LEN - c->u_mode.ocb.aad_nleftover) + n = OCB_BLOCK_LEN - c->u_mode.ocb.aad_nleftover; + + buf_cpy (&c->u_mode.ocb.aad_leftover[c->u_mode.ocb.aad_nleftover], + abuf, n); + c->u_mode.ocb.aad_nleftover += n; + abuf += n; + abuflen -= n; + } + gcry_assert (!abuflen); if (burn > 0) From jussi.kivilinna at iki.fi Thu Mar 21 20:36:30 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Thu, 21 Mar 2019 21:36:30 +0200 Subject: [PATCH 3/3] Use memset instead of setting buffers byte by byte In-Reply-To: <155319697993.29766.3011414883942029205.stgit@localhost.localdomain> References: <155319697993.29766.3011414883942029205.stgit@localhost.localdomain> Message-ID: <155319699030.29766.444450356048693488.stgit@localhost.localdomain> * cipher/cipher-ccm.c (do_cbc_mac): Replace buffer setting loop with memset call. * cipher/cipher-gcm.c (do_ghash_buf): Ditto. * cipher/poly1305.c (poly1305_final): Ditto. -- Signed-off-by: Jussi Kivilinna --- 0 files changed diff --git a/cipher/cipher-ccm.c b/cipher/cipher-ccm.c index 3bacb6b16..dcb268d08 100644 --- a/cipher/cipher-ccm.c +++ b/cipher/cipher-ccm.c @@ -65,8 +65,12 @@ do_cbc_mac (gcry_cipher_hd_t c, const unsigned char *inbuf, size_t inlen, if (!do_padding) break; - while (unused < blocksize) - c->u_mode.ccm.macbuf[unused++] = 0; + n = blocksize - unused; + if (n > 0) + { + memset (&c->u_mode.ccm.macbuf[unused], 0, n); + unused = blocksize; + } } if (unused > 0) diff --git a/cipher/cipher-gcm.c b/cipher/cipher-gcm.c index f9ddbc568..4fdd61207 100644 --- a/cipher/cipher-gcm.c +++ b/cipher/cipher-gcm.c @@ -525,8 +525,12 @@ do_ghash_buf(gcry_cipher_hd_t c, byte *hash, const byte *buf, if (!do_padding) break; - while (unused < blocksize) - c->u_mode.gcm.macbuf[unused++] = 0; + n = blocksize - unused; + if (n > 0) + { + memset (&c->u_mode.gcm.macbuf[unused], 0, n); + unused = blocksize; + } } if (unused > 0) diff --git a/cipher/poly1305.c b/cipher/poly1305.c index 8de6cd5e6..cded7cb2e 100644 --- a/cipher/poly1305.c +++ b/cipher/poly1305.c @@ -202,8 +202,12 @@ static unsigned int poly1305_final (poly1305_context_t *ctx, if (ctx->leftover) { ctx->buffer[ctx->leftover++] = 1; - for (; ctx->leftover < POLY1305_BLOCKSIZE; ctx->leftover++) - ctx->buffer[ctx->leftover] = 0; + if (ctx->leftover < POLY1305_BLOCKSIZE) + { + memset (&ctx->buffer[ctx->leftover], 0, + POLY1305_BLOCKSIZE - ctx->leftover); + ctx->leftover = POLY1305_BLOCKSIZE; + } burn = poly1305_blocks (ctx, ctx->buffer, POLY1305_BLOCKSIZE, 0); } @@ -398,8 +402,12 @@ static unsigned int poly1305_final (poly1305_context_t *ctx, if (ctx->leftover) { ctx->buffer[ctx->leftover++] = 1; - for (; ctx->leftover < POLY1305_BLOCKSIZE; ctx->leftover++) - ctx->buffer[ctx->leftover] = 0; + if (ctx->leftover < POLY1305_BLOCKSIZE) + { + memset (&ctx->buffer[ctx->leftover], 0, + POLY1305_BLOCKSIZE - ctx->leftover); + ctx->leftover = POLY1305_BLOCKSIZE; + } burn = poly1305_blocks (ctx, ctx->buffer, POLY1305_BLOCKSIZE, 0); } From jussi.kivilinna at iki.fi Sat Mar 23 21:11:42 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sat, 23 Mar 2019 22:11:42 +0200 Subject: [PATCH] Add ARMv7/NEON accelerated GCM implementation Message-ID: <155337190217.29273.9893586202060637452.stgit@localhost.localdomain> * cipher/Makefile.am: Add 'cipher-gcm-armv7-neon.S'. * cipher/cipher-gcm-armv7-neon.S: New. 
* cipher/cipher-gcm.c [GCM_USE_ARM_NEON] (_gcry_ghash_setup_armv7_neon) (_gcry_ghash_armv7_neon, ghash_setup_armv7_neon) (ghash_armv7_neon): New. (setupM) [GCM_USE_ARM_NEON]: Use armv7/neon implementation if have HWF_ARM_NEON. * cipher/cipher-internal.h (GCM_USE_ARM_NEON): New. -- Benchmark on Cortex-A53 (816 Mhz): Before: | nanosecs/byte mebibytes/sec cycles/byte GMAC_AES | 34.81 ns/B 27.40 MiB/s 28.41 c/B After (3.0x faster): | nanosecs/byte mebibytes/sec cycles/byte GMAC_AES | 11.49 ns/B 82.99 MiB/s 9.38 c/B Reported-by: Yuriy M. Kaminskiy Signed-off-by: Jussi Kivilinna --- 0 files changed diff --git a/cipher/Makefile.am b/cipher/Makefile.am index 16066bfc6..1e67771e5 100644 --- a/cipher/Makefile.am +++ b/cipher/Makefile.am @@ -48,7 +48,7 @@ libcipher_la_SOURCES = \ cipher-aeswrap.c \ cipher-ccm.c \ cipher-cmac.c \ - cipher-gcm.c cipher-gcm-intel-pclmul.c \ + cipher-gcm.c cipher-gcm-intel-pclmul.c cipher-gcm-armv7-neon.S \ cipher-gcm-armv8-aarch32-ce.S cipher-gcm-armv8-aarch64-ce.S \ cipher-poly1305.c \ cipher-ocb.c \ diff --git a/cipher/cipher-gcm-armv7-neon.S b/cipher/cipher-gcm-armv7-neon.S new file mode 100644 index 000000000..a801a5e57 --- /dev/null +++ b/cipher/cipher-gcm-armv7-neon.S @@ -0,0 +1,341 @@ +/* cipher-gcm-armv7-neon.S - ARM/NEON accelerated GHASH + * Copyright (C) 2019 Jussi Kivilinna + * + * This file is part of Libgcrypt. + * + * Libgcrypt is free software; you can redistribute it and/or modify + * it under the terms of the GNU Lesser General Public License as + * published by the Free Software Foundation; either version 2.1 of + * the License, or (at your option) any later version. + * + * Libgcrypt is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see . 
+ */ + +#include + +#if defined(HAVE_ARM_ARCH_V6) && defined(__ARMEL__) && \ + defined(HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS) && \ + defined(HAVE_GCC_INLINE_ASM_NEON) + +.syntax unified +.fpu neon +.arm + +.text + +#ifdef __PIC__ +# define GET_DATA_POINTER(reg, name, rtmp) \ + ldr reg, 1f; \ + ldr rtmp, 2f; \ + b 3f; \ + 1: .word _GLOBAL_OFFSET_TABLE_-(3f+8); \ + 2: .word name(GOT); \ + 3: add reg, pc, reg; \ + ldr reg, [reg, rtmp]; +#else +# define GET_DATA_POINTER(reg, name, rtmp) ldr reg, =name +#endif + + +/* Constants */ + +.align 4 +gcry_gcm_reduction_constant: +.Lrconst64: + .quad 0xc200000000000000 + +/* Register macros */ + +#define rhash q0 +#define rhash_l d0 +#define rhash_h d1 + +#define rh1 q1 +#define rh1_l d2 +#define rh1_h d3 + +#define rbuf q2 +#define rbuf_l d4 +#define rbuf_h d5 + +#define rbuf1 q3 +#define rbuf1_l d6 +#define rbuf1_h d7 + +#define t0q q4 +#define t0l d8 +#define t0h d9 + +#define t1q q5 +#define t1l d10 +#define t1h d11 + +#define t2q q6 +#define t2l d12 +#define t2h d13 + +#define t3q q7 +#define t3l d14 +#define t3h d15 + +/* q8 */ +#define k16 d16 +#define k32 d17 + +/* q9 */ +#define k48 d18 + +#define k0 q10 + +#define rr0 q11 +#define rr0_l d22 +#define rr0_h d23 + +#define rr1 q12 +#define rr1_l d24 +#define rr1_h d25 + +#define rt0 q13 +#define rt0_l d26 +#define rt0_h d27 + +#define rt1 q14 +#define rt1_l d28 +#define rt1_h d29 + +#define rrconst q15 +#define rrconst_l d30 +#define rrconst_h d31 + +/* Macro for 64x64=>128 carry-less multiplication using vmull.p8 instruction. + * + * From "C?mara, D.; Gouv?a, C. P. L.; L?pez, J. & Dahab, R. Fast Software + * Polynomial Multiplication on ARM Processors using the NEON Engine. The + * Second International Workshop on Modern Cryptography and Security + * Engineering ? MoCrySEn, 2013". */ + +#define vmull_p64(rq, rl, rh, ad, bd) \ + vext.8 t0l, ad, ad, $1; \ + vmull.p8 t0q, t0l, bd; \ + vext.8 rl, bd, bd, $1; \ + vmull.p8 rq, ad, rl; \ + vext.8 t1l, ad, ad, $2; \ + vmull.p8 t1q, t1l, bd; \ + vext.8 t3l, bd, bd, $2; \ + vmull.p8 t3q, ad, t3l; \ + vext.8 t2l, ad, ad, $3; \ + vmull.p8 t2q, t2l, bd; \ + veor t0q, t0q, rq; \ + vext.8 rl, bd, bd, $3; \ + vmull.p8 rq, ad, rl; \ + veor t1q, t1q, t3q; \ + vext.8 t3l, bd, bd, $4; \ + vmull.p8 t3q, ad, t3l; \ + veor t0l, t0l, t0h; \ + vand t0h, t0h, k48; \ + veor t1l, t1l, t1h; \ + vand t1h, t1h, k32; \ + veor t2q, t2q, rq; \ + veor t0l, t0l, t0h; \ + veor t1l, t1l, t1h; \ + veor t2l, t2l, t2h; \ + vand t2h, t2h, k16; \ + veor t3l, t3l, t3h; \ + vmov.i64 t3h, $0; \ + vext.8 t0q, t0q, t0q, $15; \ + veor t2l, t2l, t2h; \ + vext.8 t1q, t1q, t1q, $14; \ + vmull.p8 rq, ad, bd; \ + vext.8 t2q, t2q, t2q, $13; \ + vext.8 t3q, t3q, t3q, $12; \ + veor t0q, t0q, t1q; \ + veor t2q, t2q, t3q; \ + veor rq, rq, t0q; \ + veor rq, rq, t2q; + +/* GHASH macros. + * + * See "Gouv?a, C. P. L. & L?pez, J. Implementing GCM on ARMv8. Topics in + * Cryptology ? CT-RSA 2015" for details. + */ + +/* Input: 'a' and 'b', Output: 'r0:r1' (low 128-bits in r0, high in r1) + * Note: 'r1' may be 'a' or 'b', 'r0' must not be either 'a' or 'b'. + */ +#define PMUL_128x128(r0, r1, a, b, t1, t2, interleave_op) \ + veor t1##_h, b##_l, b##_h; \ + veor t1##_l, a##_l, a##_h; \ + vmull_p64( r0, r0##_l, r0##_h, a##_l, b##_l ); \ + vmull_p64( r1, r1##_l, r1##_h, a##_h, b##_h ); \ + vmull_p64( t2, t2##_h, t2##_l, t1##_h, t1##_l ); \ + interleave_op; \ + veor t2, r0; \ + veor t2, r1; \ + veor r0##_h, t2##_l; \ + veor r1##_l, t2##_h; + +/* Reduction using Xor and Shift. 
+ * Input: 'r0:r1', Output: 'a' + * + * See "Shay Gueron, Michael E. Kounavis. Intel Carry-Less Multiplication + * Instruction and its Usage for Computing the GCM Mode" for details. + */ +#define REDUCTION(a, r0, r1, t, interleave_op) \ + vshl.u32 t0q, r0, #31; \ + vshl.u32 t1q, r0, #30; \ + vshl.u32 t2q, r0, #25; \ + veor t0q, t0q, t1q; \ + veor t0q, t0q, t2q; \ + vext.8 t, t0q, k0, #4; \ + vext.8 t0q, k0, t0q, #(16-12); \ + veor r0, r0, t0q; \ + interleave_op; \ + vshr.u32 t0q, r0, #1; \ + vshr.u32 t1q, r0, #2; \ + vshr.u32 t2q, r0, #7; \ + veor t0q, t0q, t1q; \ + veor t0q, t0q, t2q; \ + veor t0q, t0q, t; \ + veor r0, r0, t0q; \ + veor a, r0, r1; + +#define _(...) __VA_ARGS__ +#define __ _() + +/* Other functional macros */ + +#define CLEAR_REG(reg) veor reg, reg; + + +/* + * unsigned int _gcry_ghash_armv7_neon (void *gcm_key, byte *result, + * const byte *buf, size_t nblocks); + */ +.align 3 +.globl _gcry_ghash_armv7_neon +.type _gcry_ghash_armv7_neon,%function; +_gcry_ghash_armv7_neon: + /* input: + * r0: gcm_key + * r1: result/hash + * r2: buf + * r3: nblocks + */ + push {r4-r6, lr} + + cmp r3, #0 + beq .Ldo_nothing + + vpush {q4-q7} + + vld1.64 {rhash}, [r1] + vld1.64 {rh1}, [r0] + + vrev64.8 rhash, rhash /* byte-swap */ + + vmov.i64 k0, #0x0 + vmov.i64 k16, #0xffff + vmov.i64 k32, #0xffffffff + vmov.i64 k48, #0xffffffffffff + + vext.8 rhash, rhash, rhash, #8 + + /* Handle remaining blocks. */ + + vld1.64 {rbuf}, [r2]! + subs r3, r3, #1 + + vrev64.8 rbuf, rbuf /* byte-swap */ + vext.8 rbuf, rbuf, rbuf, #8 + + veor rhash, rhash, rbuf + + beq .Lend + +.Loop: + vld1.64 {rbuf}, [r2]! + PMUL_128x128(rr0, rr1, rhash, rh1, rt0, rt1, _(vrev64.8 rbuf, rbuf)) + REDUCTION(rhash, rr0, rr1, rt0, _(vext.8 rbuf, rbuf, rbuf, #8)) + subs r3, r3, #1 + veor rhash, rhash, rbuf + + bne .Loop + +.Lend: + PMUL_128x128(rr0, rr1, rhash, rh1, rt0, rt1, _(CLEAR_REG(rbuf))) + REDUCTION(rhash, rr0, rr1, rt0, _(CLEAR_REG(rh1))) + +.Ldone: + CLEAR_REG(rr1) + vrev64.8 rhash, rhash /* byte-swap */ + CLEAR_REG(rt0) + CLEAR_REG(rr0) + vext.8 rhash, rhash, rhash, #8 + CLEAR_REG(rt1) + CLEAR_REG(t0q) + CLEAR_REG(t1q) + CLEAR_REG(t2q) + CLEAR_REG(t3q) + vst1.64 {rhash}, [r1] + CLEAR_REG(rhash) + + vpop {q4-q7} + +.Ldo_nothing: + mov r0, #0 + pop {r4-r6, pc} +.size _gcry_ghash_armv7_neon,.-_gcry_ghash_armv7_neon; + + +/* + * void _gcry_ghash_armv7_neon (void *gcm_key); + */ +.align 3 +.globl _gcry_ghash_setup_armv7_neon +.type _gcry_ghash_setup_armv7_neon,%function; +_gcry_ghash_setup_armv7_neon: + /* input: + * r0: gcm_key + */ + + vpush {q4-q7} + + GET_DATA_POINTER(r2, .Lrconst64, r3) + + vld1.64 {rrconst_h}, [r2] + +#define GCM_LSH_1(r_out, ia, ib, const_d, oa, ob, ma) \ + /* H <<< 1 */ \ + vshr.s64 ma, ib, #63; \ + vshr.u64 oa, ib, #63; \ + vshr.u64 ob, ia, #63; \ + vand ma, const_d; \ + vshl.u64 ib, ib, #1; \ + vshl.u64 ia, ia, #1; \ + vorr ob, ib; \ + vorr oa, ia; \ + veor ob, ma; \ + vst1.64 {oa, ob}, [r_out] + + vld1.64 {rhash}, [r0] + vrev64.8 rhash, rhash /* byte-swap */ + vext.8 rhash, rhash, rhash, #8 + + vmov rbuf1, rhash + GCM_LSH_1(r0, rhash_l, rhash_h, rrconst_h, rh1_l, rh1_h, rt1_l) /* H<<<1 */ + + CLEAR_REG(rh1) + CLEAR_REG(rhash) + CLEAR_REG(rbuf1) + CLEAR_REG(rrconst) + vpop {q4-q7} + bx lr +.size _gcry_ghash_setup_armv7_neon,.-_gcry_ghash_setup_armv7_neon; + +#endif diff --git a/cipher/cipher-gcm.c b/cipher/cipher-gcm.c index 4fdd61207..cbda87be2 100644 --- a/cipher/cipher-gcm.c +++ b/cipher/cipher-gcm.c @@ -58,8 +58,28 @@ ghash_armv8_ce_pmull (gcry_cipher_hd_t c, byte *result, const byte *buf, return 
_gcry_ghash_armv8_ce_pmull(c->u_mode.gcm.u_ghash_key.key, result, buf, nblocks, c->u_mode.gcm.gcm_table); } +#endif /* GCM_USE_ARM_PMULL */ -#endif +#ifdef GCM_USE_ARM_NEON +extern void _gcry_ghash_setup_armv7_neon (void *gcm_key); + +extern unsigned int _gcry_ghash_armv7_neon (void *gcm_key, byte *result, + const byte *buf, size_t nblocks); + +static void +ghash_setup_armv7_neon (gcry_cipher_hd_t c) +{ + _gcry_ghash_setup_armv7_neon(c->u_mode.gcm.u_ghash_key.key); +} + +static unsigned int +ghash_armv7_neon (gcry_cipher_hd_t c, byte *result, const byte *buf, + size_t nblocks) +{ + return _gcry_ghash_armv7_neon(c->u_mode.gcm.u_ghash_key.key, result, buf, + nblocks); +} +#endif /* GCM_USE_ARM_NEON */ #ifdef GCM_USE_TABLES @@ -422,6 +442,13 @@ setupM (gcry_cipher_hd_t c) c->u_mode.gcm.ghash_fn = ghash_armv8_ce_pmull; ghash_setup_armv8_ce_pmull (c); } +#endif +#ifdef GCM_USE_ARM_NEON + else if (features & HWF_ARM_NEON) + { + c->u_mode.gcm.ghash_fn = ghash_armv7_neon; + ghash_setup_armv7_neon (c); + } #endif else { diff --git a/cipher/cipher-internal.h b/cipher/cipher-internal.h index 5ece774e6..2283bf319 100644 --- a/cipher/cipher-internal.h +++ b/cipher/cipher-internal.h @@ -86,6 +86,15 @@ # endif #endif /* GCM_USE_ARM_PMULL */ +/* GCM_USE_ARM_NEON indicates whether to compile GCM with ARMv7 NEON code. */ +#undef GCM_USE_ARM_NEON +#if defined(GCM_USE_TABLES) +#if defined(HAVE_ARM_ARCH_V6) && defined(__ARMEL__) && \ + defined(HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS) && \ + defined(HAVE_GCC_INLINE_ASM_NEON) +# define GCM_USE_ARM_NEON 1 +#endif +#endif /* GCM_USE_ARM_NEON */ typedef unsigned int (*ghash_fn_t) (gcry_cipher_hd_t c, byte *result, const byte *buf, size_t nblocks); From jussi.kivilinna at iki.fi Sun Mar 24 09:26:46 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 24 Mar 2019 10:26:46 +0200 Subject: [PATCH] random-drbg: do not use calloc for zero ctr Message-ID: <155341600666.23890.10590867675690596685.stgit@localhost.localdomain> * random/random-drbg.c (DRBG_CTR_NULL_LEN): Move to 'constants' section. (drbg_state_s): Remove 'ctr_null' member. (drbg_ctr_generate): Add 'drbg_ctr_null'. (drbg_sym_fini, drbg_sym_init): Remove 'drbg->ctr_null' usage. -- GnuPG-bug-id: 3878 Signed-off-by: Jussi Kivilinna --- 0 files changed diff --git a/random/random-drbg.c b/random/random-drbg.c index 7f66997be..e0b4230e6 100644 --- a/random/random-drbg.c +++ b/random/random-drbg.c @@ -235,6 +235,8 @@ #define DRBG_DEFAULT_TYPE DRBG_NOPR_HMACSHA256 +#define DRBG_CTR_NULL_LEN 128 + /****************************************************************** * Common data structures @@ -313,8 +315,6 @@ struct drbg_state_s * operation -- allocated during init */ void *priv_data; /* Cipher handle */ gcry_cipher_hd_t ctr_handle; /* CTR mode cipher handle */ -#define DRBG_CTR_NULL_LEN 128 - unsigned char *ctr_null; /* CTR mode zero buffer */ int seeded:1; /* DRBG fully seeded? */ int pr:1; /* Prediction resistance enabled? 
*/ /* Taken from libgcrypt ANSI X9.31 DRNG: We need to keep track of the @@ -951,6 +951,7 @@ drbg_ctr_generate (drbg_state_t drbg, unsigned char *buf, unsigned int buflen, drbg_string_t *addtl) { + static const unsigned char drbg_ctr_null[DRBG_CTR_NULL_LEN] = { 0, }; gpg_err_code_t ret = 0; memset (drbg->scratchpad, 0, drbg_blocklen (drbg)); @@ -965,7 +966,7 @@ drbg_ctr_generate (drbg_state_t drbg, } /* 10.2.1.5.2 step 4.1 */ - ret = drbg_sym_ctr (drbg, drbg->ctr_null, DRBG_CTR_NULL_LEN, buf, buflen); + ret = drbg_sym_ctr (drbg, drbg_ctr_null, sizeof(drbg_ctr_null), buf, buflen); if (ret) goto out; @@ -2582,8 +2583,6 @@ drbg_sym_fini (drbg_state_t drbg) _gcry_cipher_close (hd); if (drbg->ctr_handle) _gcry_cipher_close (drbg->ctr_handle); - if (drbg->ctr_null) - free(drbg->ctr_null); } static gpg_err_code_t @@ -2592,10 +2591,6 @@ drbg_sym_init (drbg_state_t drbg) gcry_cipher_hd_t hd; gpg_error_t err; - drbg->ctr_null = calloc(1, DRBG_CTR_NULL_LEN); - if (!drbg->ctr_null) - return GPG_ERR_ENOMEM; - err = _gcry_cipher_open (&hd, drbg->core->backend_cipher, GCRY_CIPHER_MODE_ECB, 0); if (err) From jussi.kivilinna at iki.fi Sun Mar 24 09:51:31 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 24 Mar 2019 10:51:31 +0200 Subject: [PATCH] doc: add mention about aligning data to cachelines for best performance Message-ID: <155341749143.28601.2773789330581895178.stgit@localhost.localdomain> * doc/gcrypt.text: Add mention about aligning data to cachelines for best performance. -- GnuPG-bug-id: 2388 Signed-off-by: Jussi Kivilinna --- 0 files changed diff --git a/doc/gcrypt.texi b/doc/gcrypt.texi index ea57dfb92..8adf3a355 100644 --- a/doc/gcrypt.texi +++ b/doc/gcrypt.texi @@ -5727,6 +5727,9 @@ Set an initialization vector to be used for encryption or decryption. Encrypt or decrypt data. These functions may be called with arbitrary amounts of data and as often as needed to encrypt or decrypt all data. +There is no strict alignment requirements for data, but the best +performance can be archived if data is aligned to cacheline boundary. + @end table There are also functions to query properties of algorithms or context, @@ -5764,6 +5767,9 @@ Set the key for the MAC. @item gcry_md_write Pass more data for computing the message digest to an instance. +There is no strict alignment requirements for data, but the best +performance can be archived if data is aligned to cacheline boundary. + @item gcry_md_putc Buffered version of @code{gcry_md_write} implemented as a macro. From jussi.kivilinna at iki.fi Tue Mar 26 18:31:08 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Tue, 26 Mar 2019 19:31:08 +0200 Subject: [PATCH 1/2] chacha20-poly1305: fix wrong en/decryption on large input buffers Message-ID: <155362146831.13779.9173455240401392547.stgit@localhost.localdomain> * cipher/chacha20.c (_gcry_chacha20_poly1305_encrypt) (_gcry_chacha20_poly1305_decrypt): Correctly use 'currlen' for chacha20 on the non-stitched code path. 
-- This patch fixes bug which was introduced by commit: "Add stitched ChaCha20-Poly1305 SSSE3 and AVX2 implementations" d6330dfb4b0e9fb3f8eef65ea13146060b804a97 Signed-off-by: Jussi Kivilinna --- 0 files changed diff --git a/cipher/chacha20.c b/cipher/chacha20.c index eae4979cc..48fff6250 100644 --- a/cipher/chacha20.c +++ b/cipher/chacha20.c @@ -714,7 +714,7 @@ _gcry_chacha20_poly1305_encrypt(gcry_cipher_hd_t c, byte *outbuf, if (currlen > 24 * 1024) currlen = 24 * 1024; - nburn = do_chacha20_encrypt_stream_tail (ctx, outbuf, inbuf, length); + nburn = do_chacha20_encrypt_stream_tail (ctx, outbuf, inbuf, currlen); burn = nburn > burn ? nburn : burn; nburn = _gcry_poly1305_update_burn (&c->u_mode.poly1305.ctx, outbuf, @@ -838,7 +838,7 @@ _gcry_chacha20_poly1305_decrypt(gcry_cipher_hd_t c, byte *outbuf, currlen); burn = nburn > burn ? nburn : burn; - nburn = do_chacha20_encrypt_stream_tail (ctx, outbuf, inbuf, length); + nburn = do_chacha20_encrypt_stream_tail (ctx, outbuf, inbuf, currlen); burn = nburn > burn ? nburn : burn; outbuf += currlen; From jussi.kivilinna at iki.fi Tue Mar 26 18:31:13 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Tue, 26 Mar 2019 19:31:13 +0200 Subject: [PATCH 2/2] tests/basic: add large buffer testing for ciphers In-Reply-To: <155362146831.13779.9173455240401392547.stgit@localhost.localdomain> References: <155362146831.13779.9173455240401392547.stgit@localhost.localdomain> Message-ID: <155362147351.13779.1880351865074486738.stgit@localhost.localdomain> * tests/basic.c (check_one_cipher_core): Allocate buffers from heap. (check_one_cipher): Add testing with large buffer (~65 KiB) in addition to medium size buffer (~2 KiB). -- Signed-off-by: Jussi Kivilinna --- 0 files changed diff --git a/tests/basic.c b/tests/basic.c index 3d86e022e..190b0060b 100644 --- a/tests/basic.c +++ b/tests/basic.c @@ -7326,8 +7326,8 @@ check_one_cipher_core (int algo, int mode, int flags, int bufshift, int pass) { gcry_cipher_hd_t hd; - unsigned char in_buffer[1904+1], out_buffer[1904+1]; - unsigned char enc_result[1904]; + unsigned char *in_buffer, *out_buffer; + unsigned char *enc_result; unsigned char tag_result[16]; unsigned char tag[16]; unsigned char *in, *out; @@ -7338,13 +7338,22 @@ check_one_cipher_core (int algo, int mode, int flags, unsigned int pos; unsigned int taglen; + in_buffer = malloc (nplain + 1); + out_buffer = malloc (nplain + 1); + enc_result = malloc (nplain); + if (!in_buffer || !out_buffer || !enc_result) + { + fail ("pass %d, algo %d, mode %d, malloc failed\n", + pass, algo, mode); + goto err_out_free; + } + blklen = get_algo_mode_blklen(algo, mode); taglen = get_algo_mode_taglen(algo, mode); assert (nkey == 64); - assert (nplain == 1904); - assert (sizeof(in_buffer) == nplain + 1); - assert (sizeof(out_buffer) == sizeof(in_buffer)); + assert (nplain > 0); + assert ((nplain % 16) == 0); assert (blklen > 0); if ((mode == GCRY_CIPHER_MODE_CBC && (flags & GCRY_CIPHER_CBC_CTS)) || @@ -7380,13 +7389,13 @@ check_one_cipher_core (int algo, int mode, int flags, { fail ("pass %d, algo %d, mode %d, gcry_cipher_get_algo_keylen failed\n", pass, algo, mode); - return -1; + goto err_out_free; } if (keylen < 40 / 8 || keylen > 32) { fail ("pass %d, algo %d, mode %d, keylength problem (%d)\n", pass, algo, mode, keylen); - return -1; + goto err_out_free; } if (mode == GCRY_CIPHER_MODE_XTS) @@ -7399,7 +7408,7 @@ check_one_cipher_core (int algo, int mode, int flags, { fail ("pass %d, algo %d, mode %d, gcry_cipher_open failed: %s\n", pass, algo, mode, gpg_strerror 
(err)); - return -1; + goto err_out_free; } err = gcry_cipher_setkey (hd, key, keylen); @@ -7408,11 +7417,11 @@ check_one_cipher_core (int algo, int mode, int flags, fail ("pass %d, algo %d, mode %d, gcry_cipher_setkey failed: %s\n", pass, algo, mode, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } if (check_one_cipher_core_reset (hd, algo, mode, pass, nplain) < 0) - return -1; + goto err_out_free; err = gcry_cipher_encrypt (hd, out, nplain, plain, nplain); if (err) @@ -7420,7 +7429,7 @@ check_one_cipher_core (int algo, int mode, int flags, fail ("pass %d, algo %d, mode %d, gcry_cipher_encrypt failed: %s\n", pass, algo, mode, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } if (taglen > 0) @@ -7431,7 +7440,7 @@ check_one_cipher_core (int algo, int mode, int flags, fail ("pass %d, algo %d, mode %d, gcry_cipher_gettag failed: %s\n", pass, algo, mode, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } memcpy(tag_result, tag, taglen); @@ -7440,7 +7449,7 @@ check_one_cipher_core (int algo, int mode, int flags, memcpy (enc_result, out, nplain); if (check_one_cipher_core_reset (hd, algo, mode, pass, nplain) < 0) - return -1; + goto err_out_free; err = gcry_cipher_decrypt (hd, in, nplain, out, nplain); if (err) @@ -7448,7 +7457,7 @@ check_one_cipher_core (int algo, int mode, int flags, fail ("pass %d, algo %d, mode %d, gcry_cipher_decrypt failed: %s\n", pass, algo, mode, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } if (taglen > 0) @@ -7459,7 +7468,7 @@ check_one_cipher_core (int algo, int mode, int flags, fail ("pass %d, algo %d, mode %d, gcry_cipher_checktag failed: %s\n", pass, algo, mode, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } } @@ -7469,7 +7478,7 @@ check_one_cipher_core (int algo, int mode, int flags, /* Again, using in-place encryption. */ if (check_one_cipher_core_reset (hd, algo, mode, pass, nplain) < 0) - return -1; + goto err_out_free; memcpy (out, plain, nplain); err = gcry_cipher_encrypt (hd, out, nplain, NULL, 0); @@ -7479,7 +7488,7 @@ check_one_cipher_core (int algo, int mode, int flags, " %s\n", pass, algo, mode, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } if (taglen > 0) @@ -7491,7 +7500,7 @@ check_one_cipher_core (int algo, int mode, int flags, "gcry_cipher_gettag failed: %s\n", pass, algo, mode, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } if (memcmp (tag_result, tag, taglen)) @@ -7504,7 +7513,7 @@ check_one_cipher_core (int algo, int mode, int flags, pass, algo, mode); if (check_one_cipher_core_reset (hd, algo, mode, pass, nplain) < 0) - return -1; + goto err_out_free; err = gcry_cipher_decrypt (hd, out, nplain, NULL, 0); if (err) @@ -7513,7 +7522,7 @@ check_one_cipher_core (int algo, int mode, int flags, " %s\n", pass, algo, mode, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } if (taglen > 0) @@ -7525,7 +7534,7 @@ check_one_cipher_core (int algo, int mode, int flags, "gcry_cipher_checktag failed: %s\n", pass, algo, mode, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } } @@ -7535,7 +7544,7 @@ check_one_cipher_core (int algo, int mode, int flags, /* Again, splitting encryption in multiple operations. 
*/ if (check_one_cipher_core_reset (hd, algo, mode, pass, nplain) < 0) - return -1; + goto err_out_free; piecelen = blklen; pos = 0; @@ -7552,7 +7561,7 @@ check_one_cipher_core (int algo, int mode, int flags, "piecelen: %d), gcry_cipher_encrypt failed: %s\n", pass, algo, mode, pos, piecelen, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } pos += piecelen; @@ -7568,7 +7577,7 @@ check_one_cipher_core (int algo, int mode, int flags, "piecelen: %d), gcry_cipher_gettag failed: %s\n", pass, algo, mode, pos, piecelen, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } if (memcmp (tag_result, tag, taglen)) @@ -7581,7 +7590,7 @@ check_one_cipher_core (int algo, int mode, int flags, pass, algo, mode); if (check_one_cipher_core_reset (hd, algo, mode, pass, nplain) < 0) - return -1; + goto err_out_free; piecelen = blklen; pos = 0; @@ -7597,7 +7606,7 @@ check_one_cipher_core (int algo, int mode, int flags, "piecelen: %d), gcry_cipher_decrypt failed: %s\n", pass, algo, mode, pos, piecelen, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } pos += piecelen; @@ -7613,7 +7622,7 @@ check_one_cipher_core (int algo, int mode, int flags, "piecelen: %d), gcry_cipher_checktag failed: %s\n", pass, algo, mode, pos, piecelen, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } } @@ -7624,7 +7633,7 @@ check_one_cipher_core (int algo, int mode, int flags, /* Again, using in-place encryption and splitting encryption in multiple * operations. */ if (check_one_cipher_core_reset (hd, algo, mode, pass, nplain) < 0) - return -1; + goto err_out_free; piecelen = blklen; pos = 0; @@ -7641,7 +7650,7 @@ check_one_cipher_core (int algo, int mode, int flags, "piecelen: %d), gcry_cipher_encrypt failed: %s\n", pass, algo, mode, pos, piecelen, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } pos += piecelen; @@ -7653,7 +7662,7 @@ check_one_cipher_core (int algo, int mode, int flags, pass, algo, mode); if (check_one_cipher_core_reset (hd, algo, mode, pass, nplain) < 0) - return -1; + goto err_out_free; piecelen = blklen; pos = 0; @@ -7669,7 +7678,7 @@ check_one_cipher_core (int algo, int mode, int flags, "piecelen: %d), gcry_cipher_decrypt failed: %s\n", pass, algo, mode, pos, piecelen, gpg_strerror (err)); gcry_cipher_close (hd); - return -1; + goto err_out_free; } pos += piecelen; @@ -7683,7 +7692,16 @@ check_one_cipher_core (int algo, int mode, int flags, gcry_cipher_close (hd); + free (enc_result); + free (out_buffer); + free (in_buffer); return 0; + +err_out_free: + free (enc_result); + free (out_buffer); + free (in_buffer); + return -1; } @@ -7691,17 +7709,26 @@ check_one_cipher_core (int algo, int mode, int flags, static void check_one_cipher (int algo, int mode, int flags) { + size_t medium_buffer_size = 2048 - 16; + size_t large_buffer_size = 64 * 1024 + 1024 - 16; char key[64+1]; - unsigned char plain[1904+1]; + unsigned char *plain; int bufshift, i; - for (bufshift=0; bufshift < 4; bufshift++) + plain = malloc (large_buffer_size + 1); + if (!plain) + { + fail ("pass %d, algo %d, mode %d, malloc failed\n", -1, algo, mode); + return; + } + + for (bufshift = 0; bufshift < 4; bufshift++) { /* Pass 0: Standard test. 
*/ memcpy (key, "0123456789abcdef.,;/[]{}-=ABCDEF_" "0123456789abcdef.,;/[]{}-=ABCDEF", 64); memcpy (plain, "foobar42FOOBAR17", 16); - for (i = 16; i < 1904; i += 16) + for (i = 16; i < medium_buffer_size; i += 16) { memcpy (&plain[i], &plain[i-16], 16); if (!++plain[i+7]) @@ -7710,30 +7737,53 @@ check_one_cipher (int algo, int mode, int flags) plain[i+14]++; } - if (check_one_cipher_core (algo, mode, flags, key, 64, plain, 1904, - bufshift, 0+10*bufshift)) - return; + if (check_one_cipher_core (algo, mode, flags, key, 64, plain, + medium_buffer_size, bufshift, + 0+10*bufshift)) + goto out; /* Pass 1: Key not aligned. */ memmove (key+1, key, 64); - if (check_one_cipher_core (algo, mode, flags, key+1, 64, plain, 1904, - bufshift, 1+10*bufshift)) - return; + if (check_one_cipher_core (algo, mode, flags, key+1, 64, plain, + medium_buffer_size, bufshift, + 1+10*bufshift)) + goto out; /* Pass 2: Key not aligned and data not aligned. */ - memmove (plain+1, plain, 1904); - if (check_one_cipher_core (algo, mode, flags, key+1, 64, plain+1, 1904, - bufshift, 2+10*bufshift)) - return; + memmove (plain+1, plain, medium_buffer_size); + if (check_one_cipher_core (algo, mode, flags, key+1, 64, plain+1, + medium_buffer_size, bufshift, + 2+10*bufshift)) + goto out; /* Pass 3: Key aligned and data not aligned. */ memmove (key, key+1, 64); - if (check_one_cipher_core (algo, mode, flags, key, 64, plain+1, 1904, - bufshift, 3+10*bufshift)) - return; + if (check_one_cipher_core (algo, mode, flags, key, 64, plain+1, + medium_buffer_size, bufshift, + 3+10*bufshift)) + goto out; } - return; + /* Pass 5: Large buffer test. */ + memcpy (key, "0123456789abcdef.,;/[]{}-=ABCDEF_" + "0123456789abcdef.,;/[]{}-=ABCDEF", 64); + memcpy (plain, "foobar42FOOBAR17", 16); + for (i = 16; i < large_buffer_size; i += 16) + { + memcpy (&plain[i], &plain[i-16], 16); + if (!++plain[i+7]) + plain[i+6]++; + if (!++plain[i+15]) + plain[i+14]++; + } + + if (check_one_cipher_core (algo, mode, flags, key, 64, plain, + large_buffer_size, bufshift, + 50)) + goto out; + +out: + free (plain); } From jussi.kivilinna at iki.fi Thu Mar 28 21:13:34 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Thu, 28 Mar 2019 22:13:34 +0200 Subject: [PATCH 1/3] AES-NI/OCB: Use stack for temporary storage Message-ID: <155380401407.18144.16314440043116026646.stgit@localhost.localdomain> * cipher/rijndael-aesni.c (aesni_ocb_enc, aesni_ocb_dec): Use stack allocated 'tmpbuf' instead of output buffer as temporary storage. -- This change gives (very) small improvement for performance (~0.5%) when output buffer is unaligned. 
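Roughly, the scratch area is now carved out of a slightly oversized stack
array and rounded up to the next 16-byte boundary, so that aligned movdqa
accesses can be used on it. A sketch of the idiom from the patch below
(uintptr_t comes from <stdint.h>; the empty asm merely makes the pointer
opaque to the compiler so it cannot reason about its alignment or aliasing):

  unsigned char tmpbuf_store[3 * 16 + 15];  /* 15 spare bytes for alignment */
  unsigned char *tmpbuf;

  asm volatile ("" : "=r" (tmpbuf) : "0" (tmpbuf_store) : "memory");
  tmpbuf = tmpbuf + (-(uintptr_t)tmpbuf & 15);  /* now 16-byte aligned */

Three 16-byte blocks then fit in the aligned region regardless of where the
compiler placed tmpbuf_store.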
Signed-off-by: Jussi Kivilinna --- 0 files changed diff --git a/cipher/rijndael-aesni.c b/cipher/rijndael-aesni.c index 9883861a2..b1f6b0c02 100644 --- a/cipher/rijndael-aesni.c +++ b/cipher/rijndael-aesni.c @@ -2371,8 +2371,13 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, const unsigned char *inbuf = inbuf_arg; u64 n = c->u_mode.ocb.data_nblocks; const unsigned char *l; + byte tmpbuf_store[3 * 16 + 15]; + byte *tmpbuf; aesni_prepare_2_7_variable; + asm volatile ("" : "=r" (tmpbuf) : "0" (tmpbuf_store) : "memory"); + tmpbuf = tmpbuf + (-(uintptr_t)tmpbuf & 15); + aesni_prepare (); aesni_prepare_2_7 (); @@ -2478,22 +2483,22 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, "movdqa %%xmm5, %%xmm0\n\t" "pxor %%xmm6, %%xmm0\n\t" "pxor %%xmm0, %%xmm8\n\t" - "movdqu %%xmm0, %[outbuf4]\n\t" + "movdqa %%xmm0, %[tmpbuf0]\n\t" "movdqa %%xmm10, %%xmm0\n\t" "pxor %%xmm5, %%xmm0\n\t" "pxor %%xmm0, %%xmm9\n\t" - "movdqu %%xmm0, %[outbuf5]\n\t" - : [outbuf4] "=m" (*(outbuf + 4 * BLOCKSIZE)), - [outbuf5] "=m" (*(outbuf + 5 * BLOCKSIZE)) + "movdqa %%xmm0, %[tmpbuf1]\n\t" + : [tmpbuf0] "=m" (*(tmpbuf + 0 * BLOCKSIZE)), + [tmpbuf1] "=m" (*(tmpbuf + 1 * BLOCKSIZE)) : : "memory" ); asm volatile ("movdqu %[inbuf6], %%xmm10\n\t" "movdqa %%xmm11, %%xmm0\n\t" "pxor %%xmm5, %%xmm0\n\t" "pxor %%xmm0, %%xmm10\n\t" - "movdqu %%xmm0, %[outbuf6]\n\t" - : [outbuf6] "=m" (*(outbuf + 6 * BLOCKSIZE)) + "movdqa %%xmm0, %[tmpbuf2]\n\t" + : [tmpbuf2] "=m" (*(tmpbuf + 2 * BLOCKSIZE)) : [inbuf6] "m" (*(inbuf + 6 * BLOCKSIZE)) : "memory" ); asm volatile ("movdqu %[l7], %%xmm0\n\t" @@ -2510,14 +2515,11 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, asm volatile ("pxor %%xmm12, %%xmm1\n\t" "pxor %%xmm13, %%xmm2\n\t" - "movdqu %[outbuf4],%%xmm0\n\t" - "movdqu %[outbuf5],%%xmm12\n\t" - "movdqu %[outbuf6],%%xmm13\n\t" "pxor %%xmm14, %%xmm3\n\t" "pxor %%xmm15, %%xmm4\n\t" - "pxor %%xmm0, %%xmm8\n\t" - "pxor %%xmm12, %%xmm9\n\t" - "pxor %%xmm13, %%xmm10\n\t" + "pxor %[tmpbuf0],%%xmm8\n\t" + "pxor %[tmpbuf1],%%xmm9\n\t" + "pxor %[tmpbuf2],%%xmm10\n\t" "pxor %%xmm5, %%xmm11\n\t" "movdqu %%xmm1, %[outbuf0]\n\t" "movdqu %%xmm2, %[outbuf1]\n\t" @@ -2531,11 +2533,13 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)), [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)), [outbuf3] "=m" (*(outbuf + 3 * BLOCKSIZE)), - [outbuf4] "+m" (*(outbuf + 4 * BLOCKSIZE)), - [outbuf5] "+m" (*(outbuf + 5 * BLOCKSIZE)), - [outbuf6] "+m" (*(outbuf + 6 * BLOCKSIZE)), + [outbuf4] "=m" (*(outbuf + 4 * BLOCKSIZE)), + [outbuf5] "=m" (*(outbuf + 5 * BLOCKSIZE)), + [outbuf6] "=m" (*(outbuf + 6 * BLOCKSIZE)), [outbuf7] "=m" (*(outbuf + 7 * BLOCKSIZE)) - : + : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), + [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)), + [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)) : "memory" ); outbuf += 8*BLOCKSIZE; @@ -2565,24 +2569,24 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, "movdqu %[l3], %%xmm6\n\t" "pxor %%xmm5, %%xmm0\n\t" "pxor %%xmm0, %%xmm1\n\t" - "movdqu %%xmm0, %[outbuf0]\n\t" - : [outbuf0] "=m" (*(outbuf + 0 * BLOCKSIZE)) + "movdqa %%xmm0, %[tmpbuf0]\n\t" + : [tmpbuf0] "=m" (*(tmpbuf + 0 * BLOCKSIZE)) : [l1] "m" (*c->u_mode.ocb.L[1]), [l3] "m" (*l) : "memory" ); asm volatile ("movdqu %[inbuf1], %%xmm2\n\t" "pxor %%xmm5, %%xmm3\n\t" "pxor %%xmm3, %%xmm2\n\t" - "movdqu %%xmm3, %[outbuf1]\n\t" - : [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)) + "movdqa %%xmm3, %[tmpbuf1]\n\t" + : [tmpbuf1] "=m" (*(tmpbuf + 1 * BLOCKSIZE)) : [inbuf1] "m" (*(inbuf + 1 * BLOCKSIZE)) : "memory" ); 
asm volatile ("movdqa %%xmm4, %%xmm0\n\t" "movdqu %[inbuf2], %%xmm3\n\t" "pxor %%xmm5, %%xmm0\n\t" "pxor %%xmm0, %%xmm3\n\t" - "movdqu %%xmm0, %[outbuf2]\n\t" - : [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)) + "movdqa %%xmm0, %[tmpbuf2]\n\t" + : [tmpbuf2] "=m" (*(tmpbuf + 2 * BLOCKSIZE)) : [inbuf2] "m" (*(inbuf + 2 * BLOCKSIZE)) : "memory" ); @@ -2596,22 +2600,21 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, do_aesni_enc_vec4 (ctx); - asm volatile ("movdqu %[outbuf0],%%xmm0\n\t" - "pxor %%xmm0, %%xmm1\n\t" + asm volatile ("pxor %[tmpbuf0],%%xmm1\n\t" "movdqu %%xmm1, %[outbuf0]\n\t" - "movdqu %[outbuf1],%%xmm0\n\t" - "pxor %%xmm0, %%xmm2\n\t" + "pxor %[tmpbuf1],%%xmm2\n\t" "movdqu %%xmm2, %[outbuf1]\n\t" - "movdqu %[outbuf2],%%xmm0\n\t" - "pxor %%xmm0, %%xmm3\n\t" + "pxor %[tmpbuf2],%%xmm3\n\t" "movdqu %%xmm3, %[outbuf2]\n\t" "pxor %%xmm5, %%xmm4\n\t" "movdqu %%xmm4, %[outbuf3]\n\t" - : [outbuf0] "+m" (*(outbuf + 0 * BLOCKSIZE)), - [outbuf1] "+m" (*(outbuf + 1 * BLOCKSIZE)), - [outbuf2] "+m" (*(outbuf + 2 * BLOCKSIZE)), + : [outbuf0] "=m" (*(outbuf + 0 * BLOCKSIZE)), + [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)), + [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)), [outbuf3] "=m" (*(outbuf + 3 * BLOCKSIZE)) - : + : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), + [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)), + [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)) : "memory" ); outbuf += 4*BLOCKSIZE; @@ -2651,6 +2654,16 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, : : "memory" ); + asm volatile ("pxor %%xmm0, %%xmm0\n\t" + "movdqa %%xmm0, %[tmpbuf0]\n\t" + "movdqa %%xmm0, %[tmpbuf1]\n\t" + "movdqa %%xmm0, %[tmpbuf2]\n\t" + : [tmpbuf0] "=m" (*(tmpbuf + 0 * BLOCKSIZE)), + [tmpbuf1] "=m" (*(tmpbuf + 1 * BLOCKSIZE)), + [tmpbuf2] "=m" (*(tmpbuf + 2 * BLOCKSIZE)) + : + : "memory" ); + aesni_cleanup (); aesni_cleanup_2_7 (); @@ -2668,8 +2681,13 @@ aesni_ocb_dec (gcry_cipher_hd_t c, void *outbuf_arg, u64 n = c->u_mode.ocb.data_nblocks; const unsigned char *l; size_t nblocks = nblocks_arg; + byte tmpbuf_store[3 * 16 + 15]; + byte *tmpbuf; aesni_prepare_2_7_variable; + asm volatile ("" : "=r" (tmpbuf) : "0" (tmpbuf_store) : "memory"); + tmpbuf = tmpbuf + (-(uintptr_t)tmpbuf & 15); + aesni_prepare (); aesni_prepare_2_7 (); @@ -2779,22 +2797,22 @@ aesni_ocb_dec (gcry_cipher_hd_t c, void *outbuf_arg, "movdqa %%xmm5, %%xmm0\n\t" "pxor %%xmm6, %%xmm0\n\t" "pxor %%xmm0, %%xmm8\n\t" - "movdqu %%xmm0, %[outbuf4]\n\t" + "movdqa %%xmm0, %[tmpbuf0]\n\t" "movdqa %%xmm10, %%xmm0\n\t" "pxor %%xmm5, %%xmm0\n\t" "pxor %%xmm0, %%xmm9\n\t" - "movdqu %%xmm0, %[outbuf5]\n\t" - : [outbuf4] "=m" (*(outbuf + 4 * BLOCKSIZE)), - [outbuf5] "=m" (*(outbuf + 5 * BLOCKSIZE)) + "movdqa %%xmm0, %[tmpbuf1]\n\t" + : [tmpbuf0] "=m" (*(tmpbuf + 0 * BLOCKSIZE)), + [tmpbuf1] "=m" (*(tmpbuf + 1 * BLOCKSIZE)) : : "memory" ); asm volatile ("movdqu %[inbuf6], %%xmm10\n\t" "movdqa %%xmm11, %%xmm0\n\t" "pxor %%xmm5, %%xmm0\n\t" "pxor %%xmm0, %%xmm10\n\t" - "movdqu %%xmm0, %[outbuf6]\n\t" - : [outbuf6] "=m" (*(outbuf + 6 * BLOCKSIZE)) + "movdqa %%xmm0, %[tmpbuf2]\n\t" + : [tmpbuf2] "=m" (*(tmpbuf + 2 * BLOCKSIZE)) : [inbuf6] "m" (*(inbuf + 6 * BLOCKSIZE)) : "memory" ); asm volatile ("movdqu %[l7], %%xmm0\n\t" @@ -2811,14 +2829,11 @@ aesni_ocb_dec (gcry_cipher_hd_t c, void *outbuf_arg, asm volatile ("pxor %%xmm12, %%xmm1\n\t" "pxor %%xmm13, %%xmm2\n\t" - "movdqu %[outbuf4],%%xmm0\n\t" - "movdqu %[outbuf5],%%xmm12\n\t" - "movdqu %[outbuf6],%%xmm13\n\t" "pxor %%xmm14, %%xmm3\n\t" "pxor %%xmm15, %%xmm4\n\t" - "pxor %%xmm0, %%xmm8\n\t" - "pxor %%xmm12, 
%%xmm9\n\t" - "pxor %%xmm13, %%xmm10\n\t" + "pxor %[tmpbuf0],%%xmm8\n\t" + "pxor %[tmpbuf1],%%xmm9\n\t" + "pxor %[tmpbuf2],%%xmm10\n\t" "pxor %%xmm5, %%xmm11\n\t" "movdqu %%xmm1, %[outbuf0]\n\t" "movdqu %%xmm2, %[outbuf1]\n\t" @@ -2832,11 +2847,13 @@ aesni_ocb_dec (gcry_cipher_hd_t c, void *outbuf_arg, [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)), [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)), [outbuf3] "=m" (*(outbuf + 3 * BLOCKSIZE)), - [outbuf4] "+m" (*(outbuf + 4 * BLOCKSIZE)), - [outbuf5] "+m" (*(outbuf + 5 * BLOCKSIZE)), - [outbuf6] "+m" (*(outbuf + 6 * BLOCKSIZE)), + [outbuf4] "=m" (*(outbuf + 4 * BLOCKSIZE)), + [outbuf5] "=m" (*(outbuf + 5 * BLOCKSIZE)), + [outbuf6] "=m" (*(outbuf + 6 * BLOCKSIZE)), [outbuf7] "=m" (*(outbuf + 7 * BLOCKSIZE)) - : + : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), + [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)), + [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)) : "memory" ); outbuf += 8*BLOCKSIZE; @@ -2866,24 +2883,24 @@ aesni_ocb_dec (gcry_cipher_hd_t c, void *outbuf_arg, "movdqu %[l3], %%xmm6\n\t" "pxor %%xmm5, %%xmm0\n\t" "pxor %%xmm0, %%xmm1\n\t" - "movdqu %%xmm0, %[outbuf0]\n\t" - : [outbuf0] "=m" (*(outbuf + 0 * BLOCKSIZE)) + "movdqa %%xmm0, %[tmpbuf0]\n\t" + : [tmpbuf0] "=m" (*(tmpbuf + 0 * BLOCKSIZE)) : [l1] "m" (*c->u_mode.ocb.L[1]), [l3] "m" (*l) : "memory" ); asm volatile ("movdqu %[inbuf1], %%xmm2\n\t" "pxor %%xmm5, %%xmm3\n\t" "pxor %%xmm3, %%xmm2\n\t" - "movdqu %%xmm3, %[outbuf1]\n\t" - : [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)) + "movdqa %%xmm3, %[tmpbuf1]\n\t" + : [tmpbuf1] "=m" (*(tmpbuf + 1 * BLOCKSIZE)) : [inbuf1] "m" (*(inbuf + 1 * BLOCKSIZE)) : "memory" ); asm volatile ("movdqa %%xmm4, %%xmm0\n\t" "movdqu %[inbuf2], %%xmm3\n\t" "pxor %%xmm5, %%xmm0\n\t" "pxor %%xmm0, %%xmm3\n\t" - "movdqu %%xmm0, %[outbuf2]\n\t" - : [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)) + "movdqa %%xmm0, %[tmpbuf2]\n\t" + : [tmpbuf2] "=m" (*(tmpbuf + 2 * BLOCKSIZE)) : [inbuf2] "m" (*(inbuf + 2 * BLOCKSIZE)) : "memory" ); @@ -2897,22 +2914,21 @@ aesni_ocb_dec (gcry_cipher_hd_t c, void *outbuf_arg, do_aesni_dec_vec4 (ctx); - asm volatile ("movdqu %[outbuf0],%%xmm0\n\t" - "pxor %%xmm0, %%xmm1\n\t" + asm volatile ("pxor %[tmpbuf0],%%xmm1\n\t" "movdqu %%xmm1, %[outbuf0]\n\t" - "movdqu %[outbuf1],%%xmm0\n\t" - "pxor %%xmm0, %%xmm2\n\t" + "pxor %[tmpbuf1],%%xmm2\n\t" "movdqu %%xmm2, %[outbuf1]\n\t" - "movdqu %[outbuf2],%%xmm0\n\t" - "pxor %%xmm0, %%xmm3\n\t" + "pxor %[tmpbuf2],%%xmm3\n\t" "movdqu %%xmm3, %[outbuf2]\n\t" "pxor %%xmm5, %%xmm4\n\t" "movdqu %%xmm4, %[outbuf3]\n\t" - : [outbuf0] "+m" (*(outbuf + 0 * BLOCKSIZE)), - [outbuf1] "+m" (*(outbuf + 1 * BLOCKSIZE)), - [outbuf2] "+m" (*(outbuf + 2 * BLOCKSIZE)), + : [outbuf0] "=m" (*(outbuf + 0 * BLOCKSIZE)), + [outbuf1] "=m" (*(outbuf + 1 * BLOCKSIZE)), + [outbuf2] "=m" (*(outbuf + 2 * BLOCKSIZE)), [outbuf3] "=m" (*(outbuf + 3 * BLOCKSIZE)) - : + : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), + [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)), + [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)) : "memory" ); outbuf += 4*BLOCKSIZE; @@ -2953,6 +2969,16 @@ aesni_ocb_dec (gcry_cipher_hd_t c, void *outbuf_arg, : : "memory" ); + asm volatile ("pxor %%xmm0, %%xmm0\n\t" + "movdqa %%xmm0, %[tmpbuf0]\n\t" + "movdqa %%xmm0, %[tmpbuf1]\n\t" + "movdqa %%xmm0, %[tmpbuf2]\n\t" + : [tmpbuf0] "=m" (*(tmpbuf + 0 * BLOCKSIZE)), + [tmpbuf1] "=m" (*(tmpbuf + 1 * BLOCKSIZE)), + [tmpbuf2] "=m" (*(tmpbuf + 2 * BLOCKSIZE)) + : + : "memory" ); + aesni_ocb_checksum (c, outbuf_arg, nblocks_arg); aesni_cleanup (); From jussi.kivilinna at iki.fi Thu Mar 28 21:13:39 2019 
From: jussi.kivilinna at iki.fi (Jussi Kivilinna)
Date: Thu, 28 Mar 2019 22:13:39 +0200
Subject: [PATCH 2/3] AES-NI/OCB: Perform checksumming inline with encryption
In-Reply-To: <155380401407.18144.16314440043116026646.stgit@localhost.localdomain>
References: <155380401407.18144.16314440043116026646.stgit@localhost.localdomain>
Message-ID: <155380401925.18144.5103949122001978275.stgit@localhost.localdomain>

* cipher/rijndael-aesni.c (aesni_ocb_enc): Remove call to
'aesni_ocb_checksum', instead perform checksumming inline with
offset calculations.
--

This patch reverts the OCB checksumming split for encryption to avoid a
performance issue seen on Intel CPUs.

Commit b42de67f34 "Optimizations for AES-NI OCB" changed the AES-NI/OCB
implementation to perform checksumming as a separate pass from
encryption and decryption. While that change improved performance for
buffer sizes of 16 to 4096 bytes (the buffer sizes used by bench-slope),
it introduced a performance anomaly with OCB encryption on Intel
processors.

Below are large-buffer OCB encryption results on Intel Haswell. They
show that with buffer sizes larger than 32 KiB performance starts
dropping. Decryption does not suffer from the same issue.

[ASCII plot: OCB encryption speed (MiB/s) by data length at 2 GHz,
 1 KiB to 1 MiB; throughput rises steeply with buffer size, peaks
 near 2600 MiB/s around 32 KiB, then declines for larger buffers.]

I've tested and reproduced this issue on Intel Ivy-Bridge, Haswell and
Skylake processors. The same performance drop on large buffers is not
seen on AMD Ryzen. Below is the OCB decryption speed plot from Haswell
for reference, showing the expected performance curve over increasing
buffer sizes.
MiB/s Speed by Data Length (at 2 Ghz) 2800 +-------------------------------------------------------------+ 2600 |-+ + + **.****.****.****.****.****.*****.****| | **.** | 2400 |-+ *.** +-| 2200 |-+ *** +-| 2000 |-+ *.* +-| | ** | 1800 |-+ ** +-| 1600 |-+ *.* +-| 1400 |-+** +-| |** | 1200 |*+ + + + + + + +-| 1000 +-------------------------------------------------------------+ 1024 4096 16384 65536 262144 1048576 Data Length in Bytes After this patch, bench-slope shows ~2% reduction on performance on Intel Haswell: Before: AES | nanosecs/byte mebibytes/sec cycles/byte auto Mhz OCB enc | 0.171 ns/B 5581 MiB/s 0.683 c/B 3998 After: AES | nanosecs/byte mebibytes/sec cycles/byte auto Mhz OCB enc | 0.174 ns/B 5468 MiB/s 0.697 c/B 3998 Signed-off-by: Jussi Kivilinna --- 0 files changed diff --git a/cipher/rijndael-aesni.c b/cipher/rijndael-aesni.c index b1f6b0c02..e9d9f680e 100644 --- a/cipher/rijndael-aesni.c +++ b/cipher/rijndael-aesni.c @@ -2381,23 +2381,25 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, aesni_prepare (); aesni_prepare_2_7 (); - aesni_ocb_checksum (c, inbuf_arg, nblocks); - /* Preload Offset */ asm volatile ("movdqu %[iv], %%xmm5\n\t" - : /* No output */ - : [iv] "m" (*c->u_iv.iv) - : "memory" ); + "movdqu %[ctr], %%xmm7\n\t" + : /* No output */ + : [iv] "m" (*c->u_iv.iv), + [ctr] "m" (*c->u_ctr.ctr) + : "memory" ); for ( ;nblocks && n % 4; nblocks-- ) { l = aes_ocb_get_l(c, ++n); + /* Checksum_i = Checksum_{i-1} xor P_i */ /* Offset_i = Offset_{i-1} xor L_{ntz(i)} */ /* C_i = Offset_i xor ENCIPHER(K, P_i xor Offset_i) */ asm volatile ("movdqu %[l], %%xmm1\n\t" "movdqu %[inbuf], %%xmm0\n\t" "pxor %%xmm1, %%xmm5\n\t" + "pxor %%xmm0, %%xmm7\n\t" "pxor %%xmm5, %%xmm0\n\t" : : [l] "m" (*l), @@ -2445,6 +2447,7 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, n += 4; l = aes_ocb_get_l(c, n); + /* Checksum_i = Checksum_{i-1} xor P_i */ /* Offset_i = Offset_{i-1} xor L_{ntz(i)} */ /* P_i = Offset_i xor ENCIPHER(K, C_i xor Offset_i) */ asm volatile ("movdqu %[inbuf0], %%xmm1\n\t" @@ -2465,28 +2468,34 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, : "memory" ); asm volatile ("movdqa %%xmm6, %%xmm12\n\t" "pxor %%xmm5, %%xmm12\n\t" + "pxor %%xmm1, %%xmm7\n\t" "pxor %%xmm12, %%xmm1\n\t" "movdqa %%xmm10, %%xmm13\n\t" "pxor %%xmm5, %%xmm13\n\t" + "pxor %%xmm2, %%xmm7\n\t" "pxor %%xmm13, %%xmm2\n\t" "movdqa %%xmm11, %%xmm14\n\t" "pxor %%xmm5, %%xmm14\n\t" + "pxor %%xmm3, %%xmm7\n\t" "pxor %%xmm14, %%xmm3\n\t" "pxor %%xmm11, %%xmm5\n\t" "pxor %%xmm15, %%xmm5\n\t" + "pxor %%xmm4, %%xmm7\n\t" "pxor %%xmm5, %%xmm4\n\t" "movdqa %%xmm5, %%xmm15\n\t" "movdqa %%xmm5, %%xmm0\n\t" "pxor %%xmm6, %%xmm0\n\t" + "pxor %%xmm8, %%xmm7\n\t" "pxor %%xmm0, %%xmm8\n\t" "movdqa %%xmm0, %[tmpbuf0]\n\t" "movdqa %%xmm10, %%xmm0\n\t" "pxor %%xmm5, %%xmm0\n\t" + "pxor %%xmm9, %%xmm7\n\t" "pxor %%xmm0, %%xmm9\n\t" "movdqa %%xmm0, %[tmpbuf1]\n\t" : [tmpbuf0] "=m" (*(tmpbuf + 0 * BLOCKSIZE)), @@ -2496,6 +2505,7 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, asm volatile ("movdqu %[inbuf6], %%xmm10\n\t" "movdqa %%xmm11, %%xmm0\n\t" "pxor %%xmm5, %%xmm0\n\t" + "pxor %%xmm10, %%xmm7\n\t" "pxor %%xmm0, %%xmm10\n\t" "movdqa %%xmm0, %[tmpbuf2]\n\t" : [tmpbuf2] "=m" (*(tmpbuf + 2 * BLOCKSIZE)) @@ -2505,6 +2515,7 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, "pxor %%xmm11, %%xmm5\n\t" "pxor %%xmm0, %%xmm5\n\t" "movdqu %[inbuf7], %%xmm11\n\t" + "pxor %%xmm11, %%xmm7\n\t" "pxor %%xmm5, %%xmm11\n\t" : : [l7] "m" (*l), @@ -2555,6 +2566,7 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, 
n += 4; l = aes_ocb_get_l(c, n); + /* Checksum_i = Checksum_{i-1} xor P_i */ /* Offset_i = Offset_{i-1} xor L_{ntz(i)} */ /* C_i = Offset_i xor ENCIPHER(K, P_i xor Offset_i) */ asm volatile ("movdqu %[l0], %%xmm0\n\t" @@ -2568,6 +2580,7 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, asm volatile ("movdqu %[l1], %%xmm4\n\t" "movdqu %[l3], %%xmm6\n\t" "pxor %%xmm5, %%xmm0\n\t" + "pxor %%xmm1, %%xmm7\n\t" "pxor %%xmm0, %%xmm1\n\t" "movdqa %%xmm0, %[tmpbuf0]\n\t" : [tmpbuf0] "=m" (*(tmpbuf + 0 * BLOCKSIZE)) @@ -2576,6 +2589,7 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, : "memory" ); asm volatile ("movdqu %[inbuf1], %%xmm2\n\t" "pxor %%xmm5, %%xmm3\n\t" + "pxor %%xmm2, %%xmm7\n\t" "pxor %%xmm3, %%xmm2\n\t" "movdqa %%xmm3, %[tmpbuf1]\n\t" : [tmpbuf1] "=m" (*(tmpbuf + 1 * BLOCKSIZE)) @@ -2584,6 +2598,7 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, asm volatile ("movdqa %%xmm4, %%xmm0\n\t" "movdqu %[inbuf2], %%xmm3\n\t" "pxor %%xmm5, %%xmm0\n\t" + "pxor %%xmm3, %%xmm7\n\t" "pxor %%xmm0, %%xmm3\n\t" "movdqa %%xmm0, %[tmpbuf2]\n\t" : [tmpbuf2] "=m" (*(tmpbuf + 2 * BLOCKSIZE)) @@ -2593,6 +2608,7 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, asm volatile ("pxor %%xmm6, %%xmm5\n\t" "pxor %%xmm4, %%xmm5\n\t" "movdqu %[inbuf3], %%xmm4\n\t" + "pxor %%xmm4, %%xmm7\n\t" "pxor %%xmm5, %%xmm4\n\t" : : [inbuf3] "m" (*(inbuf + 3 * BLOCKSIZE)) @@ -2625,11 +2641,13 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, { l = aes_ocb_get_l(c, ++n); + /* Checksum_i = Checksum_{i-1} xor P_i */ /* Offset_i = Offset_{i-1} xor L_{ntz(i)} */ /* C_i = Offset_i xor ENCIPHER(K, P_i xor Offset_i) */ asm volatile ("movdqu %[l], %%xmm1\n\t" "movdqu %[inbuf], %%xmm0\n\t" "pxor %%xmm1, %%xmm5\n\t" + "pxor %%xmm0, %%xmm7\n\t" "pxor %%xmm5, %%xmm0\n\t" : : [l] "m" (*l), @@ -2650,7 +2668,9 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, c->u_mode.ocb.data_nblocks = n; asm volatile ("movdqu %%xmm5, %[iv]\n\t" - : [iv] "=m" (*c->u_iv.iv) + "movdqu %%xmm7, %[ctr]\n\t" + : [iv] "=m" (*c->u_iv.iv), + [ctr] "=m" (*c->u_ctr.ctr) : : "memory" ); From jussi.kivilinna at iki.fi Thu Mar 28 21:13:44 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Thu, 28 Mar 2019 22:13:44 +0200 Subject: [PATCH 3/3] AES-NI/OCB: Optimize last and first key XORing In-Reply-To: <155380401407.18144.16314440043116026646.stgit@localhost.localdomain> References: <155380401407.18144.16314440043116026646.stgit@localhost.localdomain> Message-ID: <155380402442.18144.3389416509618610652.stgit@localhost.localdomain> * cipher/rijndael-aesni.c (aesni_ocb_enc, aesni_ocb_dec) [__x86_64__]: Reorder and mix first and last key XORing with OCB offset XOR operations. -- OCB pre-XORing and post-XORing can be mixed and reordered with first and last round XORing of AES cipher. This commit utilizes this fact for additional optimization of AES-NI/OCB encryption and decryption. 
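For readers following the reasoning: AES-NI applies the first round key
with a plain XOR and the last round key inside AESENCLAST (which XORs
its second operand into the result), so the OCB pre-whitening XOR can be
merged into the first round key operand and the post-whitening XOR into
the AESENCLAST operand. A minimal single-block sketch using the standard
AES-NI intrinsics follows; the helper name and the per-block structure
are illustrative only (the patch itself keeps eight blocks in flight and
caches rk[0] XOR rk[last] in an aligned temporary).

#include <wmmintrin.h>

/* Illustrative sketch only, not the patch code: one OCB block,
 * C_i = Offset_i ^ E_K(P_i ^ Offset_i), with the OCB whitening folded
 * into the first and last round key operands. */
static __m128i
ocb_enc_block_sketch (const __m128i *rk, int nrounds,
                      __m128i offset, __m128i plain)
{
  /* Offset_i ^ rk[0]: OCB pre-whitening and round-0 AddRoundKey in one. */
  __m128i pre  = _mm_xor_si128 (offset, rk[0]);
  /* Offset_i ^ rk[nrounds]: last round key and OCB post-whitening in one. */
  __m128i post = _mm_xor_si128 (offset, rk[nrounds]);
  __m128i x = _mm_xor_si128 (plain, pre);
  int r;

  for (r = 1; r < nrounds; r++)
    x = _mm_aesenc_si128 (x, rk[r]);

  /* AESENCLAST XORs 'post' into the state, finishing both the cipher
   * and the OCB output whitening with a single instruction. */
  return _mm_aesenclast_si128 (x, post);
}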
Benchmark on Intel Haswell: Before: AES | nanosecs/byte mebibytes/sec cycles/byte auto Mhz OCB enc | 0.174 ns/B 5468 MiB/s 0.697 c/B 3998 OCB dec | 0.170 ns/B 5617 MiB/s 0.679 c/B 3998 After (enc ~11% faster, dec ~6% faster): AES | nanosecs/byte mebibytes/sec cycles/byte auto Mhz OCB enc | 0.157 ns/B 6065 MiB/s 0.629 c/B 3998 OCB dec | 0.160 ns/B 5956 MiB/s 0.640 c/B 3998 For reference, CTR: AES | nanosecs/byte mebibytes/sec cycles/byte auto Mhz CTR enc | 0.157 ns/B 6090 MiB/s 0.626 c/B 3998 CTR dec | 0.157 ns/B 6092 MiB/s 0.626 c/B 3998 Signed-off-by: Jussi Kivilinna --- 0 files changed diff --git a/cipher/rijndael-aesni.c b/cipher/rijndael-aesni.c index e9d9f680e..a2a62abd8 100644 --- a/cipher/rijndael-aesni.c +++ b/cipher/rijndael-aesni.c @@ -2421,13 +2421,27 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, #ifdef __x86_64__ if (nblocks >= 8) { + unsigned char last_xor_first_key_store[16 + 15]; + unsigned char *lxf_key; aesni_prepare_8_15_variable; + asm volatile ("" + : "=r" (lxf_key) + : "0" (last_xor_first_key_store) + : "memory"); + lxf_key = lxf_key + (-(uintptr_t)lxf_key & 15); + aesni_prepare_8_15(); asm volatile ("movdqu %[l0], %%xmm6\n\t" - : - : [l0] "m" (*c->u_mode.ocb.L[0]) + "movdqa %[last_key], %%xmm0\n\t" + "pxor %[first_key], %%xmm5\n\t" + "pxor %[first_key], %%xmm0\n\t" + "movdqa %%xmm0, %[lxfkey]\n\t" + : [lxfkey] "=m" (*lxf_key) + : [l0] "m" (*c->u_mode.ocb.L[0]), + [last_key] "m" (ctx->keyschenc[ctx->rounds][0][0]), + [first_key] "m" (ctx->keyschenc[0][0][0]) : "memory" ); for ( ;nblocks >= 8 ; nblocks -= 8 ) @@ -2466,72 +2480,208 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, [inbuf4] "m" (*(inbuf + 4 * BLOCKSIZE)), [inbuf5] "m" (*(inbuf + 5 * BLOCKSIZE)) : "memory" ); - asm volatile ("movdqa %%xmm6, %%xmm12\n\t" + asm volatile ("movdqa %[lxfkey], %%xmm0\n\t" + "movdqa %%xmm6, %%xmm12\n\t" "pxor %%xmm5, %%xmm12\n\t" "pxor %%xmm1, %%xmm7\n\t" "pxor %%xmm12, %%xmm1\n\t" + "pxor %%xmm0, %%xmm12\n\t" "movdqa %%xmm10, %%xmm13\n\t" "pxor %%xmm5, %%xmm13\n\t" "pxor %%xmm2, %%xmm7\n\t" "pxor %%xmm13, %%xmm2\n\t" + "pxor %%xmm0, %%xmm13\n\t" "movdqa %%xmm11, %%xmm14\n\t" "pxor %%xmm5, %%xmm14\n\t" "pxor %%xmm3, %%xmm7\n\t" "pxor %%xmm14, %%xmm3\n\t" + "pxor %%xmm0, %%xmm14\n\t" "pxor %%xmm11, %%xmm5\n\t" "pxor %%xmm15, %%xmm5\n\t" "pxor %%xmm4, %%xmm7\n\t" "pxor %%xmm5, %%xmm4\n\t" "movdqa %%xmm5, %%xmm15\n\t" + "pxor %%xmm0, %%xmm15\n\t" "movdqa %%xmm5, %%xmm0\n\t" "pxor %%xmm6, %%xmm0\n\t" "pxor %%xmm8, %%xmm7\n\t" "pxor %%xmm0, %%xmm8\n\t" + "pxor %[lxfkey], %%xmm0\n\t" "movdqa %%xmm0, %[tmpbuf0]\n\t" "movdqa %%xmm10, %%xmm0\n\t" "pxor %%xmm5, %%xmm0\n\t" "pxor %%xmm9, %%xmm7\n\t" "pxor %%xmm0, %%xmm9\n\t" + "pxor %[lxfkey], %%xmm0\n" "movdqa %%xmm0, %[tmpbuf1]\n\t" : [tmpbuf0] "=m" (*(tmpbuf + 0 * BLOCKSIZE)), [tmpbuf1] "=m" (*(tmpbuf + 1 * BLOCKSIZE)) - : + : [lxfkey] "m" (*lxf_key) : "memory" ); asm volatile ("movdqu %[inbuf6], %%xmm10\n\t" "movdqa %%xmm11, %%xmm0\n\t" "pxor %%xmm5, %%xmm0\n\t" "pxor %%xmm10, %%xmm7\n\t" "pxor %%xmm0, %%xmm10\n\t" + "pxor %[lxfkey], %%xmm0\n\t" "movdqa %%xmm0, %[tmpbuf2]\n\t" : [tmpbuf2] "=m" (*(tmpbuf + 2 * BLOCKSIZE)) - : [inbuf6] "m" (*(inbuf + 6 * BLOCKSIZE)) + : [inbuf6] "m" (*(inbuf + 6 * BLOCKSIZE)), + [lxfkey] "m" (*lxf_key) : "memory" ); asm volatile ("movdqu %[l7], %%xmm0\n\t" "pxor %%xmm11, %%xmm5\n\t" "pxor %%xmm0, %%xmm5\n\t" + "movdqa 0x10(%[key]), %%xmm0\n\t" "movdqu %[inbuf7], %%xmm11\n\t" "pxor %%xmm11, %%xmm7\n\t" "pxor %%xmm5, %%xmm11\n\t" : : [l7] "m" (*l), - [inbuf7] "m" (*(inbuf + 7 * BLOCKSIZE)) + 
[inbuf7] "m" (*(inbuf + 7 * BLOCKSIZE)), + [key] "r" (ctx->keyschenc) : "memory" ); - do_aesni_enc_vec8 (ctx); - - asm volatile ("pxor %%xmm12, %%xmm1\n\t" - "pxor %%xmm13, %%xmm2\n\t" - "pxor %%xmm14, %%xmm3\n\t" - "pxor %%xmm15, %%xmm4\n\t" - "pxor %[tmpbuf0],%%xmm8\n\t" - "pxor %[tmpbuf1],%%xmm9\n\t" - "pxor %[tmpbuf2],%%xmm10\n\t" - "pxor %%xmm5, %%xmm11\n\t" + asm volatile ("cmpl $12, %[rounds]\n\t" + "aesenc %%xmm0, %%xmm1\n\t" + "aesenc %%xmm0, %%xmm2\n\t" + "aesenc %%xmm0, %%xmm3\n\t" + "aesenc %%xmm0, %%xmm4\n\t" + "aesenc %%xmm0, %%xmm8\n\t" + "aesenc %%xmm0, %%xmm9\n\t" + "aesenc %%xmm0, %%xmm10\n\t" + "aesenc %%xmm0, %%xmm11\n\t" + "movdqa 0x20(%[key]), %%xmm0\n\t" + "aesenc %%xmm0, %%xmm1\n\t" + "aesenc %%xmm0, %%xmm2\n\t" + "aesenc %%xmm0, %%xmm3\n\t" + "aesenc %%xmm0, %%xmm4\n\t" + "aesenc %%xmm0, %%xmm8\n\t" + "aesenc %%xmm0, %%xmm9\n\t" + "aesenc %%xmm0, %%xmm10\n\t" + "aesenc %%xmm0, %%xmm11\n\t" + "movdqa 0x30(%[key]), %%xmm0\n\t" + "aesenc %%xmm0, %%xmm1\n\t" + "aesenc %%xmm0, %%xmm2\n\t" + "aesenc %%xmm0, %%xmm3\n\t" + "aesenc %%xmm0, %%xmm4\n\t" + "aesenc %%xmm0, %%xmm8\n\t" + "aesenc %%xmm0, %%xmm9\n\t" + "aesenc %%xmm0, %%xmm10\n\t" + "aesenc %%xmm0, %%xmm11\n\t" + "movdqa 0x40(%[key]), %%xmm0\n\t" + "aesenc %%xmm0, %%xmm1\n\t" + "aesenc %%xmm0, %%xmm2\n\t" + "aesenc %%xmm0, %%xmm3\n\t" + "aesenc %%xmm0, %%xmm4\n\t" + "aesenc %%xmm0, %%xmm8\n\t" + "aesenc %%xmm0, %%xmm9\n\t" + "aesenc %%xmm0, %%xmm10\n\t" + "aesenc %%xmm0, %%xmm11\n\t" + "movdqa 0x50(%[key]), %%xmm0\n\t" + "aesenc %%xmm0, %%xmm1\n\t" + "aesenc %%xmm0, %%xmm2\n\t" + "aesenc %%xmm0, %%xmm3\n\t" + "aesenc %%xmm0, %%xmm4\n\t" + "aesenc %%xmm0, %%xmm8\n\t" + "aesenc %%xmm0, %%xmm9\n\t" + "aesenc %%xmm0, %%xmm10\n\t" + "aesenc %%xmm0, %%xmm11\n\t" + "movdqa 0x60(%[key]), %%xmm0\n\t" + "aesenc %%xmm0, %%xmm1\n\t" + "aesenc %%xmm0, %%xmm2\n\t" + "aesenc %%xmm0, %%xmm3\n\t" + "aesenc %%xmm0, %%xmm4\n\t" + "aesenc %%xmm0, %%xmm8\n\t" + "aesenc %%xmm0, %%xmm9\n\t" + "aesenc %%xmm0, %%xmm10\n\t" + "aesenc %%xmm0, %%xmm11\n\t" + "movdqa 0x70(%[key]), %%xmm0\n\t" + "aesenc %%xmm0, %%xmm1\n\t" + "aesenc %%xmm0, %%xmm2\n\t" + "aesenc %%xmm0, %%xmm3\n\t" + "aesenc %%xmm0, %%xmm4\n\t" + "aesenc %%xmm0, %%xmm8\n\t" + "aesenc %%xmm0, %%xmm9\n\t" + "aesenc %%xmm0, %%xmm10\n\t" + "aesenc %%xmm0, %%xmm11\n\t" + "movdqa 0x80(%[key]), %%xmm0\n\t" + "aesenc %%xmm0, %%xmm1\n\t" + "aesenc %%xmm0, %%xmm2\n\t" + "aesenc %%xmm0, %%xmm3\n\t" + "aesenc %%xmm0, %%xmm4\n\t" + "aesenc %%xmm0, %%xmm8\n\t" + "aesenc %%xmm0, %%xmm9\n\t" + "aesenc %%xmm0, %%xmm10\n\t" + "aesenc %%xmm0, %%xmm11\n\t" + "movdqa 0x90(%[key]), %%xmm0\n\t" + "aesenc %%xmm0, %%xmm1\n\t" + "aesenc %%xmm0, %%xmm2\n\t" + "aesenc %%xmm0, %%xmm3\n\t" + "aesenc %%xmm0, %%xmm4\n\t" + "aesenc %%xmm0, %%xmm8\n\t" + "aesenc %%xmm0, %%xmm9\n\t" + "aesenc %%xmm0, %%xmm10\n\t" + "aesenc %%xmm0, %%xmm11\n\t" + "jb .Ldeclast%=\n\t" + "movdqa 0xa0(%[key]), %%xmm0\n\t" + "aesenc %%xmm0, %%xmm1\n\t" + "aesenc %%xmm0, %%xmm2\n\t" + "aesenc %%xmm0, %%xmm3\n\t" + "aesenc %%xmm0, %%xmm4\n\t" + "aesenc %%xmm0, %%xmm8\n\t" + "aesenc %%xmm0, %%xmm9\n\t" + "aesenc %%xmm0, %%xmm10\n\t" + "aesenc %%xmm0, %%xmm11\n\t" + "movdqa 0xb0(%[key]), %%xmm0\n\t" + "aesenc %%xmm0, %%xmm1\n\t" + "aesenc %%xmm0, %%xmm2\n\t" + "aesenc %%xmm0, %%xmm3\n\t" + "aesenc %%xmm0, %%xmm4\n\t" + "aesenc %%xmm0, %%xmm8\n\t" + "aesenc %%xmm0, %%xmm9\n\t" + "aesenc %%xmm0, %%xmm10\n\t" + "aesenc %%xmm0, %%xmm11\n\t" + "je .Ldeclast%=\n\t" + "movdqa 0xc0(%[key]), %%xmm0\n\t" + "aesenc %%xmm0, %%xmm1\n\t" + "aesenc 
%%xmm0, %%xmm2\n\t" + "aesenc %%xmm0, %%xmm3\n\t" + "aesenc %%xmm0, %%xmm4\n\t" + "aesenc %%xmm0, %%xmm8\n\t" + "aesenc %%xmm0, %%xmm9\n\t" + "aesenc %%xmm0, %%xmm10\n\t" + "aesenc %%xmm0, %%xmm11\n\t" + "movdqa 0xd0(%[key]), %%xmm0\n\t" + "aesenc %%xmm0, %%xmm1\n\t" + "aesenc %%xmm0, %%xmm2\n\t" + "aesenc %%xmm0, %%xmm3\n\t" + "aesenc %%xmm0, %%xmm4\n\t" + "aesenc %%xmm0, %%xmm8\n\t" + "aesenc %%xmm0, %%xmm9\n\t" + "aesenc %%xmm0, %%xmm10\n\t" + "aesenc %%xmm0, %%xmm11\n\t" + + ".Ldeclast%=:\n\t" + : + : [key] "r" (ctx->keyschenc), + [rounds] "r" (ctx->rounds) + : "cc", "memory"); + + asm volatile ("aesenclast %%xmm12, %%xmm1\n\t" + "aesenclast %%xmm13, %%xmm2\n\t" + "aesenclast %%xmm14, %%xmm3\n\t" + "aesenclast %%xmm15, %%xmm4\n\t" + "aesenclast %[tmpbuf0],%%xmm8\n\t" + "aesenclast %[tmpbuf1],%%xmm9\n\t" + "aesenclast %[tmpbuf2],%%xmm10\n\t" + "aesenclast %%xmm5, %%xmm11\n\t" + "pxor %[lxfkey], %%xmm11\n\t" "movdqu %%xmm1, %[outbuf0]\n\t" "movdqu %%xmm2, %[outbuf1]\n\t" "movdqu %%xmm3, %[outbuf2]\n\t" @@ -2550,15 +2700,23 @@ aesni_ocb_enc (gcry_cipher_hd_t c, void *outbuf_arg, [outbuf7] "=m" (*(outbuf + 7 * BLOCKSIZE)) : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)), - [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)) + [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)), + [lxfkey] "m" (*lxf_key) : "memory" ); outbuf += 8*BLOCKSIZE; inbuf += 8*BLOCKSIZE; } - aesni_cleanup_8_15(); - } + asm volatile ("pxor %[first_key], %%xmm5\n\t" + "pxor %%xmm0, %%xmm0\n\t" + "movdqu %%xmm0, %[lxfkey]\n\t" + : [lxfkey] "=m" (*lxf_key) + : [first_key] "m" (ctx->keyschenc[0][0][0]) + : "memory" ); + + aesni_cleanup_8_15(); + } #endif for ( ;nblocks >= 4 ; nblocks -= 4 ) @@ -2753,13 +2911,27 @@ aesni_ocb_dec (gcry_cipher_hd_t c, void *outbuf_arg, #ifdef __x86_64__ if (nblocks >= 8) { + unsigned char last_xor_first_key_store[16 + 15]; + unsigned char *lxf_key; aesni_prepare_8_15_variable; + asm volatile ("" + : "=r" (lxf_key) + : "0" (last_xor_first_key_store) + : "memory"); + lxf_key = lxf_key + (-(uintptr_t)lxf_key & 15); + aesni_prepare_8_15(); asm volatile ("movdqu %[l0], %%xmm6\n\t" - : - : [l0] "m" (*c->u_mode.ocb.L[0]) + "movdqa %[last_key], %%xmm0\n\t" + "pxor %[first_key], %%xmm5\n\t" + "pxor %[first_key], %%xmm0\n\t" + "movdqa %%xmm0, %[lxfkey]\n\t" + : [lxfkey] "=m" (*lxf_key) + : [l0] "m" (*c->u_mode.ocb.L[0]), + [last_key] "m" (ctx->keyschdec[ctx->rounds][0][0]), + [first_key] "m" (ctx->keyschdec[0][0][0]) : "memory" ); for ( ;nblocks >= 8 ; nblocks -= 8 ) @@ -2780,7 +2952,7 @@ aesni_ocb_dec (gcry_cipher_hd_t c, void *outbuf_arg, l = aes_ocb_get_l(c, n); /* Offset_i = Offset_{i-1} xor L_{ntz(i)} */ - /* P_i = Offset_i xor DECIPHER(K, C_i xor Offset_i) */ + /* P_i = Offset_i xor ENCIPHER(K, C_i xor Offset_i) */ asm volatile ("movdqu %[inbuf0], %%xmm1\n\t" "movdqu %[inbuf1], %%xmm2\n\t" "movdqu %[inbuf2], %%xmm3\n\t" @@ -2797,64 +2969,200 @@ aesni_ocb_dec (gcry_cipher_hd_t c, void *outbuf_arg, [inbuf4] "m" (*(inbuf + 4 * BLOCKSIZE)), [inbuf5] "m" (*(inbuf + 5 * BLOCKSIZE)) : "memory" ); - asm volatile ("movdqa %%xmm6, %%xmm12\n\t" + asm volatile ("movdqa %[lxfkey], %%xmm0\n\t" + "movdqa %%xmm6, %%xmm12\n\t" "pxor %%xmm5, %%xmm12\n\t" "pxor %%xmm12, %%xmm1\n\t" + "pxor %%xmm0, %%xmm12\n\t" "movdqa %%xmm10, %%xmm13\n\t" "pxor %%xmm5, %%xmm13\n\t" "pxor %%xmm13, %%xmm2\n\t" + "pxor %%xmm0, %%xmm13\n\t" "movdqa %%xmm11, %%xmm14\n\t" "pxor %%xmm5, %%xmm14\n\t" "pxor %%xmm14, %%xmm3\n\t" + "pxor %%xmm0, %%xmm14\n\t" "pxor %%xmm11, %%xmm5\n\t" "pxor %%xmm15, %%xmm5\n\t" "pxor 
%%xmm5, %%xmm4\n\t" "movdqa %%xmm5, %%xmm15\n\t" + "pxor %%xmm0, %%xmm15\n\t" "movdqa %%xmm5, %%xmm0\n\t" "pxor %%xmm6, %%xmm0\n\t" "pxor %%xmm0, %%xmm8\n\t" + "pxor %[lxfkey], %%xmm0\n\t" "movdqa %%xmm0, %[tmpbuf0]\n\t" "movdqa %%xmm10, %%xmm0\n\t" "pxor %%xmm5, %%xmm0\n\t" "pxor %%xmm0, %%xmm9\n\t" + "pxor %[lxfkey], %%xmm0\n" "movdqa %%xmm0, %[tmpbuf1]\n\t" : [tmpbuf0] "=m" (*(tmpbuf + 0 * BLOCKSIZE)), [tmpbuf1] "=m" (*(tmpbuf + 1 * BLOCKSIZE)) - : + : [lxfkey] "m" (*lxf_key) : "memory" ); asm volatile ("movdqu %[inbuf6], %%xmm10\n\t" "movdqa %%xmm11, %%xmm0\n\t" "pxor %%xmm5, %%xmm0\n\t" "pxor %%xmm0, %%xmm10\n\t" + "pxor %[lxfkey], %%xmm0\n\t" "movdqa %%xmm0, %[tmpbuf2]\n\t" : [tmpbuf2] "=m" (*(tmpbuf + 2 * BLOCKSIZE)) - : [inbuf6] "m" (*(inbuf + 6 * BLOCKSIZE)) + : [inbuf6] "m" (*(inbuf + 6 * BLOCKSIZE)), + [lxfkey] "m" (*lxf_key) : "memory" ); asm volatile ("movdqu %[l7], %%xmm0\n\t" "pxor %%xmm11, %%xmm5\n\t" "pxor %%xmm0, %%xmm5\n\t" + "movdqa 0x10(%[key]), %%xmm0\n\t" "movdqu %[inbuf7], %%xmm11\n\t" "pxor %%xmm5, %%xmm11\n\t" : : [l7] "m" (*l), - [inbuf7] "m" (*(inbuf + 7 * BLOCKSIZE)) + [inbuf7] "m" (*(inbuf + 7 * BLOCKSIZE)), + [key] "r" (ctx->keyschdec) : "memory" ); - do_aesni_dec_vec8 (ctx); - - asm volatile ("pxor %%xmm12, %%xmm1\n\t" - "pxor %%xmm13, %%xmm2\n\t" - "pxor %%xmm14, %%xmm3\n\t" - "pxor %%xmm15, %%xmm4\n\t" - "pxor %[tmpbuf0],%%xmm8\n\t" - "pxor %[tmpbuf1],%%xmm9\n\t" - "pxor %[tmpbuf2],%%xmm10\n\t" - "pxor %%xmm5, %%xmm11\n\t" + asm volatile ("cmpl $12, %[rounds]\n\t" + "aesdec %%xmm0, %%xmm1\n\t" + "aesdec %%xmm0, %%xmm2\n\t" + "aesdec %%xmm0, %%xmm3\n\t" + "aesdec %%xmm0, %%xmm4\n\t" + "aesdec %%xmm0, %%xmm8\n\t" + "aesdec %%xmm0, %%xmm9\n\t" + "aesdec %%xmm0, %%xmm10\n\t" + "aesdec %%xmm0, %%xmm11\n\t" + "movdqa 0x20(%[key]), %%xmm0\n\t" + "aesdec %%xmm0, %%xmm1\n\t" + "aesdec %%xmm0, %%xmm2\n\t" + "aesdec %%xmm0, %%xmm3\n\t" + "aesdec %%xmm0, %%xmm4\n\t" + "aesdec %%xmm0, %%xmm8\n\t" + "aesdec %%xmm0, %%xmm9\n\t" + "aesdec %%xmm0, %%xmm10\n\t" + "aesdec %%xmm0, %%xmm11\n\t" + "movdqa 0x30(%[key]), %%xmm0\n\t" + "aesdec %%xmm0, %%xmm1\n\t" + "aesdec %%xmm0, %%xmm2\n\t" + "aesdec %%xmm0, %%xmm3\n\t" + "aesdec %%xmm0, %%xmm4\n\t" + "aesdec %%xmm0, %%xmm8\n\t" + "aesdec %%xmm0, %%xmm9\n\t" + "aesdec %%xmm0, %%xmm10\n\t" + "aesdec %%xmm0, %%xmm11\n\t" + "movdqa 0x40(%[key]), %%xmm0\n\t" + "aesdec %%xmm0, %%xmm1\n\t" + "aesdec %%xmm0, %%xmm2\n\t" + "aesdec %%xmm0, %%xmm3\n\t" + "aesdec %%xmm0, %%xmm4\n\t" + "aesdec %%xmm0, %%xmm8\n\t" + "aesdec %%xmm0, %%xmm9\n\t" + "aesdec %%xmm0, %%xmm10\n\t" + "aesdec %%xmm0, %%xmm11\n\t" + "movdqa 0x50(%[key]), %%xmm0\n\t" + "aesdec %%xmm0, %%xmm1\n\t" + "aesdec %%xmm0, %%xmm2\n\t" + "aesdec %%xmm0, %%xmm3\n\t" + "aesdec %%xmm0, %%xmm4\n\t" + "aesdec %%xmm0, %%xmm8\n\t" + "aesdec %%xmm0, %%xmm9\n\t" + "aesdec %%xmm0, %%xmm10\n\t" + "aesdec %%xmm0, %%xmm11\n\t" + "movdqa 0x60(%[key]), %%xmm0\n\t" + "aesdec %%xmm0, %%xmm1\n\t" + "aesdec %%xmm0, %%xmm2\n\t" + "aesdec %%xmm0, %%xmm3\n\t" + "aesdec %%xmm0, %%xmm4\n\t" + "aesdec %%xmm0, %%xmm8\n\t" + "aesdec %%xmm0, %%xmm9\n\t" + "aesdec %%xmm0, %%xmm10\n\t" + "aesdec %%xmm0, %%xmm11\n\t" + "movdqa 0x70(%[key]), %%xmm0\n\t" + "aesdec %%xmm0, %%xmm1\n\t" + "aesdec %%xmm0, %%xmm2\n\t" + "aesdec %%xmm0, %%xmm3\n\t" + "aesdec %%xmm0, %%xmm4\n\t" + "aesdec %%xmm0, %%xmm8\n\t" + "aesdec %%xmm0, %%xmm9\n\t" + "aesdec %%xmm0, %%xmm10\n\t" + "aesdec %%xmm0, %%xmm11\n\t" + "movdqa 0x80(%[key]), %%xmm0\n\t" + "aesdec %%xmm0, %%xmm1\n\t" + "aesdec %%xmm0, %%xmm2\n\t" + "aesdec %%xmm0, 
%%xmm3\n\t" + "aesdec %%xmm0, %%xmm4\n\t" + "aesdec %%xmm0, %%xmm8\n\t" + "aesdec %%xmm0, %%xmm9\n\t" + "aesdec %%xmm0, %%xmm10\n\t" + "aesdec %%xmm0, %%xmm11\n\t" + "movdqa 0x90(%[key]), %%xmm0\n\t" + "aesdec %%xmm0, %%xmm1\n\t" + "aesdec %%xmm0, %%xmm2\n\t" + "aesdec %%xmm0, %%xmm3\n\t" + "aesdec %%xmm0, %%xmm4\n\t" + "aesdec %%xmm0, %%xmm8\n\t" + "aesdec %%xmm0, %%xmm9\n\t" + "aesdec %%xmm0, %%xmm10\n\t" + "aesdec %%xmm0, %%xmm11\n\t" + "jb .Ldeclast%=\n\t" + "movdqa 0xa0(%[key]), %%xmm0\n\t" + "aesdec %%xmm0, %%xmm1\n\t" + "aesdec %%xmm0, %%xmm2\n\t" + "aesdec %%xmm0, %%xmm3\n\t" + "aesdec %%xmm0, %%xmm4\n\t" + "aesdec %%xmm0, %%xmm8\n\t" + "aesdec %%xmm0, %%xmm9\n\t" + "aesdec %%xmm0, %%xmm10\n\t" + "aesdec %%xmm0, %%xmm11\n\t" + "movdqa 0xb0(%[key]), %%xmm0\n\t" + "aesdec %%xmm0, %%xmm1\n\t" + "aesdec %%xmm0, %%xmm2\n\t" + "aesdec %%xmm0, %%xmm3\n\t" + "aesdec %%xmm0, %%xmm4\n\t" + "aesdec %%xmm0, %%xmm8\n\t" + "aesdec %%xmm0, %%xmm9\n\t" + "aesdec %%xmm0, %%xmm10\n\t" + "aesdec %%xmm0, %%xmm11\n\t" + "je .Ldeclast%=\n\t" + "movdqa 0xc0(%[key]), %%xmm0\n\t" + "aesdec %%xmm0, %%xmm1\n\t" + "aesdec %%xmm0, %%xmm2\n\t" + "aesdec %%xmm0, %%xmm3\n\t" + "aesdec %%xmm0, %%xmm4\n\t" + "aesdec %%xmm0, %%xmm8\n\t" + "aesdec %%xmm0, %%xmm9\n\t" + "aesdec %%xmm0, %%xmm10\n\t" + "aesdec %%xmm0, %%xmm11\n\t" + "movdqa 0xd0(%[key]), %%xmm0\n\t" + "aesdec %%xmm0, %%xmm1\n\t" + "aesdec %%xmm0, %%xmm2\n\t" + "aesdec %%xmm0, %%xmm3\n\t" + "aesdec %%xmm0, %%xmm4\n\t" + "aesdec %%xmm0, %%xmm8\n\t" + "aesdec %%xmm0, %%xmm9\n\t" + "aesdec %%xmm0, %%xmm10\n\t" + "aesdec %%xmm0, %%xmm11\n\t" + + ".Ldeclast%=:\n\t" + : + : [key] "r" (ctx->keyschdec), + [rounds] "r" (ctx->rounds) + : "cc", "memory"); + + asm volatile ("aesdeclast %%xmm12, %%xmm1\n\t" + "aesdeclast %%xmm13, %%xmm2\n\t" + "aesdeclast %%xmm14, %%xmm3\n\t" + "aesdeclast %%xmm15, %%xmm4\n\t" + "aesdeclast %[tmpbuf0],%%xmm8\n\t" + "aesdeclast %[tmpbuf1],%%xmm9\n\t" + "aesdeclast %[tmpbuf2],%%xmm10\n\t" + "aesdeclast %%xmm5, %%xmm11\n\t" + "pxor %[lxfkey], %%xmm11\n\t" "movdqu %%xmm1, %[outbuf0]\n\t" "movdqu %%xmm2, %[outbuf1]\n\t" "movdqu %%xmm3, %[outbuf2]\n\t" @@ -2873,13 +3181,21 @@ aesni_ocb_dec (gcry_cipher_hd_t c, void *outbuf_arg, [outbuf7] "=m" (*(outbuf + 7 * BLOCKSIZE)) : [tmpbuf0] "m" (*(tmpbuf + 0 * BLOCKSIZE)), [tmpbuf1] "m" (*(tmpbuf + 1 * BLOCKSIZE)), - [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)) + [tmpbuf2] "m" (*(tmpbuf + 2 * BLOCKSIZE)), + [lxfkey] "m" (*lxf_key) : "memory" ); outbuf += 8*BLOCKSIZE; inbuf += 8*BLOCKSIZE; } + asm volatile ("pxor %[first_key], %%xmm5\n\t" + "pxor %%xmm0, %%xmm0\n\t" + "movdqu %%xmm0, %[lxfkey]\n\t" + : [lxfkey] "=m" (*lxf_key) + : [first_key] "m" (ctx->keyschdec[0][0][0]) + : "memory" ); + aesni_cleanup_8_15(); } #endif From jussi.kivilinna at iki.fi Thu Mar 28 23:04:33 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Fri, 29 Mar 2019 00:04:33 +0200 Subject: [PATCH] AES/ARMv8-CE: Use inline assembly for key setup Message-ID: <155381067303.10335.5689448339538128513.stgit@localhost.localdomain> * cipher/rijndael-armv8-aarch32-ce.S (_gcry_aes_sbox4_armv8_ce) (_gcry_aes_invmixcol_armv8_ce): Remove. * cipher/rijndael-armv8-aarch64-ce.S (_gcry_aes_sbox4_armv8_ce) (_gcry_aes_invmixcol_armv8_ce): Remove. * cipher/rijndael-armv8-ce.c (_gcry_aes_sbox4_armv8_ce) (_gcry_aes_invmixcol_armv8_ce): Replace prototypes with ... (_gcry_aes_sbox4_armv8_ce, _gcry_aes_invmixcol_armv8_ce): ... these inline functions. 
-- Signed-off-by: Jussi Kivilinna --- cipher/rijndael-armv8-aarch32-ce.S | 36 ------------------- cipher/rijndael-armv8-aarch64-ce.S | 35 ------------------ cipher/rijndael-armv8-ce.c | 70 +++++++++++++++++++++++++++++++++++- 3 files changed, 68 insertions(+), 73 deletions(-) diff --git a/cipher/rijndael-armv8-aarch32-ce.S b/cipher/rijndael-armv8-aarch32-ce.S index 66440bd4e..bbd33d353 100644 --- a/cipher/rijndael-armv8-aarch32-ce.S +++ b/cipher/rijndael-armv8-aarch32-ce.S @@ -1828,40 +1828,4 @@ _gcry_aes_xts_dec_armv8_ce: .size _gcry_aes_xts_dec_armv8_ce,.-_gcry_aes_xts_dec_armv8_ce; -/* - * u32 _gcry_aes_sbox4_armv8_ce(u32 in4b); - */ -.align 3 -.globl _gcry_aes_sbox4_armv8_ce -.type _gcry_aes_sbox4_armv8_ce,%function; -_gcry_aes_sbox4_armv8_ce: - /* See "Gouv?a, C. P. L. & L?pez, J. Implementing GCM on ARMv8. Topics in - * Cryptology ? CT-RSA 2015" for details. - */ - vmov.i8 q0, #0x52 - vmov.i8 q1, #0 - vmov s0, r0 - aese.8 q0, q1 - veor d0, d1 - vpadd.i32 d0, d0, d1 - vmov r0, s0 - CLEAR_REG(q0) - bx lr -.size _gcry_aes_sbox4_armv8_ce,.-_gcry_aes_sbox4_armv8_ce; - - -/* - * void _gcry_aes_invmixcol_armv8_ce(void *dst, const void *src); - */ -.align 3 -.globl _gcry_aes_invmixcol_armv8_ce -.type _gcry_aes_invmixcol_armv8_ce,%function; -_gcry_aes_invmixcol_armv8_ce: - vld1.8 {q0}, [r1] - aesimc.8 q0, q0 - vst1.8 {q0}, [r0] - CLEAR_REG(q0) - bx lr -.size _gcry_aes_invmixcol_armv8_ce,.-_gcry_aes_invmixcol_armv8_ce; - #endif diff --git a/cipher/rijndael-armv8-aarch64-ce.S b/cipher/rijndael-armv8-aarch64-ce.S index f0012c20a..f3ec97f82 100644 --- a/cipher/rijndael-armv8-aarch64-ce.S +++ b/cipher/rijndael-armv8-aarch64-ce.S @@ -1554,39 +1554,4 @@ _gcry_aes_xts_dec_armv8_ce: ELF(.size _gcry_aes_xts_dec_armv8_ce,.-_gcry_aes_xts_dec_armv8_ce;) -/* - * u32 _gcry_aes_sbox4_armv8_ce(u32 in4b); - */ -.align 3 -.globl _gcry_aes_sbox4_armv8_ce -ELF(.type _gcry_aes_sbox4_armv8_ce,%function;) -_gcry_aes_sbox4_armv8_ce: - /* See "Gouv?a, C. P. L. & L?pez, J. Implementing GCM on ARMv8. Topics in - * Cryptology ? CT-RSA 2015" for details. 
- */ - movi v0.16b, #0x52 - movi v1.16b, #0 - mov v0.S[0], w0 - aese v0.16b, v1.16b - addv s0, v0.4s - mov w0, v0.S[0] - CLEAR_REG(v0) - ret -ELF(.size _gcry_aes_sbox4_armv8_ce,.-_gcry_aes_sbox4_armv8_ce;) - - -/* - * void _gcry_aes_invmixcol_armv8_ce(void *dst, const void *src); - */ -.align 3 -.globl _gcry_aes_invmixcol_armv8_ce -ELF(.type _gcry_aes_invmixcol_armv8_ce,%function;) -_gcry_aes_invmixcol_armv8_ce: - ld1 {v0.16b}, [x1] - aesimc v0.16b, v0.16b - st1 {v0.16b}, [x0] - CLEAR_REG(v0) - ret -ELF(.size _gcry_aes_invmixcol_armv8_ce,.-_gcry_aes_invmixcol_armv8_ce;) - #endif diff --git a/cipher/rijndael-armv8-ce.c b/cipher/rijndael-armv8-ce.c index 6e46830ee..1d27157be 100644 --- a/cipher/rijndael-armv8-ce.c +++ b/cipher/rijndael-armv8-ce.c @@ -37,8 +37,6 @@ typedef struct u128_s { u32 a, b, c, d; } u128_t; -extern u32 _gcry_aes_sbox4_armv8_ce(u32 in4b); -extern void _gcry_aes_invmixcol_armv8_ce(u128_t *dst, const u128_t *src); extern unsigned int _gcry_aes_enc_armv8_ce(const void *keysched, byte *dst, const byte *src, @@ -123,6 +121,74 @@ typedef void (*xts_crypt_fn_t) (const void *keysched, unsigned char *outbuf, unsigned char *tweak, size_t nblocks, unsigned int nrounds); + +static inline u32 +_gcry_aes_sbox4_armv8_ce(u32 val) +{ +#if defined(HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS) && \ + defined(HAVE_GCC_INLINE_ASM_AARCH32_CRYPTO) + asm (".syntax unified\n" + ".arch armv8-a\n" + ".fpu crypto-neon-fp-armv8\n" + + "vmov.i8 q0, #0x52\n" + "vmov.i8 q1, #0\n" + "vmov s0, %[in]\n" + "aese.8 q0, q1\n" + "veor d0, d1\n" + "vpadd.i32 d0, d0, d1\n" + "vmov %[out], s0\n" + : [out] "=r" (val) + : [in] "r" (val) + : "q0", "q1", "d0", "s0"); +#elif defined(HAVE_COMPATIBLE_GCC_AARCH64_PLATFORM_AS) && \ + defined(HAVE_GCC_INLINE_ASM_AARCH64_CRYPTO) + asm (".cpu generic+simd+crypto\n" + + "movi v0.16b, #0x52\n" + "movi v1.16b, #0\n" + "mov v0.S[0], %w[in]\n" + "aese v0.16b, v1.16b\n" + "addv s0, v0.4s\n" + "mov %w[out], v0.S[0]\n" + : [out] "=r" (val) + : [in] "r" (val) + : "v0", "v1", "s0"); +#endif + + return val; +} + + +static inline void +_gcry_aes_invmixcol_armv8_ce(u128_t *dst, const u128_t *src) +{ +#if defined(HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS) && \ + defined(HAVE_GCC_INLINE_ASM_AARCH32_CRYPTO) + asm (".syntax unified\n" + ".arch armv8-a\n" + ".fpu crypto-neon-fp-armv8\n" + + "vld1.8 {q0}, [%[src]]\n" + "aesimc.8 q0, q0\n" + "vst1.8 {q0}, [%[dst]]\n" + : + : [dst] "r" (dst), [src] "r" (src) + : "q0", "memory"); +#elif defined(HAVE_COMPATIBLE_GCC_AARCH64_PLATFORM_AS) && \ + defined(HAVE_GCC_INLINE_ASM_AARCH64_CRYPTO) + asm (".cpu generic+simd+crypto\n" + + "ld1 {v0.16b}, [%[src]]\n" + "aesimc v0.16b, v0.16b\n" + "st1 {v0.16b}, [%[dst]]\n" + : + : [dst] "r" (dst), [src] "r" (src) + : "v0", "memory"); +#endif +} + + void _gcry_aes_armv8_ce_setkey (RIJNDAEL_context *ctx, const byte *key) { From Mushabbar.Hussain at kpit.com Sat Mar 30 23:29:57 2019 From: Mushabbar.Hussain at kpit.com (Mushabbar Hussain) Date: Sat, 30 Mar 2019 22:29:57 +0000 Subject: Enquiry: AES_256_GCM on GNU PG Libgcrypt Message-ID: Dear Support, We are looking for AES_256_GCM algorithm(NIST/FIPS complaint) for use in Automotive ECU's? The Basic requirements is to have a code which is platform independent, MISRA complaint(ex, w/o dynamic memory, system calls). Does GNU GP Libgcrypt have AES_GCM algorithm which is suitable for use in Auto ECU's? We are looking for full access to the source code, license terms shall allow code modifications, and unlimited usage in any of the OEM products. 
Could you share your f/b on this usecase and commercials terms? Best Regards, Hussain This message contains information that may be privileged or confidential and is the property of the KPIT Technologies Ltd. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message. KPIT Technologies Ltd. does not accept any liability for virus infected mails. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jussi.kivilinna at iki.fi Sun Mar 31 17:59:28 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 31 Mar 2019 18:59:28 +0300 Subject: [PATCH 1/4] Add helper function for adding value to cipher block Message-ID: <155404796871.13638.16726195793507743936.stgit@localhost.localdomain> * cipher/cipher-internal.h (cipher_block_add): New. * cipher/blowfish.c (_gcry_blowfish_ctr_enc): Use new helper function for CTR block increment. * cipher/camellia-glue.c (_gcry_camellia_ctr_enc): Ditto. * cipher/cast5.c (_gcry_cast5_ctr_enc): Ditto. * cipher/cipher-ctr.c (_gcry_cipher_ctr_encrypt): Ditto. * cipher/des.c (_gcry_3des_ctr_enc): Ditto. * cipher/rijndael.c (_gcry_aes_ctr_enc): Ditto. * cipher/serpent.c (_gcry_serpent_ctr_enc): Ditto. * cipher/twofish.c (_gcry_twofish_ctr_enc): Ditto. -- Signed-off-by: Jussi Kivilinna --- cipher/blowfish.c | 8 +------- cipher/camellia-glue.c | 8 +------- cipher/cast5.c | 8 +------- cipher/cipher-ctr.c | 7 +------ cipher/cipher-internal.h | 23 +++++++++++++++++++++++ cipher/des.c | 8 +------- cipher/rijndael.c | 8 +------- cipher/serpent.c | 8 +------- cipher/twofish.c | 8 +------- 9 files changed, 31 insertions(+), 55 deletions(-) diff --git a/cipher/blowfish.c b/cipher/blowfish.c index f032c5c6f..e7e199afc 100644 --- a/cipher/blowfish.c +++ b/cipher/blowfish.c @@ -619,7 +619,6 @@ _gcry_blowfish_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, const unsigned char *inbuf = inbuf_arg; unsigned char tmpbuf[BLOWFISH_BLOCKSIZE]; int burn_stack_depth = (64) + 2 * BLOWFISH_BLOCKSIZE; - int i; #ifdef USE_AMD64_ASM { @@ -665,12 +664,7 @@ _gcry_blowfish_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, outbuf += BLOWFISH_BLOCKSIZE; inbuf += BLOWFISH_BLOCKSIZE; /* Increment the counter. */ - for (i = BLOWFISH_BLOCKSIZE; i > 0; i--) - { - ctr[i-1]++; - if (ctr[i-1]) - break; - } + cipher_block_add (ctr, 1, BLOWFISH_BLOCKSIZE); } wipememory(tmpbuf, sizeof(tmpbuf)); diff --git a/cipher/camellia-glue.c b/cipher/camellia-glue.c index 69b240b79..4b0989ea5 100644 --- a/cipher/camellia-glue.c +++ b/cipher/camellia-glue.c @@ -363,7 +363,6 @@ _gcry_camellia_ctr_enc(void *context, unsigned char *ctr, const unsigned char *inbuf = inbuf_arg; unsigned char tmpbuf[CAMELLIA_BLOCK_SIZE]; int burn_stack_depth = CAMELLIA_encrypt_stack_burn_size; - int i; #ifdef USE_AESNI_AVX2 if (ctx->use_aesni_avx2) @@ -434,12 +433,7 @@ _gcry_camellia_ctr_enc(void *context, unsigned char *ctr, outbuf += CAMELLIA_BLOCK_SIZE; inbuf += CAMELLIA_BLOCK_SIZE; /* Increment the counter. 
*/ - for (i = CAMELLIA_BLOCK_SIZE; i > 0; i--) - { - ctr[i-1]++; - if (ctr[i-1]) - break; - } + cipher_block_add(ctr, 1, CAMELLIA_BLOCK_SIZE); } wipememory(tmpbuf, sizeof(tmpbuf)); diff --git a/cipher/cast5.c b/cipher/cast5.c index 49e8b781b..cc5bd9d66 100644 --- a/cipher/cast5.c +++ b/cipher/cast5.c @@ -593,7 +593,6 @@ _gcry_cast5_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, unsigned char tmpbuf[CAST5_BLOCKSIZE]; int burn_stack_depth = (20 + 4 * sizeof(void*)) + 2 * CAST5_BLOCKSIZE; - int i; #ifdef USE_AMD64_ASM { @@ -639,12 +638,7 @@ _gcry_cast5_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, outbuf += CAST5_BLOCKSIZE; inbuf += CAST5_BLOCKSIZE; /* Increment the counter. */ - for (i = CAST5_BLOCKSIZE; i > 0; i--) - { - ctr[i-1]++; - if (ctr[i-1]) - break; - } + cipher_block_add (ctr, 1, CAST5_BLOCKSIZE); } wipememory(tmpbuf, sizeof(tmpbuf)); diff --git a/cipher/cipher-ctr.c b/cipher/cipher-ctr.c index 546d4f8e6..5f0afc2f8 100644 --- a/cipher/cipher-ctr.c +++ b/cipher/cipher-ctr.c @@ -83,12 +83,7 @@ _gcry_cipher_ctr_encrypt (gcry_cipher_hd_t c, nburn = enc_fn (&c->context.c, tmp, c->u_ctr.ctr); burn = nburn > burn ? nburn : burn; - for (i = blocksize; i > 0; i--) - { - c->u_ctr.ctr[i-1]++; - if (c->u_ctr.ctr[i-1] != 0) - break; - } + cipher_block_add(c->u_ctr.ctr, 1, blocksize); if (inbuflen < blocksize) break; diff --git a/cipher/cipher-internal.h b/cipher/cipher-internal.h index 2283bf319..970aa9860 100644 --- a/cipher/cipher-internal.h +++ b/cipher/cipher-internal.h @@ -628,6 +628,29 @@ static inline unsigned int _gcry_blocksize_shift(gcry_cipher_hd_t c) } +/* Optimized function for adding value to cipher block. */ +static inline void +cipher_block_add(void *_dstsrc, unsigned int add, size_t blocksize) +{ + byte *dstsrc = _dstsrc; + u64 s[2]; + + if (blocksize == 8) + { + buf_put_be64(dstsrc + 0, buf_get_be64(dstsrc + 0) + add); + } + else /* blocksize == 16 */ + { + s[0] = buf_get_be64(dstsrc + 8); + s[1] = buf_get_be64(dstsrc + 0); + s[0] += add; + s[1] += (s[0] < add); + buf_put_be64(dstsrc + 8, s[0]); + buf_put_be64(dstsrc + 0, s[1]); + } +} + + /* Optimized function for cipher block copying */ static inline void cipher_block_cpy(void *_dst, const void *_src, size_t blocksize) diff --git a/cipher/des.c b/cipher/des.c index a008b93e5..e4d10caa2 100644 --- a/cipher/des.c +++ b/cipher/des.c @@ -881,7 +881,6 @@ _gcry_3des_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, const unsigned char *inbuf = inbuf_arg; unsigned char tmpbuf[DES_BLOCKSIZE]; int burn_stack_depth = TRIPLEDES_ECB_BURN_STACK; - int i; #ifdef USE_AMD64_ASM { @@ -913,12 +912,7 @@ _gcry_3des_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, outbuf += DES_BLOCKSIZE; inbuf += DES_BLOCKSIZE; /* Increment the counter. */ - for (i = DES_BLOCKSIZE; i > 0; i--) - { - ctr[i-1]++; - if (ctr[i-1]) - break; - } + cipher_block_add(ctr, 1, DES_BLOCKSIZE); } wipememory(tmpbuf, sizeof(tmpbuf)); diff --git a/cipher/rijndael.c b/cipher/rijndael.c index 80945376b..1001b1d52 100644 --- a/cipher/rijndael.c +++ b/cipher/rijndael.c @@ -928,7 +928,6 @@ _gcry_aes_ctr_enc (void *context, unsigned char *ctr, unsigned char *outbuf = outbuf_arg; const unsigned char *inbuf = inbuf_arg; unsigned int burn_depth = 0; - int i; if (0) ; @@ -970,12 +969,7 @@ _gcry_aes_ctr_enc (void *context, unsigned char *ctr, outbuf += BLOCKSIZE; inbuf += BLOCKSIZE; /* Increment the counter. 
*/ - for (i = BLOCKSIZE; i > 0; i--) - { - ctr[i-1]++; - if (ctr[i-1]) - break; - } + cipher_block_add(ctr, 1, BLOCKSIZE); } wipememory(&tmp, sizeof(tmp)); diff --git a/cipher/serpent.c b/cipher/serpent.c index 8e3faa7c5..71d843d00 100644 --- a/cipher/serpent.c +++ b/cipher/serpent.c @@ -912,7 +912,6 @@ _gcry_serpent_ctr_enc(void *context, unsigned char *ctr, const unsigned char *inbuf = inbuf_arg; unsigned char tmpbuf[sizeof(serpent_block_t)]; int burn_stack_depth = 2 * sizeof (serpent_block_t); - int i; #ifdef USE_AVX2 if (ctx->use_avx2) @@ -1006,12 +1005,7 @@ _gcry_serpent_ctr_enc(void *context, unsigned char *ctr, outbuf += sizeof(serpent_block_t); inbuf += sizeof(serpent_block_t); /* Increment the counter. */ - for (i = sizeof(serpent_block_t); i > 0; i--) - { - ctr[i-1]++; - if (ctr[i-1]) - break; - } + cipher_block_add(ctr, 1, sizeof(serpent_block_t)); } wipememory(tmpbuf, sizeof(tmpbuf)); diff --git a/cipher/twofish.c b/cipher/twofish.c index 51982c530..417d73781 100644 --- a/cipher/twofish.c +++ b/cipher/twofish.c @@ -1105,7 +1105,6 @@ _gcry_twofish_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, const unsigned char *inbuf = inbuf_arg; unsigned char tmpbuf[TWOFISH_BLOCKSIZE]; unsigned int burn, burn_stack_depth = 0; - int i; #ifdef USE_AVX2 if (ctx->use_avx2) @@ -1165,12 +1164,7 @@ _gcry_twofish_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, outbuf += TWOFISH_BLOCKSIZE; inbuf += TWOFISH_BLOCKSIZE; /* Increment the counter. */ - for (i = TWOFISH_BLOCKSIZE; i > 0; i--) - { - ctr[i-1]++; - if (ctr[i-1]) - break; - } + cipher_block_add(ctr, 1, TWOFISH_BLOCKSIZE); } wipememory(tmpbuf, sizeof(tmpbuf)); From jussi.kivilinna at iki.fi Sun Mar 31 17:59:39 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 31 Mar 2019 18:59:39 +0300 Subject: [PATCH 3/4] cast5: add three rounds parallel handling to generic C implementation In-Reply-To: <155404796871.13638.16726195793507743936.stgit@localhost.localdomain> References: <155404796871.13638.16726195793507743936.stgit@localhost.localdomain> Message-ID: <155404797906.13638.8481248100526770883.stgit@localhost.localdomain> * cipher/cast5.c (do_encrypt_block_3, do_decrypt_block_3): New. (_gcry_cast5_ctr_enc, _gcry_cast5_cbc_dec, _gcry_cast5_cfb_dec): Use new three block functions. 
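The CTR path is the easiest place to see the idea: three consecutive
counter blocks are prepared with the cipher_block_add() helper from
patch 1/4 and pushed through one interleaved 3-block call, so the three
independent dependency chains can overlap in the pipeline. Restated from
the generic C path of the diff for readability:

for ( ; nblocks >= 3; nblocks -= 3)
  {
    /* Prepare the counter blocks ctr, ctr+1 and ctr+2. */
    cipher_block_cpy (tmpbuf +  0, ctr, CAST5_BLOCKSIZE);
    cipher_block_cpy (tmpbuf +  8, ctr, CAST5_BLOCKSIZE);
    cipher_block_cpy (tmpbuf + 16, ctr, CAST5_BLOCKSIZE);
    cipher_block_add (tmpbuf +  8, 1, CAST5_BLOCKSIZE);
    cipher_block_add (tmpbuf + 16, 2, CAST5_BLOCKSIZE);
    cipher_block_add (ctr, 3, CAST5_BLOCKSIZE);
    /* Encrypt all three counters with the interleaved 3-block function. */
    do_encrypt_block_3 (ctx, tmpbuf, tmpbuf);
    /* XOR the keystream into the input to produce the output. */
    buf_xor (outbuf, tmpbuf, inbuf, CAST5_BLOCKSIZE * 3);
    outbuf += CAST5_BLOCKSIZE * 3;
    inbuf  += CAST5_BLOCKSIZE * 3;
  }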
-- Benchmark on aarch64 (cortex-a53, 816 Mhz): Before: CAST5 | nanosecs/byte mebibytes/sec cycles/byte CBC dec | 35.24 ns/B 27.07 MiB/s 28.75 c/B CFB dec | 34.62 ns/B 27.54 MiB/s 28.25 c/B CTR enc | 35.39 ns/B 26.95 MiB/s 28.88 c/B After (~40%-50% faster): CAST5 | nanosecs/byte mebibytes/sec cycles/byte CBC dec | 23.05 ns/B 41.38 MiB/s 18.81 c/B CFB dec | 24.49 ns/B 38.94 MiB/s 19.98 c/B CTR dec | 24.57 ns/B 38.82 MiB/s 20.05 c/B Benchmark on i386 (haswell, 4000 Mhz): Before: CAST5 | nanosecs/byte mebibytes/sec cycles/byte CBC dec | 6.92 ns/B 137.7 MiB/s 27.69 c/B CFB dec | 6.83 ns/B 139.7 MiB/s 27.32 c/B CTR enc | 7.01 ns/B 136.1 MiB/s 28.03 c/B After (~70% faster): CAST5 | nanosecs/byte mebibytes/sec cycles/byte CBC dec | 3.97 ns/B 240.1 MiB/s 15.89 c/B CFB dec | 3.96 ns/B 241.0 MiB/s 15.83 c/B CTR enc | 4.01 ns/B 237.8 MiB/s 16.04 c/B Signed-off-by: Jussi Kivilinna --- cipher/cast5.c | 245 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 237 insertions(+), 8 deletions(-) diff --git a/cipher/cast5.c b/cipher/cast5.c index 65485ba23..7219e3eaf 100644 --- a/cipher/cast5.c +++ b/cipher/cast5.c @@ -534,6 +534,97 @@ encrypt_block (void *context , byte *outbuf, const byte *inbuf) } +static void +do_encrypt_block_3( CAST5_context *c, byte *outbuf, const byte *inbuf ) +{ + u32 l0, r0, t0, l1, r1, t1, l2, r2, t2; + u32 I; /* used by the Fx macros */ + u32 *Km; + u32 Kr; + + Km = c->Km; + Kr = buf_get_le32(c->Kr + 0); + + l0 = buf_get_be32(inbuf + 0); + r0 = buf_get_be32(inbuf + 4); + l1 = buf_get_be32(inbuf + 8); + r1 = buf_get_be32(inbuf + 12); + l2 = buf_get_be32(inbuf + 16); + r2 = buf_get_be32(inbuf + 20); + + t0 = l0; l0 = r0; r0 = t0 ^ F1(r0, Km[ 0], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F1(r1, Km[ 0], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F1(r2, Km[ 0], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F2(r0, Km[ 1], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F2(r1, Km[ 1], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F2(r2, Km[ 1], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F3(r0, Km[ 2], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F3(r1, Km[ 2], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F3(r2, Km[ 2], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F1(r0, Km[ 3], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F1(r1, Km[ 3], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F1(r2, Km[ 3], Kr & 31); + Kr = buf_get_le32(c->Kr + 4); + t0 = l0; l0 = r0; r0 = t0 ^ F2(r0, Km[ 4], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F2(r1, Km[ 4], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F2(r2, Km[ 4], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F3(r0, Km[ 5], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F3(r1, Km[ 5], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F3(r2, Km[ 5], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F1(r0, Km[ 6], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F1(r1, Km[ 6], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F1(r2, Km[ 6], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F2(r0, Km[ 7], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F2(r1, Km[ 7], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F2(r2, Km[ 7], Kr & 31); + Kr = buf_get_le32(c->Kr + 8); + t0 = l0; l0 = r0; r0 = t0 ^ F3(r0, Km[ 8], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F3(r1, Km[ 8], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F3(r2, Km[ 8], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F1(r0, Km[ 9], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F1(r1, Km[ 9], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F1(r2, Km[ 9], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F2(r0, Km[10], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ 
F2(r1, Km[10], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F2(r2, Km[10], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F3(r0, Km[11], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F3(r1, Km[11], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F3(r2, Km[11], Kr & 31); + Kr = buf_get_le32(c->Kr + 12); + t0 = l0; l0 = r0; r0 = t0 ^ F1(r0, Km[12], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F1(r1, Km[12], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F1(r2, Km[12], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F2(r0, Km[13], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F2(r1, Km[13], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F2(r2, Km[13], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F3(r0, Km[14], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F3(r1, Km[14], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F3(r2, Km[14], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F1(r0, Km[15], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F1(r1, Km[15], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F1(r2, Km[15], Kr & 31); + + buf_put_be32(outbuf + 0, r0); + buf_put_be32(outbuf + 4, l0); + buf_put_be32(outbuf + 8, r1); + buf_put_be32(outbuf + 12, l1); + buf_put_be32(outbuf + 16, r2); + buf_put_be32(outbuf + 20, l2); +} + + static void do_decrypt_block (CAST5_context *c, byte *outbuf, const byte *inbuf ) { @@ -577,6 +668,97 @@ decrypt_block (void *context, byte *outbuf, const byte *inbuf) return /*burn_stack*/ (20+4*sizeof(void*)); } + +static void +do_decrypt_block_3 (CAST5_context *c, byte *outbuf, const byte *inbuf ) +{ + u32 l0, r0, t0, l1, r1, t1, l2, r2, t2; + u32 I; + u32 *Km; + u32 Kr; + + Km = c->Km; + Kr = buf_get_be32(c->Kr + 12); + + l0 = buf_get_be32(inbuf + 0); + r0 = buf_get_be32(inbuf + 4); + l1 = buf_get_be32(inbuf + 8); + r1 = buf_get_be32(inbuf + 12); + l2 = buf_get_be32(inbuf + 16); + r2 = buf_get_be32(inbuf + 20); + + t0 = l0; l0 = r0; r0 = t0 ^ F1(r0, Km[15], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F1(r1, Km[15], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F1(r2, Km[15], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F3(r0, Km[14], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F3(r1, Km[14], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F3(r2, Km[14], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F2(r0, Km[13], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F2(r1, Km[13], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F2(r2, Km[13], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F1(r0, Km[12], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F1(r1, Km[12], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F1(r2, Km[12], Kr & 31); + Kr = buf_get_be32(c->Kr + 8); + t0 = l0; l0 = r0; r0 = t0 ^ F3(r0, Km[11], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F3(r1, Km[11], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F3(r2, Km[11], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F2(r0, Km[10], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F2(r1, Km[10], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F2(r2, Km[10], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F1(r0, Km[ 9], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F1(r1, Km[ 9], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F1(r2, Km[ 9], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F3(r0, Km[ 8], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F3(r1, Km[ 8], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F3(r2, Km[ 8], Kr & 31); + Kr = buf_get_be32(c->Kr + 4); + t0 = l0; l0 = r0; r0 = t0 ^ F2(r0, Km[ 7], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F2(r1, Km[ 7], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F2(r2, Km[ 7], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F1(r0, Km[ 6], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F1(r1, Km[ 6], Kr & 
31); + t2 = l2; l2 = r2; r2 = t2 ^ F1(r2, Km[ 6], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F3(r0, Km[ 5], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F3(r1, Km[ 5], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F3(r2, Km[ 5], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F2(r0, Km[ 4], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F2(r1, Km[ 4], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F2(r2, Km[ 4], Kr & 31); + Kr = buf_get_be32(c->Kr + 0); + t0 = l0; l0 = r0; r0 = t0 ^ F1(r0, Km[ 3], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F1(r1, Km[ 3], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F1(r2, Km[ 3], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F3(r0, Km[ 2], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F3(r1, Km[ 2], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F3(r2, Km[ 2], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F2(r0, Km[ 1], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F2(r1, Km[ 1], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F2(r2, Km[ 1], Kr & 31); + Kr >>= 8; + t0 = l0; l0 = r0; r0 = t0 ^ F1(r0, Km[ 0], Kr & 31); + t1 = l1; l1 = r1; r1 = t1 ^ F1(r1, Km[ 0], Kr & 31); + t2 = l2; l2 = r2; r2 = t2 ^ F1(r2, Km[ 0], Kr & 31); + + buf_put_be32(outbuf + 0, r0); + buf_put_be32(outbuf + 4, l0); + buf_put_be32(outbuf + 8, r1); + buf_put_be32(outbuf + 12, l1); + buf_put_be32(outbuf + 16, r2); + buf_put_be32(outbuf + 20, l2); +} + #endif /*!USE_ARM_ASM*/ @@ -590,9 +772,8 @@ _gcry_cast5_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, CAST5_context *ctx = context; unsigned char *outbuf = outbuf_arg; const unsigned char *inbuf = inbuf_arg; - unsigned char tmpbuf[CAST5_BLOCKSIZE]; - int burn_stack_depth = (20 + 4 * sizeof(void*)) + 2 * CAST5_BLOCKSIZE; - + unsigned char tmpbuf[CAST5_BLOCKSIZE * 3]; + int burn_stack_depth = (20 + 4 * sizeof(void*)) + 4 * CAST5_BLOCKSIZE; #ifdef USE_AMD64_ASM { @@ -610,7 +791,6 @@ _gcry_cast5_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, } /* Use generic code to handle smaller chunks... */ - /* TODO: use caching instead? */ } #elif defined(USE_ARM_ASM) { @@ -625,10 +805,28 @@ _gcry_cast5_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, } /* Use generic code to handle smaller chunks... */ - /* TODO: use caching instead? */ } #endif +#if !defined(USE_AMD64_ASM) && !defined(USE_ARM_ASM) + for ( ;nblocks >= 3; nblocks -= 3) + { + /* Prepare the counter blocks. */ + cipher_block_cpy (tmpbuf + 0, ctr, CAST5_BLOCKSIZE); + cipher_block_cpy (tmpbuf + 8, ctr, CAST5_BLOCKSIZE); + cipher_block_cpy (tmpbuf + 16, ctr, CAST5_BLOCKSIZE); + cipher_block_add (tmpbuf + 8, 1, CAST5_BLOCKSIZE); + cipher_block_add (tmpbuf + 16, 2, CAST5_BLOCKSIZE); + cipher_block_add (ctr, 3, CAST5_BLOCKSIZE); + /* Encrypt the counter. */ + do_encrypt_block_3(ctx, tmpbuf, tmpbuf); + /* XOR the input with the encrypted counter and store in output. */ + buf_xor(outbuf, tmpbuf, inbuf, CAST5_BLOCKSIZE * 3); + outbuf += CAST5_BLOCKSIZE * 3; + inbuf += CAST5_BLOCKSIZE * 3; + } +#endif + for ( ;nblocks; nblocks-- ) { /* Encrypt the counter. 
*/ @@ -655,8 +853,8 @@ _gcry_cast5_cbc_dec(void *context, unsigned char *iv, void *outbuf_arg, CAST5_context *ctx = context; unsigned char *outbuf = outbuf_arg; const unsigned char *inbuf = inbuf_arg; - unsigned char savebuf[CAST5_BLOCKSIZE]; - int burn_stack_depth = (20 + 4 * sizeof(void*)) + 2 * CAST5_BLOCKSIZE; + unsigned char savebuf[CAST5_BLOCKSIZE * 3]; + int burn_stack_depth = (20 + 4 * sizeof(void*)) + 4 * CAST5_BLOCKSIZE; #ifdef USE_AMD64_ASM { @@ -691,6 +889,22 @@ _gcry_cast5_cbc_dec(void *context, unsigned char *iv, void *outbuf_arg, } #endif +#if !defined(USE_AMD64_ASM) && !defined(USE_ARM_ASM) + for ( ;nblocks >= 3; nblocks -= 3) + { + /* INBUF is needed later and it may be identical to OUTBUF, so store + the intermediate result to SAVEBUF. */ + do_decrypt_block_3 (ctx, savebuf, inbuf); + + cipher_block_xor_1 (savebuf + 0, iv, CAST5_BLOCKSIZE); + cipher_block_xor_1 (savebuf + 8, inbuf, CAST5_BLOCKSIZE * 2); + cipher_block_cpy (iv, inbuf + 16, CAST5_BLOCKSIZE); + buf_cpy (outbuf, savebuf, CAST5_BLOCKSIZE * 3); + inbuf += CAST5_BLOCKSIZE * 3; + outbuf += CAST5_BLOCKSIZE * 3; + } +#endif + for ( ;nblocks; nblocks-- ) { /* INBUF is needed later and it may be identical to OUTBUF, so store @@ -715,7 +929,8 @@ _gcry_cast5_cfb_dec(void *context, unsigned char *iv, void *outbuf_arg, CAST5_context *ctx = context; unsigned char *outbuf = outbuf_arg; const unsigned char *inbuf = inbuf_arg; - int burn_stack_depth = (20 + 4 * sizeof(void*)) + 2 * CAST5_BLOCKSIZE; + unsigned char tmpbuf[CAST5_BLOCKSIZE * 3]; + int burn_stack_depth = (20 + 4 * sizeof(void*)) + 4 * CAST5_BLOCKSIZE; #ifdef USE_AMD64_ASM { @@ -750,6 +965,19 @@ _gcry_cast5_cfb_dec(void *context, unsigned char *iv, void *outbuf_arg, } #endif +#if !defined(USE_AMD64_ASM) && !defined(USE_ARM_ASM) + for ( ;nblocks >= 3; nblocks -= 3 ) + { + cipher_block_cpy (tmpbuf + 0, iv, CAST5_BLOCKSIZE); + cipher_block_cpy (tmpbuf + 8, inbuf + 0, CAST5_BLOCKSIZE * 2); + cipher_block_cpy (iv, inbuf + 16, CAST5_BLOCKSIZE); + do_encrypt_block_3 (ctx, tmpbuf, tmpbuf); + buf_xor (outbuf, inbuf, tmpbuf, CAST5_BLOCKSIZE * 3); + outbuf += CAST5_BLOCKSIZE * 3; + inbuf += CAST5_BLOCKSIZE * 3; + } +#endif + for ( ;nblocks; nblocks-- ) { do_encrypt_block(ctx, iv, iv); @@ -758,6 +986,7 @@ _gcry_cast5_cfb_dec(void *context, unsigned char *iv, void *outbuf_arg, inbuf += CAST5_BLOCKSIZE; } + wipememory(tmpbuf, sizeof(tmpbuf)); _gcry_burn_stack(burn_stack_depth); } From jussi.kivilinna at iki.fi Sun Mar 31 17:59:34 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 31 Mar 2019 18:59:34 +0300 Subject: [PATCH 2/4] cast5: read Kr four blocks at time and shift for current round In-Reply-To: <155404796871.13638.16726195793507743936.stgit@localhost.localdomain> References: <155404796871.13638.16726195793507743936.stgit@localhost.localdomain> Message-ID: <155404797390.13638.3094981768012180887.stgit@localhost.localdomain> * cipher/cast5.c (do_encrypt_block, do_decrypt_block): Read Kr as 32-bit words instead of bytes and shift value for each round. 
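In other words, the per-round rotation amounts that were previously
fetched one byte at a time from c->Kr are now loaded four at a time as a
32-bit word and shifted down by 8 bits after each round, so 16
single-byte loads become 4 word loads per block. Excerpted from the
encryption direction of the patch (first four rounds):

u32 Kr = buf_get_le32 (c->Kr + 0);

t = l; l = r; r = t ^ F1 (r, Km[0], Kr & 31); Kr >>= 8;
t = l; l = r; r = t ^ F2 (r, Km[1], Kr & 31); Kr >>= 8;
t = l; l = r; r = t ^ F3 (r, Km[2], Kr & 31); Kr >>= 8;
t = l; l = r; r = t ^ F1 (r, Km[3], Kr & 31); Kr = buf_get_le32 (c->Kr + 4);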
-- Signed-off-by: Jussi Kivilinna --- cipher/cast5.c | 72 ++++++++++++++++++++++++++++---------------------------- 1 file changed, 36 insertions(+), 36 deletions(-) diff --git a/cipher/cast5.c b/cipher/cast5.c index cc5bd9d66..65485ba23 100644 --- a/cipher/cast5.c +++ b/cipher/cast5.c @@ -483,10 +483,10 @@ do_encrypt_block( CAST5_context *c, byte *outbuf, const byte *inbuf ) u32 l, r, t; u32 I; /* used by the Fx macros */ u32 *Km; - byte *Kr; + u32 Kr; Km = c->Km; - Kr = c->Kr; + Kr = buf_get_le32(c->Kr + 0); /* (L0,R0) <-- (m1...m64). (Split the plaintext into left and * right 32-bit halves L0 = m1...m32 and R0 = m33...m64.) @@ -502,22 +502,22 @@ do_encrypt_block( CAST5_context *c, byte *outbuf, const byte *inbuf ) * Rounds 3, 6, 9, 12, and 15 use f function Type 3. */ - t = l; l = r; r = t ^ F1(r, Km[ 0], Kr[ 0]); - t = l; l = r; r = t ^ F2(r, Km[ 1], Kr[ 1]); - t = l; l = r; r = t ^ F3(r, Km[ 2], Kr[ 2]); - t = l; l = r; r = t ^ F1(r, Km[ 3], Kr[ 3]); - t = l; l = r; r = t ^ F2(r, Km[ 4], Kr[ 4]); - t = l; l = r; r = t ^ F3(r, Km[ 5], Kr[ 5]); - t = l; l = r; r = t ^ F1(r, Km[ 6], Kr[ 6]); - t = l; l = r; r = t ^ F2(r, Km[ 7], Kr[ 7]); - t = l; l = r; r = t ^ F3(r, Km[ 8], Kr[ 8]); - t = l; l = r; r = t ^ F1(r, Km[ 9], Kr[ 9]); - t = l; l = r; r = t ^ F2(r, Km[10], Kr[10]); - t = l; l = r; r = t ^ F3(r, Km[11], Kr[11]); - t = l; l = r; r = t ^ F1(r, Km[12], Kr[12]); - t = l; l = r; r = t ^ F2(r, Km[13], Kr[13]); - t = l; l = r; r = t ^ F3(r, Km[14], Kr[14]); - t = l; l = r; r = t ^ F1(r, Km[15], Kr[15]); + t = l; l = r; r = t ^ F1(r, Km[ 0], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F2(r, Km[ 1], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F3(r, Km[ 2], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F1(r, Km[ 3], Kr & 31); Kr = buf_get_le32(c->Kr + 4); + t = l; l = r; r = t ^ F2(r, Km[ 4], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F3(r, Km[ 5], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F1(r, Km[ 6], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F2(r, Km[ 7], Kr & 31); Kr = buf_get_le32(c->Kr + 8); + t = l; l = r; r = t ^ F3(r, Km[ 8], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F1(r, Km[ 9], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F2(r, Km[10], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F3(r, Km[11], Kr & 31); Kr = buf_get_le32(c->Kr + 12); + t = l; l = r; r = t ^ F1(r, Km[12], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F2(r, Km[13], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F3(r, Km[14], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F1(r, Km[15], Kr & 31); /* c1...c64 <-- (R16,L16). (Exchange final blocks L16, R16 and * concatenate to form the ciphertext.) 
*/ @@ -540,30 +540,30 @@ do_decrypt_block (CAST5_context *c, byte *outbuf, const byte *inbuf ) u32 l, r, t; u32 I; u32 *Km; - byte *Kr; + u32 Kr; Km = c->Km; - Kr = c->Kr; + Kr = buf_get_be32(c->Kr + 12); l = buf_get_be32(inbuf + 0); r = buf_get_be32(inbuf + 4); - t = l; l = r; r = t ^ F1(r, Km[15], Kr[15]); - t = l; l = r; r = t ^ F3(r, Km[14], Kr[14]); - t = l; l = r; r = t ^ F2(r, Km[13], Kr[13]); - t = l; l = r; r = t ^ F1(r, Km[12], Kr[12]); - t = l; l = r; r = t ^ F3(r, Km[11], Kr[11]); - t = l; l = r; r = t ^ F2(r, Km[10], Kr[10]); - t = l; l = r; r = t ^ F1(r, Km[ 9], Kr[ 9]); - t = l; l = r; r = t ^ F3(r, Km[ 8], Kr[ 8]); - t = l; l = r; r = t ^ F2(r, Km[ 7], Kr[ 7]); - t = l; l = r; r = t ^ F1(r, Km[ 6], Kr[ 6]); - t = l; l = r; r = t ^ F3(r, Km[ 5], Kr[ 5]); - t = l; l = r; r = t ^ F2(r, Km[ 4], Kr[ 4]); - t = l; l = r; r = t ^ F1(r, Km[ 3], Kr[ 3]); - t = l; l = r; r = t ^ F3(r, Km[ 2], Kr[ 2]); - t = l; l = r; r = t ^ F2(r, Km[ 1], Kr[ 1]); - t = l; l = r; r = t ^ F1(r, Km[ 0], Kr[ 0]); + t = l; l = r; r = t ^ F1(r, Km[15], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F3(r, Km[14], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F2(r, Km[13], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F1(r, Km[12], Kr & 31); Kr = buf_get_be32(c->Kr + 8); + t = l; l = r; r = t ^ F3(r, Km[11], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F2(r, Km[10], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F1(r, Km[ 9], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F3(r, Km[ 8], Kr & 31); Kr = buf_get_be32(c->Kr + 4); + t = l; l = r; r = t ^ F2(r, Km[ 7], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F1(r, Km[ 6], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F3(r, Km[ 5], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F2(r, Km[ 4], Kr & 31); Kr = buf_get_be32(c->Kr + 0); + t = l; l = r; r = t ^ F1(r, Km[ 3], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F3(r, Km[ 2], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F2(r, Km[ 1], Kr & 31); Kr >>= 8; + t = l; l = r; r = t ^ F1(r, Km[ 0], Kr & 31); buf_put_be32(outbuf + 0, r); buf_put_be32(outbuf + 4, l); From jussi.kivilinna at iki.fi Sun Mar 31 17:59:44 2019 From: jussi.kivilinna at iki.fi (Jussi Kivilinna) Date: Sun, 31 Mar 2019 18:59:44 +0300 Subject: [PATCH 4/4] blowfish: add three rounds parallel handling to generic C implementation In-Reply-To: <155404796871.13638.16726195793507743936.stgit@localhost.localdomain> References: <155404796871.13638.16726195793507743936.stgit@localhost.localdomain> Message-ID: <155404798422.13638.15185087029882354076.stgit@localhost.localdomain> * cipher/blowfish.c (BLOWFISH_ROUNDS): Remove. [BLOWFISH_ROUNDS != 16] (function_F): Remove. (F): Replace big-endian and little-endian version with single endian-neutral version. (R3, do_encrypt_3, do_decrypt_3): New. (_gcry_blowfish_ctr_enc, _gcry_blowfish_cbc_dec) (_gcry_blowfish_cfb_dec): Use new three block functions. 
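Two small excerpts carry most of this change: the round function F() is
rewritten to index the S-boxes by shifting the 32-bit half instead of
aliasing it through a byte pointer, which makes one definition
endian-neutral, and R3() simply runs the existing round macro R() on
three independent blocks back to back so their dependency chains can
overlap:

#define F(x)   ((( s0[(x) >> 24] + s1[((x) >> 16) & 0xff]) \
                  ^ s2[((x) >> 8) & 0xff]) + s3[(x) & 0xff])
#define R(l,r,i)   do { l ^= p[i]; r ^= F(l); } while (0)
#define R3(l,r,i)  do { R(l##0,r##0,i); R(l##1,r##1,i); R(l##2,r##2,i); } while (0)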
-- Benchmark on aarch64 (cortex-a53, 816 Mhz): Before: BLOWFISH | nanosecs/byte mebibytes/sec cycles/byte CBC dec | 29.58 ns/B 32.24 MiB/s 24.13 c/B CFB dec | 33.38 ns/B 28.57 MiB/s 27.24 c/B CTR enc | 34.18 ns/B 27.90 MiB/s 27.89 c/B After (~60%-70% faster): BLOWFISH | nanosecs/byte mebibytes/sec cycles/byte CBC dec | 18.18 ns/B 52.45 MiB/s 14.84 c/B CFB dec | 19.67 ns/B 48.50 MiB/s 16.05 c/B CTR enc | 19.77 ns/B 48.25 MiB/s 16.13 c/B Benchmark on i386 (haswell, 4000 Mhz): Before: BLOWFISH | nanosecs/byte mebibytes/sec cycles/byte CBC dec | 6.10 ns/B 156.4 MiB/s 24.39 c/B CFB dec | 6.39 ns/B 149.2 MiB/s 25.56 c/B CTR enc | 6.73 ns/B 141.6 MiB/s 26.93 c/B After (~80% faster): BLOWFISH | nanosecs/byte mebibytes/sec cycles/byte CBC dec | 3.46 ns/B 275.5 MiB/s 13.85 c/B CFB dec | 3.53 ns/B 270.4 MiB/s 14.11 c/B CTR enc | 3.56 ns/B 268.0 MiB/s 14.23 c/B Signed-off-by: Jussi Kivilinna --- cipher/blowfish.c | 293 ++++++++++++++++++++++++++++++++--------------------- 1 file changed, 179 insertions(+), 114 deletions(-) diff --git a/cipher/blowfish.c b/cipher/blowfish.c index e7e199afc..ea6e64a7b 100644 --- a/cipher/blowfish.c +++ b/cipher/blowfish.c @@ -41,21 +41,19 @@ #include "cipher-selftest.h" #define BLOWFISH_BLOCKSIZE 8 -#define BLOWFISH_ROUNDS 16 /* USE_AMD64_ASM indicates whether to use AMD64 assembly code. */ #undef USE_AMD64_ASM #if defined(__x86_64__) && (defined(HAVE_COMPATIBLE_GCC_AMD64_PLATFORM_AS) || \ - defined(HAVE_COMPATIBLE_GCC_WIN64_PLATFORM_AS)) && \ - (BLOWFISH_ROUNDS == 16) + defined(HAVE_COMPATIBLE_GCC_WIN64_PLATFORM_AS)) # define USE_AMD64_ASM 1 #endif /* USE_ARM_ASM indicates whether to use ARM assembly code. */ #undef USE_ARM_ASM #if defined(__ARMEL__) -# if (BLOWFISH_ROUNDS == 16) && defined(HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS) +# if defined(HAVE_COMPATIBLE_GCC_ARM_PLATFORM_AS) # define USE_ARM_ASM 1 # endif #endif @@ -65,7 +63,7 @@ typedef struct { u32 s1[256]; u32 s2[256]; u32 s3[256]; - u32 p[BLOWFISH_ROUNDS+2]; + u32 p[16+2]; } BLOWFISH_context; static gcry_err_code_t bf_setkey (void *c, const byte *key, unsigned keylen, @@ -255,7 +253,7 @@ static const u32 ks3[256] = { 0x01C36AE4,0xD6EBE1F9,0x90D4F869,0xA65CDEA0,0x3F09252D,0xC208E69F, 0xB74E6132,0xCE77E25B,0x578FDFE3,0x3AC372E6 }; -static const u32 ps[BLOWFISH_ROUNDS+2] = { +static const u32 ps[16+2] = { 0x243F6A88,0x85A308D3,0x13198A2E,0x03707344,0xA4093822,0x299F31D0, 0x082EFA98,0xEC4E6C89,0x452821E6,0x38D01377,0xBE5466CF,0x34E90C6C, 0xC0AC29B7,0xC97C50DD,0x3F84D5B5,0xB5470917,0x9216D5D9,0x8979FB1B }; @@ -396,42 +394,16 @@ decrypt_block (void *context, byte *outbuf, const byte *inbuf) #else /*USE_ARM_ASM*/ -#if BLOWFISH_ROUNDS != 16 -static inline u32 -function_F( BLOWFISH_context *bc, u32 x ) -{ - u16 a, b, c, d; - -#ifdef WORDS_BIGENDIAN - a = ((byte*)&x)[0]; - b = ((byte*)&x)[1]; - c = ((byte*)&x)[2]; - d = ((byte*)&x)[3]; -#else - a = ((byte*)&x)[3]; - b = ((byte*)&x)[2]; - c = ((byte*)&x)[1]; - d = ((byte*)&x)[0]; -#endif - return ((bc->s0[a] + bc->s1[b]) ^ bc->s2[c] ) + bc->s3[d]; -} -#endif - -#ifdef WORDS_BIGENDIAN -#define F(x) ((( s0[((byte*)&x)[0]] + s1[((byte*)&x)[1]]) \ - ^ s2[((byte*)&x)[2]]) + s3[((byte*)&x)[3]] ) -#else -#define F(x) ((( s0[((byte*)&x)[3]] + s1[((byte*)&x)[2]]) \ - ^ s2[((byte*)&x)[1]]) + s3[((byte*)&x)[0]] ) -#endif -#define R(l,r,i) do { l ^= p[i]; r ^= F(l); } while(0) +#define F(x) ((( s0[(x)>>24] + s1[((x)>>16)&0xff]) \ + ^ s2[((x)>>8)&0xff]) + s3[(x)&0xff] ) +#define R(l,r,i) do { l ^= p[i]; r ^= F(l); } while(0) +#define R3(l,r,i) do { 
R(l##0,r##0,i);R(l##1,r##1,i);R(l##2,r##2,i);} while(0) static void do_encrypt ( BLOWFISH_context *bc, u32 *ret_xl, u32 *ret_xr ) { -#if BLOWFISH_ROUNDS == 16 u32 xl, xr, *s0, *s1, *s2, *s3, *p; xl = *ret_xl; @@ -442,16 +414,16 @@ do_encrypt ( BLOWFISH_context *bc, u32 *ret_xl, u32 *ret_xr ) s2 = bc->s2; s3 = bc->s3; - R( xl, xr, 0); - R( xr, xl, 1); - R( xl, xr, 2); - R( xr, xl, 3); - R( xl, xr, 4); - R( xr, xl, 5); - R( xl, xr, 6); - R( xr, xl, 7); - R( xl, xr, 8); - R( xr, xl, 9); + R( xl, xr, 0); + R( xr, xl, 1); + R( xl, xr, 2); + R( xr, xl, 3); + R( xl, xr, 4); + R( xr, xl, 5); + R( xl, xr, 6); + R( xr, xl, 7); + R( xl, xr, 8); + R( xr, xl, 9); R( xl, xr, 10); R( xr, xl, 11); R( xl, xr, 12); @@ -459,45 +431,67 @@ do_encrypt ( BLOWFISH_context *bc, u32 *ret_xl, u32 *ret_xr ) R( xl, xr, 14); R( xr, xl, 15); - xl ^= p[BLOWFISH_ROUNDS]; - xr ^= p[BLOWFISH_ROUNDS+1]; + xl ^= p[16]; + xr ^= p[16+1]; *ret_xl = xr; *ret_xr = xl; +} -#else - u32 xl, xr, temp, *p; - int i; - xl = *ret_xl; - xr = *ret_xr; +static void +do_encrypt_3 ( BLOWFISH_context *bc, byte *dst, const byte *src ) +{ + u32 xl0, xr0, xl1, xr1, xl2, xr2, *s0, *s1, *s2, *s3, *p; + + xl0 = buf_get_be32(src + 0); + xr0 = buf_get_be32(src + 4); + xl1 = buf_get_be32(src + 8); + xr1 = buf_get_be32(src + 12); + xl2 = buf_get_be32(src + 16); + xr2 = buf_get_be32(src + 20); p = bc->p; + s0 = bc->s0; + s1 = bc->s1; + s2 = bc->s2; + s3 = bc->s3; - for(i=0; i < BLOWFISH_ROUNDS; i++ ) - { - xl ^= p[i]; - xr ^= function_F(bc, xl); - temp = xl; - xl = xr; - xr = temp; - } - temp = xl; - xl = xr; - xr = temp; - - xr ^= p[BLOWFISH_ROUNDS]; - xl ^= p[BLOWFISH_ROUNDS+1]; - - *ret_xl = xl; - *ret_xr = xr; -#endif + R3( xl, xr, 0); + R3( xr, xl, 1); + R3( xl, xr, 2); + R3( xr, xl, 3); + R3( xl, xr, 4); + R3( xr, xl, 5); + R3( xl, xr, 6); + R3( xr, xl, 7); + R3( xl, xr, 8); + R3( xr, xl, 9); + R3( xl, xr, 10); + R3( xr, xl, 11); + R3( xl, xr, 12); + R3( xr, xl, 13); + R3( xl, xr, 14); + R3( xr, xl, 15); + + xl0 ^= p[16]; + xr0 ^= p[16+1]; + xl1 ^= p[16]; + xr1 ^= p[16+1]; + xl2 ^= p[16]; + xr2 ^= p[16+1]; + + buf_put_be32(dst + 0, xr0); + buf_put_be32(dst + 4, xl0); + buf_put_be32(dst + 8, xr1); + buf_put_be32(dst + 12, xl1); + buf_put_be32(dst + 16, xr2); + buf_put_be32(dst + 20, xl2); } static void decrypt ( BLOWFISH_context *bc, u32 *ret_xl, u32 *ret_xr ) { -#if BLOWFISH_ROUNDS == 16 u32 xl, xr, *s0, *s1, *s2, *s3, *p; xl = *ret_xl; @@ -516,52 +510,75 @@ decrypt ( BLOWFISH_context *bc, u32 *ret_xl, u32 *ret_xr ) R( xr, xl, 12); R( xl, xr, 11); R( xr, xl, 10); - R( xl, xr, 9); - R( xr, xl, 8); - R( xl, xr, 7); - R( xr, xl, 6); - R( xl, xr, 5); - R( xr, xl, 4); - R( xl, xr, 3); - R( xr, xl, 2); + R( xl, xr, 9); + R( xr, xl, 8); + R( xl, xr, 7); + R( xr, xl, 6); + R( xl, xr, 5); + R( xr, xl, 4); + R( xl, xr, 3); + R( xr, xl, 2); xl ^= p[1]; xr ^= p[0]; *ret_xl = xr; *ret_xr = xl; +} -#else - u32 xl, xr, temp, *p; - int i; - xl = *ret_xl; - xr = *ret_xr; +static void +do_decrypt_3 ( BLOWFISH_context *bc, byte *dst, const byte *src ) +{ + u32 xl0, xr0, xl1, xr1, xl2, xr2, *s0, *s1, *s2, *s3, *p; + + xl0 = buf_get_be32(src + 0); + xr0 = buf_get_be32(src + 4); + xl1 = buf_get_be32(src + 8); + xr1 = buf_get_be32(src + 12); + xl2 = buf_get_be32(src + 16); + xr2 = buf_get_be32(src + 20); p = bc->p; + s0 = bc->s0; + s1 = bc->s1; + s2 = bc->s2; + s3 = bc->s3; - for (i=BLOWFISH_ROUNDS+1; i > 1; i-- ) - { - xl ^= p[i]; - xr ^= function_F(bc, xl); - temp = xl; - xl = xr; - xr = temp; - } - - temp = xl; - xl = xr; - xr = temp; - - xr ^= p[1]; - xl ^= p[0]; - - 
*ret_xl = xl; - *ret_xr = xr; -#endif + R3( xl, xr, 17); + R3( xr, xl, 16); + R3( xl, xr, 15); + R3( xr, xl, 14); + R3( xl, xr, 13); + R3( xr, xl, 12); + R3( xl, xr, 11); + R3( xr, xl, 10); + R3( xl, xr, 9); + R3( xr, xl, 8); + R3( xl, xr, 7); + R3( xr, xl, 6); + R3( xl, xr, 5); + R3( xr, xl, 4); + R3( xl, xr, 3); + R3( xr, xl, 2); + + xl0 ^= p[1]; + xr0 ^= p[0]; + xl1 ^= p[1]; + xr1 ^= p[0]; + xl2 ^= p[1]; + xr2 ^= p[0]; + + buf_put_be32(dst + 0, xr0); + buf_put_be32(dst + 4, xl0); + buf_put_be32(dst + 8, xr1); + buf_put_be32(dst + 12, xl1); + buf_put_be32(dst + 16, xr2); + buf_put_be32(dst + 20, xl2); } #undef F #undef R +#undef R3 static void do_encrypt_block ( BLOWFISH_context *bc, byte *outbuf, const byte *inbuf ) @@ -617,8 +634,8 @@ _gcry_blowfish_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, BLOWFISH_context *ctx = context; unsigned char *outbuf = outbuf_arg; const unsigned char *inbuf = inbuf_arg; - unsigned char tmpbuf[BLOWFISH_BLOCKSIZE]; - int burn_stack_depth = (64) + 2 * BLOWFISH_BLOCKSIZE; + unsigned char tmpbuf[BLOWFISH_BLOCKSIZE * 3]; + int burn_stack_depth = (64) + 4 * BLOWFISH_BLOCKSIZE; #ifdef USE_AMD64_ASM { @@ -636,7 +653,6 @@ _gcry_blowfish_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, } /* Use generic code to handle smaller chunks... */ - /* TODO: use caching instead? */ } #elif defined(USE_ARM_ASM) { @@ -651,10 +667,28 @@ _gcry_blowfish_ctr_enc(void *context, unsigned char *ctr, void *outbuf_arg, } /* Use generic code to handle smaller chunks... */ - /* TODO: use caching instead? */ } #endif +#if !defined(USE_AMD64_ASM) && !defined(USE_ARM_ASM) + for ( ;nblocks >= 3; nblocks -= 3) + { + /* Prepare the counter blocks. */ + cipher_block_cpy (tmpbuf + 0, ctr, BLOWFISH_BLOCKSIZE); + cipher_block_cpy (tmpbuf + 8, ctr, BLOWFISH_BLOCKSIZE); + cipher_block_cpy (tmpbuf + 16, ctr, BLOWFISH_BLOCKSIZE); + cipher_block_add (tmpbuf + 8, 1, BLOWFISH_BLOCKSIZE); + cipher_block_add (tmpbuf + 16, 2, BLOWFISH_BLOCKSIZE); + cipher_block_add (ctr, 3, BLOWFISH_BLOCKSIZE); + /* Encrypt the counter. */ + do_encrypt_3(ctx, tmpbuf, tmpbuf); + /* XOR the input with the encrypted counter and store in output. */ + buf_xor(outbuf, tmpbuf, inbuf, BLOWFISH_BLOCKSIZE * 3); + outbuf += BLOWFISH_BLOCKSIZE * 3; + inbuf += BLOWFISH_BLOCKSIZE * 3; + } +#endif + for ( ;nblocks; nblocks-- ) { /* Encrypt the counter. */ @@ -681,8 +715,8 @@ _gcry_blowfish_cbc_dec(void *context, unsigned char *iv, void *outbuf_arg, BLOWFISH_context *ctx = context; unsigned char *outbuf = outbuf_arg; const unsigned char *inbuf = inbuf_arg; - unsigned char savebuf[BLOWFISH_BLOCKSIZE]; - int burn_stack_depth = (64) + 2 * BLOWFISH_BLOCKSIZE; + unsigned char savebuf[BLOWFISH_BLOCKSIZE * 3]; + int burn_stack_depth = (64) + 4 * BLOWFISH_BLOCKSIZE; #ifdef USE_AMD64_ASM { @@ -717,6 +751,22 @@ _gcry_blowfish_cbc_dec(void *context, unsigned char *iv, void *outbuf_arg, } #endif +#if !defined(USE_AMD64_ASM) && !defined(USE_ARM_ASM) + for ( ;nblocks >= 3; nblocks -= 3) + { + /* INBUF is needed later and it may be identical to OUTBUF, so store + the intermediate result to SAVEBUF. 
*/ + do_decrypt_3 (ctx, savebuf, inbuf); + + cipher_block_xor_1 (savebuf + 0, iv, BLOWFISH_BLOCKSIZE); + cipher_block_xor_1 (savebuf + 8, inbuf, BLOWFISH_BLOCKSIZE * 2); + cipher_block_cpy (iv, inbuf + 16, BLOWFISH_BLOCKSIZE); + buf_cpy (outbuf, savebuf, BLOWFISH_BLOCKSIZE * 3); + inbuf += BLOWFISH_BLOCKSIZE * 3; + outbuf += BLOWFISH_BLOCKSIZE * 3; + } +#endif + for ( ;nblocks; nblocks-- ) { /* INBUF is needed later and it may be identical to OUTBUF, so store @@ -742,7 +792,8 @@ _gcry_blowfish_cfb_dec(void *context, unsigned char *iv, void *outbuf_arg, BLOWFISH_context *ctx = context; unsigned char *outbuf = outbuf_arg; const unsigned char *inbuf = inbuf_arg; - int burn_stack_depth = (64) + 2 * BLOWFISH_BLOCKSIZE; + unsigned char tmpbuf[BLOWFISH_BLOCKSIZE * 3]; + int burn_stack_depth = (64) + 4 * BLOWFISH_BLOCKSIZE; #ifdef USE_AMD64_ASM { @@ -777,6 +828,19 @@ _gcry_blowfish_cfb_dec(void *context, unsigned char *iv, void *outbuf_arg, } #endif +#if !defined(USE_AMD64_ASM) && !defined(USE_ARM_ASM) + for ( ;nblocks >= 3; nblocks -= 3 ) + { + cipher_block_cpy (tmpbuf + 0, iv, BLOWFISH_BLOCKSIZE); + cipher_block_cpy (tmpbuf + 8, inbuf + 0, BLOWFISH_BLOCKSIZE * 2); + cipher_block_cpy (iv, inbuf + 16, BLOWFISH_BLOCKSIZE); + do_encrypt_3 (ctx, tmpbuf, tmpbuf); + buf_xor (outbuf, inbuf, tmpbuf, BLOWFISH_BLOCKSIZE * 3); + outbuf += BLOWFISH_BLOCKSIZE * 3; + inbuf += BLOWFISH_BLOCKSIZE * 3; + } +#endif + for ( ;nblocks; nblocks-- ) { do_encrypt_block(ctx, iv, iv); @@ -785,6 +849,7 @@ _gcry_blowfish_cfb_dec(void *context, unsigned char *iv, void *outbuf_arg, inbuf += BLOWFISH_BLOCKSIZE; } + wipememory(tmpbuf, sizeof(tmpbuf)); _gcry_burn_stack(burn_stack_depth); } @@ -955,7 +1020,7 @@ do_bf_setkey (BLOWFISH_context *c, const byte *key, unsigned keylen) memset(hset, 0, sizeof(hset)); - for(i=0; i < BLOWFISH_ROUNDS+2; i++ ) + for(i=0; i < 16+2; i++ ) c->p[i] = ps[i]; for(i=0; i < 256; i++ ) { @@ -965,7 +1030,7 @@ do_bf_setkey (BLOWFISH_context *c, const byte *key, unsigned keylen) c->s3[i] = ks3[i]; } - for(i=j=0; i < BLOWFISH_ROUNDS+2; i++ ) + for(i=j=0; i < 16+2; i++ ) { data = ((u32)key[j] << 24) | ((u32)key[(j+1)%keylen] << 16) | @@ -976,7 +1041,7 @@ do_bf_setkey (BLOWFISH_context *c, const byte *key, unsigned keylen) } datal = datar = 0; - for(i=0; i < BLOWFISH_ROUNDS+2; i += 2 ) + for(i=0; i < 16+2; i += 2 ) { do_encrypt( c, &datal, &datar ); c->p[i] = datal;