En-/Decryption speed for large files (GnuPG and Gpg4win)
Andre Heinecke
aheinecke at gnupg.org
Tue Jan 17 13:08:18 CET 2023
Hi,
On Sunday 15 January 2023 10:52:23 CET Christoph Klassen wrote:
> When I was testing the decryption I also tried "gpg --decrypt
> test_file.gpg" (without output file) with the 10 GB file and it took 8
> minutes and 47 seconds. I was wondering why it took longer when GnuPG
> didn't need to create an output file.
Yes, that is expected. GnuPG encryption and decryption with AES should be 
mostly I/O-bound, since with the AES-NI instructions the cipher itself is very 
fast on the CPU. So not writing the output to disk will result in faster 
operations, and one of the biggest differences you will see is when you 
encrypt / decrypt on a faster disk.
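For example, one quick way to see the effect of the output path is to time 
both variants from a shell (the file names are just placeholders; the 
> /dev/null variant discards the plaintext instead of writing it anywhere):

  # decrypt to a file on disk
  time gpg --output test_file --decrypt test_file.gpg

  # decrypt without writing the plaintext to disk
  time gpg --decrypt test_file.gpg > /dev/null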
Another big difference you will see in the performance of GnuPG is whether 
you use -z 0, which disables compression. Currently, GnuPG on the command 
line disables compression when the input file name already looks compressed, 
based on the file name. We want to improve that, especially since Kleopatra 
hands over the file name in a way that is not used in that compression 
decision. E.g. adding media file formats to that list might already help in a 
lot of use cases. For incompressible data, like random data, this will make 
the largest difference. You can put "compress-level 0" into your gpg.conf to 
cause Kleopatra to also skip compression.
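Concretely, that looks like this (KEYID is a placeholder for your recipient 
key):

  # disable compression for a single invocation
  gpg -z 0 --recipient KEYID --encrypt test_file

  # or persistently in gpg.conf, so Kleopatra picks it up as well
  echo "compress-level 0" >> ~/.gnupg/gpg.conf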
That issue is https://dev.gnupg.org/T6332. If you could do a run of your
tests and comment on that issue with the results, that would be helpful.
It does not surprise me that Kleopatra is much slower. Due to our 
architecture, Kleopatra passes data through GPGME directly to GnuPG. This 
results in additional overhead but gives us more flexibility regarding what 
kind of data we encrypt / decrypt, e.g. a mail or something that is never 
written to the file system at all.
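A rough command-line analogy for that overhead, not literally what Kleopatra 
does, is streaming the data through a pipe instead of letting gpg access the 
files directly:

  # data streamed through pipes, similar in spirit to the GPGME data path
  cat test_file | gpg --recipient KEYID --encrypt > test_file.gpg

  # versus gpg reading and writing the files itself
  gpg --output test_file.gpg --recipient KEYID --encrypt test_file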
For some parts we want to change that. Most notably, Ingo is currently 
working on gpgtar. gpgtar can nowadays encrypt / decrypt directly, so there 
is no need to pipe the input / output of GnuPG to or from gpgtar. Using 
gpgtar directly should help a lot when working with larger archives: 
https://dev.gnupg.org/T5478
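For example, assuming a directory my_archive and a recipient KEYID, a direct 
gpgtar round trip looks like this:

  # encrypt a whole directory in one step
  gpgtar --encrypt --recipient KEYID --output my_archive.gpg my_archive

  # decrypt and unpack it again below the current directory
  gpgtar --decrypt my_archive.gpg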
We have also already increased the buffer size in GPGME to reduce the number 
of callbacks we do internally, but there is room for more optimization there. 
Currently, our recommendation for large data is to use the command line 
directly, which will always be fastest as there is no additional overhead.
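So for large, incompressible data, the fast path would be something like this 
on the command line (again with placeholder names):

  # no compression, files read and written directly by gpg
  time gpg -z 0 --recipient KEYID --output big_file.gpg --encrypt big_file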
> Did someone of you also try to en-/decrypt larger files? Maybe even
> files that are larger than 1 TB? It would be really nice to know how
> long GnuPG and Gpg4win are busy with such large files.
I think my largest tests were around 40 GB, but I don't have the numbers 
anymore; the testing I did there was mostly because there were reports that 
Kleopatra crashed on such large files.
Maybe you can open a ticket for this, with a reference to 
https://dev.gnupg.org/T5478, about performance problems when decrypting / 
encrypting large files (in contrast to archives).
Best Regards,
Andre
P.S. We are currently also looking at the startup / initial keycache building
time of Kleopatra. This might also be interesting for those looking at 
Kleopatra performance: https://dev.gnupg.org/T6259
--
GnuPG.com - a brand of g10 Code, the GnuPG experts.
g10 Code GmbH, Erkrath/Germany, AG Wuppertal HRB14459
GF Werner Koch, USt-Id DE215605608, www.g10code.com.
GnuPG e.V., Rochusstr. 44, D-40479 Düsseldorf. VR 11482 Düsseldorf
Vorstand: W.Koch, B.Reiter, A.Heinecke Mail: board at gnupg.org
Finanzamt D-Altstadt, St-Nr: 103/5923/1779. Tel: +49-211-28010702