Upgrading from gpg1 to gpg2: lots of trouble, need help

Daniel Kahn Gillmor dkg at fifthhorseman.net
Wed Dec 20 15:52:14 CET 2017


Hi raf--

On Wed 2017-12-20 14:11:26 +1100, gnupg at raf.org wrote:
> Daniel Kahn Gillmor wrote:
>> On Mon 2017-12-18 20:01:02 +1100, gnupg at raf.org wrote:
>> > For most of my decryption use cases I can't use a
>> > pinentry program. Instead, I have to start gpg-agent in
>> > advance (despite what its manpage says) with
>> > --allow-preset-passphrase so that I can then use
>> > gpg-preset-passphrase so that when gpg is run later, it
>> > can decrypt unaided.
>> 
>> can you explain more about this use case?  it sounds to me like you
>> might prefer to just keep your secret keys without a passphrase in the
>> first place.
>
> I'm assuming that you are referring to the use case in Question 1.
>
> Definitely not. That would make it possible for the decryption to
> take place at any time. I need it to only be able to take place
> for short periods of time when I am expecting it.

OK, so your preferred outcome is some way to enable a key for a limited
period of time.  is that right?
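
(if so: just to illustrate how i'd close that window again on the
agent side, a rough sketch -- KEYGRIP below is a placeholder for the
decryption subkey's keygrip, and the helper's path varies by distro:)

    # drop a single preset entry when the window should close
    /usr/lib/gnupg/gpg-preset-passphrase --forget KEYGRIP

    # or slam the window shut by stopping the agent entirely
    gpgconf --kill gpg-agent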

> I think the real problem with this use case is that the incoming
> ssh connections from the other hosts are starting their own
> gpg-agent (I'm guessing using the S.gpg-agent.ssh socket) rather
> than just connecting to the existing gpg-agent that I have put
> the passphrase into (I'm guessing that gpg-agent uses the
> S.gpg-agent socket).

there should be only one S.gpg-agent.ssh socket, and therefore only one
agent.  If you were using systemd and dbus user sessions, those system
management tools would make sure that these things exist.  This is the
entire point of session management.  It's complex to do by hand, and
choosing to abandon the tools that offer it to you seems gratuitously
masochistic.  But ok…
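
for reference, here's how i'd check that single-socket setup on a
systemd-managed host (a sketch; the unit names are the ones Debian
ships, and may differ elsewhere):

    # where gpg and ssh will look for the agent sockets
    gpgconf --list-dirs agent-socket
    gpgconf --list-dirs agent-ssh-socket

    # the socket-activated user units that guarantee a single agent
    systemctl --user list-units 'gpg-agent*'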

> What I want is to have gpg and encrypted data and a
> key-with-a-strong-passphrase on a small number of servers and
> then, when needed and only when needed, I want to be able to
> enable unassisted decryption by the uid that owns the
> data/keys/gpg-agent. Other hosts that need access to the
> decrypted data need to be able to ssh to the host that has
> gpg/keys/data to get that data without my interaction.
>
> I need to be able to ssh to the server with gpg/keys/data to set
> things up. Then I need to be able to log out without gpg-agent
> disappearing. Then the other servers need to be able to ssh to
> that server and use the gpg-agent that I prepared earlier so as
> to decrypt the data. Then I need to be able to ssh back in and
> turn off gpg-agent.

I'm still not sure i understand your threat model -- apparently your
theorized attacker is capable of compromising the account on the
targeted host, but *only* during the periods before you enable (or
after you disable) gpg-agent.  Is that right?

Why do you need these multi-detached operations?  by "multi-detached" i
mean that your sequence of operations appears to be:

 * attach
 * enable gpg-agent
 * detach
 * other things use…
 * attach
 * disable gpg-agent
 * detach

wouldn't you rather monitor these potentially-vulnerable accounts (by
staying attached or keeping a session open while they're in use)?

> The big picture is that there are some publicly accessible
> servers that need access to sensitive data (e.g. database
> passwords and symmetric encryption keys and similar) that I
> don't want stored on those servers at all. Instead there are
> service processes that fetch the data from a set of several
> other servers that are not publicly accessible. This fetching
> of data only needs to happen when the publicly accessible
> servers reboot or when the data fetching services are
> restarted/reconfigured.

so what is the outcome if the gpg-agent is disabled when these
reboots/restarts happen?  how do you coordinate that access?

> I want to be able to enter the passphrase once (on each of the
> gpg/data/key hosts) before I reboot the publicly accessible
> hosts, and I want that to be sufficient to enable multiple
> incoming ssh connections from the rebooting hosts to get what
> they need, and when the hosts have successfully rebooted I want
> to be able to turn off gpg-agent.
>
> If you prefer, the confirmation of the use of private keys is me
> entering the passphrase into gpg-agent before the other hosts
> make their ssh connections.

this approach seems congruent with my single-attach proposal:

 * you log into "key management" host (this enables the systemd
   gpg-agent user service)
   
 * on "key management" host, enable key access using
   gpg-preset-passphrase or something similar

 * you trigger restart of public-facing service

 * public-facing service connects to "key management" host, gets the
   data it needs

 * you verify that the restart of the public-facing service is successful

 * you log out of "key management" host.  dbus-user-session shuts down
   the gpg-agent automatically when your session ends, disabling
   access to those keys.
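
for that second step, something like this should do it (an untested
sketch: KEYGRIP is a placeholder, and gpg-preset-passphrase usually
lives under the libexec directory rather than on $PATH, so adjust the
path for your system):

    # find the keygrip of the decryption subkey
    gpg --list-secret-keys --with-keygrip

    # feed the passphrase to the running agent for that keygrip
    # (the agent must allow this: put "allow-preset-passphrase" in
    #  gpg-agent.conf, or start it with --allow-preset-passphrase)
    /usr/lib/gnupg/gpg-preset-passphrase --preset KEYGRIP
    # (it reads the passphrase from stdin)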

can you explain why that doesn't meet your goals?

> Also, for redundancy purposes, the data and keys need to be
> stored on multiple servers in different locations.

I think there are other ways to address your redundancy concerns that
don't involve giving each of the redundant backup servers standing
access to the cleartext of the secret key material; so i'm going to
set that concern aside here.

> Even if I consider those servers to be "local", it's still not what I
> want because that assumes that it is the server with the keys that
> connects to the other servers with data that needs to be decrypted
> with those keys. In this case, it is those other servers that will be
> making the connections to the server with the keys (and the data). I
> don't want their rebooting to be delayed by my having to log in to
> each of them with a passphrase or a forwarded gpg-agent connection. I
> want them to make the connection by themselves as soon as they are
> ready to, obtain the data they need, and continue booting up.

Here, i think you're making an efficiency argument -- you want to
prepare the "key management" host in advance, so that during the boot
process of the public-facing service, it gets what it needs without
you needing to manipulate it directly.

> I'm not sure I understand your reasons for asking all these
> questions. Is it that you don't think that what I want to do is
> still possible with gnupg2.1+ and are you trying to convince me
> to fundamentally change what I'm doing?

I'm trying to extract high-level, security-conscious, sensible goals
from your descriptions, so that i can help you figure out how to meet
them.  It's possible that your existing choices don't actually meet
those goals as well as you thought they did, and that newer tools can
get you closer.

This may mean some amount of change, but it's change in the direction of
what you actually want, so hopefully it's worth the pain.

> Can incoming ssh connections use the existing gpg-agent that I
> have already started and preset with a passphrase or not? Does
> anyone know?

yes, i've tested it.  it works.
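
one way to convince yourself from inside one of those incoming ssh
sessions (a sketch; sample.gpg stands in for any file encrypted to the
preset key):

    # confirm the session sees the same per-user agent socket
    gpgconf --list-dirs agent-socket

    # list the keygrips the agent holds; one column of each KEYINFO
    # line indicates whether a passphrase is currently cached for it
    gpg-connect-agent 'keyinfo --list' /bye

    # and the real test: decryption without any pinentry
    gpg --decrypt sample.gpg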

> Is continuing to use gpg1 indefinitely an option? Will it
> continue to work with recent versions of gpg-agent?

gpg1 only "works" with versions of gpg-agent as a passphrase cache, but
modern versions of GnuPG use gpg-agent as an actual cryptographic agent,
which does not release the secret key at all.

This is actually what i think you want, as it minimizes exposure of the
secret key itself.  gpg1 has access to the full secret key, while gpg2
deliberately does not.

gpg-preset-passphrase only unlocks access to secret key material in the
agent -- that is, it does *not* touch the passphrase cache.  This means
that it is incompatible with gpg1, as noted in the manual page.

> Debian says that gpg1 is deprecated but I've read that gpg1 is
> now mostly only useful for embedded systems (or servers).

where did you read this?  imho, gpg1 is now mostly only useful for
people with bizarre legacy constraints (like using an ancient, known-bad
PGP-2 key to maintain a system that is so crufty it cannot update the
list of administrator keys).

> Since IoT and servers will never go away, does that mean that gpg1
> will never go away? I'd be happy to keep using gpg1 if I knew that it
> wouldn't go away and if I knew that it would keep working with recent
> versions of gpg-agent.

i advise against this approach.  please use the modern version.  it is
well-maintained and should meet your needs.

                --dkg