Upgrading from gpg1 to gpg2: lots of trouble, need help
gnupg at raf.org
Wed Dec 20 04:11:26 CET 2017
Hi Daniel,
Thanks for responding.
Daniel Kahn Gillmor wrote:
> On Mon 2017-12-18 20:01:02 +1100, gnupg at raf.org wrote:
> > For most of my decryption use cases I can't use a
> > pinentry program. Instead, I have to start gpg-agent in
> > advance (despite what its manpage says) with
> > --allow-preset-passphrase so that I can then use
> > gpg-preset-passphrase so that when gpg is run later, it
> > can decrypt unaided.
>
> can you explain more about this use case? it sounds to me like you
> might prefer to just keep your secret keys without a passphrase in the
> first place.
I'm assuming that you are referring to the use case in Question 1.
Definitely not. That would make it possible for the decryption to
take place at any time. I need it to only be able to take place
for short periods of time when I am expecting it.
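For reference, the presetting step looks roughly like this on my
debian9 hosts (the keygrip is a placeholder, and I think the path
to gpg-preset-passphrase is GnuPG's libexecdir, so it may differ
elsewhere):

    # find the keygrip of the decryption subkey
    gpg --list-secret-keys --with-keygrip

    # hand the passphrase to the already-running agent (which was
    # started with --allow-preset-passphrase); the passphrase is
    # read from stdin
    /usr/lib/gnupg/gpg-preset-passphrase --preset KEYGRIP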
> > I also discovered that I need to disable systemd's
> > handling of gpg-agent (on debian9 with gpg-2.1.18) if I
> > want to control when gpg-agent starts and stops and
> > which options are passed to it. I know this is not
> > recommended but I've had too much trouble in the past
> > with systemd thinking that it knows when a "user" has
> > "logged out" and then deciding to "clean up" causing me
> > masses of grief that I just can't bring myself to trust
> > it to know what it's doing.
> >
> > I've disabled systemd's handling of gpg-agent on the
> > debian9 hosts with:
> >
> > systemctl --global mask --now gpg-agent.service
> > systemctl --global mask --now gpg-agent.socket
> > systemctl --global mask --now gpg-agent-ssh.socket
> > systemctl --global mask --now gpg-agent-extra.socket
> > systemctl --global mask --now gpg-agent-browser.socket
> >
> > (from /usr/share/doc/gnupg-agent/README.Debian)
> >
> > I know someone on the internet has expressed
> > unhappiness about people doing this and not being happy
> > about supporting people who do it but please just pretend
> > that it's a non-systemd system. Not everything is Linux
> > after all. Gnupg should still work.
>
> i might be "someone on the internet" :)
>
> I can pretend it's a non-systemd system if you like -- that means you
> simply don't have functional per-user session management, and it's now
> on you to figure out session management yourself.
Which is exactly how I want it. I want to decide when gpg-agent
starts and when it stops. It is unrelated to per-user sessions.
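Concretely, with the systemd units masked as above, the lifecycle
I'm after is just something like:

    # start the agent only when I'm about to need it
    gpg-agent --daemon --allow-preset-passphrase

    # ... preset the passphrase, let the other hosts connect ...

    # and shut it down again when I'm done
    gpgconf --kill gpg-agent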
> Without going into detail on your many questions, it sounds to me like
> your main concern has to do with pinentry not seeming well-matched to
> the way that you connect to the machines you use, and the way you expect
> user interaction to happen.
That's true for some of the use case problems I'm having but not
with this one. I could use pinentry-curses here because it's
happening over an ssh connection in an xterm, not inside a gvim
window where curses doesn't work. But I'm happy to keep using
gpg-preset-passphrase.
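(If I did want pinentry-curses for this case, my understanding is
that it's just a line in ~/.gnupg/gpg-agent.conf, something like:

    pinentry-program /usr/bin/pinentry-curses

but as I said, gpg-preset-passphrase suits this use case better.)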
I think the real problem with this use case is that the incoming
ssh connections from the other hosts are starting their own
gpg-agent (via the S.gpg-agent.ssh socket, I'm guessing) rather
than just connecting to the existing gpg-agent that I have put
the passphrase into (which, I'm guessing, listens on the
S.gpg-agent socket).
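One thing I can check (assuming gpgconf on 2.1.18 behaves the way
I think it does) is where gpg expects the agent sockets to be:

    gpgconf --list-dirs agent-socket
    gpgconf --list-dirs agent-ssh-socket

and then compare that with the socket that my manually started
gpg-agent is actually listening on.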
> Let me ask you to zoom out a minute from the specific details you're
> seeing and try to imagine what you *want* -- ideally, not just in terms
> of what you've done in the past.
What I want *is* what I've done in the past. That's why I did it. :-)
> for example, do you really want to have keys stored on a remote machine,
> or do you want them stored locally, with the goal of being able to *use*
> them remotely? do you want to be prompted to confirm the use of each
> private key? do you expect that confirmation to include a passphrase
> entry? how do you conceive of your adversary in this context? are you
> concerned about leaking private key material? auditing access? some
> other constraints?
>
> --dkg
For the purposes of this use case, all the hosts are "remote".
That is, none of this is happening on the host that I have
physically in front of me. They are all servers of different
kinds.
What I want is to have gpg and encrypted data and a
key-with-a-strong-passphrase on a small number of servers and
then, when needed and only when needed, I want to be able to
enable unassisted decryption by the uid that owns the
data/keys/gpg-agent. Other hosts that need access to the
decrypted data need to be able to ssh to the host that has
gpg/keys/data to get that data without my interaction.
I need to be able to ssh to the server with gpg/keys/data to set
things up. Then I need to be able to log out without gpg-agent
disappearing. Then the other servers need to be able to ssh to
that server and use the gpg-agent that I prepared earlier so as
to decrypt the data. Then I need to be able to ssh back in and
turn off gpg-agent.
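The crucial step is the middle one: each of the other servers
effectively just runs something like (names and paths made up):

    ssh datauser@keyhost gpg --batch --quiet --decrypt /path/to/secrets.gpg

and that only works if that gpg invocation talks to the gpg-agent
I prepared, rather than starting a fresh one of its own.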
The big picture is that there are some publicly accessible
servers that need access to sensitive data (e.g. database
passwords and symmetric encryption keys and similar) that I
don't want stored on those servers at all. Instead there are
service processes that fetch the data from a set of several
other servers that are not publicly accessible. This fetching
of data only needs to happen when the publicly accessible
servers reboot or when the data fetching services are
restarted/reconfigured.
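On the publicly accessible side, the fetched secrets only ever
live in memory, e.g. something along the lines of (entirely
made-up names):

    DB_PASSWORD="$(ssh datauser@keyhost gpg --batch --quiet --decrypt /etc/secrets/db-password.gpg)"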
So, in answer to your questions:
> do you really want to have keys stored on a remote machine or do you
> want them stored locally, with the goal of being able to *use* them
> remotely?
I don't want the keys stored locally on my laptop. I don't want
the keys stored on the publicly accessible remote hosts where
the data is ultimately needed. I want to store and use the keys
on a different set of non-publicly accessible remote hosts.
> do you want to be prompted to confirm the use of each private key?
> do you expect that confirmation to include a passphrase entry?
No. The private key will be used four times for each host that
reboots. I don't want to have to be there to physically confirm
each use of the private key (or enter the passphrase each time).
After all, those uses may well happen at the same time and from
within ssh connections that I have nothing to do with. That would
be similar to my ansible use case.
I want to be able to enter the passphrase once (on each of the
gpg/data/key hosts) before I reboot the publicly accessible
hosts, and I want that to be sufficient to enable multiple
incoming ssh connections from the rebooting hosts to get what
they need, and when the hosts have successfully rebooted I want
to be able to turn off gpg-agent.
If you prefer, the confirmation of the use of private keys is me
entering the passphrase into gpg-agent before the other hosts
make their ssh connections.
> how do you conceive of your adversary in this context?
> are you concerned about leaking private key material?
> auditing access? other constraints?
I'm concerned about everything. Physical theft of servers,
hackers, you name it. There are many, many defenses in place but
I have to assume that someone might be able to get past them
all. So making things as hard as possible for attackers is the
way to go. It seems like a good idea not to have the sensitive
data on the publically accessible hosts at all except in memory.
Someone suggested using gpg-agent forwarding, but that (and the
first of your questions above) seems to imply that the keys are
expected to be stored locally, with access to them made available
to gpg processes on other hosts that a human user has connected
to, in much the same way as ssh-agent forwarding works. That is
not at all what I want to
happen. My local laptop should have nothing to do with any of
this except that it is where I ssh from to get everywhere else.
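(For reference, my understanding is that agent forwarding would
mean something like this in ~/.ssh/config on my laptop, forwarding
my local agent's "extra" socket to the remote host:

    Host keyhost
        RemoteForward /run/user/UID/gnupg/S.gpg-agent /run/user/UID/gnupg/S.gpg-agent.extra

with StreamLocalBindUnlink enabled in the remote sshd_config; the
UIDs and paths are placeholders. That's the model I'm saying does
not fit here.)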
Also, for redundancy purposes, the data and keys need to be
stored on multiple servers in different locations. Even if I
consider those servers to be "local", it's still not what I want,
because forwarding assumes that it's the server with the keys that
connects out to the other servers holding data to be decrypted
with those keys. In my case it's the reverse: those other servers
make the connections to the server with the keys (and the data).
I don't want their rebooting to be delayed by my
having to log in to each of them with a passphrase or a
forwarded gpg-agent connection. I want them to make the
connection by themselves as soon as they are ready to, obtain
the data they need, and continue booting up.
I'm not sure I understand your reasons for asking all these
questions. Is it that you don't think that what I want to do is
still possible with GnuPG 2.1+, and are you trying to convince me
to fundamentally change what I'm doing?
I don't want to fundamentally change what I'm doing. I don't
have the time (unless there really is no alternative). I just
wanted to upgrade my servers from debian8 to debian9. I had no
idea this was going to happen.
Can incoming ssh connections use the existing gpg-agent that I
have already started and preset with a passphrase or not? Does
anyone know?
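The test I have in mind, if it would settle it, is to preset the
passphrase, then ssh in from one of the other hosts and, in that
session, run something like:

    gpg-connect-agent --no-autostart 'getinfo pid' /bye

and see whether the PID it reports matches the gpg-agent that I
started (I gather that without --no-autostart, gpg-connect-agent
may quietly launch a new agent, which would muddy the result).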
Is continuing to use gpg1 indefinitely an option? Will it
continue to work with recent versions of gpg-agent?
Debian says that gpg1 is deprecated but I've read that gpg1 is
now mostly only useful for embedded systems (or servers). Since
IoT and servers will never go away, does that mean that gpg1
will never go away? I'd be happy to keep using gpg1 if I knew
that it wouldn't go away and if I knew that it would keep
working with recent versions of gpg-agent.
cheers,
raf