security model
Robert J. Hansen
rjh at sixdemonbag.org
Sun Jul 13 07:36:28 CEST 2025
> Fair enough; my experience has all been on smaller scales. I have
> never had "a different unit" that handled some of the problem. That
> must be nice...
I was working in a DFIR lab environment where the company's attitude
was, "you guys are responsible for any security threats that get past
the rest of the corporate network and start knocking on your door."
(I once delivered a lecture on new developments in mobile phone
forensics and hooked up my personally-owned cellphone to show a data
recovery feature. It was then I discovered a Chinese document on my
phone which hadn't been there the night before: an attacker popped my
phone and did an excellent job cleaning up and concealing the intrusion,
but they forgot one file. The embarrassment of revealing to 100+ of your
professional peers that your phone got popped the night before and you
didn't notice will quickly teach you humility. :) )
> Persistence and evading detection are a tradeoff for the attacker.
They are not. Often, evading detection assists in persistence by
reducing the amount of evidence that might trigger a sysadmin's attention.
If you mean that persistence always increases the signature that must
be concealed, I'd mostly agree.
> Further, Mallory may not have the goals you assume. If Mallory is a
> professional, making the attack look like a low-skill smash and grab
> could, for example, be a strategy to avoid raising alarm at the
> targets that they *are* targets.
I don't understand. Executing a successful high-profile exfiltration
from a target which is sure to be spotted by the sysadmin ... avoids
letting the sysadmin know they've been targeted?
Targeted *by whom*, maybe. Changing TTPs (tactics, techniques, and
procedures) for misattribution purposes is definitely a thing. With that
change, I agree.
> Logically, the box is most likely to have about the same, except
> that a second copy of the same haul is valueless to Mallory.
Why should this be the default assumption? This is a computer, not a
cuneiform tablet: data is added and removed constantly. If it holds
valuable data right now, it's at least as likely to continue receiving
valuable data in the future.
> In short, there is no amount of persistence that can save Mallory's
> access once the target becomes aware of it and gets serious about
> kicking Mallory out.
Salt Typhoon would appear to be a counterexample.
> If Mallory expects to get back in just as easily six months from
> now, why leave something that an attentive admin might notice
Great question (zero sarcasm). My answer would be, "in the example you
give, the access is already persistent, so the persistence objective has
already been achieved for Mallory by the negligence of the sysadmin."
> Clients keep SFTP logs? Are you assuming that Mallory steals the
> user's password and then connects to sshd on the user's box to make
> off with the user's files?
Remote access, yes.
> I have been talking about attacks on clients, not servers, because
> GnuPG is most typically run on client nodes. Indeed, as far as I
> know, the PGP security model is focused on clients---the boxes
> physically used by the users. Servers are categorically untrusted
> in PGP.
In today's environment, you have to work really hard to even have a
meaningful network perimeter. I'm unconvinced it makes much sense
nowadays to talk about clean client/server distinctions at the machine
level. At the app level, maybe.