IETF-SSH archive


Re: An additional-auth mechanism for SSH to protect against scanning/probing attacks





On Wed, Dec 7, 2022 at 9:36 AM Peter Gutmann <pgut001%cs.auckland.ac.nz@localhost> wrote:
Jeffrey Hutzelman <jhutz%cmu.edu@localhost> writes:

>This doesn't work. The server cannot compute or validate such a challenge,
>because it almost never knows the password.

If it doesn't know the user's password, how does it authenticate them?

(If the answer is something like "It uses PAM" then that's usage scenario 1
which the draft specifically doesn't engage with).

Even in embedded scenarios, there's a pretty decent chance it does use PAM. But that's beside the point -- password authentication against a local account is nearly always done by validating the provided password against a salted hash.
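To make the point concrete, here is a minimal sketch (not from the draft; names are mine) of the usual scheme: the server stores only a salt and a derived hash, so it can verify a candidate password without ever holding the cleartext on file.

```python
# Sketch: validating a password against a stored salted hash.
# The server keeps (salt, digest) and discards the cleartext.
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor

def make_record(password: str) -> tuple[bytes, bytes]:
    """Create the stored record; only (salt, digest) is retained."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive from the candidate and compare in constant time."""
    probe = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, ITERATIONS)
    return hmac.compare_digest(probe, digest)

salt, stored = make_record("correct horse")
assert check_password("correct horse", salt, stored)
assert not check_password("wrong guess", salt, stored)
```

The upshot is exactly the objection above: a server holding only (salt, digest) cannot compute an HMAC keyed by the password itself.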


>Which public key?

The one public key that's used for auth by the one user.  If there's (say) two
users then try both in turn.

I'm generally against the "just try all the keys" approach, but it might be reasonable in this case, particularly since the server isn't actually doing any public key operations. However, that raises another issue -- as you've described it, the HMAC key is based on the public key itself, which is _not a secret_.
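A two-line sketch of why that matters (the key blob and challenge here are placeholders, not anything from the draft): since the HMAC key is public material, any observer who has the public key computes the identical tag.

```python
# Sketch: an HMAC keyed by a *public* key blob provides no secrecy --
# anyone holding the same public key derives the same tag.
import hashlib
import hmac

public_key_blob = b"ssh-ed25519 AAAA...example"  # hypothetical public key material
challenge = b"server-challenge"                  # hypothetical preauth challenge

server_tag = hmac.new(public_key_blob, challenge, hashlib.sha256).digest()
observer_tag = hmac.new(public_key_blob, challenge, hashlib.sha256).digest()
assert server_tag == observer_tag  # nothing in the computation is secret
```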

Actually, I see another negotiation problem here... As currently described, this requires the client to guess which auth method will be used and, for public key, which key, before it can send anything. SSH clients determine what auth methods are supported by querying the server, and determine which public key to use by offering keys to the server until it says one might work -- only then does either side do any public-key crypto operations. They also typically try auth methods in an order driven by security and convenience and don't prompt for a password until trying a method that needs one (this includes trying keyboard-interactive before password, because with the former the server drives the prompting, and a spurious password prompt will tend to confuse the user).

So, the typical client wants to try public key first and not prompt for a password unless/until public key fails. The typical embedded server, at least out of the box, uses password auth and doesn't have any public keys configured. Most clients are not used _exclusively_ to connect to servers on embedded devices; they're used for other things, too, and may be (often are) configured with multiple public keys. How is a client supposed to determine the right thing to do, given that the negotiation mechanism consists of the server hanging up on you and aggressively rate-limiting if you get the answer wrong?


>This needs algorithm negotiation,

Why?

Seriously, why?  I can't think of any reason why you'd need this.

Because as long as computing technology improves, cryptographic algorithms, including hashes, have a limited lifetime. Eventually SHA-256 will be too small. Of course, it might also eventually be broken. People will switch to something else, preferably before it becomes a large problem. People and/or vendors who don't switch will run afoul of regulation or auditing requirements, and/or be shamed in the press -- even though HMAC in general and this use in particular will probably still be reasonably safe.

You don't control the protocol version, this isn't part of the core protocol, and you can't have a flag day requiring all clients and servers to switch algorithms at once. That may not be possible even within one person's home network, with different device vendors updating their software at different times, often not mentioning a change like this. That user will use the same client to talk to all of them, and even if they can handle hand-configuring that client to use a different algorithm for preauth to each device, they'll find it inconvenient at best (and some clients won't make it easy to do at all).

So, in order for a client to choose the correct algorithm for a given server, there must be a negotiation mechanism. It doesn't have to be complicated -- just the server specifying an algorithm to use is sufficient. But it needs to be there, or there will be substantial interop problems the first time the algorithm needs to be updated.
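For illustration only (the algorithm names and wire details below are hypothetical, not anything specified in the draft): even the simplest form of this, where the server just names one algorithm and the client uses it, is enough to make a later SHA-256 to SHA-512 migration a non-event for clients.

```python
# Sketch: minimal server-names-the-algorithm negotiation for the preauth MAC.
import hashlib
import hmac

# Hypothetical registry of algorithm names a client understands.
SUPPORTED = {
    "hmac-sha2-256": hashlib.sha256,
    "hmac-sha2-512": hashlib.sha512,
}

def client_preauth(server_algo: str, key: bytes, challenge: bytes) -> bytes:
    """Compute the preauth tag with whatever algorithm the server named."""
    try:
        digestmod = SUPPORTED[server_algo]
    except KeyError:
        raise ValueError(f"unknown preauth algorithm: {server_algo}")
    return hmac.new(key, challenge, digestmod).digest()

# A server that later migrates to SHA-512 keeps working with any client
# that knows both names -- no per-device hand-configuration required.
tag = client_preauth("hmac-sha2-512", b"shared-key", b"challenge")
assert len(tag) == 64
```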

Massive amounts of time and effort were spent stamping out problems like that across the entire universe of Internet protocols when we first had to phase out algorithms like DES and MD5; let's please not make the mistake of introducing the problem all over again based on the assumption that current algorithms will never become obsolete.

-- Jeff

