IETF-SSH archive


Re: Why SFTP performance sucks, and how to fix it



> I don't think the optimistic key exchange concept is difficult to
> implement.  I am, however, bothered that only one kex attempt can be
> made per-connection (whereas many authentication attempts can be made
> per-connection).

In looking at the transport draft for the word ``disconnect'', it
seems that the places where disconnection is explicitly a MUST in
connection with key exchange are the places where there are no
algorithms in common at all.  The idea apparently is that you list all
the algorithms you support, because you either support a particular
key exchange algorithm or you don't, you either are willing to use a
particular host key algorithm or you aren't, and so on.

But I don't see any reason why we would need to define a different
protocol version number if we wanted to offer the option of trying a
second round of key exchange when doing initial keying.  A Secure
Shell implementation that wants to do this could simply use
SSH_MSG_DEBUG rather than SSH_MSG_DISCONNECT for error reporting, and
then attempt another round of key exchange on the assumption that the
peer will accept it.  Any
implementation that follows the current draft will presumably decide
that it's got a fatal error when it sees a key exchange message in the
wrong place, and will fall back to the old behavior of simply
terminating the connection.
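
To make that concrete, here is a minimal sketch of the client-side
control flow, assuming hypothetical callbacks attempt_kex(),
send_debug(), and disconnect() (stand-ins for whatever an
implementation actually uses, not APIs from any real SSH library):
report the failure with SSH_MSG_DEBUG, optimistically run a second
round, and only disconnect if that also fails.

# Hypothetical sketch, not a real SSH implementation.  The callbacks
# stand in for whatever an implementation uses to run one key-exchange
# round, send SSH_MSG_DEBUG, and send SSH_MSG_DISCONNECT.

def initial_key_exchange(attempt_kex, send_debug, disconnect,
                         preferred_params, fallback_params):
    """Try the normal exchange first; on failure, report it with
    SSH_MSG_DEBUG rather than SSH_MSG_DISCONNECT and retry once."""
    keys = attempt_kex(preferred_params)
    if keys is not None:
        return keys

    # Optimistic retry: a peer that follows the current draft will have
    # treated the failure as fatal and dropped the connection already,
    # in which case the retry simply fails too.
    send_debug("initial key exchange failed; retrying")
    keys = attempt_kex(fallback_params)
    if keys is not None:
        return keys

    disconnect("key exchange failed after retry")


if __name__ == "__main__":
    # Toy demonstration: the first attempt fails, the fallback succeeds.
    def fake_kex(params):
        return {"params": params} if params == "fallback" else None

    def fake_debug(msg):
        print("SSH_MSG_DEBUG:", msg)

    def fake_disconnect(reason):
        raise ConnectionError(reason)

    print(initial_key_exchange(fake_kex, fake_debug, fake_disconnect,
                               "preferred", "fallback"))
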

And when rekeying, you could simply keep using the old keys if
rekeying fails, and rekey early enough that you'll be able to retry a
few times before your keys actually need to expire.
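
As a back-of-the-envelope sketch of that scheduling argument (the
limits below are made-up numbers for illustration, not anything from
the drafts): start rekeying far enough before the hard limit that a
few failed attempts still leave time to succeed.

# Hypothetical illustration of the rekey-early idea; the specific
# limits are invented for the example, not taken from any draft.

HARD_KEY_LIFETIME = 3600.0   # seconds after which the old keys must expire
REKEY_ATTEMPT_TIME = 30.0    # generous estimate for one key-exchange round
DESIRED_RETRIES = 3          # failed attempts we want to be able to absorb

def rekey_start_time(keys_created_at):
    """Start rekeying early enough that the first attempt plus
    DESIRED_RETRIES retries all fit before the old keys expire."""
    margin = (DESIRED_RETRIES + 1) * REKEY_ATTEMPT_TIME
    return keys_created_at + HARD_KEY_LIFETIME - margin

# Keys created at t=0 -> start rekeying at t=3480, leaving room for
# four attempts before t=3600.
print(rekey_start_time(0.0))
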

> On the other hand, the key exchange negotiation part of the protocol and
> how it factors into the session ID generation is a crucial aspect of the
> protocol's security, and so keeping it simple/stupid is actually quite a
> reasonable thing to do.

Unless you can get into a state where the server thinks that key
exchange was successful and the client thinks it was unsuccessful, or
vice versa, I'm not sure why this actually becomes a problem.

The places where key exchange can fail when both sides support the
same algorithms, assuming the connection doesn't break and there is no
man in the middle, are:

1) GSSAPI failures, which generally are going to happen early in the
   process, before you successfully transfer your large numbers (since
   the public Diffie-Hellman numbers get encrypted with GSSAPI).

2) The client deciding that it doesn't trust a particular public key.
   This is more annoying in some ways, since the public key is sent at
   the end of the process, and the server has already done all the
   computation it is going to do for key exchange by the time the
   client is able to figure out that there is a problem.  In
   particular, I think a server could send SSH_MSG_NEWKEYS right after
   it sends the message containing its public key and the signature,
   at which point, if the client decides it doesn't trust that key,
   you have a random untrusted encryption key encrypting one direction
   of the second attempt at initial key exchange.

   However, if the rule is that the session ID used for the session by
   the userauth protocol is taken from the first key exchange which
   resulted in both the client and the server sending SSH_MSG_NEWKEYS,
   I don't see where this is a problem.
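
   Put in terms of the bookkeeping an implementation might do (a
   hypothetical sketch, not text from any draft): the session ID is
   captured exactly once, and only when both SSH_MSG_NEWKEYS messages
   have actually been sent, so an aborted attempt never pins the ID to
   untrusted keys.

# Hypothetical sketch of the rule above: the session ID used by the
# userauth protocol is frozen at the first key exchange in which both
# sides sent SSH_MSG_NEWKEYS, and later exchanges never change it.

class ConnectionState:
    def __init__(self):
        self.session_id = None    # fixed for the lifetime of the connection
        self.current_keys = None

    def kex_finished(self, exchange_hash, new_keys,
                     we_sent_newkeys, peer_sent_newkeys):
        if not (we_sent_newkeys and peer_sent_newkeys):
            # Aborted attempt: no keys taken into use, session ID
            # untouched, so the exchange can simply be retried.
            return
        self.current_keys = new_keys
        if self.session_id is None:
            # Only the first fully completed exchange defines the ID
            # that userauth signatures are computed over.
            self.session_id = exchange_hash
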




