IETF-SSH archive


Re: OpenSSH certified keys



Good points, Nicolas.

More often than not, however, the quick and dirty implementation will 
succeed in the market simply because it's there and gets the job 
approximately done while the rigorous solution is still in its design 
phase.

The ubiquity of Unix-like computer architecture is testament to this. 
Something like Smalltalk would have been rigorous and better, but the 
quick and dirty Unix was there and got the job approximately done 
sooner. The market share of Windows is another bit of evidence.

The question is whether users will be satisfied enough with the OpenSSH 
certified keys format that we'll all have to implement it. If that 
happens, it is in our interest to help the OpenSSH team fix any spec 
problems before we have to implement something even more suboptimal than 
it needs to be.

I would hope that users will not be satisfied with this solution and 
will demand a more rigorous implementation, but that remains to be seen.

denis


----- Original Message ----- 
From: "Nicolas Williams" <Nicolas.Williams%sun.com@localhost>
To: "Damien Miller" <djm%mindrot.org@localhost>
Cc: "denis bider \(Bitvise\)" <ietf-ssh2%denisbider.com@localhost>; 
<ietf-ssh%NetBSD.org@localhost>
Sent: Wednesday, March 17, 2010 12:17
Subject: Re: OpenSSH certified keys


On Wed, Mar 17, 2010 at 10:58:13AM +1100, Damien Miller wrote:
> On Tue, 16 Mar 2010, denis bider (Bitvise) wrote:
> > > For OpenSSH, revocation is implemented as a simple list
> > > of banned keys. [...]
> >
> > does this list of banned keys need to be manually deployed to each
> > server, or does OpenSSH support a protocol for retrieving this list
> > remotely?
> >
> > [...]
>
> The list of revoked keys is just a file on disk and OpenSSH implements
> no special means to update it (so far). We could define some
> distribution service or an online protocol to look up key revocation
> status, possibly as an SSH subsystem.
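
For concreteness, the "simple list of banned keys" above amounts to the
server comparing the presented public key against a flat file it reads
locally. A minimal sketch of such a check in Python, assuming one
"type base64 [comment]" key per line (illustrative, not necessarily
OpenSSH's exact on-disk format):

    # Load a flat file of banned public keys, one per line.
    def load_revoked_keys(path):
        revoked = set()
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    # Keep only "type base64", drop any trailing comment.
                    revoked.add(" ".join(line.split()[:2]))
        return revoked

    # True if the presented "type base64 ..." key is on the banned list.
    def is_revoked(presented_key, revoked):
        return " ".join(presented_key.split()[:2]) in revoked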

Why repeat mistakes of the past?  What is the point of this PKI scheme
other than "ASN.1/BER/x.500/... is too complex"?  Why should anyone do
any work to help standardize a protocol that improves on its
predecessors in only one [debatable] respect while taking steps
backwards in several important ones?

Areas of this protocol's design that repeat mistakes of the past or
where new mistakes are made:

 - extensibility and criticality (see the sketch after this list)
 - hash collision attacks
 - revocation
 - hierarchy/bridging
 - separation of online vs. off-line keys
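
To make the first item concrete: the usual criticality rule (familiar
from X.509) is that a verifier must reject a certificate carrying a
critical option it does not understand, while unknown non-critical
extensions may be ignored. A small Python sketch of that rule; the
option names and the (name, critical) representation are illustrative,
not the certificate's wire encoding:

    KNOWN_OPTIONS = {"force-command", "source-address"}

    # options: iterable of (name, is_critical) pairs already parsed
    # out of the certificate.
    def check_options(options):
        for name, is_critical in options:
            if name not in KNOWN_OPTIONS:
                if is_critical:
                    raise ValueError("unknown critical option: " + name)
                # Unknown but non-critical: safe to ignore.
        return True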

And there's a bevy of sub-protocols missing, such as certificate
signing requests.  As for your protocol's simplicity, that's ephemeral
-- or do you believe that your protocol can remain simple enough for
your tastes should it become popular?

The arguments over ASN.1's and BER/DER/CER's complexity are specious.
ASN.1's syntax is hard to parse, but it's also easy to come up with
alternatives that parse well.  XER proves that by and large ASN.1 and
XML schemas have equivalent expressive power.  PER can be seen as a
byte-aligned variant of XDR, with slight differences in the handling of
optional sequence elements and choice selection.  I'd argue that
redundancy of encoding is the main source of vulnerabilities in
encoding/decoding code, and that redundancy/verbosity is a problem for
XML as well as BER, yet XML is all the rage.  Perceived simplicity
clearly matters for a lot of implementors, but that should not excuse
the repetition of mistakes outside the choice of syntax and encoding.
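
For comparison, the SSH wire format itself is nearly free of that
redundancy: a "string" is a uint32 length followed by exactly that many
bytes (RFC 4251), so the decoder's whole job is to bounds-check the
declared length. A minimal Python sketch of that parse:

    import struct

    # Parse an RFC 4251 "string" (uint32 length + payload) at
    # buf[offset:], returning (value, next_offset). The checks keep a
    # hostile length field from reading past the end of the buffer.
    def parse_ssh_string(buf, offset=0):
        if offset + 4 > len(buf):
            raise ValueError("truncated length field")
        (length,) = struct.unpack(">I", buf[offset:offset + 4])
        start = offset + 4
        if length > len(buf) - start:
            raise ValueError("declared length exceeds remaining data")
        return buf[start:start + length], start + length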

Nico
-- 