IETF-SSH archive


Re: fds beyond 0/1/2



> Anyway, I don't see a big problem with having the same flow control
> window for stdout and stderr.

Neither do I, but that's because of how they're used, not because
there's no problem in principle.  There are few-to-no programs that
depend on stderr getting through even when stdout is blocked by
back-pressure, and even fewer that depend on stdout working when stderr
is blocked by back-pressure.
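
Such a program would have to look something like the following sketch
(hypothetical - I don't know of a real tool built this way; it's only
to show the shape of the exception): a remote command with one thread
pumping bulk data to stdout and another ticking a liveness heartbeat
on stderr, with something at the client end watching the heartbeat.
Give the two streams separate windows and the heartbeat keeps flowing
while the stdout consumer is stalled; make them share one window and,
once the bulk data has consumed it, ssh has nowhere to put the
heartbeat and the watcher sees silence.

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* heartbeat thread: one line to stderr every second */
static void *heartbeat(void *arg)
{
	unsigned long n = 0;

	(void)arg;
	for (;;) {
		fprintf(stderr, "still alive: %lu\n", ++n);
		sleep(1);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;
	char chunk[8192];

	memset(chunk, 'x', sizeof(chunk));
	pthread_create(&t, NULL, heartbeat, NULL);
	/* bulk data: this write blocks whenever stdout is throttled */
	for (;;)
		if (write(1, chunk, sizeof(chunk)) < 0)
			return 1;
}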

> Not sure what to think about the multiple fd case, since it's unclear
> to me what the applications are.

Well, I certainly want it.  Some of the applications I have for it
would break if one flow's blocking wedged other flows in the same
direction.  To pick a simple example (so simple that it probably would
not be done in practice, but it illustrates the risk):

cat file1 | (exec 3<&0; cat file2 | ssh -fd-in 3 remote cmp - /dev/fd/3)

Now, let us suppose that cmp works like

while (there's data available) {
	read from file A
	read from file B
	compare
}
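
In C, a consumer of that shape might look like this (hypothetical
sketch, not real cmp - error handling and short-read alignment are
glossed over, since the only point is the order of the reads):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char bufa[4096], bufb[4096];
	ssize_t na, nb;
	int a, b;

	if (argc != 3) {
		fprintf(stderr, "usage: %s fileA fileB\n", argv[0]);
		return 2;
	}
	a = strcmp(argv[1], "-") ? open(argv[1], O_RDONLY) : 0;
	b = strcmp(argv[2], "-") ? open(argv[2], O_RDONLY) : 0;
	if (a < 0 || b < 0) {
		perror("open");
		return 2;
	}
	for (;;) {
		/* blocks here until input A's producer delivers something... */
		na = read(a, bufa, sizeof(bufa));
		/* ...so data queued on input B just sits there meanwhile */
		nb = read(b, bufb, sizeof(bufb));
		if (na <= 0 || nb <= 0)
			return (na == nb) ? 0 : 1;   /* EOF together: same */
		if (na != nb || memcmp(bufa, bufb, (size_t)na) != 0) {
			printf("inputs differ\n");
			return 1;
		}
	}
}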

Now suppose that "cat file1" runs first, and fills up pipes and
eventually blocks on back-pressure.  cmp is then sitting there trying
to read "cat file2"'s output, but it can't, because ssh can't send it:
"cat file1"'s data has eaten all the shared window space, and none of
it gets released until cmp reads /dev/fd/3 - which it won't, because
it's blocked on the other input.

If the producer programs are more complex than cat, this sort of timing
is not at all implausible - all that's required is that one program
produce enough output to block while the other produces nothing.

In the case of cmp, I probably wouldn't bother running it remotely, but
it could be any consumer; all that's needed is that it be unwilling to
buffer arbitrarily large amounts of output from the fast producer while
waiting for output from the slow producer.

/~\ The ASCII				  Mouse
\ / Ribbon Campaign
 X  Against HTML		mouse%rodents-montreal.org@localhost
/ \ Email!	     7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


