tech-userlevel archive
Re: interactive shell detection in shrc
Date: Thu, 10 Oct 2024 09:16:08 -0700
From: "Greg A. Woods" <woods%planix.ca@localhost>
Message-ID: <m1syvpg-0036s2C@more.local>
| As I wrote in my other message yesterday I've given up entirely on
| trying to test for interactivity without using $-.
Yes, I saw that, and I think that's the right approach.
| I have discovered (by serendipity, having been reading the original Ash
| code to verify that it was setting $- with 'i' when necessary) that 's'
| is also set when the shell is reading from stdin, so testing for 's' in
| $- and verifying that stdin is a tty would also work to identify
| interactivity.
Perhaps, but I suspect that's considerably less portable than just
using -i ... especially as shells have now been directed to include 'i'
in $- when they consider themselves interactive. Do be aware that
"interactive" has a special meaning to the shell: it alters the way the
shell works internally, particularly in relation to various errors.
It's not just about whether the shell is dealing with a human rather
than a script.
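A minimal sketch of the $- approach being discussed (the helper function
name is mine, not from the thread):

```shell
# Hypothetical helper: succeeds only when the shell put 'i' in $-,
# i.e. when the shell itself considers it is interactive.
is_interactive() {
    case $- in
    *i*) return 0 ;;
    *)   return 1 ;;
    esac
}

if is_interactive; then
    PS1='my-prompt$ '    # prompt setup, aliases, etc. go here
fi
```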
| BTW, I see the original Bourne shell also tested stdin and _stdout_ when
| deciding if it was interactive or not. The change to test stderr must
| have come from AT&T Ksh.
Probably. I disagree with a lot of what ksh changed, but that one was
correct; it allows things like
sh | tee output
to continue to act as if one had just run
sh
except that a copy of all the output is saved for later (except
stderr, ie: error messages, so the saved output isn't
cluttered with odd "typo: command not found" messages and
the like, which for some of us is a big help!)
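To make the stderr point concrete, here's a small sketch (the file name
is just an example): stdout flows through the pipe into tee's file,
while stderr bypasses the pipe entirely.

```shell
# stdout goes through the pipe to tee; stderr does not, so the saved
# copy stays clean.  (stderr is discarded here to keep the demo quiet;
# normally it would appear directly on the terminal.)
saved=$(mktemp)
{ echo 'normal output'; echo 'typo: command not found' >&2; } \
    2>/dev/null | tee "$saved" >/dev/null
cat "$saved"      # the file contains only: normal output
rm -f "$saved"
```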
| Ah ha! I had forgotten about how '-c' works with extra arguments!
I suspect that most people never knew.
| Even more fun the third argument can look like an option letter, but
| still just ends up in argv[0]:
|
| $ sh -c 'echo $0' -i
| -i
|
| This is different from how rsh and ssh work.
Not really - what's different is how -c works. Most people still
believe that -c is like (say) awk's -V option, or cc's -D (or -I),
where the option needs extra data, which comes either immediately after
the 'V' (or 'D' or 'I') in the same arg, or, if that char is the end of
the option arg (just "-V"), in the following arg.
But that's not how -c works at all. In getopt-style usage, the options
to sh would be (not including all of them, and ignoring the '+' variants)
"abcefo:svx"
where 'o' is the only one which needs extra data. What -c does is set a
flag which alters the interpretation of the first remaining arg after the
options are finished. So in your example 'echo $0' isn't an option
(doesn't start with '-') so the options end just before that one (you
could also use '--' between the -c and that string if you wanted).
Once the options are done, everything that follows is just a regular arg,
even if it starts with a '-', so there's nothing special there with the '-i'.
Note that usages like
sh -c -x 'echo $0'
or
sh -cx 'echo $0'
work fine, and always have (well, ignoring shells with bugs, which some
of them have had in this area, as not even all shell authors really
understood the way this worked, at one time anyway).
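A quick sketch of that behaviour (nothing here is specific to any one
shell; this is just POSIX sh):

```shell
# The first operand after the options becomes the command string;
# the next operand becomes $0, and any further operands become $1, $2, ...
out=$(sh -c 'echo "$0 $1 $2"' -i foo bar)
echo "$out"          # -i foo bar

# Options may be combined or separate before the command string;
# -x just adds tracing, which goes to stderr (discarded here):
out2=$(sh -cx 'echo hello' myname 2>/dev/null)
echo "$out2"         # hello
```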
| This part confuses me.... The manual suggests '-a', '-o', and
| parenthesis can be used to create expressions of arbitrary length and
| complexity, and surely I've done so for decades with multitudes of
| implementations with nary a problem.
Then you've been lucky, or have learned the tricks that people have
used to avoid (some of) the issues for ages. But I agree, our man
page more or less had the minimal possible change done to it, almost
just as a sop to POSIX usages - the whole thing (or at least a significant
part of it) needs to be rewritten, so we stop giving people the wrong
impression.
Eg: you'd expect that
test "$x" = "$y"
should tell you whether $x and $y are equal or not, and in any modern
test it does, always (as everyone, I hope anyway, obeys the POSIX rules
for the cases specified there). But not in the past, which has led
to scripts mostly doing things like
test "x$1" = "x$y"
and making test waste time comparing the (known equal) first
character, before going on to the rest of it.
That's because, in old versions of test, if you had x='(' y=')' and
used the first version, you'd get "true" as the result. To test it
appeared as
test ( string )
ie: redundant parentheses around a single-operand test, which is (rather,
was) a test for that single operand not being a null string, and since "="
is not empty, the result was true.
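The classic work-around, sketched out; a modern POSIX test handles the
unprefixed form correctly too, but old implementations parsed it as a
parenthesised single-operand test:

```shell
x='(' y=')'

# Unsafe on historical test(1): could be parsed as ( "=" ), i.e. a
# non-empty-string test on "=", which is true.
#     test "$x" = "$y"

# Safe everywhere: the prefix guarantees neither operand can look like
# an operator or a parenthesis.
if test "x$x" = "x$y"; then
    result=equal
else
    result=different
fi
echo "$result"       # different
```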
And that's just with 3 args. Add more of them, without protecting
in some way every single one whose value you don't already know, and
you can end up in a world of pain. For simple things like string
equality/inequality tests, prepending a char, so the operand cannot be
anything starting with a '-' and also can't be '(' ')' or '!', is easy
(as in the "insert x" ... I always used to use "%${x}%" ... thought it
looked neater!). But if you're doing
test -f "$file"
how can you protect "$file"? You can't prefix it with "/" as it might
be a relative pathname, and you can't prefix "./" as it might be an
absolute pathname. If you don't prefix it with anything, it might be -a
and look like test's conjunction operator; if you prefix it with
anything else, or add any kind of suffix, then you're testing whether
something other than $file is an existing regular file, and are very
likely guaranteeing a false response.
In a simple 2 arg usage like that, it isn't an issue, but if it is
embedded in a long list of conjunctions and alternations, you're
just hoping that test manages to guess at the parsing of the string
the way you intended it to be parsed. And in the worst cases, there
is simply nothing it is possible to do to avoid that (you might
think that adding ( ) around everything would help ... it doesn't,
when "$file" might be ')', for example).
You could do
case "$file" in
[./]*) ;;
*) file=./$file ;;
esac
before the test (or if you need $file unchanged, use another var
that starts as a copy of $file, and then test the other one).
But even that kind of trick doesn't work safely with numbers, or at
least not as easily (remember that numbers can be negative).
No-one should be required to do all of that, it's all absurd,
and in practice, no-one ever does. That can lead to having
scripts that can be fooled into doing other than they should, by
carefully crafting their args and environment.
This is why POSIX (very properly incidentally) added a formal definition
of how test is required to work, which, when combined with the shell's
&& || and ! operators (and reserved word, which is what '!' is) along
with the { and } reserved words for grouping, is sufficient to
write any logical expression you desire, without any ambiguity, and
with everything specified precisely - no matter what the variable strings
(operands to be tested) happen to be (just be sure to understand how &&
and || are defined in sh, they're not the same as in C).
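In that style, each test invocation gets at most the number of operands
POSIX fully specifies, and the shell itself does all the logic; a sketch
(the variable values are just for illustration):

```shell
x='(' y=')'

# One simple test(1) per condition; && || ! and { } belong to the
# shell, so no string value of $x or $y can confuse the parsing.
if test "$x" = "$y" || { test -n "$x" && ! test -z "$y"; }; then
    result=yes
else
    result=no
fi
echo "$result"       # yes
```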
| I agree POSIX does only require support for 4 arguments, but with the
| XSI extension any number is allowed; in any case I've never met a
| POSIX-minimal-only implementation of test(1).
No, those changes have only been around for about 20 years now, and it
takes longer than that to expunge the old uses from every script that
exists, anywhere. There are still scripts in NetBSD that use -a and -o
... every time I find one, I alter it, but there are so many (and people
occasionally still write more of them). Altering people's knowledge of
"how things work", once deeply ingrained, takes much, much longer.
So everyone's test(1) still handles all that old crap (even ours), and still
gets it wrong from time to time. One of these days I will make it start
issuing warnings whenever it sees an unspecified usage - and hope that all
the noise being produced will convince people to fix their scripts, and
learn better behaviour.
kre
ps: and of course, everything above applies equally to the '[' variant
of test, after the (required, no-op) final arg ']' has been removed.
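That is, these are the same utility; '[' just demands a closing ']'
operand, which is discarded before the expression is evaluated:

```shell
# The two spellings evaluate identically once ']' is stripped.
a=hello b=hello
[ "x$a" = "x$b" ]; r1=$?
test "x$a" = "x$b"; r2=$?
echo "$r1 $r2"       # 0 0
```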