We should not try to perform operations on an invalid DIR* stream.
Instead, we let the error message be printed and the return code be
set (existing behaviour), and abort afterwards.
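A minimal sketch of that pattern (illustrative only, not the actual
ls.c code; function and variable names are assumed):

#include <dirent.h>
#include <stdio.h>

static int ret = 0;

static void
lsdir(const char *path)
{
    DIR *dp;
    struct dirent *d;

    if (!(dp = opendir(path))) {
        perror(path); /* print the error message (sbase would use weprintf()) */
        ret = 1;      /* set the return code, as before                       */
        return;       /* abort here instead of touching the invalid DIR*      */
    }
    while ((d = readdir(dp)))
        puts(d->d_name);
    closedir(dp);
}

int
main(int argc, char *argv[])
{
    lsdir(argc > 1 ? argv[1] : ".");
    return ret;
}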
- After the first iteration, set first to 0 instead of !first.
- If Hflag || Lflag, then mkent used stat instead of lstat, so the
entity cannot be a symlink.
- Print path prefix along with directory name.
- In the 'if (Rflag)' block, just use 1 instead of Rflag.
This reverts commit bb83eade39.
This commit causes the loop through dents at the end of main to
continue past the end of the dents array, causing a crash when
called with multiple directory arguments.
Here's a better version of the patch.
When the R flag is used with a single directory, the given directory name is
omitted. With multiple directories, each directory name is listed.
Directories that start with './' and '../' are now also printed.
ls was using the (old) UNIX spec (struct stat).st_[acm]time.
It now uses the POSIX (struct stat).st_[acm]tim members (struct timespec),
which give time resolution in seconds and nanoseconds.
If two files have the same time in seconds, we extend the comparison to
nanoseconds.
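A rough sketch of that extended comparison (illustrative; the struct
members follow POSIX, the helper name is assumed):

#include <sys/stat.h>
#include <time.h>

/* Compare two timestamps with nanosecond resolution: fall back to
 * tv_nsec only when the seconds are equal. */
static int
tscmp(const struct timespec *a, const struct timespec *b)
{
    if (a->tv_sec != b->tv_sec)
        return (a->tv_sec < b->tv_sec) ? -1 : 1;
    if (a->tv_nsec != b->tv_nsec)
        return (a->tv_nsec < b->tv_nsec) ? -1 : 1;
    return 0;
}

/* e.g. tscmp(&st1.st_mtim, &st2.st_mtim) after stat()/lstat() */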
The entity arrays in main() were arrays of pointers to entities and
were not compatible with entcmp().
They have been changed to arrays of entities.
Thanks to Michael Forney <mforney@mforney.org> for having seen that.
Previously, entcmp was being passed struct entry **, when it expected
struct entry *.
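To illustrate the mismatch (a sketch, not the actual ls.c structures):
qsort() passes pointers to the array elements to the comparator, so an
array of struct entry hands the comparator struct entry *, whereas an
array of struct entry * would hand it struct entry **.

#include <stdlib.h>
#include <string.h>

struct entry { const char *name; }; /* stand-in for the real struct */

static int
entcmp(const void *va, const void *vb)
{
    const struct entry *a = va, *b = vb; /* struct entry *, not ** */

    return strcmp(a->name, b->name);
}

/* Correct with an array of entities:
 *     struct entry ents[32];
 *     qsort(ents, n, sizeof(*ents), entcmp);
 * With an array of pointers (struct entry *ents[32]) the comparator
 * would instead receive struct entry ** and the code above breaks. */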
Many autoconf-generated configure scripts use `ls -t` to determine whether or
not the system clock is behaving correctly. If the files are sorted in the
wrong order, configure produces an error:
checking whether build environment is sane... configure: error: newly created file is older than distributed files!
Check your system clock
The general convention is to use size_t to store sizes of all kinds.
Internally, the function uses double anyway, but at least this
doesn't clutter up the API any more, and there's a chance in the
future to make this function a bit cleaner and drop the dirty
static-buffer hack.
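A rough sketch of what such a helper might look like after the change
(the name and formatting details are assumed, not the exact sbase code):

#include <stdio.h>

/* size_t in the API, double only internally, result kept in a static
 * buffer (the "dirty static buffer hack" mentioned above). */
static char *
humansize(size_t n)
{
    static char buf[16];
    static const char unit[] = "BKMGTPE";
    double d = n;
    size_t i;

    for (i = 0; d >= 1024 && i < sizeof(unit) - 2; i++)
        d /= 1024;
    snprintf(buf, sizeof(buf), "%.1f%c", d, unit[i]);
    return buf;
}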
This has been a known issue for a long time. Example:
printf "word" > /dev/full
wouldn't report there's not enough space on the device.
This is due to the fact that every libc has internal buffers for
stdout which store fragments of written data until they reach a
certain size or some callback flushes them all at once to the
kernel.
You can force the libc to flush them with fflush(), and in case flushing
fails, you can check the return value of fflush() and report an error.
Previously, however, sbase didn't have such checks, and without fflush()
the libc silently flushes the buffers on exit without checking for errors.
To be fair, there's no way for the libc to report errors in the exit
path itself.
GNU coreutils solve this by having on-exit callbacks to handle the flushing
and report issues, but this approach has obvious deficiencies.
After long discussions on IRC, we came to the conclusion that checking the
return value of every I/O function would be a bit too much, and that having
a general-purpose fclose-wrapper would be the best way to go.
It turned out that fclose() alone is not enough to detect errors. The right
way to do it is to fflush(), check ferror() on the fp and then do an
fclose(). This is what fshut() does, and that's how it's done before each
return.
The return value is obviously affected: an error is reported in case a flush
or close failed, and the error state is also caught when reading failed for
some reason.
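A hedged sketch of such a wrapper (the real fshut() in libutil differs
in details such as the error messages):

#include <stdio.h>

/* Flush, check the error indicator, then close; report the first
 * failure and return non-zero if anything went wrong. */
static int
fshut(FILE *fp, const char *fname)
{
    int ret = 0;

    if (fflush(fp) && !ret) {
        fprintf(stderr, "fflush %s: failed\n", fname);
        ret = 1;
    }
    if (ferror(fp) && !ret) {
        fprintf(stderr, "ferror %s: stream error\n", fname);
        ret = 1;
    }
    if (fclose(fp) && !ret) {
        fprintf(stderr, "fclose %s: failed\n", fname);
        ret = 1;
    }
    return ret;
}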
The !!( ... + ...) construction is used to call all functions inside the
brackets rather than short-circuiting on the first failure.
We want errors to be reported, but there's no reason to stop flushing buffers
just because one file buffer has issues.
Obviously, the tool's functionality comes before the flush, and the ret-logic
comes after, so we don't exit early without reporting warnings if there are
any.
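For example, the tail of a tool's main() might look like this (a sketch
of the pattern, using the fshut() wrapper sketched above):

#include <stdio.h>

int fshut(FILE *, const char *); /* the wrapper sketched above */

int
main(void)
{
    int ret = 0;

    /* ... the tool's actual functionality runs first ... */

    /* '+' does not short-circuit the way '||' would, so both streams
     * are flushed and closed; !!(...) collapses the sum to 0 or 1. */
    ret |= !!(fshut(stdin, "<stdin>") + fshut(stdout, "<stdout>"));

    return ret;
}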
One more advantage of fshut() is that it is even able to report errors
on obscure NFS-setups which the other coreutils are unable to detect,
because they only check the return-value of fflush() and fclose(),
not ferror() as well.
pathconf() is just an insane interface to use. All sane operating
systems set sane values for PATH_MAX. Due to the runtime nature of
pathconf(), it actually weakens the programs that depend on its values.
Given that over 3 years it has still not been possible to implement a
sane and easy-to-use apathmax() utility function, and after discussing
this on IRC, we'll dump this garbage.
We are careful enough not to overflow PATH_MAX, and even if we did,
any user is able to set another limit in config.mk if they so desire.
After a short correspondence with Otto Moerbeek it turned out that
mallocarray() is only in the OpenBSD kernel, because the kernel
malloc doesn't have realloc.
Userspace applications should rather use reallocarray() with an
explicit NULL pointer.
Assuming reallocarray() will become available in C standard libraries
in the next few years, we nip mallocarray() in the bud to allow an easy
transition to a system-provided version when the day comes.
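In practice that looks like this (a sketch; reallocarray() is assumed
to come from the system or from libutil):

#include <stdlib.h>

void *reallocarray(void *, size_t, size_t); /* OpenBSD/libutil extension */

int
main(void)
{
    size_t n = 1000;
    int *p;

    /* Explicit NULL pointer: behaves like malloc(n * sizeof(*p)), but
     * keeps the nmemb * size overflow check of reallocarray(). */
    if (!(p = reallocarray(NULL, n, sizeof(*p))))
        return 1; /* out of memory or overflow */
    free(p);
    return 0;
}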
A function used only in the OpenBSD kernel as of now, but it surely
provides a helpful interface when you just don't want to have to make
sure the incoming pointer to erealloc() is really NULL so that it
behaves like malloc, making it a bit safer.
Talking about *allocarray(): It's definitely a major step in code-
hardening. Especially as a system administrator, you should be
able to trust your core tools without having to worry about segfaults
like this, which can easily lead to privilege escalation.
How do the GNU coreutils handle this?
$ strings -n 4611686018427387903
strings: invalid minimum string length -1
$ strings -n 4611686018427387904
strings: invalid minimum string length 0
They silently overflow...
In comparison, sbase:
$ strings -n 4611686018427387903
mallocarray: out of memory
$ strings -n 4611686018427387904
mallocarray: out of memory
The first out of memory is actually a true OOM returned by malloc,
whereas the second one is a detected overflow, which is not marked
in a special way.
Now tell me which diagnostic error-messages are easier to understand.
Stateless and I stumbled upon this issue while discussing the
semantics of read(), which accepts a size_t but can only return an
ssize_t, effectively lacking the ability to report successful
reads > SSIZE_MAX.
The discussion went on and we came to the topic of input-based
memory allocations. Basically, it was possible for the argument
to a memory-allocation function to overflow, leading to a segfault
later.
The OpenBSD guys came up with the ingenious reallocarray() function,
and I implemented it as ereallocarray(), which automatically
terminates the program with an error message on failure.
Read more about it here[0].
A simple testcase is this (courtesy of stateless):
$ sbase-strings -n (2^(32|64) / 4)
This will segfault before this patch and properly report an
out-of-memory condition afterwards (thanks to the overflow check in
reallocarray).
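For reference, the overflow check in reallocarray() boils down to
something like this (a sketch following the OpenBSD implementation
linked at [0]):

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* If either operand is at least 2^(sizeof(size_t) * 4), the product
 * might overflow, so perform the (relatively expensive) division
 * check; on overflow, fail with ENOMEM instead of wrapping around. */
#define MUL_NO_OVERFLOW ((size_t)1 << (sizeof(size_t) * 4))

void *
reallocarray(void *optr, size_t nmemb, size_t size)
{
    if ((nmemb >= MUL_NO_OVERFLOW || size >= MUL_NO_OVERFLOW) &&
        nmemb > 0 && SIZE_MAX / nmemb < size) {
        errno = ENOMEM;
        return NULL;
    }
    return realloc(optr, size * nmemb);
}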
[0]: http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man3/calloc.3