which we are not planning to include in sbase.
What's left to discuss is how we're going to handle them in the
tools (dump usage() or silently ignore them).
Having multibyte delimiters is not enough. For full flexibility,
the real deal is the possibility of cutting input lines on delimiters
of arbitrary length.
Given this functionality, it seems only reasonable to also add
support for resolving escapes.
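A rough sketch of what such field extraction boils down to (the
function name is made up for illustration and this is not the actual
sbase code; plain strstr(3) works here because the delimiter is
compared byte by byte, so its length doesn't matter):

    #include <stdio.h>
    #include <string.h>

    /* Illustrative sketch: print the n-th field (1-based) of line,
     * where fields are separated by the arbitrary-length
     * delimiter delim. */
    static void
    printfield(const char *line, const char *delim, size_t n)
    {
            size_t dlen = strlen(delim), field = 1;
            const char *p = line, *next;

            while ((next = strstr(p, delim))) {
                    if (field == n) {
                            fwrite(p, 1, (size_t)(next - p), stdout);
                            putchar('\n');
                            return;
                    }
                    p = next + dlen;
                    field++;
            }
            if (field == n) /* the last field runs to end of line */
                    puts(p);
    }

Called as printfield("foo::bar::baz", "::", 2), this prints "bar".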
Thanks to Truls Becken for making the suggestion and designing such a
flexible cut(1) implementation!
Now you can specify a multibyte delimiter for cut(1), which should
definitely be possible for the end user (Fuck POSIX).
Looking at GNU coreutils' cut(1)[0], which basically ignores the
difference between characters and bytes as well as the -n option,
and which is bloated as hell, one has to wonder why the coreutils
are still the default. This is insane!
Things like this personally keep me motivated to make sbase better
every day.
[0]: http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f=src/cut.c;hb=HEAD
NSFW! You have been warned.
One major milestone is to have the sbase tools support UTF-8.
Tools like cut(1) with the -n flag don't make sense otherwise.
And while the GNU coreutils cut(1) blatantly ignores such an
important aspect, we will not tolerate this madness and mark it
as a TODO in the main README.
Since most tools inherently support UTF-8 anyway, this just concerns
tools which mangle text or search through it in special ways.
and mark it as finished in the README.
One small rationale on the way the manpage is set up: looking at
the coreutils manpage, it does not lend itself to being a quick
reference guide, whereas I wrote this manpage to be short and concise
with regard to the information the advanced user needs.
No one needs an explanation of what an octal number is. That's not
within the scope of this manpage.
Also, nobody wants to read a block of text just to find out how
to build an octal mode string.
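(Case in point: each octal digit is just the sum of r=4, w=2 and x=1
for user, group and other respectively, so 755 reads rwxr-xr-x. One
sentence, done.)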
to mark tools considered finished.
Finished doesn't mean that work on these has stopped, but that these
programs are in a satisfying state according to the current suckless
coding practices. This includes having
1) a mandoc manpage
2) clean code
In most cases, 1) was the failing criterion. So, in the interest of
finishing more tools: if you want to contribute, well-written mandoc
manpages are very much appreciated.
This is one aspect which I think has blown up the complexity of many
tr implementations around today.
Instead of complicating the set-theory-based parser itself (it should
still rely on one rune per character, not multiple runes), I added a
preprocessor which scans the input for upcoming '\'s, reads what
follows, substitutes the real character at the backslash's index and
shifts the entire following array left so there are no "holes".
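Roughly, the preprocessing step looks like this (a simplified sketch
with an illustrative name, not the actual sbase code; it only handles
a handful of single-character escapes):

    /* Illustrative sketch: resolve simple backslash escapes in s,
     * in place. The real character is written at the backslash's
     * index and the remainder is compacted so no holes remain. */
    static void
    resolveescapes(char *s)
    {
            char *r, *w;

            for (r = w = s; *r; r++, w++) {
                    if (*r == '\\' && r[1]) {
                            r++;
                            switch (*r) {
                            case 'n': *w = '\n'; break;
                            case 't': *w = '\t'; break;
                            default:  *w = *r;   break;
                            }
                    } else {
                            *w = *r;
                    }
            }
            *w = '\0';
    }

Instead of literally shifting the whole tail on every hit, this
compacts the string with two pointers in a single pass, which has the
same effect.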
What is left to reflect on is what to do with octal sequences.
I have a local implementation here which works fine, but imho,
given that tr is already so focused on UTF-8, we might as well ignore
POSIX at this point and implement Unicode code-point escapes instead,
which are way more contemporary and future-proof.
Reading in \uC3A4 as an array of 0xC3 and 0xA4 is not the issue,
but I'm still struggling to find a way to turn it into a well-formed
byte sequence. Hit me with a mail if you have a simple solution for
that.
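For what it's worth, going from a code point to a well-formed byte
sequence is just a handful of shifts and masks. A minimal sketch,
assuming the escape has already been parsed into a numeric code point
(illustrative code, not taken from sbase):

    #include <stddef.h>

    /* Illustrative sketch: encode the code point cp as UTF-8 into
     * buf (at least 4 bytes). Returns the number of bytes written,
     * or 0 for invalid code points. */
    static size_t
    utf8encode(unsigned long cp, unsigned char *buf)
    {
            if (cp < 0x80) {
                    buf[0] = cp;
                    return 1;
            } else if (cp < 0x800) {
                    buf[0] = 0xC0 | (cp >> 6);
                    buf[1] = 0x80 | (cp & 0x3F);
                    return 2;
            } else if (cp < 0x10000) {
                    if (cp >= 0xD800 && cp <= 0xDFFF)
                            return 0; /* surrogates aren't valid */
                    buf[0] = 0xE0 | (cp >> 12);
                    buf[1] = 0x80 | ((cp >> 6) & 0x3F);
                    buf[2] = 0x80 | (cp & 0x3F);
                    return 3;
            } else if (cp < 0x110000) {
                    buf[0] = 0xF0 | (cp >> 18);
                    buf[1] = 0x80 | ((cp >> 12) & 0x3F);
                    buf[2] = 0x80 | ((cp >> 6) & 0x3F);
                    buf[3] = 0x80 | (cp & 0x3F);
                    return 4;
            }
            return 0;
    }

For example, the code point U+00E4 ('ä') comes out as exactly the
two bytes 0xC3 0xA4 mentioned above.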
It's standard behaviour to map a whole class of matched characters
to the last element of a given simple set2 instead of just passing
them through.
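In practice, the mapping step can boil down to something like this (a
sketch with illustrative names and a plain long as the rune type, not
the actual tr code):

    #include <stddef.h>

    /* Illustrative sketch: translate rune r according to set1/set2.
     * If set2 is shorter than set1, every excess element of set1
     * maps to set2's last element. */
    static long
    maprune(long r, const long *set1, size_t n1,
            const long *set2, size_t n2)
    {
            size_t i;

            for (i = 0; i < n1; i++)
                    if (set1[i] == r)
                            return set2[i < n2 ? i : n2 - 1];
            return r; /* not in set1: pass through unchanged */
    }

With set1 holding a whole expanded class and set2 holding a single
element, every member of the class ends up mapped to that one
element, which is exactly the behaviour described above.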
Also, error out more strictly when the user gives us bogus sets.