Documentation strings of most options used to contain a "\n" at the
end of each source line. When the option manager displayed these
strings, it treated each "\n" as a hard newline. On 80x24 terminals
however, the option description window has only 60 columns available
for the text (with the default setup.h), and the hard newlines were
often further apart than that, so the option manager wrapped the text
a second time, resulting in rather ugly output where long lines are
interleaved with
short ones. This could also cause the text to take up too much
vertical space and not fit in the window.
Replace most of those hard newlines with spaces so that the option
manager (or perhaps BFU) will take care of the wrapping. At the same
time, rewrap the strings in source code so that the source lines are
at most 79 columns wide.
In some options though, there is a list of possible values and their
meanings. In those lists, if the description of one value does not
fit in one line, then continuation lines should be indented. The
option manager and BFU are not currently able to do that. So, keep
the hard newlines in those lists, but rewrap them to 60 columns so
that they are less likely to require further wrapping at runtime.
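To illustrate the general case (the option wording below is made up,
not taken from the actual patch), a description string that used to be
written as

    N_("Whether to wrap long lines in the description window\n"
       "instead of letting them run past the right edge.")

is now written as

    N_("Whether to wrap long lines in the description window "
       "instead of letting them run past the right edge.")

so the displayed text contains no hard newlines and the widget is free
to wrap it at whatever width the terminal allows.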
test_search() was supposed to compare bookmark titles with
strcasestr(), but in commit 311d95358d
"bug 153, 1066: Convert bookmarks to/from UTF-8 when searching."
on 2009-02-08, I inadvertently changed that to strcasecmp(), even
while adding a comment about why strcasestr() is needed. strcasestr()
returns non-NULL if the strings match, and strcasecmp() returns
nonzero if they differ, so the search didn't work at all.
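In a nutshell (this fragment only illustrates the return-value
semantics; it is not the actual test_search() code):

    #define _GNU_SOURCE             /* strcasestr() is a GNU extension */
    #include <string.h>
    #include <strings.h>            /* strcasecmp() */

    /* Placeholder helper: returns 1 if @title matches @searched. */
    static int
    title_matches(const char *title, const char *searched)
    {
            /* Correct: case-insensitive substring search. */
            return strcasestr(title, searched) != NULL;

            /* The broken variant compared whole strings instead, and
             * its nonzero-on-difference result meant that only titles
             * *not* equal to the search string counted as matches:
             *
             *      return strcasecmp(title, searched);
             */
    }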
This reverts commit b94657869b.
I don't know where I got the idea that JS_SetErrorReporter affects the
entire JSRuntime, rather than only the provided JSContext. The people
on #jsapi say it has never worked that way.
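For the record, the call is per-context (the reporter body below is
just a placeholder):

    #include <jsapi.h>

    static void
    report_error(JSContext *ctx, const char *message, JSErrorReport *report)
    {
            /* ... log or display the message for this context ... */
    }

    static void
    setup_error_reporting(JSContext *ctx)
    {
            /* Has to be done for each JSContext that should report
             * errors; it does not carry over to other contexts of the
             * same JSRuntime. */
            JS_SetErrorReporter(ctx, report_error);
    }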
In utf8_to_jsstring, do not free the string that is passed to
JS_NewUCString if the latter is successful; if it is, SpiderMonkey
handles the memory from then on.
Use libc routines instead of ELinks's routines to allocate and free the
string so that ELinks's memory debugging code does not try to keep track
of it after it has been handed to SpiderMonkey.
This commit fixes a bug introduced in
97d72d15a0.
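The ownership rule, as a sketch (this assumes the old JSAPI behaviour
where JS_NewUCString() adopts the buffer on success; the variable
names are placeholders for the tail of utf8_to_jsstring):

    jschar *chars;
    JSString *str;

    chars = malloc(len * sizeof(jschar));   /* libc, not mem_alloc() */
    if (!chars) return NULL;
    /* ... decode the UTF-8 input into chars ... */

    str = JS_NewUCString(ctx, chars, len);
    if (!str) {
            free(chars);            /* failure: the buffer is still ours */
            return NULL;
    }
    /* SpiderMonkey now owns chars; freeing it here, or letting
     * ELinks's memory debugging keep tracking it, would be wrong. */
    return str;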
This makes the option manager display it much better on an 80x24
terminal.
Alternatively, the "\n" newline characters within paragraphs could
have been removed entirely. ELinks would then have line-wrapped the
text to the appropriate width in the info window of the option
manager, but unfortunately not in --config-help.
AFAIK, all bugs in it have been fixed. Some bugs may still be lurking,
but they are more likely to get caught if compression is enabled.
I also replaced COMP_NOTE with static text because xgettext does not
support macros in the argument of N_.
(cherry picked from commit 3a9b5d091d)
Check the return value of get_opt_rec on "document.browse.search.regex"
before dereferencing it. The option is not there if regular expression
support is disabled at build time.
This commit fixes a bug introduced in commit
b2d51c75ff0d6c52a4f6a2761801beb641cba3a2.
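The guard, roughly (the surrounding variable and field names are
assumptions):

    struct option *opt = get_opt_rec(config_options,
                                     "document.browse.search.regex");
    /* Without regular expression support the option simply is not in
     * the tree, so fall back to a plain text search. */
    int use_regex = opt ? opt->value.number : 0;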
In bug 1067, dom_rss_pop_document() freed a node with done_dom_node()
even though call_dom_node_callbacks() was still using that node. This
made call_dom_node_callbacks() read a function pointer from beyond the
end of an array and call that. Add assertions to detect out-of-range
node types, and comments to warn about the bug.
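The added checks look roughly like this (the array and macro names
follow ELinks conventions but should be treated as assumptions here):

    /* node->type indexes an array of per-type callbacks; a node that
     * has been freed or corrupted can carry a type past its end. */
    assert(node->type < DOM_NODES);
    if_assert_failed return;

    callback = callbacks[node->type];
    if (callback)
            callback(stack, node, data);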
Debian libmozjs-dev 1.9.0.4-2 has JS_ReportAllocationOverflow but
js-1.7.0 reportedly hasn't. Check at configure time whether that
function is available. If not, use JS_ReportOutOfMemory instead.
Reported by Witold Filipczyk.
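At the call site this boils down to a preprocessor fallback (the
HAVE_ macro stands for whatever symbol the configure check defines):

    #ifdef HAVE_JS_REPORTALLOCATIONOVERFLOW
            JS_ReportAllocationOverflow(ctx);
    #else
            /* Less specific, but present in older SpiderMonkey. */
            JS_ReportOutOfMemory(ctx);
    #endif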
I didn't read the code of the TRE library, but I suppose it will work
fine when wchar_t and unicode_val_T are the same size.
[ From bug 1060 attachment 508. --KON ]
When the user tells ELinks to search for a regexp, ELinks 0.11.0
passes the regexp to regcomp() and the formatted document to
regexec(), both in the terminal charset. This works OK for unibyte
ASCII-compatible charsets because the regexp metacharacters are all in
the ASCII range. And ELinks 0.11.0 doesn't support multibyte or
ASCII-incompatible (e.g. EBCDIC) charsets in terminals, so it is no
big deal if regexp searches fail in such locales.
ELinks 0.12pre1 attempts to support UTF-8 as the terminal charset if
CONFIG_UTF8 is defined. Then, struct search contains unicode_val_T c
rather than unsigned char c, and get_srch() and add_srch_chr()
together save UTF-32 values there if the terminal charset is UTF-8.
In plain-text searches, is_in_range_plain() compares those values
directly if the search is case sensitive, or folds them to lower case
if the search is case insensitive: with towlower() if the terminal
charset is UTF-8, or with tolower() otherwise. In regexp searches
however, get_search_region_from_search_nodes() still truncates all
values to 8 bits in order to generate the string that
search_for_pattern() then passes to regexec(). In UTF-8 locales,
regexec() expects this string to be in UTF-8 and can't make sense of
the truncated characters. There is also a possible conflict in
regcomp() if the locale is UTF-8 but the terminal charset is not, or
vice versa.
Rejected ways of fixing the charset mismatches:
* When the terminal charset is UTF-8, recode the formatted document
from UTF-32 to UTF-8 for regexp searching. This would work if the
terminal and the locale both use UTF-8, or if both use unibyte
ASCII-compatible charsets, but not if only one of them uses UTF-8.
* Convert both the regexp and the formatted document to the charset of
the locale, as that is what regcomp() and regexec() expect. ELinks
would have to somehow keep track of which bytes in the converted
string correspond to which characters in the document; not entirely
trivial because convert_string() can replace a single unconvertible
character with a string of ASCII characters. If ELinks were
eventually changed to use iconv() for unrecognized charsets, such
tracking would become even harder.
* Temporarily switch to a locale that uses the charset of the
terminal. Unfortunately, it seems there is no portable way to
construct a name for such a locale. It is also possible that no
suitable locale is available; especially on Windows, whose C library
defines MB_LEN_MAX as 2 and thus cannot support UTF-8 locales.
Instead, this commit makes ELinks do the regexp matching with regwcomp
and regwexec from the TRE library. This way, ELinks can losslessly
recode both the pattern and the document to Unicode and rely on the
regexp code in TRE decoding them properly, regardless of locale.
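In outline, the matching now goes through TRE's wide-character entry
points (the header name, flags, and error handling below are
simplified assumptions; the real code also has to map match offsets
back to positions in the document):

    #include <wchar.h>
    #include <tre/regex.h>  /* or <regex.h>, depending on the install */

    static int
    pattern_found(const wchar_t *pattern, const wchar_t *text,
                  int case_sensitive)
    {
            regex_t regex;
            regmatch_t match;
            int found;

            if (regwcomp(&regex, pattern,
                         REG_EXTENDED | (case_sensitive ? 0 : REG_ICASE)))
                    return 0;       /* the pattern did not compile */

            /* rm_so/rm_eo in match are character offsets, not bytes. */
            found = (regwexec(&regex, text, 1, &match, 0) == 0);
            regfree(&regex);
            return found;
    }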
There are some possible problems though:
1. ELinks stores strings as UTF-32 in arrays of unicode_val_T, but TRE
uses wchar_t instead. If wchar_t is UTF-16, as it is on Microsoft
Windows, then TRE will misdecode the strings. It wouldn't be too
hard to make ELinks convert to UTF-16 in this case, but (a) TRE
doesn't currently support UTF-16 either, and it seems possible that
wchar_t-independent UTF-32 interfaces will be added to TRE; and (b)
there seems to be little interest in using ELinks on Windows anyway.
2. The Citrus Project apparently wanted BSD to use a locale-dependent
wchar_t: e.g. UTF-32 in some locales and an ISO 2022 derivative in
others. Regexp searches in ELinks now do not support the latter.
[ Adapted to elinks-0.12 from bug 1060 attachment 506.
Commit message by me. --KON ]
In src/bookmarks/dialogs.c, do_add_bookmark() gets the title and URL
in the terminal charset and needs to know which one that is. When a
bookmark is being added, save the struct terminal * to dialog.udata2
and read the charset from there. When a bookmark is being edited,
dialog.udata2 is needed for the struct bookmark *, but there we always
have the parent struct dialog_data * in dialog.udata and can get the
terminal from that.
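In rough outline (the field names follow ELinks's dialog structures as
described above and should be read as assumptions; "editing" is a
placeholder for however the two cases are told apart):

    struct terminal *term;

    if (editing) {
            /* udata2 holds the struct bookmark * being edited, but
             * udata has the parent struct dialog_data *. */
            struct dialog_data *parent = dlg->udata;

            term = parent->win->term;
    } else {
            /* When adding, the caller stored the terminal directly. */
            term = dlg->udata2;
    }
    /* term now identifies the terminal whose charset the title and
     * URL strings are in. */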
These functions now expect or return strings in UTF-8:
delete_folder_by_name (sneak in a const, too), bookmark_terminal_tabs,
open_bookmark_folder, and get_auto_save_bookmark_foldername_utf8 (new
function).
When setting the title or URL of a bookmark from SMJS user scripting,
use update_bookmark() instead of writing directly to struct bookmark.
It triggers the bookmark-update event and sets the bookmarks_dirty
flag.
SpiderMonkey uses UTF-16 and the strings in struct bookmark are in
UTF-8. Previously, the conversions behaved as if the strings had been
in ISO-8859-1.
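Roughly, the setter path now looks like this (the conversion helper
name is hypothetical and the update_bookmark() argument list shown is
an assumption):

    /* Hypothetical helper: decode SpiderMonkey's UTF-16 data to a
     * newly allocated UTF-8 string instead of truncating each jschar
     * to one byte as the old code effectively did. */
    unsigned char *title = utf16_to_utf8(JS_GetStringChars(jsstr),
                                         JS_GetStringLength(jsstr));

    if (title) {
            /* Triggers the bookmark-update event and sets the
             * bookmarks_dirty flag, unlike a direct assignment to
             * bookmark->title. */
            update_bookmark(bookmark, title, NULL);
            mem_free(title);
    }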
SpiderMonkey also supports JS_SetCStringsAreUTF8(), which would make
the existing functions convert between UTF-16 and UTF-8, but that
effect is global so I dare not enable it yet. Besides, I don't know
if that function works in all the SpiderMonkey versions that ELinks
claims to work with.
This also makes the bookmark-update event carry strings in UTF-8.
The only current consumer of that event is bookmark_change_hook(),
which ignores the strings, so no changes are needed there.
When the file is being read, Expat provides the strings to ELinks in
UTF-8, so ELinks can put them in struct bookmark without conversions.
Make sure gettext returns any placeholder strings in UTF-8, too.
Replace '\r' with ' ' in bookmark titles and URLs.
When the file is being written, put encoding="UTF-8" in the XML
declaration, and then write out the strings from struct bookmark
without character set conversions. Do replace some characters
with entity references though, by calling add_html_to_string().
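The writing side, in sketch form (buffer handling is simplified; the
real code goes through ELinks's secure file saving):

    add_to_string(&buf, "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
    /* ... folder structure, <bookmark href="..."> and so on ... */
    add_to_string(&buf, "<title>");
    /* Escapes markup-significant characters as entity references;
     * the title itself is already UTF-8 and is not recoded. */
    add_html_to_string(&buf, bm->title, strlen((char *) bm->title));
    add_to_string(&buf, "</title>\n");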
When ELinks is parsing an XML element from an XBEL bookmark file,
it collects the attributes of the element to the current_node->attrs
list. Previously, struct attributes had room for one string only:
the last element of current_node->attrs was the name of the first
attribute, and it was preceded by the value of the first attribute,
the name of the second attribute, the value of the second attribute,
and so on. However, when get_attribute_value() was looking for a
given name, it compared the values as well. So, if you had for
example <bookmark id="href" href="http://elinks.cz/">, then
get_attribute_value("href") would incorrectly return "href".
To fix this confusion, store values in the new member
attributes.value, rather than in attributes.name.
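Simplified, the lookup now compares names only and returns the
separately stored value (ELinks's real list type and iteration macros
are left out here):

    #include <string.h>

    struct attributes {
            unsigned char *name;
            unsigned char *value;   /* new member; the value no longer
                                     * masquerades as another name */
    };

    static unsigned char *
    get_attribute_value(struct attributes *attr, size_t count,
                        const unsigned char *name)
    {
            size_t i;

            for (i = 0; i < count; i++)
                    if (!strcmp((const char *) attr[i].name,
                                (const char *) name))
                            return attr[i].value;

            return NULL;
    }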