Check the return value of get_opt_rec on "document.browse.search.regex"
before dereferencing it. The option is not there if regular expression
support is disabled at build time.
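For illustration, the guarded lookup is roughly this (a sketch; the
config_options tree name and the option-value access are written from
memory and may differ from the actual search code):

    struct option *opt = get_opt_rec(config_options,
                                     "document.browse.search.regex");
    /* With regexp support disabled at build time the option is never
     * registered, so opt is NULL; fall back to a plain search instead
     * of dereferencing it. */
    int use_regex = opt ? opt->value.number : 0;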
This commit fixes a bug introduced in commit
b2d51c75ff0d6c52a4f6a2761801beb641cba3a2.
In bug 1067, dom_rss_pop_document() freed a node with done_dom_node()
even though call_dom_node_callbacks() was still using that node. This
made call_dom_node_callbacks() read a function pointer from beyond the
end of an array and call that. Add assertions to detect out-of-range
node types, and comments to warn about the bug.
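A self-contained sketch of the pattern the new assertions guard (the
enum and array names are stand-ins, not the identifiers in src/dom/):

    #include <assert.h>

    enum sketch_node_type { SKETCH_ELEMENT, SKETCH_TEXT, SKETCH_NODE_TYPES };
    typedef void (*sketch_callback_T)(void *node);

    static void
    call_callbacks(const sketch_callback_T callbacks[SKETCH_NODE_TYPES],
                   enum sketch_node_type type, void *node)
    {
            /* A node freed too early can carry a garbage type; assert
             * that the index is in range before reading a function
             * pointer from beyond the end of the array. */
            assert(type < SKETCH_NODE_TYPES);

            if (callbacks[type])
                    callbacks[type](node);
    }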
Debian libmozjs-dev 1.9.0.4-2 has JS_ReportAllocationOverflow but
js-1.7.0 reportedly does not. Check at configure time whether that
function is available. If not, use JS_ReportOutOfMemory instead.
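A sketch of the resulting call site, assuming the configure check
defines HAVE_JS_REPORTALLOCATIONOVERFLOW (that macro name is an
assumption, not a quote from the configure script):

    #include <jsapi.h>

    static void
    report_allocation_overflow(JSContext *ctx)
    {
    #ifdef HAVE_JS_REPORTALLOCATIONOVERFLOW
            JS_ReportAllocationOverflow(ctx);
    #else
            /* js-1.7.0 and friends: the closest available substitute. */
            JS_ReportOutOfMemory(ctx);
    #endif
    }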
Reported by Witold Filipczyk.
I didn't read the code of the tre library, but I suppose that when
the sizes of wchar_t and unicode_val_T are equal, it will work fine.
[ From bug 1060 attachment 508. --KON ]
When the user tells ELinks to search for a regexp, ELinks 0.11.0
passes the regexp to regcomp() and the formatted document to
regexec(), both in the terminal charset. This works OK for unibyte
ASCII-compatible charsets because the regexp metacharacters are all in
the ASCII range. And ELinks 0.11.0 doesn't support multibyte or
ASCII-incompatible (e.g. EBCDIC) charsets in terminals, so it is no
big deal if regexp searches fail in such locales.
ELinks 0.12pre1 attempts to support UTF-8 as the terminal charset if
CONFIG_UTF8 is defined. Then, struct search contains unicode_val_T c
rather than unsigned char c, and get_srch() and add_srch_chr()
together save UTF-32 values there if the terminal charset is UTF-8.
In plain-text searches, is_in_range_plain() compares those values
directly if the search is case sensitive, or folds them to lower case
if the search is case insensitive: with towlower() if the terminal
charset is UTF-8, or with tolower() otherwise. In regexp searches
however, get_search_region_from_search_nodes() still truncates all
values to 8 bits in order to generate the string that
search_for_pattern() then passes to regexec(). In UTF-8 locales,
regexec() expects this string to be in UTF-8 and can't make sense of
the truncated characters. There is also a possible conflict in
regcomp() if the locale is UTF-8 but the terminal charset is not, or
vice versa.
Rejected ways of fixing the charset mismatches:
* When the terminal charset is UTF-8, recode the formatted document
from UTF-32 to UTF-8 for regexp searching. This would work if the
terminal and the locale both use UTF-8, or if both use unibyte
ASCII-compatible charsets, but not if only one of them uses UTF-8.
* Convert both the regexp and the formatted document to the charset of
the locale, as that is what regcomp() and regexec() expect. ELinks
would have to somehow keep track of which bytes in the converted
string correspond to which characters in the document; not entirely
trivial because convert_string() can replace a single unconvertible
character with a string of ASCII characters. If ELinks were
eventually changed to use iconv() for unrecognized charsets, such
tracking would become even harder.
* Temporarily switch to a locale that uses the charset of the
terminal. Unfortunately, it seems there is no portable way to
construct a name for such a locale. It is also possible that no
suitable locale is available; especially on Windows, whose C library
defines MB_LEN_MAX as 2 and thus cannot support UTF-8 locales.
Instead, this commit makes ELinks do the regexp matching with regwcomp
and regwexec from the TRE library. This way, ELinks can losslessly
recode both the pattern and the document to Unicode and rely on the
regexp code in TRE decoding them properly, regardless of locale.
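In outline, the wide-character path looks like this (a sketch only:
the include path, the flags and the error handling are simplified,
and the real code converts unicode_val_T buffers to wchar_t first):

    #include <wchar.h>
    #include <tre/regex.h>  /* or plain <regex.h> where TRE provides it */

    /* Returns the offset (in wide characters) of the first match of
     * pattern in doc, or -1 if nothing matches or compilation fails. */
    static long
    search_wide(const wchar_t *pattern, const wchar_t *doc, int case_sensitive)
    {
            regex_t re;
            regmatch_t match;
            long result = -1;

            if (regwcomp(&re, pattern, case_sensitive ? 0 : REG_ICASE) != 0)
                    return -1;
            if (regwexec(&re, doc, 1, &match, 0) == 0)
                    result = match.rm_so;
            regfree(&re);
            return result;
    }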
There are some possible problems though:
1. ELinks stores strings as UTF-32 in arrays of unicode_val_T, but TRE
uses wchar_t instead. If wchar_t is UTF-16, as it is on Microsoft
Windows, then TRE will misdecode the strings. It wouldn't be too
hard to make ELinks convert to UTF-16 in this case, but (a) TRE
doesn't currently support UTF-16 either, and it seems possible that
wchar_t-independent UTF-32 interfaces will be added to TRE; and (b)
there seems to be little interest in using ELinks on Windows anyway.
2. The Citrus Project apparently wanted BSD to use a locale-dependent
wchar_t: e.g. UTF-32 in some locales and an ISO 2022 derivative in
others. Regexp searches in ELinks now do not support the latter.
[ Adapted to elinks-0.12 from bug 1060 attachment 506.
Commit message by me. --KON ]
In src/bookmarks/dialogs.c, do_add_bookmark() gets the title and URL
in the terminal charset and needs to know which one that is. When a
bookmark is being added, save the struct terminal * to dialog.udata2
and read the charset from there. When a bookmark is being edited,
dialog.udata2 is needed for the struct bookmark *, but there we always
have the parent struct dialog_data * in dialog.udata and can get the
terminal from that.
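Schematically, with stand-in declarations instead of the real dialog
and terminal structs:

    struct terminal;
    struct window { struct terminal *term; };
    struct dialog { void *udata; void *udata2; };
    struct dialog_data { struct window *win; };

    static struct terminal *
    bookmark_dialog_terminal(struct dialog *dlg, int adding)
    {
            if (adding)
                    return dlg->udata2;  /* terminal saved here on "add" */

            /* Editing: udata2 holds the struct bookmark *, so go through
             * the parent dialog_data in udata to reach the terminal. */
            return ((struct dialog_data *) dlg->udata)->win->term;
    }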
These functions now expect or return strings in UTF-8:
delete_folder_by_name (sneak in a const, too), bookmark_terminal_tabs,
open_bookmark_folder, and get_auto_save_bookmark_foldername_utf8 (new
function).
When setting the title or URL of a bookmark from SMJS user scripting,
use update_bookmark() instead of writing directly to struct bookmark.
It triggers the bookmark-update event and sets the bookmarks_dirty
flag.
SpiderMonkey uses UTF-16 and the strings in struct bookmark are in
UTF-8. Previously, the conversions behaved as if the strings had been
in ISO-8859-1.
SpiderMonkey also supports JS_SetCStringsAreUTF8(), which would make
the existing functions convert between UTF-16 and UTF-8, but that
effect is global so I dare not enable it yet. Besides, I don't know
if that function works in all the SpiderMonkey versions that ELinks
claims to work with.
This also makes the bookmark-update event carry strings in UTF-8.
The only current consumer of that event is bookmark_change_hook(),
which ignores the strings, so no changes are needed there.
When the file is being read, Expat provides the strings to ELinks in
UTF-8, so ELinks can put them in struct bookmark without conversions.
Make sure gettext returns any placeholder strings in UTF-8, too.
Replace '\r' with ' ' in bookmark titles and URLs.
When the file is being written, put encoding="UTF-8" in the XML
declaration, and then write out the strings from struct bookmark
without character set conversions. Do replace some characters
with entity references though, by calling add_html_to_string().
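The writing side then boils down to something like this (a sketch;
struct string, add_to_string() and add_html_to_string() are the usual
util/string.h helpers, quoted from memory):

    #include <string.h>

    static void
    write_xbel_title(struct string *out, unsigned char *title_utf8)
    {
            add_to_string(out, "<title>");
            /* Replaces characters such as & < > with entity references
             * but does no charset conversion: the UTF-8 bytes from
             * struct bookmark pass through unchanged. */
            add_html_to_string(out, title_utf8, strlen((char *) title_utf8));
            add_to_string(out, "</title>\n");
    }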
When ELinks is parsing an XML element from an XBEL bookmark file,
it collects the attributes of the element into the current_node->attrs
list. Previously, struct attributes had room for one string only:
the last element of current_node->attrs was the name of the first
attribute, and it was preceded by the value of the first attribute,
the name of the second attribute, the value of the second attribute,
and so on. However, when get_attribute_value() was looking for a
given name, it compared the values as well. So, if you had for
example <bookmark id="href" href="http://elinks.cz/">, then
get_attribute_value("href") would incorrectly return "href".
To fix this confusion, store values in the new member
attributes.value, rather than in attributes.name.
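In generic terms the fixed storage and lookup are (a sketch with a
plain linked list; the real code uses the ELinks list macros and the
attributes struct of the XBEL reader):

    #include <string.h>

    struct attributes {
            struct attributes *next;
            unsigned char *name;
            unsigned char *value;  /* new member; values are no longer
                                    * smuggled in as extra "name" nodes */
    };

    static unsigned char *
    get_attribute_value(const struct attributes *attrs, const char *name)
    {
            const struct attributes *attr;

            for (attr = attrs; attr; attr = attr->next)
                    if (!strcmp((const char *) attr->name, name))
                            return attr->value;  /* only names compared */
            return NULL;
    }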
Change test/imgmap2.html so it can be used for testing this too.
Debian Iceweasel 3.0.4 does not appear to support such external
client-side image maps. Well, that's one place where ELinks is
superior, I guess. There might be a security problem though if ELinks
were to let scripts of the referring page examine the links in the
image map.
When ELinks runs in an X11 terminal emulator (e.g. xterm), or in GNU
Screen, it tries to update the title of the window to match the title
of the current document. To do this, ELinks sends an "OSC 1 ; Pt BEL"
sequence to the terminal. Unfortunately, xterm expects the Pt string
to be in the ISO-8859-1 charset, making it impossible to display e.g.
Cyrillic characters. In xterm patch #210 (2006-03-12) however, there
is a menu item and a resource that can make xterm take the Pt string
in UTF-8 instead, allowing characters from all around the world.
The downside is that ELinks apparently cannot ask xterm whether the
setting is on or off; so add a terminal._template_.latin1_title option
to ELinks and let the user edit that instead.
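For reference, the shape of the sequence being sent (a sketch; the
real set_window_title() also strips control characters and does the
charset handling described in the list below):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* title must already be in the charset the emulator expects:
     * ISO-8859-1 or UTF-8, depending on the latin1_title option. */
    static void
    send_window_title(int tty_fd, const char *title)
    {
            char buf[1040];
            /* OSC 1 ; Pt BEL, with Pt capped near the 1024-byte limit. */
            int len = snprintf(buf, sizeof(buf), "\033]1;%.1024s\007", title);

            if (len > 0 && write(tty_fd, buf, strlen(buf)) < 0)
                    perror("write");
    }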
Complete list of changes:
- Add the terminal._template_.latin1_title option. But do not add
that to the terminal options window because it's already rather
crowded there.
- In set_window_title(), take a new codepage argument. Use it to
decode the title into Unicode characters, and remove only actual
control characters. For example, CP437 has graphical characters in
the 0x80...0x9F range, so don't remove those, even though ISO-8859-1
has control characters in the same range. Likewise, don't
misinterpret single bytes of UTF-8 characters as control characters.
- In set_window_title(), do not truncate the title to the width of the
window. The font is likely to be different and proportional anyway.
But do truncate before 1024 bytes, an xterm limit.
- In struct itrm, add a title_codepage member to remember which
charset the master said it was going to use in the terminal window
title. Initialize title_codepage in handle_trm(), update it in
dispatch_special() if the master sends the new request
TERM_FN_TITLE_CODEPAGE, and use it in most set_window_title() calls;
but not in the one that sets $TERM as the title, because that string
was not received from the master and should consist of ASCII
characters only.
- In set_terminal_title(), convert the caller-provided title to
ISO-8859-1 or UTF-8 if appropriate, and report the codepage to the
slave with the new TERM_FN_TITLE_CODEPAGE request. The conversion
can run out of memory, so return a success/error flag, rather than
void. In display_window_title(), check this result and don't update
caches on error.
- Add a NEWS entry for all of this.
This simplifies the callers a little and may help implement
simultaneous support for different charsets on different terminals
of the same type (bug 1064).
look_for_link() used to return 0 both when it found the closing </MAP>
tag, and when it hit the end of the file. In the first case, it also
added *menu to the memory_list; in the second case, it did not. The
caller get_image_map() supposedly distinguished between these cases by
checking whether pos >= eof, and freed *menu separately if so.
However, if the </MAP> was at the very end of the HTML file, so that
not even a newline followed it, then look_for_link() left pos == eof
even though it had found the </MAP> and added *menu to the memory_list.
This made get_image_map() misinterpret the result and mem_free(*menu)
even though *menu had already been freed as part of the memory_list;
thus the crash.
To fix this, make look_for_link() return -1 instead of 0 if it hits
EOF without finding the </MAP>. Then make get_image_map() check the
return value instead of comparing pos to eof. And add a test case,
although not an automated one.
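A self-contained illustration of why the return value, and not a
pos >= eof check, has to carry this information (simplified stand-ins,
not the real parser functions):

    #include <stdio.h>
    #include <string.h>

    /* 1 = link parsed, 0 = closing tag found, -1 = EOF without it. */
    static int
    scan_map(const char **pos, const char *eof)
    {
            while (*pos < eof) {
                    if (!strncmp(*pos, "</map>", 6)) { *pos += 6; return 0; }
                    if (!strncmp(*pos, "<a", 2)) { *pos += 2; return 1; }
                    (*pos)++;
            }
            return -1;
    }

    int main(void)
    {
            /* The closing tag sits at the very end of the input, so pos
             * ends up equal to eof even though the tag WAS found; only
             * the return value can tell the two cases apart. */
            const char *doc = "<map name=\"m\"><a href=\"x\">x</a></map>";
            const char *pos = doc, *eof = doc + strlen(doc);
            int r;

            while ((r = scan_map(&pos, eof)) > 0);
            puts(r == 0 ? "closing tag found" : "EOF without closing tag");
            return 0;
    }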
Alternatively, look_for_link() could have been changed to decrement
pos between finding the </MAP> and returning 0. Then, the pos >= eof
comparison in get_image_map() would have been false. That scheme
would however have been a bit more difficult to understand and
maintain, I think.
Reported by Paul B. Mahol.
(cherry picked from commit a2404407ce)
Before this patch, if you first moved the cursor to link X with
move-cursor-up and similar actions, and then clicked link Y with the
mouse, ELinks would activate link X, i.e. not the one you clicked.
This happened because the NAVIGATE_CURSOR_ROUTING mode was left
enabled and made ELinks ignore the doc_view->vs->current_link
member that ELinks had updated according to the click.
Make ELinks return the session to NAVIGATE_LINKWISE mode, so that
the update takes effect.
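Essentially (the session member name here is quoted from memory, so
treat it as a sketch):

    /* In the mouse-click handler, after vs->current_link has been set
     * from the click coordinates and before the link is activated: */
    ses->navigate_mode = NAVIGATE_LINKWISE;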
Reported by Paul B. Mahol.
(cherry picked from commit 4086418069)
Peter Collingbourne posted to elinks-dev on 3 November 2008:
>> ELinks is currently licensed under GPLv2 only. I hope we can
>> eventually change the licence to also allow later versions of
>> GPL and permit linking with OpenSSL. Do you allow such
>> relicensing for your patch?
>
> I agree to this.
According to COPYING, such permissions should be listed in AUTHORS.
This patch fixes an issue whereby a newline character appearing within
a hidden input field is incorrectly reinterpreted as a space character.
The patch handles almost all cases, and includes a test case.
15 of the 18 tests pass; the remainder currently fail because ELinks
does not yet support textarea scripting.
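The gist of the fix, as a heavily simplified sketch (FC_HIDDEN and the
surrounding form-value handling are paraphrased from memory, not
quoted from the patch):

    /* Only visible single-line fields get the newline -> space
     * rewrite; hidden fields keep their value byte for byte. */
    if (fc->type != FC_HIDDEN && *p == '\n')
            *p = ' ';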
c_strcasecmp and c_strncasecmp were taken from GNU coreutils 6.9,
which is copyrighted by the Free Software Foundation and licensed
under GNU GPL version 2 or later. It seems the programs in coreutils
do not normally read commands interactively. So, including coreutils
code in an interactive program such as ELinks could trigger GPLv2
section 2. c), which would require ELinks to display a copyright
notice and a warranty disclaimer each time it is started. Rewrite
those functions to remove the FSF-copyrighted code and make ELinks
not a work based on GNU coreutils.
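One possible from-scratch version, for illustration (the actual
replacement in src/util/ may differ in naming and details):

    /* Case-insensitive comparison in the C locale: only ASCII letters
     * are folded, regardless of the current locale. */
    static int
    c_tolower(int c)
    {
            return (c >= 'A' && c <= 'Z') ? c - 'A' + 'a' : c;
    }

    int
    c_strcasecmp(const char *a, const char *b)
    {
            for (;; a++, b++) {
                    int ca = c_tolower((unsigned char) *a);
                    int cb = c_tolower((unsigned char) *b);

                    if (ca != cb || !ca)
                            return ca - cb;
            }
    }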
Avoiding FSF code has the additional benefit that we won't have to ask
FSF for permission if we want to add a licence exception that allows
linking ELinks with OpenSSL. So it seems a good idea even if my
interpretation of GPLv2 2. c) is overly strict. I haven't checked
though whether there are other FSF-copyrighted portions in ELinks.