Conflicts:
NEWS (bug 939 was listed twice)
doc/man/man5/elinks.conf.5 (regenerated)
po/fr.po (only in comments and such)
po/pl.po (only in comments and such)
src/protocol/fsp/fsp.c (the relevant changes were already here)
The fix itself is in the parent commit.
(cherry picked from commit 4c390589ea28eeba020357fd89841b0fc2be33b3,
rewriting the NEWS entry because the bug also occurred on Debian)
*fresult pointed nowhere. On FreeBSD, *fresult == NULL,
so directories weren't displayed.
Also check that safe_write writes all the data.
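A minimal sketch of both fixes (hypothetical names, not the actual
fsp.c code; safe_write's signature is assumed to be write(2)-like):

    #include <unistd.h>

    struct fsp_result;                              /* hypothetical type */
    ssize_t safe_write(int, const void *, size_t);  /* ELinks helper; signature assumed */

    /* Always set the out-parameter, and treat a short safe_write()
     * as a failure instead of assuming everything was written. */
    static int
    send_fsp_data(int fd, const unsigned char *buf, size_t len,
                  struct fsp_result **fresult, struct fsp_result *res)
    {
        *fresult = res; /* previously left dangling; NULL on FreeBSD */

        if (safe_write(fd, buf, len) != (ssize_t) len)
            return -1;  /* short write or error */
        return 0;
    }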
(cherry picked from commit 06bcc48487676e0ea113ed7ace63798dc0562694)
If the user opens the same file again after it is in the cache, then
ELinks does not always open a new connection, so download->conn can be
NULL in init_type_query(), and reading download->conn->cgi would crash.
Don't read that, then; instead add a new flag cache_entry.cgi, which
http_got_header() sets or clears as soon as possible after the cache
entry has been created.
This keeps CGI scripts distinguishable from normal files. I hope that
this fixes bug 991. This commit also reverts the previous revert.
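A sketch of the shape of the fix (simplified; field placement and the
call site are assumed from the description above):

    /* New flag in the cache entry, set or cleared by http_got_header()
     * as soon as possible after the cache entry is created. */
    struct cache_entry {
        /* ... */
        unsigned int cgi:1;
    };

    /* In init_type_query(): the cache entry is always available, while
     * download->conn may be NULL when the document comes from the cache. */
    if (cached->cgi) {
        /* ... treat the document as CGI output ... */
    }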
(cherry picked from commit 7ceba1e46131a96ee78ce999dfbe14e359d8cbcb)
libsmbclient's stdout and stderr interfered with ELinks's stdout
and stdin. That caused an assertion failure. Now ELinks uses
separate streams for processing the smb protocol.
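A hypothetical sketch of the stream separation, assuming the
libsmbclient calls run in a forked child process (invented names and
descriptor numbers, not the actual smb code):

    #include <fcntl.h>
    #include <unistd.h>

    /* Give the child private descriptors so libsmbclient's chatter on
     * stdout/stderr cannot interleave with ELinks's own terminal I/O. */
    static void
    redirect_smb_child_streams(int data_fd, int error_fd)
    {
        int devnull = open("/dev/null", O_RDWR);

        dup2(data_fd, 3);             /* hypothetical: data for ELinks */
        dup2(error_fd, 4);            /* hypothetical: error channel */
        dup2(devnull, STDIN_FILENO);  /* keep the library off the tty */
        dup2(devnull, STDOUT_FILENO);
        dup2(devnull, STDERR_FILENO);
    }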
This reverts commit 7ceba1e46131a96ee78ce999dfbe14e359d8cbcb,
which is causing an assertion to fail if I open the same PDF
twice in a row, even if I cancel the dialog box when ELinks
first asks which program to run:
INTERNAL ERROR at /home/Kalle/src/elinks-0.12/src/session/download.c:980: assertion download && download->conn failed!
Forcing core dump! Man the Lifeboats! Women and children first!
But please DO NOT report this as a segfault!!! It is an internal error, not a
normal segfault, there is a huge difference in these for us the developers.
Also, noting the EXACT error you got above is crucial for hunting the problem
down. Thanks, and please get in touch with us.
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1216698688 (LWP 17877)]
0xb7a02d76 in raise () from /lib/libc.so.6
(gdb) backtrace 6
at /home/Kalle/src/elinks-0.12/src/util/error.c:179
fmt=0x816984c "assertion download && download->conn failed!")
at /home/Kalle/src/elinks-0.12/src/util/error.c:122
cached=0x8253ca8) at /home/Kalle/src/elinks-0.12/src/session/download.c:980
cached=0x8253ca8, frame=0)
at /home/Kalle/src/elinks-0.12/src/session/download.c:1339
at /home/Kalle/src/elinks-0.12/src/session/task.c:493
(More stack frames follow...)
There is a fix available but I don't trust it yet.
This syncs some changes (ie. -> e.g. etc.) from elinks-0.12 or beyond.
I noticed them while updating the web pages, and apologize that I will
not spend the time to attribute them to the individual commits.
(cherry picked from commit 2bfc7b37241b88816cb0454399ec615b8511680a,
omitting generated files)
AFAIK, all bugs in it have been fixed. Some bugs may still be lurking
but they are more likely to get caught if compression is enabled.
I also replaced COMP_NOTE with static text because xgettext does not
support macros in the argument of N_.
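An illustration of the limitation (the note text and variable names
here are made up):

    #define N_(msgid) (msgid)  /* gettext no-op marker, simplified */
    #define COMP_NOTE "Hypothetical note about compression."

    /* xgettext does not expand macros, so nothing is extracted here: */
    static const char *bad = N_(COMP_NOTE);

    /* A string literal is extracted for translation as expected: */
    static const char *good = N_("Hypothetical note about compression.");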
The bug was reported by Paul B. Mahol on elinks-users. The example is
from the FTP site he provided:
ftp.freebsd.org/pub/FreeBSD/ISO-IMAGES-ia64/
Message-ID: <3a142e750802262008l6fd55be5v44207bc4479dd3fc@mail.gmail.com>
(cherry picked from commit c069403b75cdcbe3462f59969aaa4869ef548c26)
... so all the tests with responses spanning multiple lines are
actually exercised in their entirety.
(cherry picked from commit aa9a847c00727ed5020e76c354a46a7fe8e0f4d2,
resolving a conflict due to the use of get_test_opt)
This fixes test 9 (Basic VMS responses), which uses time specs with the
seconds left out (e.g. 17:44), for me.
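A minimal sketch of tolerating the missing seconds field (not the
actual FTP listing parser):

    #include <stdio.h>

    /* Accept both "17:44:05" and "17:44"; default the seconds to zero. */
    static int
    parse_vms_time(const char *s, int *hour, int *min, int *sec)
    {
        *sec = 0;
        return sscanf(s, "%d:%d:%d", hour, min, sec) >= 2 ? 0 : -1;
    }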
(cherry picked from commit 397bef882bae5f2bdc7f09314a845bbaa01c922e)
Change char id to enum bittorrent_message_id id to prevent FTBFS on
powerpc and s390.
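The likely mechanism, sketched with hypothetical constant names: on
powerpc and s390, plain char is unsigned, so a negative message id
stored in a char field can never compare equal to a negative enum
constant, and the resulting "comparison is always false" warning
presumably breaks the build under -Werror.

    enum bittorrent_message_id {
        BITTORRENT_MESSAGE_ERROR = -1,  /* hypothetical constant */
        BITTORRENT_MESSAGE_CHOKE = 0,
        /* ... */
    };

    struct bittorrent_peer_request {
        enum bittorrent_message_id id;  /* was: char id */
        /* ... */
    };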
(cherry picked from commit 01b0c812275c884823ec20d122a7035a7df3861f,
with a conflict)
Apparently, on AMD64 off_t is long but ELinks detected SIZEOF_OFF_T == 8
and defined OFF_T_FORMAT as "lld", which expects long long and so causes
GCC to warn about a mismatching format specifier. Because --enable-debug
adds -Werror to $CFLAGS, this warning breaks the build. When both
SIZEOF_LONG and SIZEOF_LONG_LONG are 8, ELinks cannot know which type
it should use.
To fix this, do not attempt to find a format specifier for off_t itself.
Instead cast all printed off_t values to a new typedef off_print_T that
is large enough, and replace OFF_T_FORMAT with OFF_PRINT_FORMAT which
is suitable for off_print_T although not necessarily for off_t. ELinks
already had a similar scheme with time_print_T and TIME_PRINT_FORMAT.
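A sketch of the scheme (names as in the description above; the chosen
width is an assumption):

    #include <stdio.h>
    #include <sys/types.h>

    typedef long long off_print_T;      /* large enough for any off_t */
    #define OFF_PRINT_FORMAT "lld"

    static void
    print_size(off_t size)
    {
        /* Cast every printed off_t; the format then always matches. */
        printf("%" OFF_PRINT_FORMAT " bytes\n", (off_print_T) size);
    }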
Previously, struct string was used here. However,
bittorrent_fetch_callback does not initialize response.magic,
and parse_bittorrent_tracker_response changes response->source
to point to data that must not be freed. So the util/string.h
functions are not actually safe to use on these objects.
For this reason, it is safer to use a separate type.
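A hypothetical sketch of such a type: just a pointer and a length,
without struct string's magic and ownership conventions, so the
util/string.h functions cannot be misapplied to it.

    struct bittorrent_const_string {
        unsigned char *source;  /* may point into a buffer owned elsewhere */
        int length;
    };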
The previous check (integer > (off_t) integer * 10) did not detect all
overflows. Examples with 32-bit off_t:
integer = 0x1C71C71D (0x100000000/9 rounded up);
integer * 10 = 0x11C71C722, wraps to 0x1C71C722 which is > integer.
integer = 0x73333333;
integer * 10 = 0x47FFFFFFE, wraps to 0x7FFFFFFE which is > integer.
Examples with 64-bit off_t:
integer = 0x1C71C71C71C71C72 (0x10000000000000000/9 rounded up);
integer * 10 = 0x11C71C71C71C71C74, wraps to 0x1C71C71C71C71C74
which is > integer.
integer = 0x7333333333333333;
integer * 10 = 0x47FFFFFFFFFFFFFFE, wraps to 0x7FFFFFFFFFFFFFFE
which is > integer.
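A minimal sketch of an overflow-proof accumulation: test before
multiplying instead of comparing after the wraparound (OFFT_MAX is a
hypothetical constant for the largest positive off_t):

    #include <sys/types.h>

    /* One portable way to get the largest positive off_t, assuming a
     * two's-complement signed type without padding bits. */
    #define OFFT_MAX ((((off_t) 1 << (sizeof(off_t) * 8 - 2)) - 1) * 2 + 1)

    static int
    accumulate_digit(off_t *integer, int digit)
    {
        if (*integer > (OFFT_MAX - digit) / 10)
            return -1;  /* *integer * 10 + digit would overflow */
        *integer = *integer * 10 + digit;
        return 0;
    }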
It is unclear to me what effect an undetected overflow would actually
have from the user's viewpoint, so I'm not adding a NEWS entry.
(cherry picked from commit a25fd18e56624cbb34c9bafec821b2c3bd0a3c28)
The compression support in ELinks has always been buggy, with some large pages
failing to decompress and containing garbage at the end instead. However,
with the recent attempts to fix the compression support, it has actually
been made *so* buggy that not only do these cases seem to occur more
often, but in some cases the page is just silently chopped with no
content visible; in other cases, "Resource temporarily unavailable" is
displayed. Etc.
The compression support has now reached the point where it is so awfully
unstable that it is actively harmful to have it enabled by default. I've
been burnt by it several times already and once made a very serious
error because of a page being silently chopped.