Documentation strings of most options used to contain a "\n" at the
end of each source line. When the option manager displayed these
strings, it treated each "\n" as a hard newline. On 80x24 terminals
however, the option description window has only 60 columns available
for the text (with the default setup.h), and the hard newlines were
often further apart than that, so the option manager wrapped the text
a second time, resulting in rather ugly output where long lines are
interleaved with short ones. This could also cause the text to take
up too much vertical space and not fit in the window.
Replace most of those hard newlines with spaces so that the option
manager (or perhaps BFU) will take care of the wrapping. At the same
time, rewrap the strings in source code so that the source lines are
at most 79 columns wide.
In some options though, there is a list of possible values and their
meanings. In those lists, if the description of one value does not
fit in one line, then continuation lines should be indented. The
option manager and BFU are not currently able to do that. So, keep
the hard newlines in those lists, but rewrap them to 60 columns so
that they are less likely to require further wrapping at runtime.
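As an illustration of the intended shape of the strings (the option
text below is made up for this example, not copied from the tree):

/* Example strings only; not an actual ELinks option. */

/* Plain prose: no hard newlines, so the option manager or BFU wraps it
 * to the window width at runtime.  Source lines stay under 79 columns. */
static const char *example_doc =
        "Whether to reuse an already open connection to the same server "
        "instead of opening a new one for every request.";

/* A list of values keeps its hard newlines, but each line is rewrapped
 * to at most 60 columns so further wrapping at runtime is unlikely. */
static const char *example_list_doc =
        "Connection reuse policy:\n"
        "0 means never reuse a connection\n"
        "1 means reuse connections to the same host only\n"
        "2 means reuse connections whenever possible";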
Call stacks reported by valgrind:
==14702== at 0x80DD791: read_from_socket (socket.c:945)
==14702== by 0x8104D0C: read_more_http_data (http.c:1180)
==14702== by 0x81052FE: read_http_data (http.c:1388)
==14702== by 0x80DD69B: read_select (socket.c:910)
==14702== by 0x80D27AA: select_loop (select.c:307)
==14702== by 0x80D1ADE: main (main.c:358)
==14702== Address 0x4F4E598 is 56 bytes inside a block of size 81 free'd
==14702== at 0x402210F: free (vg_replace_malloc.c:233)
==14702== by 0x812BED8: debug_mem_free (memdebug.c:484)
==14702== by 0x80D7C82: done_connection (connection.c:479)
==14702== by 0x80D8A44: abort_connection (connection.c:769)
==14702== by 0x80D99CE: cancel_download (connection.c:1053)
==14702== by 0x8110EB6: abort_download (download.c:143)
==14702== by 0x81115BC: download_data_store (download.c:337)
==14702== by 0x8111AFB: download_data (download.c:446)
==14702== by 0x80D7B33: notify_connection_callbacks (connection.c:458)
==14702== by 0x80D781E: set_connection_state (connection.c:388)
==14702== by 0x80D7132: set_connection_socket_state (connection.c:234)
==14702== by 0x80DD78D: read_from_socket (socket.c:943)
read_from_socket() attempted to read socket->fd in order to set
handlers on it, but the socket had already been freed. Incidentally,
socket->fd was -1, which would have resulted in an assertion failure
if valgrind hadn't caught the bug first.
To fix this, add a list of weak references to sockets.
read_from_socket() registers a weak reference on entry and unregisters
it before exit. done_socket() breaks any weak references to the
specified socket. read_from_socket() then checks whether the weak
reference was broken, and doesn't access the socket any more if so.
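A minimal sketch of the mechanism; the type and helper names below are
illustrative, not the identifiers actually used in the tree:

#include <stddef.h>

struct socket;                          /* opaque in this sketch */

/* Illustrative weak-reference record; not the declaration in the tree. */
struct socket_weak_ref {
        struct socket_weak_ref *next;
        struct socket *socket;          /* NULL once the socket is freed */
};

static struct socket_weak_ref *weak_refs;

static void
register_weak_ref(struct socket_weak_ref *ref, struct socket *socket)
{
        ref->socket = socket;
        ref->next = weak_refs;
        weak_refs = ref;
}

static void
unregister_weak_ref(struct socket_weak_ref *ref)
{
        struct socket_weak_ref **prev;

        for (prev = &weak_refs; *prev; prev = &(*prev)->next)
                if (*prev == ref) {
                        *prev = ref->next;
                        return;
                }
}

/* Called from done_socket(): break every weak reference that still
 * points to the socket being freed. */
static void
break_weak_refs(struct socket *socket)
{
        struct socket_weak_ref *ref;

        for (ref = weak_refs; ref; ref = ref->next)
                if (ref->socket == socket)
                        ref->socket = NULL;
}

read_from_socket() then registers a weak reference on the stack on
entry, runs the handlers, unregisters it before returning, and returns
early without touching the socket if the reference was broken in
between.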
This reverts src/{network,sched}/connection.c CVS revision 1.43,
which was made on 2003-07-03 and converted to Git commit
cae65f7941628109b51ffb2e2d05882fbbdc73ef in elinks-history.
It is pointless to check whether (c == d && c->id == d->id).
If c == d, then surely c->id == d->id, and I wouldn't be surprised
to see a compiler optimize that out.
By taking the id as a parameter instead, connection_disappeared()
can check whether the pointer now points to a new struct connection
with a different id.
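A sketch of the resulting check; a plain singly linked queue stands in
here for ELinks's connection list and its list macros:

/* Sketch only: struct layout and queue representation are simplified. */
struct connection {
        struct connection *next;
        unsigned int id;
        /* ... */
};

extern struct connection *connection_queue;

static int
connection_disappeared(struct connection *conn, unsigned int id)
{
        struct connection *c;

        for (c = connection_queue; c; c = c->next)
                if (c == conn && c->id == id)
                        return 0;   /* still the connection we started with */

        /* Either conn was freed, or its memory now holds a new
         * connection that was given a different id. */
        return 1;
}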
make_bittorrent_peer_connection() used to construct a struct uri on
the stack. This was hacky but worked nicely because the struct uri
was not really accessed after make_connection() returned. However,
since commit a83ff1f565, the struct uri
is also needed when the connection is being closed. Valgrind shows:
Invalid read of size 2
at 0x8100764: get_blacklist_entry (blacklist.c:33)
by 0x8100985: del_blacklist_entry (blacklist.c:64)
by 0x80DA579: complete_connect_socket (socket.c:448)
by 0x80DA84A: connected (socket.c:513)
by 0x80D0DDF: select_loop (select.c:297)
by 0x80D00C6: main (main.c:353)
Address 0xBEC3BFAE is just below the stack ptr. To suppress, use: --workaround-gcc296-bugs=yes
To fix this, allocate the struct uri on the heap instead, by
constructing a string and giving that to get_uri(). This string
cannot use the "bittorrent" URI scheme because parse_uri() does not
recognize the host and port fields in that. (The "bittorrent" scheme
has protocol_backend.free_syntax = 1 in order to support strings like
"bittorrent:http://beta.legaltorrents.com/get/159-noisome-beasts".)
Instead, define a new "bittorrent-peer" URI scheme for this purpose.
If the user attempts to use this URI scheme, its handler aborts the
connection with an error; but when make_bittorrent_peer_connection()
uses a bittorrent-peer URI, the handler is not called.
This change also lets get_uri() set the ipv6 flag if peer_info->ip is
an IPv6 address literal.
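A minimal sketch of the string construction, with a hypothetical helper
name; the exact URI layout used by make_bittorrent_peer_connection()
may differ:

#include <stdio.h>
#include <string.h>

/* Hypothetical helper; the real code builds the string inside
 * make_bittorrent_peer_connection() and hands it to get_uri(). */
static int
make_peer_uri_string(char *buf, size_t size, const char *ip, unsigned port)
{
        /* Bracket a literal IPv6 address so the port separator stays
         * unambiguous when the string is parsed. */
        int ipv6 = strchr(ip, ':') != NULL;

        return snprintf(buf, size, "bittorrent-peer://%s%s%s:%u/",
                        ipv6 ? "[" : "", ip, ipv6 ? "]" : "", port);
}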
Reported by Witold Filipczyk.
fsp_open_session() has a bug where it does not set errno if getaddrinfo fails.
Before the bug 1013 fix, this caused an assertion failure.
After the bug 1013 fix, this caused a "Success" error message.
Now it instead causes "FSP server not found".
Replace almost all uses of enum connection_state with struct
connection_status. This removes the assumption that errno values used
by the system are between 0 and 100000. The GNU Hurd uses values like
ENOENT = 0x40000002 and EMIG_SERVER_DIED = -308.
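The shape of the change, as a sketch only (the real type and member
names in the tree may differ):

/* Sketch: the errno travels in its own field instead of being squeezed
 * into one enum range, so nothing is assumed about its numeric value. */
struct connection_status {
        int basic;      /* ELinks's own code, e.g. S_OK or S_OUT_OF_MEM */
        int syserr;     /* system errno, meaningful only when basic says so */
};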
This commit is derived from my attachments 450 and 467 to bug 1013.
It seems GnuTLS is not as good at negotiating a supported protocol as
OpenSSL is. ELinks tries to work around that by retrying with a
different protocol if the SSL library reports an error. However,
ELinks must not automatically retry POST requests where some data may
have already reached the server; POST is not a safe method in HTTP.
So instead, collect the name of the TLS-incapable server in a blacklist
when ELinks e.g. loads an HTML form from it; the actual POST can then
immediately use the protocol that worked.
It's a bit ugly that src/network/socket.c now uses
protocol/http/blacklist.h. It might be better to move the blacklist
files out of the http directory, and perhaps merge them with the
BitTorrent blacklisting code.
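A minimal sketch of the intended flow; the flag and helpers below are
stand-ins modelled on the blacklist idea, not the actual declarations:

#include <stdbool.h>

/* Stand-in flag and helpers; not the real blacklist API. */
enum server_blacklist_flags {
        SERVER_BLACKLIST_NONE   = 0,
        SERVER_BLACKLIST_NO_TLS = 1,    /* TLS negotiation failed once */
};

/* After an SSL handshake failure: only a safe request (e.g. the GET
 * that fetched the form) may be retried with the fallback protocol;
 * record the result so a later POST never needs an automatic retry. */
static enum server_blacklist_flags
note_ssl_failure(enum server_blacklist_flags flags, bool is_safe_method)
{
        if (is_safe_method)
                flags |= SERVER_BLACKLIST_NO_TLS;
        return flags;
}

/* Before connecting: if the server is already known to be TLS-incapable,
 * start with the protocol that worked instead of negotiating TLS again. */
static bool
want_tls(enum server_blacklist_flags flags)
{
        return !(flags & SERVER_BLACKLIST_NO_TLS);
}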
I am reverting all copiousoutput support because of bug 917.
This reverts commit 4dc4ea47f2.
Conflicts:
src/network/connection.h: After the original commit, the declaration
of copiousoutput_data had been changed to use the LIST_OF macro.
Also, connection.cgi had been added next to the connection.popen
member added by the original commit.
src/session/download.c: After the original commit, the definition of
copiousoutput_data had been changed to use the INIT_LIST_OF macro.
CGI scripts are distinguishable from normal files. I hope that this
fixes bug 991. This commit also reverts the previous revert.
(cherry picked from commit 7ceba1e461)
Previously, each progress timer function registered with
start_update_progress() was directly used as the timer function of
progress.timer, so it was responsible for erasing the expired timer ID
from that member. Failing to do this could result in heap corruption.
The progress timer functions normally fulfilled the requirement by
calling update_progress(), but one such function upload_stat_timer()
had to erase the timer ID on its own too.
Now instead, there is a wrapper function progress_timeout(), which
progress.c sets as the timer function of progress.timer. This wrapper
erases the expired timer ID from progress.timer and then calls the
progress timer function registered with start_update_progress(). So
the progress timer function is no longer responsible for erasing the
timer ID and there's no risk that it could fail to do that in some
error situation.
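A sketch of the wrapper's shape; the member names and the timer type
are illustrative, not those of the real struct progress:

/* Illustrative members; the real struct progress and timer API differ. */
struct progress {
        int timer;                  /* stands in for timer_id_T; -1 = none */
        void (*timer_func)(void *); /* set via start_update_progress() */
        void *timer_func_data;
};

static void
progress_timeout(void *data)
{
        struct progress *progress = data;

        /* The timer that called us has just expired: forget its ID before
         * doing anything that might install a new timer. */
        progress->timer = -1;

        progress->timer_func(progress->timer_func_data);
}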
This commit introduces a new risk though. Previously, if the struct
progress was freed while the timer was running, the (progress) timer
function would still be called, and it would be able to detect that
the progress pointer is NULL and recover from this situation. Now,
the timer function progress_timeout() has a pointer to the struct
progress and will dereference that pointer without being able to check
whether the structure has been freed. Fortunately, done_progress()
asserts that the timer is not running, so this should not occur.
Move connection.post_fd to http_post.post_fd.
Make connection.done point to the new done_http_connection(),
which calls the new done_http_post(), which closes post_fd.
So done_connection() no longer needs to do that.
Now that done_http_post() exists, a later commit can add dynamically
allocated data in struct http_post and ensure that it will be freed.
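A sketch of the core of the change, with the layout simplified to what
the description above needs:

#include <unistd.h>

/* Simplified; the real struct http_post lives in the HTTP code. */
struct http_post {
        int post_fd;    /* file with the request body being uploaded, or -1 */
        /* a later commit adds dynamically allocated data here */
};

static void
done_http_post(struct http_post *post)
{
        if (post->post_fd != -1) {
                close(post->post_fd);
                post->post_fd = -1;
        }
}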
If ELinks is being linked with an SSL library, use its random number
generator.
Otherwise, try /dev/urandom and /dev/prandom. If they do not work,
fall back to rand(), calling srand() only once. This fallback is
mostly interesting for the Hurd and Microsoft Windows.
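A sketch of the fallback path only, with illustrative helper names; the
SSL-library branch and proper short-read handling are omitted:

#include <fcntl.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* Illustrative helpers; not the names used in the tree. */
static void
seed_rand_once(void)
{
        static int seeded;

        if (!seeded) {
                srand((unsigned int) time(NULL));
                seeded = 1;
        }
}

static void
get_random_bytes(unsigned char *buf, size_t size)
{
        int fd = open("/dev/urandom", O_RDONLY);

        if (fd == -1)
                fd = open("/dev/prandom", O_RDONLY);

        if (fd != -1) {
                ssize_t got = read(fd, buf, size);

                close(fd);
                if (got == (ssize_t) size)
                        return;
        }

        /* Neither device is usable, e.g. on the Hurd or Windows. */
        seed_rand_once();
        while (size--)
                *buf++ = (unsigned char) (rand() & 0xff);
}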
BitTorrent piece selection and dom/test/html-mangle.c still use rand()
(but not srand()) directly. Those would not benefit from being
unpredictable, I think.
CGI scripts are distinguishable from normal files. I hope that this
fixes bug 991. This commit also reverts the previous revert.
(cherry picked from commit 7ceba1e461)
This reverts commit 7ceba1e461,
which is causing an assertion to fail if I open the same PDF
twice in a row, even if I cancel the dialog box when ELinks
first asks which program to run:
INTERNAL ERROR at /home/Kalle/src/elinks-0.12/src/session/download.c:980: assertion download && download->conn failed!
Forcing core dump! Man the Lifeboats! Women and children first!
But please DO NOT report this as a segfault!!! It is an internal error, not a
normal segfault, there is a huge difference in these for us the developers.
Also, noting the EXACT error you got above is crucial for hunting the problem
down. Thanks, and please get in touch with us.
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1216698688 (LWP 17877)]
0xb7a02d76 in raise () from /lib/libc.so.6
(gdb) backtrace 6
at /home/Kalle/src/elinks-0.12/src/util/error.c:179
fmt=0x816984c "assertion download && download->conn failed!")
at /home/Kalle/src/elinks-0.12/src/util/error.c:122
cached=0x8253ca8) at /home/Kalle/src/elinks-0.12/src/session/download.c:980
cached=0x8253ca8, frame=0)
at /home/Kalle/src/elinks-0.12/src/session/download.c:1339
at /home/Kalle/src/elinks-0.12/src/session/task.c:493
(More stack frames follow...)
There is a fix available but I don't trust it yet.
There are warnings about casts in the Debian amd64 build logs:
http://buildd.debian.org/fetch.cgi?&pkg=elinks&ver=0.11.3-2&arch=amd64&stamp=1200348983&file=log
[CC] src/intl/gettext/dcigettext.o
/build/buildd/elinks-0.11.3/src/intl/gettext/dcigettext.c: In function '_nl_find_msg':
/build/buildd/elinks-0.11.3/src/intl/gettext/dcigettext.c:745: warning: cast from pointer to integer of different size
/build/buildd/elinks-0.11.3/src/intl/gettext/dcigettext.c:746: warning: cast from pointer to integer of different size
...
[CC] src/network/ssl/socket.o
/build/buildd/elinks-0.11.3/src/network/ssl/socket.c: In function 'ssl_connect':
/build/buildd/elinks-0.11.3/src/network/ssl/socket.c:219: warning: cast to pointer from integer of different size
The warnings in _nl_find_msg were caused by alignof, which I already
fixed. This commit ought to fix the gnutls_transport_set_ptr call in
ssl_connect. This warning did not yet happen in bug 464384 because
the others broke the build before it got that far.
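A sketch of the usual fix for this kind of warning, assuming the fd is
a plain int: widen it through intptr_t before casting to the pointer
type GnuTLS expects.

#include <stdint.h>
#include <gnutls/gnutls.h>

/* Hypothetical wrapper; only the double cast matters here. */
static void
set_transport_fd(gnutls_session_t session, int fd)
{
        gnutls_transport_set_ptr(session,
                                 (gnutls_transport_ptr_t) (intptr_t) fd);
}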
It is unlikely because the standard members of struct sockaddr_in
(sin_family, sin_port, sin_addr) already require at least 8 bytes
and I don't know of any system that has size_t larger than that.
Besides, at least glibc pads the structure to 16 bytes.
When get_pasv6_socket was merged into get_pasv_socket on 2005-04-15,
the AF_INET6 of get_pasv6_socket was lost and the merged function
always returned AF_INET sockets. getsockname then filled only part of
the struct sockaddr_in6, and ELinks sent the server an EPRT command
from which half the bits of the IPv6 address were missing.
At least ftp.funet.fi then rejected the command, helpfully saying
what the address should have been.
This commit fixes active FTP over IPv6. Passive FTP was already fixed
in 0.11.3.GIT (887d650efe), on 2007-05-01.
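The essential point of the fix, as a sketch (error handling and the
PORT/EPRT details are left out): create the data socket in the family
of the control socket instead of hard-coding AF_INET.

#include <sys/types.h>
#include <sys/socket.h>

/* Hypothetical helper; get_pasv_socket() does more than this. */
static int
open_data_socket(int ctrl_fd, struct sockaddr_storage *addr)
{
        socklen_t len = sizeof(*addr);

        /* Learn the family (and local address) from the control socket. */
        if (getsockname(ctrl_fd, (struct sockaddr *) addr, &len) == -1)
                return -1;

        /* An AF_INET6 control connection gets an AF_INET6 data socket, so
         * a later getsockname() fills the whole struct sockaddr_in6. */
        return socket(addr->ss_family, SOCK_STREAM, 0);
}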
Revert part of commit 7215c964e40afe953787d7831b04182fbaba4662,
"Use real types (enum connection_{state,priority})." of 2005-06-14.
connection.pri[] is indexed by enum connection_priority, but its
elements are merely reference counts; they are never assigned from
or compared to enum connection_priority. Defining the elements
as int will result in more readable output from GDB.
Noted in bug 920.
Revert commit 2d6840b9bd9d3a7a45a5ad92b4e98ff7224d6d97. It is causing
passive FTP via IPv6 to fail on ftp.funet.fi. ELinks sends PASV and
the server says "425 You cannot use PASV on IPv6 connections. Use EPSV
instead."
... mainly bittorrent:// and bittorrent://x
The BitTorrent URL is supposed to contain an embedded URL pointing to a
metainfo file. If it does not, a "custom" error message is shown.
This also fixes a call to free_list() on an uninitialized list.
Closes bug 729.
- Include arpa/inet.h to get hton* ntoh* functions.
- Use socklen_t instead of int.
- Try to define PF_INET to AF_INET if it doesn't exist.
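The PF_INET fallback is the usual one-liner for headers that lack it:

#include <sys/socket.h>

#ifndef PF_INET
#define PF_INET AF_INET
#endif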
Reported-by: Andy Tanenbaum <ast@cs.vu.nl>
In revision 1.15 of dns.c (as it was called way back then), pasky
backported a fix from Links 0.97pre2 to try gethostbyaddr before
trying gethostbyname for DNS lookups:
MacOS address resolution fix (Aldy Hernandez) (from 0.97pre2)
However, that fix introduced a bug, because it was calling gethostbyaddr
on every host name, not just on IP addresses. Mikulas fixed that bug in Links
0.98:
'Do not call gethostbyaddr when name is not ip address (it should avoid
some useless nameserver queries)'
This fix was never backported to ELinks. Until today.
This commit is functionally the same as the fix in Links 0.98, plus it uses
inet_aton for great correctness!
This fixes a bug reported in #elinks by tnks, whereby lookups for
yubnub.org resulted in 121.117.98.110 == 0x7975626E == 'y', 'u', 'b', 'n'.
I believe that it also fixes bug 691 (which is already closed with a
workaround).
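A sketch of the corrected lookup order; the real code in dns.c also
deals with getaddrinfo(), IPv6 and error reporting:

#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Hypothetical helper showing the order of the calls only. */
static struct hostent *
lookup(const char *name)
{
        struct in_addr addr;

        /* Only a dotted-quad string goes through gethostbyaddr(); a name
         * like "yubnub.org" must never be reinterpreted as the raw bytes
         * 'y', 'u', 'b', 'n'. */
        if (inet_aton(name, &addr)) {
                struct hostent *he = gethostbyaddr((const char *) &addr,
                                                   sizeof(addr), AF_INET);

                if (he)
                        return he;
        }

        return gethostbyname(name);
}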
using magic ;)
Now, in resume mode, the connection is always interrupted and resumed.
Even when the whole file has already been downloaded from the
beginning, the connection will be resumed from the old end of the
file. Feel free to fix it.
This is another follow-up fix for a regression that made the open and
save actions in the WTD dialog not work correctly when the connection
ended before they were pressed.
Related: 347970988d
This makes move_download not assume that there is a connection attached
when it is called. That is pretty often the case for file:// downloads
when dialogs are involved (query file), and it is the reason why
move_download explicitly checks whether the connection state is 'in
result state'. Also, fill the new download struct with variables from
the old one instead of taking them from the connection struct.
This patch also adds some assertions and a few comments.
This includes setting the new priority and adding the download to the list
of connection downloads. If the connection has no downloads, set the
PRI_CANCEL priority; get_priority() requires that.
This simplifies unqueuing of downloads and makes it more obvious that
the 'change' being performed is to migrate or replace an old download
handle with a new one.
Use enum connection_state instead of int in load_uri,
proxy_uri, get_proxy_worker, and get_proxy_uri. See commit
d18809522e. I hope that satisfies TCC.
This changes the init target to be idempotent: most importantly it will now
never overwrite a Makefile if it exists. Additionally 'make init' will
generate the .vimrc files. Yay, no more stupid 'added fairies' commits! ;)
Mostly non-ANSI function declarations, use of 0 as NULL, and inline
function prototypes. Also removed the unused S_HTTP_100 network state
enum value, whose text message contained the unknown escape sequence
'\?'.
Convert remaining conditional file building to use
OBJS-$(CONFIG_FOO) += foo.o
One problem with reversed meaning (in util/) is fixed with a local
'hack'. Clean up and remove stuff that is now handled by default
targets.
It is a little ugly since I couldn't get $(wildcard) to expand *.o
files, so it just checks whether there are any *.c files and then
links in the lib.o based on that.
Ditch the building of an archive (.a) in favour of linking all objects in a
directory into a lib.o file. This makes it easy to link in subdirectories
and more importantly keeps the build logic in the local subdirectories.
Note: after updating you will have to rm **/*.a if you do not make clean
before updating.