In particular:
- mkdist takes options and need not be run at elinks.cz
- git push just the single tag, rather than all --tags (see the example after this list)
- explicitly avoid multipart/signed
- add the release date from the elinks-users archive to NEWS
- update download.txt, release.html, and bugzilla/milestones/elinks.html
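For example, pushing only the tag for a release could look like this (the tag name here is just an illustration):
# Push a single release tag instead of every local tag.
git push origin elinks-0.12pre1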
md5sum -c exits with code 1 if some of the files listed in the md5
file are missing, so each md5 file should list only those files that
the user is supposed to download together. This is also how
elinks-web/download.html has been set up.
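A rough sketch of how such per-group md5 files could be generated (the file names and grouping are only an illustration, not necessarily what the release actually uses):
# One md5 file per set of files that are downloaded together, so that
# "md5sum -c" does not fail on files the user never fetched.
md5sum elinks-0.12pre1.tar.gz elinks-0.12pre1.tar.gz.asc > elinks-0.12pre1.tar.gz.md5
md5sum elinks-0.12pre1.tar.bz2 elinks-0.12pre1.tar.bz2.asc > elinks-0.12pre1.tar.bz2.md5
# Each group can then be verified on its own:
md5sum -c elinks-0.12pre1.tar.gz.md5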
.git/HEAD in elinks-0.12pre1.tar.gz broke git-import-orig in Debian's
git-buildpackage 0.4.33:
$ git init
Initialized empty Git repository in .git/
$ git-import-orig ~/src/elinks-seek/elinks-0.12pre1.tar.gz
Upstream version is 0.12pre1
Initial import of '/home/Kalle/src/elinks-seek/elinks-0.12pre1.tar.gz' ...
fatal: bad object HEAD
Traceback (most recent call last):
File "/usr/bin/git-import-orig", line 243, in <module>
sys.exit(main(sys.argv))
File "/usr/bin/git-import-orig", line 201, in main
import_upstream_tree(repo, orig_dir, version, options.filters, verbose=not is_empty)
File "/usr/bin/git-import-orig", line 65, in import_upstream_tree
if replace_source_tree(repo, src_dir, filters, verbose=True):
File "/var/lib/python-support/python2.5/gbp/git_utils.py", line 145, in replace_source_tree
return not repo.is_clean()[0]
File "/var/lib/python-support/python2.5/gbp/git_utils.py", line 78, in is_clean
if out[0].startswith('#') and out[1].strip().startswith(clean_msg):
IndexError: list index out of range
So let's try with a "git-commit-id" file outside of .git/ instead.
I also considered ".git-commit-id" but that could give the impression
that Git itself reads the file for some purpose.
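A minimal sketch of how the file could be produced when the tarball is made (the exact place in mkdist is not shown here):
# Record the commit the tarball was built from, outside of .git/.
git rev-parse HEAD > git-commit-id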
Git does not appear to have anything like cg clone -s, which makes the
clone in the current directory rather than in a new directory. It
seems possible to work around that with:
git clone --bare http://elinks.cz/elinks.git .git
git --git-dir=.git config core.bare false
git reset --hard
git clean -f
git checkout elinks-0.12
but that's already so complex that I think it'll be easier to just
remove the whole directory and clone to a new one.
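That is, roughly (the directory name is only an example; the branch name is from the commands above):
rm -rf elinks-0.12
git clone http://elinks.cz/elinks.git elinks-0.12
cd elinks-0.12
git checkout -b elinks-0.12 origin/elinks-0.12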
The whole concept of first downloading a snapshot and then updating
that from version control comes from the time when ELinks was in CVS.
It made sense then because CVS could download deltas based on the data
in the CVS subdirectories contained in the snapshots. However, Git
wants to get the whole history and does not benefit from having the
files of a single commit available in advance.
The documentation has version numbers in a few places and it's easier
to get those right this way than by building it elsewhere before
running mkdist. This change slows down mkdist but ccache can mitigate
some of that and snapshots use prebuilt documentation anyway.
The RISC OS Unix Porting Project has apparently moved its webpage
to <http://www.riscos.info/packages/SectionIndex.html#Browser>,
but ELinks is no longer listed there.
ftp.fu-berlin.de still has multiple copies of Links, but they seem to
be part of operating systems like NetBSD or Ubuntu; nothing there
looks like a mirror of the Links download site.
Final-Recipient: rfc822; listar@linuxfromscratch.org
Action: failed
Status: 5.0.0
Diagnostic-Code: X-Postfix; host smtp.linuxfromscratch.org[216.171.237.234]
said: 550 5.1.1 <listar@linuxfromscratch.org>: Recipient address rejected:
User unknown in local recipient table (in reply to RCPT TO command)
Do not retain changed values in form fields when the user reloads. Doing
so can be confusing or even cause data loss when new default values are
specified in the updated document. For example, when editing an article on
Wikipedia, one loads the edit page for the article, makes and submits
changes, goes back to the edit page to make further modifications, and
reloads to get the new article text. Before this change, reloading the
edit page would not update the textarea on the page with the new article
source, which can lead one (and has led me) to make changes to the original
version of the article by accident.
This fixes bug 620.
(cherry picked from commit 9e1e94bee0)
I am reverting all /dev/fd recognition because of bug 917.
This reverts commit c283f8cfd9,
except src/protocol/file/file.c still needs #include "osdep/osdep.h"
for STRING_DIR_SEP.
I am reverting all copiousoutput support because of bug 917.
This reverts commit f6115e65ec.
Conflicts:
src/session/download.h: type_query.cgi, and Doxygen comments.
I am reverting all copiousoutput support because of bug 917.
This reverts commit a2c12d7653.
Conflicts:
src/session/download.c: The int_min vs. int_max change had
already been obsoleted by using safe_strncpy instead,
in commit efcd6c9758 for bug 896 on 2007-07-24.
Also, TERM_EXEC_FG and TERM_EXEC_BG had been added.
I am reverting all copiousoutput support because of bug 917.
This reverts commit 6ead4e9c65.
Conflicts:
src/session/download.c: TERM_EXEC_FG and TERM_EXEC_BG had been
added after the original commit.
I am reverting all copiousoutput support because of bug 917.
This reverts commit 4dc4ea47f2.
Conflicts:
src/network/connection.h: After the original commit, the declaration
of copiousoutput_data had been changed to use the LIST_OF macro.
Also, connection.cgi had been added next to the connection.popen
member added by the original commit.
src/session/download.c: After the original commit, the definition of
copiousoutput_data had been changed to use the INIT_LIST_OF macro.
If the user opens the same file again after it is in the cache, then
ELinks does not always open a new connection, so download->conn can be
NULL in init_type_query(), and download->conn->cgi would crash.
Don't read that, then; instead add a new flag cache_entry.cgi, which
http_got_header() sets or clears as soon as possible after the cache
entry has been created.
(cherry picked from commit 81f8ee1fa2)
CGI scripts are distinguishable from normal files. I hope that this
fixes bug 991. This commit also reverts the previous revert.
(cherry picked from commit 7ceba1e461)
The comment said "it is not possible to call kill_timer from a timer
handler." Sure, such calls used to crash occasionally, but that was
bug 868 and has already been fixed.
The second argument of PERL_SYS_INIT3 should be a char ***
but ELinks was giving it a char *(*)[1].
Also, enlarge the array to 2 elements, so that my_argv[my_argc] == NULL
like in main(). PERL_SYS_INIT3 seems hardly documented at all so I'm
not sure this is necessary, but it shouldn't hurt.
(cherry picked from commit 8d0677e76a)
Without this patch, ELinks showed garbage at
<http://www.dwheeler.com/oss_fs_why.html> when bzip2 decompression was
enabled. safe_read() in bzip2_read() did not see all of the body
bytes that ELinks had received from the server. After bzip2_read()
received EAGAIN from safe_read() and returned 0, something skipped
1460 bytes.
decompress_data() apparently assumed that read_encoded() returning 0
meant the end of the file, and returned even though len still was
nonzero, i.e. it had not yet written to the pipe all the data that
the caller (read_chunked_http_data() or read_normal_http_data()) had
provided. The caller did not know this, and discarded the data.
(cherry picked from commit 7e5e05ca60)
The older help2doc script is no longer used for anything.
To make future cherry-picking easier, this commit does not include the
resulting changes in generated files.
The intention is to convert --config-help and --long-help outputs to
DocBook XML and XHTML rather than AsciiDoc, so that the converter does
not have to work around the intricate AsciiDoc syntax. However, this
commit does not yet connect the script to doc/Makefile.
XHTML could be generated from DocBook XML, but the script outputs it
directly because our DocBook is primarily intended for manual pages
and so does not have all the links that are useful in HTML.