I strongly suspect this is not needed, but it's too tempting to just
bump three ports and forget about it.
No objection from tb@ (py-tlslite-ng maintainer)
LASzip quickly turns bulky LAS files into compact LAZ files without
information loss and is the de facto standard for LiDAR compression.
LASzip-compressed files can be many times smaller, and compression many
times faster, than with generic compressors like bzip2, gzip, and rar,
because LASzip knows what the different bytes in a LAS file represent.
with tweaks from & ok sthen@ jca@
CVE-2018-1000035 (heap overflow in processing password-protected archives)
CVE-2019-13232 (mishandles the overlapping of files inside a ZIP container)
From Moritz Buhl
revert to using MASTER_SITES again
successfully extracts a few test archives from GOG.com
While here, add comments giving the reason for COMPILER=base-clang
ports-gcc (C++17) and for the WANTLIBs boost_system-mt and pthread,
which portcheck lists as "Extra"; a sketch of these follows below.
Changelog: https://constexpr.org/innoextract/changelog
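For illustration, the kind of comments added (a sketch; the exact
wording in the port will differ):

    # C++17
    COMPILER =	base-clang ports-gcc

    # flagged as "Extra" by portcheck, but needed
    WANTLIB += boost_system-mt pthread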
This is used for -T0, which auto-detects the number of cores for parallel
compression. xz prefers sysctl over sysconf (see m4/tuklib_cpucores.m4
and src/common/tuklib_cpucores.c for the reasons), but this doesn't work
for us; just set an autoconf cache variable to force sysconf, which works
better for us.
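For illustration, a minimal Makefile fragment along these lines; the
cache variable name is taken from m4/tuklib_cpucores.m4, so treat the
exact spelling as an assumption:

    CONFIGURE_ENV +=	tuklib_cv_cpucores_method=sysconf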
Remove the qt5 FLAVOR; qt5 is the only supported/used version these days.
Disable the tests; they've not been updated for qt5 and aren't accessible
from the standard cmake build system.
Update all dependent ports to cope with the FLAVOR removal.
Requested by rsadowski@; improvements by sthen@
Python 3.3 onwards includes the lzma module in the standard library,
providing support for working with LZMA- and XZ-compressed files via the
XZ Utils C library (XZ Utils is in a sense LZMA v2).
ok sthen@
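As a quick illustration of the stock module API (the file path is
purely illustrative):

    import lzma

    # round-trip a byte string in memory
    data = b"hello world" * 100
    compressed = lzma.compress(data)
    assert lzma.decompress(compressed) == data

    # read an .xz file transparently as text
    with lzma.open("/tmp/example.xz", "rt") as f:
        for line in f:
            print(line, end="")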
Blosc is a high-performance compressor optimized for binary data. It has
been designed to transmit data to the processor cache faster than the
traditional, non-compressed, direct memory fetch approach via a memcpy()
call. Blosc is meant not only to reduce the size of large datasets
on-disk or in-memory, but also to accelerate memory-bound computations.

It uses a blocking technique to reduce activity on the memory bus as
much as possible. In short, this technique works by dividing datasets
into blocks that are small enough to fit in the caches of modern
processors, and performing compression/decompression there. It also
leverages the SIMD instructions (SSE2, AVX2) and multi-threading
capabilities of CPUs, where available, to accelerate
compression/decompression as much as possible.
From martin@; input and OK from bcallah@, feinerer@, and sthen@
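A minimal sketch of the library in use, via the Python bindings
(python-blosc; assumed here purely for illustration, as the binding is
a separate package from this C library port):

    import blosc

    # 100,000 8-byte items; shuffle exploits the typesize to group equal bytes
    data = b"\x01\x02\x03\x04\x05\x06\x07\x08" * 100000
    compressed = blosc.compress(data, typesize=8)
    assert blosc.decompress(compressed) == data
    print(len(data), "->", len(compressed))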
* Fix CVE-2015-1197
* Fix CVE-2016-2037
* Fix CVE-2019-14866
* Remove --extract-over-symlinks option again, which was part of an earlier
non-upstream fix for CVE-2015-1197.
Version 3.1
DESCR:
rarfile is a Python module for reading RAR archives. The interface
is made as zipfile-like as possible.

It supports both RAR3 and RAR5 format archives, multi-volume archives,
Unicode filenames, password-protected archives, and archive and file
comments. Archive parsing and non-compressed files are handled with
pure Python code; compressed files are extracted by executing either
unrar from RARLAB or bsdtar from libarchive. Works with both Python
2.7 and 3.x.
ok landry
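To show the zipfile-like interface (archive path and output directory
are illustrative):

    import rarfile

    rf = rarfile.RarFile("/tmp/example.rar")
    for info in rf.infolist():
        print(info.filename, info.file_size)
    rf.extractall(path="/tmp/out")
    rf.close()

As the description notes, extracting compressed members shells out to
unrar or bsdtar behind the scenes.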
ghc and the hs-packages now simply include the necessary (Haskell)
package description files in lib/ghc/package.conf.d and update the
package.cache by running ghc-pkg recache at the end. Register and
unregister scripts are no longer needed.
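To illustrate the mechanism, a hypothetical packing-list fragment
(@exec, @unexec-delete, and %D are standard PLIST markup; the .conf
file name and exact lines are made up for this sketch):

    lib/ghc/package.conf.d/hs-example-1.0.conf
    @exec %D/bin/ghc-pkg recache
    @unexec-delete %D/bin/ghc-pkg recache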