yt-dlp/youtube_dl/extractor/xhamster.py
Aakash Gajjar b827ee921f
pull changes from remote master (#190)
* [scrippsnetworks] Add new extractor(closes #19857)(closes #22981)

* [teachable] Improve locked lessons detection (#23528)

* [teachable] Fail with error message if no video URL found

* [extractors] add missing import for ScrippsNetworksIE

* [brightcove] cache brightcove player policy keys

* [prosiebensat1] improve geo restriction handling(closes #23571)

* [soundcloud] automatically update client id on failing requests

* [spankbang] Fix extraction (closes #23307, closes #23423, closes #23444)

* [spankbang] Improve removed video detection (#23423)

* [brightcove] update policy key on failing requests

* [pornhub] Fix extraction and add support for m3u8 formats (closes #22749, closes #23082)

* [pornhub] Improve locked videos detection (closes #22449, closes #22780)

* [brightcove] invalidate policy key cache on failing requests

* [soundcloud] fix client id extraction for non fatal requests

* [ChangeLog] Actualize
[ci skip]

* [devscripts/create-github-release] Switch to using PAT for authentication

Basic authentication will be deprecated soon

* release 2020.01.01

* [redtube] Detect private videos (#23518)

* [vice] improve extraction(closes #23631)

* [devscripts/create-github-release] Remove unused import

* [wistia] improve format extraction and extract subtitles(closes #22590)

* [nrktv:seriebase] Fix extraction (closes #23625) (#23537)

* [discovery] fix anonymous token extraction(closes #23650)

* [scrippsnetworks] add support for www.discovery.com videos

* [scrippsnetworks] correct test case URL

* [dctp] fix format extraction(closes #23656)

* [pandatv] Remove extractor (#23630)

* [naver] improve extraction

- improve geo-restriction handling
- extract automatic captions
- extract uploader metadata
- extract VLive HLS formats

* [naver] improve metadata extraction

* [cloudflarestream] improve extraction

- add support for bytehighway.net domain
- add support for signed URLs
- extract thumbnail

* [cloudflarestream] improve embed URL extraction

* [lego] fix extraction and extract subtitle(closes #23687)

* [safari] Fix kaltura session extraction (closes #23679) (#23670)

* [orf:fm4] Fix extraction (#23599)

* [orf:radio] Clean description and improve extraction

* [twitter] add support for promo_video_website cards(closes #23711)

* [vodplatform] add support for embed.kwikmotion.com domain

* [ndr:base:embed] Improve thumbnails extraction (closes #23731)

* [canvas] Add support for new API endpoint and update tests (closes #17680, closes #18629)

* [travis] Add flake8 job (#23720)

* [yourporn] Fix extraction (closes #21645, closes #22255, closes #23459)

* [ChangeLog] Actualize
[ci skip]

* release 2020.01.15

* [soundcloud] Restore previews extraction (closes #23739)

* [orf:tvthek] Improve geo restricted videos detection (closes #23741)

* [zype] improve extraction

- extract subtitles(closes #21258)
- support URLs with alternative keys/tokens(#21258)
- extract more metadata

* [americastestkitchen] fix extraction

* [nbc] add support for nbc multi network URLs(closes #23049)

* [ard] improve extraction(closes #23761)

- simplify extraction
- extract age limit and series
- bypass geo-restriction

* [ivi:compilation] Fix entries extraction (closes #23770)

* [24video] Add support for 24video.vip (closes #23753)

* [businessinsider] Fix jwplatform id extraction (closes #22929) (#22954)

* [ard] add a missing condition

* [azmedien] fix extraction(closes #23783)

* [voicerepublic] fix extraction

* [stretchinternet] fix extraction(closes #4319)

* [youtube] Fix sigfunc name extraction (closes #23819)

* [ChangeLog] Actualize
[ci skip]

* release 2020.01.24

* [soundcloud] improve private playlist/set tracks extraction

https://github.com/ytdl-org/youtube-dl/issues/3707#issuecomment-577873539

* [svt] fix article extraction(closes #22897)(closes #22919)

* [svt] fix series extraction(closes #22297)

* [viewlift] improve extraction

- fix extraction(closes #23851)
- add support for authentication
- add support for more domains

* [vimeo] fix album extraction(closes #23864)

* [tva] Relax _VALID_URL (closes #23903)

* [tv5mondeplus] Fix extraction (closes #23907, closes #23911)

* [twitch:stream] Lowercase channel id for stream request (closes #23917)

* [sportdeutschland] Update to new sportdeutschland API

They switched to SSL, but under a different host AND path...
Remove the old test cases because these videos have become unavailable.

* [popcorntimes] Add extractor (closes #23949)

* [thisoldhouse] fix extraction(closes #23951)

* [toggle] Add support for mewatch.sg (closes #23895) (#23930)

* [compat] Introduce compat_realpath (refs #23991)

* [update] Fix updating via symlinks (closes #23991)

* [nytimes] improve format sorting(closes #24010)

* [abc:iview] Support 720p (#22907) (#22921)

* [nova:embed] Fix extraction (closes #23672)

* [nova:embed] Improve (closes #23690)

* [nova] Improve extraction (refs #23690)

* [jpopsuki] Remove extractor (closes #23858)

* [YoutubeDL] Fix playlist entry indexing with --playlist-items (closes #10591, closes #10622)

* [test_YoutubeDL] Fix get_ids

* [test_YoutubeDL] Add tests for #10591 (closes #23873)

* [24video] Add support for porn.24video.net (closes #23779, closes #23784)

* [npr] Add support for streams (closes #24042)

* [ChangeLog] Actualize
[ci skip]

* release 2020.02.16

* [tv2dk:bornholm:play] Fix extraction (#24076)

* [imdb] Fix extraction (closes #23443)

* [wistia] Add support for multiple generic embeds (closes #8347, closes #11385)

* [teachable] Add support for multiple videos per lecture (closes #24101)

* [pornhd] Fix extraction (closes #24128)

* [options] Remove duplicate short option -v for --version (#24162)

* [extractor/common] Convert ISM manifest to unicode before processing on python 2 (#24152)

* [YoutubeDL] Force redirect URL to unicode on python 2

* Remove no longer needed compat_str around geturl

* [youjizz] Fix extraction (closes #24181)

* [test_subtitles] Remove obsolete test

* [zdf:channel] Fix tests

* [zapiks] Fix test

* [xtube] Fix metadata extraction (closes #21073, closes #22455)

* [xtube:user] Fix test

* [telecinco] Fix extraction (refs #24195)

* [telecinco] Add support for article opening videos

* [franceculture] Fix extraction (closes #24204)

* [xhamster] Fix extraction (closes #24205)

* [ChangeLog] Actualize
[ci skip]

* release 2020.03.01

* [vimeo] Fix subtitles URLs (#24209)

* [servus] Add support for new URL schema (closes #23475, closes #23583, closes #24142)

* [youtube:playlist] Fix tests (closes #23872) (#23885)

* [peertube] Improve extraction

* [peertube] Fix issues and improve extraction (closes #23657)

* [pornhub] Improve title extraction (closes #24184)

* [vimeo] fix showcase password protected video extraction(closes #24224)

* [youtube] Fix age-gated videos support without login (closes #24248)

* [youtube] Fix tests

* [ChangeLog] Actualize
[ci skip]

* release 2020.03.06

* [nhk] update API version(closes #24270)

* [youtube] Improve extraction in 429 error conditions (closes #24283)

* [youtube] Improve age-gated videos extraction in 429 error conditions (refs #24283)

* [youtube] Remove outdated code

Additional get_video_info requests don't seem to provide any extra itags any longer

* [README.md] Clarify 429 error

* [pornhub] Add support for pornhubpremium.com (#24288)

* [utils] Add support for cookies with spaces used instead of tabs

* [ChangeLog] Actualize
[ci skip]

* release 2020.03.08

* Revert "[utils] Add support for cookies with spaces used instead of tabs"

According to [1] TABs must be used as separators between fields.
Files produced by some tools with spaces as separators are considered
malformed.

1. https://curl.haxx.se/docs/http-cookies.html

This reverts commit cff99c91d1.

* [utils] Add reference to cookie file format
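
The cookie file format referenced above can be sketched as follows. This is a minimal illustrative parser (a hypothetical helper, not youtube-dl's actual implementation): each entry is seven TAB-separated fields, so lines that use spaces as separators, have the wrong field count, or carry a non-numeric expiry are treated as malformed and skipped.

```python
def parse_cookie_line(line):
    # Sketch of one Netscape/curl cookie-file entry: seven fields
    # that MUST be separated by TABs (see curl's http-cookies doc).
    line = line.rstrip('\n')
    if not line or line.startswith('#'):
        return None  # blank line or comment (ignores the #HttpOnly_ special case)
    fields = line.split('\t')
    if len(fields) != 7:
        return None  # malformed, e.g. spaces used instead of tabs
    domain, subdomains, path, secure, expires, name, value = fields
    if not expires.isdigit():
        return None  # malformed "expires at" field
    return {
        'domain': domain,
        'include_subdomains': subdomains == 'TRUE',
        'path': path,
        'secure': secure == 'TRUE',
        'expires': int(expires),
        'name': name,
        'value': value,
    }
```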

* Revert "[vimeo] fix showcase password protected video extraction(closes #24224)"

This reverts commit 12ee431676.

* [nhk] Relax _VALID_URL (#24329)

* [nhk] Remove obsolete rtmp formats (closes #24329)

* [nhk] Update m3u8 URL and use native hls (#24329)

* [ndr] Fix extraction (closes #24326)

* [xtube] Fix formats extraction (closes #24348)

* [xtube] Fix typo

* [hellporno] Fix extraction (closes #24399)

* [cbc:watch] Add support for authentication

* [cbc:watch] Fix authenticated device token caching (closes #19160)

* [soundcloud] fix download url extraction(closes #24394)

* [limelight] remove disabled API requests(closes #24255)

* [bilibili] Add support for new URL schema with BV ids (closes #24439, closes #24442)

* [bilibili] Add support for player.bilibili.com (closes #24402)

* [teachable] Extract chapter metadata (closes #24421)

* [generic] Look for teachable embeds before wistia

* [teachable] Update upskillcourses domain

New version does not use teachable platform any longer

* [teachable] Update gns3 domain

* [teachable] Update test

* [ChangeLog] Actualize
[ci skip]

* [ChangeLog] Actualize
[ci skip]

* release 2020.03.24

* [spankwire] Fix extraction (closes #18924, closes #20648)

* [spankwire] Add support for generic embeds (refs #24633)

* [youporn] Add support for generic embeds

* [mofosex] Add support for generic embeds (closes #24633)

* [tele5] Fix extraction (closes #24553)

* [extractor/common] Skip malformed ISM manifest XMLs while extracting ISM formats (#24667)

* [tv4] Fix ISM formats extraction (closes #24667)

* [twitch:clips] Extend _VALID_URL (closes #24290) (#24642)

* [motherless] Fix extraction (closes #24699)

* [nova:embed] Fix extraction (closes #24700)

* [youtube] Skip broken multifeed videos (closes #24711)

* [soundcloud] Extract AAC format

* [soundcloud] Improve AAC format extraction (closes #19173, closes #24708)

* [thisoldhouse] Fix video id extraction (closes #24548)

Added support for URLs with or without "www." and with either
".chorus.build" or ".com".

It now validates correctly on older URLs
```
<iframe src="https://thisoldhouse.chorus.build/videos/zype/5e33baec27d2e50001d5f52f
```
and newer ones
```
<iframe src="https://www.thisoldhouse.com/videos/zype/5e2b70e95216cc0001615120
```
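
Both embed hosts shown above can be covered by one pattern. A rough sketch of such a regex (an illustration only, not the extractor's actual `_VALID_URL`):

```python
import re

# Illustrative pattern matching thisoldhouse Zype embeds with or
# without "www." and on either the ".chorus.build" or ".com" host.
ZYPE_EMBED_RE = re.compile(
    r'https?://(?:www\.)?thisoldhouse\.(?:chorus\.build|com)'
    r'/videos/zype/(?P<id>[0-9a-f]{24})')


def extract_zype_id(url):
    m = ZYPE_EMBED_RE.match(url)
    return m.group('id') if m else None
```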

* [thisoldhouse] Improve video id extraction (closes #24549)

* [youtube] Fix DRM videos detection (refs #24736)

* [options] Clarify doc on --exec command (closes #19087) (#24883)

* [prosiebensat1] Improve extraction and remove 7tv.de support (#24948)

* [prosiebensat1] Extract series metadata

* [tenplay] Relax _VALID_URL (closes #25001)

* [tvplay] fix Viafree extraction(closes #15189)(closes #24473)(closes #24789)

* [yahoo] fix GYAO Player extraction and relax title URL regex(closes #24178)(closes #24778)

* [youtube] Use redirected video id if any (closes #25063)

* [youtube] Improve player id extraction and add tests

* [extractor/common] Extract multiple JSON-LD entries

* [crunchyroll] Fix and improve extraction (closes #25096, closes #25060)

* [ChangeLog] Actualize
[ci skip]

* release 2020.05.03

* [puhutv] Remove no longer available HTTP formats (closes #25124)

* [utils] Improve cookie files support

+ Add support for UTF-8 in cookie files
* Skip malformed cookie file entries instead of crashing (invalid entry len, invalid expires at)

* [dailymotion] Fix typo

* [compat] Introduce compat_cookiejar_Cookie

* [extractor/common] Use compat_cookiejar_Cookie for _set_cookie (closes #23256, closes #24776)

To always ensure cookie name and value are bytestrings on python 2.

* [orf] Add support for more radio stations (closes #24938) (#24968)

* [uol] fix extraction(closes #22007)

* [downloader/http] Finish downloading once received data length matches expected

Always do this if possible, i.e. if Content-Length or the expected length is known, not only in tests.
This saves an unnecessary final loop iteration trying to read 0 bytes.

* [downloader/http] Request last data block of exact remaining size

Always request the last data block with the exact size remaining to download if possible, not the current block size.
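
The sizing logic described in these two commits can be sketched as follows (hypothetical helper names, not the actual downloader code): request `min(block_size, remaining)` bytes, and finish as soon as the byte counter reaches the expected length.

```python
def next_request_size(block_size, expected_len, byte_counter):
    # Request the last data block with the exact remaining size,
    # rather than the full current block size.
    remaining = expected_len - byte_counter
    return min(block_size, remaining)


def download_finished(expected_len, byte_counter):
    # Once the received data length matches the expected length
    # (Content-Length), finish instead of looping to read 0 bytes.
    return expected_len is not None and byte_counter >= expected_len
```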

* [iprima] Improve extraction (closes #25138)

* [youtube] Improve signature cipher extraction (closes #25188)

* [ChangeLog] Actualize
[ci skip]

* release 2020.05.08

* [spike] fix Bellator mgid extraction(closes #25195)

* [bbccouk] PEP8

* [mailru] Fix extraction (closes #24530) (#25239)

* [README.md] flake8 HTTPS URL (#25230)

* [youtube] Add support for yewtu.be (#25226)

* [soundcloud] reduce API playlist page limit(closes #25274)

* [vimeo] improve format extraction and sorting(closes #25285)

* [redtube] Improve title extraction (#25208)

* [indavideo] Switch to HTTPS for API request (#25191)

* [utils] Fix file permissions in write_json_file (closes #12471) (#25122)

* [redtube] Improve formats extraction and extract m3u8 formats (closes #25311, closes #25321)

* [ard] Improve _VALID_URL (closes #25134) (#25198)

* [giantbomb] Extend _VALID_URL (#25222)

* [postprocessor/ffmpeg] Embed series metadata with --add-metadata

* [youtube] Add support for more invidious instances (#25417)

* [ard:beta] Extend _VALID_URL (closes #25405)

* [ChangeLog] Actualize
[ci skip]

* release 2020.05.29

* [jwplatform] Improve embeds extraction (closes #25467)

* [periscope] Fix untitled broadcasts (#25482)

* [twitter:broadcast] Add untitled periscope broadcast test

* [malltv] Add support for sk.mall.tv (#25445)

* [brightcove] Fix subtitles extraction (closes #25540)

* [brightcove] Sort imports

* [twitch] Pass v5 accept header and fix thumbnails extraction (closes #25531)

* [twitch:stream] Fix extraction (closes #25528)

* [twitch:stream] Expect 400 and 410 HTTP errors from API

* [tele5] Prefer jwplatform over nexx (closes #25533)

* [jwplatform] Add support for bypass geo restriction

* [tele5] Bypass geo restriction

* [ChangeLog] Actualize
[ci skip]

* release 2020.06.06

* [kaltura] Add support for multiple embeds on a webpage (closes #25523)

* [youtube] Extract chapters from JSON (closes #24819)

* [facebook] Support single-video ID links

I stumbled upon this at https://www.facebook.com/bwfbadminton/posts/10157127020046316 . No idea how prevalent it is yet.

* [youtube] Fix playlist and feed extraction (closes #25675)

* [youtube] Fix thumbnails extraction and remove uploader id extraction warning (closes #25676)

* [youtube] Fix upload date extraction

* [youtube] Improve view count extraction

* [youtube] Fix uploader id and uploader URL extraction

* [ChangeLog] Actualize
[ci skip]

* release 2020.06.16

* [youtube] Fix categories and improve tags extraction

* [youtube] Force old layout (closes #25682, closes #25683, closes #25680, closes #25686)

* [ChangeLog] Actualize
[ci skip]

* release 2020.06.16.1

* [brightcove] Improve embed detection (closes #25674)

* [bellmedia] add support for cp24.com clip URLs(closes #25764)

* [youtube:playlists] Extend _VALID_URL (closes #25810)

* [youtube] Prevent excess HTTP 301 (#25786)

* [wistia] Restrict embed regex (closes #25969)

* [youtube] Improve description extraction (closes #25937) (#25980)

* [youtube] Fix sigfunc name extraction (closes #26134, closes #26135, closes #26136, closes #26137)

* [ChangeLog] Actualize
[ci skip]

* release 2020.07.28

* [xhamster] Extend _VALID_URL (closes #25789) (#25804)

* [xhamster] Fix extraction (closes #26157) (#26254)

* [xhamster] Extend _VALID_URL (closes #25927)

Co-authored-by: Remita Amine <remitamine@gmail.com>
Co-authored-by: Sergey M․ <dstftw@gmail.com>
Co-authored-by: nmeum <soeren+github@soeren-tempel.net>
Co-authored-by: Roxedus <me@roxedus.dev>
Co-authored-by: Singwai Chan <c.singwai@gmail.com>
Co-authored-by: cdarlint <cdarlint@users.noreply.github.com>
Co-authored-by: Johannes N <31795504+jonolt@users.noreply.github.com>
Co-authored-by: jnozsc <jnozsc@gmail.com>
Co-authored-by: Moritz Patelscheck <moritz.patelscheck@campus.tu-berlin.de>
Co-authored-by: PB <3854688+uno20001@users.noreply.github.com>
Co-authored-by: Philipp Hagemeister <phihag@phihag.de>
Co-authored-by: Xaver Hellauer <software@hellauer.bayern>
Co-authored-by: d2au <d2au.dev@gmail.com>
Co-authored-by: Jan 'Yenda' Trmal <jtrmal@gmail.com>
Co-authored-by: jxu <7989982+jxu@users.noreply.github.com>
Co-authored-by: Martin Ström <name@my-domain.se>
Co-authored-by: The Hatsune Daishi <nao20010128@gmail.com>
Co-authored-by: tsia <github@tsia.de>
Co-authored-by: 3risian <59593325+3risian@users.noreply.github.com>
Co-authored-by: Tristan Waddington <tristan.waddington@gmail.com>
Co-authored-by: Devon Meunier <devon.meunier@gmail.com>
Co-authored-by: Felix Stupp <felix.stupp@outlook.com>
Co-authored-by: tom <tomster954@gmail.com>
Co-authored-by: AndrewMBL <62922222+AndrewMBL@users.noreply.github.com>
Co-authored-by: willbeaufoy <will@willbeaufoy.net>
Co-authored-by: Philipp Stehle <anderschwiedu@googlemail.com>
Co-authored-by: hh0rva1h <61889859+hh0rva1h@users.noreply.github.com>
Co-authored-by: comsomisha <shmelev1996@mail.ru>
Co-authored-by: TotalCaesar659 <14265316+TotalCaesar659@users.noreply.github.com>
Co-authored-by: Juan Francisco Cantero Hurtado <iam@juanfra.info>
Co-authored-by: Dave Loyall <dave@the-good-guys.net>
Co-authored-by: tlsssl <63866177+tlsssl@users.noreply.github.com>
Co-authored-by: Rob <ankenyr@gmail.com>
Co-authored-by: Michael Klein <github@a98shuttle.de>
Co-authored-by: JordanWeatherby <47519158+JordanWeatherby@users.noreply.github.com>
Co-authored-by: striker.sh <19488257+strikersh@users.noreply.github.com>
Co-authored-by: Matej Dujava <mdujava@gmail.com>
Co-authored-by: Glenn Slayden <5589855+glenn-slayden@users.noreply.github.com>
Co-authored-by: MRWITEK <mrvvitek@gmail.com>
Co-authored-by: JChris246 <43832407+JChris246@users.noreply.github.com>
Co-authored-by: TheRealDude2 <the.real.dude@gmx.de>
2020-08-25 20:23:34 +05:30


from __future__ import unicode_literals

import itertools
import re

from .common import InfoExtractor
from ..compat import compat_str
from ..utils import (
    clean_html,
    determine_ext,
    dict_get,
    extract_attributes,
    ExtractorError,
    int_or_none,
    parse_duration,
    try_get,
    unified_strdate,
    url_or_none,
)


class XHamsterIE(InfoExtractor):
    _DOMAINS = r'(?:xhamster\.(?:com|one|desi)|xhms\.pro|xhamster\d+\.com)'
    _VALID_URL = r'''(?x)
                    https?://
                        (?:.+?\.)?%s/
                        (?:
                            movies/(?P<id>[\dA-Za-z]+)/(?P<display_id>[^/]*)\.html|
                            videos/(?P<display_id_2>[^/]*)-(?P<id_2>[\dA-Za-z]+)
                        )
                    ''' % _DOMAINS
    _TESTS = [{
        'url': 'https://xhamster.com/videos/femaleagent-shy-beauty-takes-the-bait-1509445',
        'md5': '98b4687efb1ffd331c4197854dc09e8f',
        'info_dict': {
            'id': '1509445',
            'display_id': 'femaleagent-shy-beauty-takes-the-bait',
            'ext': 'mp4',
            'title': 'FemaleAgent Shy beauty takes the bait',
            'timestamp': 1350194821,
            'upload_date': '20121014',
            'uploader': 'Ruseful2011',
            'duration': 893,
            'age_limit': 18,
        },
    }, {
        'url': 'https://xhamster.com/videos/britney-spears-sexy-booty-2221348?hd=',
        'info_dict': {
            'id': '2221348',
            'display_id': 'britney-spears-sexy-booty',
            'ext': 'mp4',
            'title': 'Britney Spears Sexy Booty',
            'timestamp': 1379123460,
            'upload_date': '20130914',
            'uploader': 'jojo747400',
            'duration': 200,
            'age_limit': 18,
        },
        'params': {
            'skip_download': True,
        },
    }, {
        # empty seo, unavailable via new URL schema
        'url': 'http://xhamster.com/movies/5667973/.html',
        'info_dict': {
            'id': '5667973',
            'ext': 'mp4',
            'title': '....',
            'timestamp': 1454948101,
            'upload_date': '20160208',
            'uploader': 'parejafree',
            'duration': 72,
            'age_limit': 18,
        },
        'params': {
            'skip_download': True,
        },
    }, {
        # mobile site
        'url': 'https://m.xhamster.com/videos/cute-teen-jacqueline-solo-masturbation-8559111',
        'only_matching': True,
    }, {
        'url': 'https://xhamster.com/movies/2272726/amber_slayed_by_the_knight.html',
        'only_matching': True,
    }, {
        # This video is visible for marcoalfa123456's friends only
        'url': 'https://it.xhamster.com/movies/7263980/la_mia_vicina.html',
        'only_matching': True,
    }, {
        # new URL schema
        'url': 'https://pt.xhamster.com/videos/euro-pedal-pumping-7937821',
        'only_matching': True,
    }, {
        'url': 'https://xhamster.one/videos/femaleagent-shy-beauty-takes-the-bait-1509445',
        'only_matching': True,
    }, {
        'url': 'https://xhamster.desi/videos/femaleagent-shy-beauty-takes-the-bait-1509445',
        'only_matching': True,
    }, {
        'url': 'https://xhamster2.com/videos/femaleagent-shy-beauty-takes-the-bait-1509445',
        'only_matching': True,
    }, {
        'url': 'https://xhamster11.com/videos/femaleagent-shy-beauty-takes-the-bait-1509445',
        'only_matching': True,
    }, {
        'url': 'https://xhamster26.com/videos/femaleagent-shy-beauty-takes-the-bait-1509445',
        'only_matching': True,
    }, {
        'url': 'http://xhamster.com/movies/1509445/femaleagent_shy_beauty_takes_the_bait.html',
        'only_matching': True,
    }, {
        'url': 'http://xhamster.com/movies/2221348/britney_spears_sexy_booty.html?hd',
        'only_matching': True,
    }, {
        'url': 'http://de.xhamster.com/videos/skinny-girl-fucks-herself-hard-in-the-forest-xhnBJZx',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        video_id = mobj.group('id') or mobj.group('id_2')
        display_id = mobj.group('display_id') or mobj.group('display_id_2')

        desktop_url = re.sub(r'^(https?://(?:.+?\.)?)m\.', r'\1', url)
        webpage, urlh = self._download_webpage_handle(desktop_url, video_id)

        error = self._html_search_regex(
            r'<div[^>]+id=["\']videoClosed["\'][^>]*>(.+?)</div>',
            webpage, 'error', default=None)
        if error:
            raise ExtractorError(error, expected=True)

        age_limit = self._rta_search(webpage)

        def get_height(s):
            return int_or_none(self._search_regex(
                r'^(\d+)[pP]', s, 'height', default=None))

        initials = self._parse_json(
            self._search_regex(
                r'window\.initials\s*=\s*({.+?})\s*;', webpage, 'initials',
                default='{}'),
            video_id, fatal=False)
        if initials:
            video = initials['videoModel']
            title = video['title']
            formats = []
            for format_id, formats_dict in video['sources'].items():
                if not isinstance(formats_dict, dict):
                    continue
                for quality, format_item in formats_dict.items():
                    if format_id == 'download':
                        # Download link takes some time to be generated,
                        # skipping for now
                        continue
                        if not isinstance(format_item, dict):
                            continue
                        format_url = format_item.get('link')
                        filesize = int_or_none(
                            format_item.get('size'), invscale=1000000)
                    else:
                        format_url = format_item
                        filesize = None
                    format_url = url_or_none(format_url)
                    if not format_url:
                        continue
                    formats.append({
                        'format_id': '%s-%s' % (format_id, quality),
                        'url': format_url,
                        'ext': determine_ext(format_url, 'mp4'),
                        'height': get_height(quality),
                        'filesize': filesize,
                        'http_headers': {
                            'Referer': urlh.geturl(),
                        },
                    })
            self._sort_formats(formats)

            categories_list = video.get('categories')
            if isinstance(categories_list, list):
                categories = []
                for c in categories_list:
                    if not isinstance(c, dict):
                        continue
                    c_name = c.get('name')
                    if isinstance(c_name, compat_str):
                        categories.append(c_name)
            else:
                categories = None

            return {
                'id': video_id,
                'display_id': display_id,
                'title': title,
                'description': video.get('description'),
                'timestamp': int_or_none(video.get('created')),
                'uploader': try_get(
                    video, lambda x: x['author']['name'], compat_str),
                'thumbnail': video.get('thumbURL'),
                'duration': int_or_none(video.get('duration')),
                'view_count': int_or_none(video.get('views')),
                'like_count': int_or_none(try_get(
                    video, lambda x: x['rating']['likes'], int)),
                'dislike_count': int_or_none(try_get(
                    video, lambda x: x['rating']['dislikes'], int)),
                'comment_count': int_or_none(video.get('views')),
                'age_limit': age_limit,
                'categories': categories,
                'formats': formats,
            }

        # Old layout fallback

        title = self._html_search_regex(
            [r'<h1[^>]*>([^<]+)</h1>',
             r'<meta[^>]+itemprop=".*?caption.*?"[^>]+content="(.+?)"',
             r'<title[^>]*>(.+?)(?:,\s*[^,]*?\s*Porn\s*[^,]*?:\s*xHamster[^<]*| - xHamster\.com)</title>'],
            webpage, 'title')

        formats = []
        format_urls = set()

        sources = self._parse_json(
            self._search_regex(
                r'sources\s*:\s*({.+?})\s*,?\s*\n', webpage, 'sources',
                default='{}'),
            video_id, fatal=False)
        for format_id, format_url in sources.items():
            format_url = url_or_none(format_url)
            if not format_url:
                continue
            if format_url in format_urls:
                continue
            format_urls.add(format_url)
            formats.append({
                'format_id': format_id,
                'url': format_url,
                'height': get_height(format_id),
            })

        video_url = self._search_regex(
            [r'''file\s*:\s*(?P<q>["'])(?P<mp4>.+?)(?P=q)''',
             r'''<a\s+href=(?P<q>["'])(?P<mp4>.+?)(?P=q)\s+class=["']mp4Thumb''',
             r'''<video[^>]+file=(?P<q>["'])(?P<mp4>.+?)(?P=q)[^>]*>'''],
            webpage, 'video url', group='mp4', default=None)
        if video_url and video_url not in format_urls:
            formats.append({
                'url': video_url,
            })

        self._sort_formats(formats)

        # Only a few videos have a description
        mobj = re.search(r'<span>Description: </span>([^<]+)', webpage)
        description = mobj.group(1) if mobj else None

        upload_date = unified_strdate(self._search_regex(
            r'hint=["\'](\d{4}-\d{2}-\d{2}) \d{2}:\d{2}:\d{2} [A-Z]{3,4}',
            webpage, 'upload date', fatal=False))

        uploader = self._html_search_regex(
            r'<span[^>]+itemprop=["\']author[^>]+><a[^>]+><span[^>]+>([^<]+)',
            webpage, 'uploader', default='anonymous')

        thumbnail = self._search_regex(
            [r'''["']thumbUrl["']\s*:\s*(?P<q>["'])(?P<thumbnail>.+?)(?P=q)''',
             r'''<video[^>]+"poster"=(?P<q>["'])(?P<thumbnail>.+?)(?P=q)[^>]*>'''],
            webpage, 'thumbnail', fatal=False, group='thumbnail')

        duration = parse_duration(self._search_regex(
            [r'<[^<]+\bitemprop=["\']duration["\'][^<]+\bcontent=["\'](.+?)["\']',
             r'Runtime:\s*</span>\s*([\d:]+)'], webpage,
            'duration', fatal=False))

        view_count = int_or_none(self._search_regex(
            r'content=["\']User(?:View|Play)s:(\d+)',
            webpage, 'view count', fatal=False))

        mobj = re.search(r'hint=[\'"](?P<likecount>\d+) Likes / (?P<dislikecount>\d+) Dislikes', webpage)
        (like_count, dislike_count) = (mobj.group('likecount'), mobj.group('dislikecount')) if mobj else (None, None)

        mobj = re.search(r'</label>Comments \((?P<commentcount>\d+)\)</div>', webpage)
        comment_count = mobj.group('commentcount') if mobj else 0

        categories_html = self._search_regex(
            r'(?s)<table.+?(<span>Categories:.+?)</table>', webpage,
            'categories', default=None)
        categories = [clean_html(category) for category in re.findall(
            r'<a[^>]+>(.+?)</a>', categories_html)] if categories_html else None

        return {
            'id': video_id,
            'display_id': display_id,
            'title': title,
            'description': description,
            'upload_date': upload_date,
            'uploader': uploader,
            'thumbnail': thumbnail,
            'duration': duration,
            'view_count': view_count,
            'like_count': int_or_none(like_count),
            'dislike_count': int_or_none(dislike_count),
            'comment_count': int_or_none(comment_count),
            'age_limit': age_limit,
            'categories': categories,
            'formats': formats,
        }


class XHamsterEmbedIE(InfoExtractor):
    _VALID_URL = r'https?://(?:.+?\.)?%s/xembed\.php\?video=(?P<id>\d+)' % XHamsterIE._DOMAINS
    _TEST = {
        'url': 'http://xhamster.com/xembed.php?video=3328539',
        'info_dict': {
            'id': '3328539',
            'ext': 'mp4',
            'title': 'Pen Masturbation',
            'timestamp': 1406581861,
            'upload_date': '20140728',
            'uploader': 'ManyakisArt',
            'duration': 5,
            'age_limit': 18,
        }
    }

    @staticmethod
    def _extract_urls(webpage):
        return [url for _, url in re.findall(
            r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:www\.)?xhamster\.com/xembed\.php\?video=\d+)\1',
            webpage)]

    def _real_extract(self, url):
        video_id = self._match_id(url)

        webpage = self._download_webpage(url, video_id)

        video_url = self._search_regex(
            r'href="(https?://xhamster\.com/(?:movies/{0}/[^"]*\.html|videos/[^/]*-{0})[^"]*)"'.format(video_id),
            webpage, 'xhamster url', default=None)

        if not video_url:
            vars = self._parse_json(
                self._search_regex(r'vars\s*:\s*({.+?})\s*,\s*\n', webpage, 'vars'),
                video_id)
            video_url = dict_get(vars, ('downloadLink', 'homepageLink', 'commentsLink', 'shareUrl'))

        return self.url_result(video_url, 'XHamster')


class XHamsterUserIE(InfoExtractor):
    _VALID_URL = r'https?://(?:.+?\.)?%s/users/(?P<id>[^/?#&]+)' % XHamsterIE._DOMAINS
    _TESTS = [{
        # Paginated user profile
        'url': 'https://xhamster.com/users/netvideogirls/videos',
        'info_dict': {
            'id': 'netvideogirls',
        },
        'playlist_mincount': 267,
    }, {
        # Non-paginated user profile
        'url': 'https://xhamster.com/users/firatkaan/videos',
        'info_dict': {
            'id': 'firatkaan',
        },
        'playlist_mincount': 1,
    }]

    def _entries(self, user_id):
        next_page_url = 'https://xhamster.com/users/%s/videos/1' % user_id
        for pagenum in itertools.count(1):
            page = self._download_webpage(
                next_page_url, user_id, 'Downloading page %s' % pagenum)
            for video_tag in re.findall(
                    r'(<a[^>]+class=["\'].*?\bvideo-thumb__image-container[^>]+>)',
                    page):
                video = extract_attributes(video_tag)
                video_url = url_or_none(video.get('href'))
                if not video_url or not XHamsterIE.suitable(video_url):
                    continue
                video_id = XHamsterIE._match_id(video_url)
                yield self.url_result(
                    video_url, ie=XHamsterIE.ie_key(), video_id=video_id)
            mobj = re.search(r'<a[^>]+data-page=["\']next[^>]+>', page)
            if not mobj:
                break
            next_page = extract_attributes(mobj.group(0))
            next_page_url = url_or_none(next_page.get('href'))
            if not next_page_url:
                break

    def _real_extract(self, url):
        user_id = self._match_id(url)
        return self.playlist_result(self._entries(user_id), user_id)