add v0.0.2: fix bug that plays the first episode of the season
phoenixthrush committed Jul 30, 2024
1 parent 3560381 commit e15a2ab
Showing 4 changed files with 23 additions and 2 deletions.
13 changes: 13 additions & 0 deletions README.md
@@ -5,8 +5,21 @@
AniWorld Downloader is a command-line tool designed to download and stream content from aniworld.to.
It offers various features, including fetching single episodes, downloading entire seasons, organizing downloads into structured directories, and supporting multiple operating systems.

Windows is currently not supported because npyscreen relies on the curses module, which is not available on Windows.
I will switch from npyscreen to a maintained library that supports Windows.

## Usage

> Install from PyPI
```shell
pip install -U aniworld
```

> Run
```shell
python -m aniworld
```

## Contributing

Contributions to AniWorld Downloader are welcome!
7 changes: 6 additions & 1 deletion pyproject.toml
@@ -1,12 +1,17 @@
[project]
name = "aniworld"
-version = "0.0.1"
+version = "0.0.2"
authors = [
{ name="Phoenixthrush UwU", email="[email protected]" },
]
description = "Command line tool designed to download and stream content from aniworld.to"
readme = "README.md"
requires-python = ">=3.8"
dependencies = [
'requests',
'bs4',
'npyscreen'
]
classifiers = [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
4 changes: 3 additions & 1 deletion src/aniworld/__main__.py
@@ -203,14 +203,16 @@ def clean_up_leftovers(self, directory):
        print(f"Error removing directory {directory}: {e}")

    def get_season_episodes(self, season_url):
+       season_url_old = season_url
+       season_url = season_url[:-2]
        season_html = self.make_request(season_url)
        if season_html is None:
            return []
        season_soup = BeautifulSoup(season_html, 'html.parser')
        episodes = season_soup.find_all('meta', itemprop='episodeNumber')
        episode_numbers = [int(episode['content']) for episode in episodes]
        highest_episode = max(episode_numbers, default=None)
-       return [f"{season_url}/staffel-{season_url.split('/')[-1]}/episode-{num}" for num in range(1, highest_episode + 1)]
+       return [f"{season_url}/staffel-{season_url_old.split('/')[-1]}/episode-{num}" for num in range(1, highest_episode + 1)]

    def get_season_data(self):
        main_html = self.make_request(self.base_url)
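The bug fixed in `get_season_episodes` can be reproduced in isolation. A minimal sketch, assuming the season URL ends in `/<season number>` (the URL shape and the example show slug are assumptions for illustration, not taken from the repository):

```python
# Hypothetical season URL; the exact shape is an assumption.
season_url = "https://aniworld.to/anime/stream/some-show/2"

season_url_old = season_url   # the fix: keep the untruncated URL
season_url = season_url[:-2]  # strip the trailing "/2"

# Before the fix the season segment was read from the truncated URL,
# so split('/')[-1] returned the show slug instead of the season number.
buggy = season_url.split('/')[-1]        # "some-show"

# After the fix it is read from the original, untruncated URL.
fixed = season_url_old.split('/')[-1]    # "2"

urls = [f"{season_url}/staffel-{fixed}/episode-{num}" for num in range(1, 3)]
```

With the buggy segment, every generated episode URL pointed at a nonexistent `staffel-` path, which would explain the misbehaviour described in the commit message.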
1 change: 1 addition & 0 deletions src/aniworld/extractors/doodstream.py
@@ -3,6 +3,7 @@
from time import time
from urllib.parse import urlparse

+# use urllib in future
from requests import Session


