chore: update lint (#172)
* chore: update lint and pre-commit

* chore: update mdformat

* chore: update mdformat pre-commit

* chore: mdformat

* chore: run mdformat

* fix(worker): add shutdown_event to worker handler per TaskIQ v0.11.9

* fix(deps): force Ape v0.8.19 minimum pin

* fix(deps): upgrade to websockets 14.x

---------

Co-authored-by: Juliya Smith <[email protected]>
Co-authored-by: fubuloubu <[email protected]>
3 people authored Nov 24, 2024
1 parent 6a2df5b commit 282e910
Showing 6 changed files with 26 additions and 24 deletions.
14 changes: 7 additions & 7 deletions .pre-commit-config.yaml
```diff
@@ -1,33 +1,33 @@
 repos:
 - repo: https://github.com/pre-commit/pre-commit-hooks
-  rev: v4.5.0
+  rev: v5.0.0
   hooks:
   - id: check-yaml
 
-- repo: https://github.com/pre-commit/mirrors-isort
-  rev: v5.10.1
+- repo: https://github.com/PyCQA/isort
+  rev: 5.13.2
   hooks:
   - id: isort
 
 - repo: https://github.com/psf/black
-  rev: 23.12.0
+  rev: 24.10.0
   hooks:
   - id: black
     name: black
 
 - repo: https://github.com/pycqa/flake8
-  rev: 6.1.0
+  rev: 7.1.1
   hooks:
   - id: flake8
 
 - repo: https://github.com/pre-commit/mirrors-mypy
-  rev: v1.7.1
+  rev: v1.13.0
   hooks:
   - id: mypy
     additional_dependencies: [types-setuptools, pydantic]
 
 - repo: https://github.com/executablebooks/mdformat
-  rev: 0.7.17
+  rev: 0.7.19
   hooks:
   - id: mdformat
     additional_dependencies: [mdformat-gfm, mdformat-frontmatter]
```
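The changes above bump each hook's `rev:` pin forward; pre-commit maintains these pins itself via `pre-commit autoupdate`. As a stdlib-only illustration of the pin format (the parser and config snippet below are hypothetical helper code, not part of this repository), each repo's pinned revision can be extracted like so:

```python
import re

# Snippet mirroring the post-change config (subset, for illustration only).
CONFIG = """\
repos:
- repo: https://github.com/psf/black
  rev: 24.10.0
- repo: https://github.com/pycqa/flake8
  rev: 7.1.1
- repo: https://github.com/pre-commit/mirrors-mypy
  rev: v1.13.0
"""

def pinned_revs(text: str) -> dict[str, str]:
    """Map each hook repo URL to its pinned `rev:` tag."""
    pins = {}
    repo = None
    for line in text.splitlines():
        if m := re.match(r"- repo:\s*(\S+)", line.strip()):
            repo = m.group(1)  # remember which repo the next `rev:` belongs to
        elif (m := re.match(r"rev:\s*(\S+)", line.strip())) and repo:
            pins[repo] = m.group(1)
    return pins

print(pinned_revs(CONFIG)["https://github.com/psf/black"])  # 24.10.0
```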
2 changes: 1 addition & 1 deletion docs/userguides/deploying.md
```diff
@@ -56,7 +56,7 @@ docker push your-registry-url/project/botA:latest
 
 TODO: The ApeWorX team has github actions definitions for building, pushing and deploying.
 
-If you are unfamiliar with docker and container registries, you can use the \[\[github-action\]\].
+If you are unfamiliar with docker and container registries, you can use the \[[github-action]\].
 
 You do not need to build using this command if you use the github action, but it is there to help you if you are having problems figuring out how to build and run your bot images on the cluster successfully.
```
10 changes: 5 additions & 5 deletions docs/userguides/development.md
````diff
@@ -141,13 +141,13 @@ def handle_on_worker_shutdown(state):
 
 This function comes a parameter `state` that you can use for storing the results of your startup computation or resources that you have provisioned.
 
-It's import to note that this is useful for ensuring that your workers (of which there can be multiple) have the resources necessary to properly handle any updates you want to make in your handler functions, such as connecting to the Telegram API, an SQL or NoSQL database connection, or something else. **This function will run on every worker process**.
+It's import to note that this is useful for ensuring that your workers (of which there can be multiple) have the resources necessary to properly handle any updates you want to make in your handler functions, such as connecting to the Telegram API, an SQL or NoSQL database connection, or something else. **This function will run on every worker process**.
 
 *New in 0.2.0*: These events moved from `on_startup()` and `on_shutdown()` for clarity.
 
 #### Worker State
 
-The `state` variable is also useful as this can be made available to each handler method so other stateful quantities can be maintained for other uses. Each distributed worker has its own instance of state.
+The `state` variable is also useful as this can be made available to each handler method so other stateful quantities can be maintained for other uses. Each distributed worker has its own instance of state.
 
 To access the state from a handler, you must annotate `context` as a dependency like so:
 
@@ -163,7 +163,7 @@ def block_handler(block, context: Annotated[Context, TaskiqDepends()]):
 
 ### Bot Events
 
-You can also add an bot startup and shutdown handler that will be **executed once upon every bot startup**. This may be useful for things like processing historical events since the bot was shutdown or other one-time actions to perform at startup.
+You can also add an bot startup and shutdown handler that will be **executed once upon every bot startup**. This may be useful for things like processing historical events since the bot was shutdown or other one-time actions to perform at startup.
 
 ```py
 @bot.on_startup()
@@ -180,7 +180,7 @@ def handle_on_shutdown():
     ...
 ```
 
-*Changed in 0.2.0*: The behavior of the `@bot.on_startup()` decorator and handler signature have changed. It is now executed only once upon bot startup and worker events have moved on `@bot.on_worker_startup()`.
+*Changed in 0.2.0*: The behavior of the `@bot.on_startup()` decorator and handler signature have changed. It is now executed only once upon bot startup and worker events have moved on `@bot.on_worker_startup()`.
 
 ## Bot State
 
@@ -271,7 +271,7 @@ Use segregated keys and limit your risk by controlling the amount of funds that
 Using only the `silverback run ...` command in a default configuration executes everything in one process and the job queue is completely in-memory with a shared state.
 In some high volume environments, you may want to deploy your Silverback bot in a distributed configuration using multiple processes to handle the messages at a higher rate.
 
-The primary components are the client and workers. The client handles Silverback events (blocks and contract event logs) and creates jobs for the workers to process in an asynchronous manner.
+The primary components are the client and workers. The client handles Silverback events (blocks and contract event logs) and creates jobs for the workers to process in an asynchronous manner.
 
 For this to work, you must configure a [TaskIQ broker](https://taskiq-python.github.io/guide/architecture-overview.html#broker) capable of distributed processing.
 Additonally, it is highly suggested you should also configure a [TaskIQ result backend](https://taskiq-python.github.io/guide/architecture-overview.html#result-backend) in order to process and store the results of executing tasks.
````
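The docs hunks above describe per-worker state shared between a worker-startup handler and the task handlers. That idea can be sketched with plain `asyncio`; the names `WorkerState`, `on_worker_startup`, and the handler signatures below are illustrative stand-ins, not Silverback's actual API (which uses `@bot.on_worker_startup()` and a `TaskiqDepends`-annotated context, as the diff shows):

```python
import asyncio

class WorkerState:
    """Illustrative stand-in for the per-worker `state` object the docs describe."""
    def __init__(self):
        self.db = None  # e.g. a connection provisioned once at worker startup

async def on_worker_startup(state: WorkerState):
    # Runs once per worker process to provision resources (toy resource here).
    state.db = {"blocks_seen": 0}

async def block_handler(block_number: int, state: WorkerState):
    # Handlers read and mutate the same per-worker state instance.
    state.db["blocks_seen"] += 1
    return state.db["blocks_seen"]

async def main() -> int:
    state = WorkerState()
    await on_worker_startup(state)
    for n in (1, 2, 3):
        await block_handler(n, state)
    return state.db["blocks_seen"]

print(asyncio.run(main()))  # 3
```

Each distributed worker process would hold its own `WorkerState` instance, matching the "own instance of state" behavior described above.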
17 changes: 9 additions & 8 deletions setup.py
```diff
@@ -11,15 +11,15 @@
         "hypothesis-jsonschema",  # Generate strategies for pydantic models
     ],
     "lint": [
-        "black>=24",  # Auto-formatter and linter
-        "mypy>=1.10",  # Static type analyzer
+        "black>=24.10.0,<25",  # Auto-formatter and linter
+        "mypy>=1.13.0,<2",  # Static type analyzer
         "types-setuptools",  # Needed for mypy type shed
-        "flake8>=7",  # Style linter
-        "isort>=5.13",  # Import sorting linter
-        "mdformat>=0.7",  # Auto-formatter for markdown
+        "flake8>=7.1.1,<8",  # Style linter
+        "isort>=5.13.2,<6",  # Import sorting linter
+        "mdformat>=0.7.19",  # Auto-formatter for markdown
         "mdformat-gfm>=0.3.6",  # Needed for formatting GitHub-flavored markdown
         "mdformat-frontmatter>=2.0",  # Needed for frontmatters-style headers in issue templates
-        "mdformat-pyproject>=0.0.1",  # Allows configuring in pyproject.toml
+        "mdformat-pyproject>=0.0.2",  # Allows configuring in pyproject.toml
     ],
     "doc": ["sphinx-ape"],
     "release": [  # `release` GitHub Action job uses this
@@ -63,14 +63,15 @@
     install_requires=[
         "apepay>=0.3.2,<1",
         "click",  # Use same version as eth-ape
-        "eth-ape>=0.7,<1.0",
+        "eth-ape>=0.8.19,<1.0",
         "ethpm-types>=0.6.10",  # lower pin only, `eth-ape` governs upper pin
         "eth-pydantic-types",  # Use same version as eth-ape
         "packaging",  # Use same version as eth-ape
         "pydantic_settings",  # Use same version as eth-ape
-        "taskiq[metrics]>=0.11.3,<0.12",
+        "taskiq[metrics]>=0.11.9,<0.12",
         "tomlkit>=0.12,<1",  # For reading/writing global platform profile
         "fief-client[cli]>=0.19,<1",  # for platform auth/cluster login
+        "websockets>=14.1,<15",  # For subscriptions
     ],
     entry_points={
         "console_scripts": ["silverback=silverback._cli:cli"],
```
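The tightened pins use PEP 440 version-specifier syntax. Assuming the `packaging` library (the same one already listed in `install_requires`), you can check whether a candidate version satisfies the new TaskIQ floor:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# The new pin from setup.py: minimum 0.11.9, stay below 0.12.
pin = SpecifierSet(">=0.11.9,<0.12")

print(Version("0.11.9") in pin)  # True  — the new floor satisfies the pin
print(Version("0.11.3") in pin)  # False — the old floor no longer does
print(Version("0.12.0") in pin)  # False — upper bound excludes the next minor
```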
4 changes: 2 additions & 2 deletions silverback/subscriptions.py
```diff
@@ -5,7 +5,7 @@
 
 from ape.logging import logger
 from websockets import ConnectionClosedError
-from websockets import client as ws_client
+from websockets.asyncio import client as ws_client
 
 
 class SubscriptionType(Enum):
@@ -26,7 +26,7 @@ def __init__(self, ws_provider_uri: str):
         self._ws_provider_uri = ws_provider_uri
 
         # Stateful
-        self._connection: ws_client.WebSocketClientProtocol | None = None
+        self._connection: ws_client.ClientConnection | None = None
         self._last_request: int = 0
         self._subscriptions: dict[str, asyncio.Queue] = {}
         self._rpc_msg_buffer: list[dict] = []
```
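Beyond the rename from the legacy `WebSocketClientProtocol` to the new asyncio implementation's `ClientConnection` in websockets 14.x, the stateful fields shown above are plain asyncio bookkeeping: a request-id counter plus one queue per subscription. A stdlib-only sketch of that routing pattern (an illustrative class with no real websocket, not the repository's actual implementation):

```python
import asyncio

class SubscriptionBook:
    """Minimal sketch of per-subscription message routing, mirroring the
    stateful fields in the diff above (illustrative, no real connection)."""
    def __init__(self):
        self._last_request: int = 0
        self._subscriptions: dict[str, asyncio.Queue] = {}

    def next_request_id(self) -> int:
        # Monotonic JSON-RPC request ids, like `_last_request` above.
        self._last_request += 1
        return self._last_request

    def add_subscription(self, sub_id: str) -> None:
        self._subscriptions[sub_id] = asyncio.Queue()

    async def dispatch(self, sub_id: str, msg: dict) -> None:
        # Route an incoming notification to its subscriber's queue.
        await self._subscriptions[sub_id].put(msg)

async def demo() -> dict:
    book = SubscriptionBook()
    book.add_subscription("0xabc")
    await book.dispatch("0xabc", {"blockNumber": 1})
    return await book._subscriptions["0xabc"].get()

print(asyncio.run(demo()))  # {'blockNumber': 1}
```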
3 changes: 2 additions & 1 deletion silverback/worker.py
```diff
@@ -7,6 +7,7 @@
 
 
 async def run_worker(broker: AsyncBroker, worker_count=2, shutdown_timeout=90):
+    shutdown_event = asyncio.Event()
     try:
         tasks = []
         with ThreadPoolExecutor(max_workers=worker_count) as pool:
@@ -19,7 +20,7 @@ async def run_worker(broker: AsyncBroker, worker_count=2, shutdown_timeout=90):
                 max_prefetch=0,
             )
             broker.is_worker_process = True
-            tasks.append(receiver.listen())
+            tasks.append(receiver.listen(shutdown_event))
 
         await asyncio.gather(*tasks)
     finally:
```
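Per the commit message, TaskIQ v0.11.9 expects an `asyncio.Event` to be passed into the receiver's `listen()` so the worker loop can be shut down gracefully. The underlying coordination pattern is plain asyncio; here is a minimal sketch with a toy listener loop (hypothetical names, not TaskIQ's actual receiver):

```python
import asyncio

async def listen(shutdown_event: asyncio.Event, processed: list) -> None:
    """Toy stand-in for a receiver loop: work until the event is set."""
    while not shutdown_event.is_set():
        processed.append("job")
        await asyncio.sleep(0)  # yield control back to the event loop

async def run_worker(worker_count: int = 2) -> list:
    shutdown_event = asyncio.Event()
    processed: list = []
    tasks = [asyncio.create_task(listen(shutdown_event, processed))
             for _ in range(worker_count)]
    await asyncio.sleep(0.01)     # let the listeners run briefly
    shutdown_event.set()          # signal graceful shutdown to every listener
    await asyncio.gather(*tasks)  # all loops exit cleanly once the event is set
    return processed

jobs = asyncio.run(run_worker())
print(len(jobs) > 0)  # True
```

Setting one shared event lets every listener drain and exit without cancellation, which is the point of threading `shutdown_event` through `receiver.listen()` in the diff above.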
