Commit
ci.yml: Discover typos with codespell
cclauss committed Nov 2, 2021
1 parent cd64eaf commit 2184a55
Showing 11 changed files with 22 additions and 19 deletions.
3 changes: 3 additions & 0 deletions .github/workflows/ci.yml
@@ -16,6 +16,9 @@ jobs:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- uses: codespell-project/actions-codespell@2391250ab05295bddd51e36a8c6295edb6343b0e
with:
ignore_words_list: datas
- name: Set up Python ${{ env.PYTHON_DEFAULT_VERSION }}
uses: actions/setup-python@v2
with:
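The CI step added above can also be run locally before pushing. A sketch, assuming `codespell` is installed (e.g. via `pip install codespell`); the `--ignore-words-list` flag mirrors the workflow's `ignore_words_list` input:

```shell
# Create a file with a deliberate typo, scan it, then clean up.
printf 'the trasfer is complete\n' > codespell_demo.txt
# codespell exits non-zero when it finds typos; `|| true` keeps the
# sketch from aborting under `set -e` while the finding is printed.
codespell --ignore-words-list=datas codespell_demo.txt || true
rm codespell_demo.txt
```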
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -103,7 +103,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
* `b2sdk.v1.sync` refactored to reflect `b2sdk.sync` structure
* Make `B2Api.get_bucket_by_id` return populated bucket objects in v2
* Add proper support of `recommended_part_size` and `absolute_minimum_part_size` in `AccountInfo`
- * Refactored `minimum_part_size` to `recommended_part_size` (tha value used stays the same)
+ * Refactored `minimum_part_size` to `recommended_part_size` (the value used stays the same)
* Encryption settings, types and providers are now part of the public API

### Removed
2 changes: 1 addition & 1 deletion b2sdk/_v3/__init__.py
@@ -140,7 +140,7 @@
from b2sdk.transfer.emerge.planner.upload_subpart import CachedBytesStreamOpener
from b2sdk.transfer.emerge.write_intent import WriteIntent

- # trasfer
+ # transfer

from b2sdk.transfer.inbound.downloader.abstract import AbstractDownloader
from b2sdk.transfer.outbound.large_file_upload_state import LargeFileUploadState
2 changes: 1 addition & 1 deletion b2sdk/bucket.py
@@ -817,7 +817,7 @@ def copy(
automatically determined
:param dict,None file_info: file_info for the new file, if ``None`` will and ``b2_copy_file`` will be used
file_info will be copied from source file - otherwise it will be set to empty dict
- :param int offset: offset of exisiting file that copy should start from
+ :param int offset: offset of existing file that copy should start from
:param int,None length: number of bytes to copy, if ``None`` then ``offset`` have to be ``0`` and it will
use ``b2_copy_file`` without ``range`` parameter so it may fail if file is too large.
For large files length have to be specified to use ``b2_copy_part`` instead.
4 changes: 2 additions & 2 deletions b2sdk/exception.py
@@ -261,7 +261,7 @@ class FileNameNotAllowed(NotAllowedByAppKeyError):


class FileNotPresent(FileOrBucketNotFound):
- def __str__(self): # overriden to retain message across prev versions
+ def __str__(self): # overridden to retain message across prev versions
return "File not present%s" % (': ' + self.file_id_or_name if self.file_id_or_name else "")


@@ -378,7 +378,7 @@ class MissingPart(B2SimpleError):


class NonExistentBucket(FileOrBucketNotFound):
- def __str__(self): # overriden to retain message across prev versions
+ def __str__(self): # overridden to retain message across prev versions
return "No such bucket%s" % (': ' + self.bucket_name if self.bucket_name else "")


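Both `__str__` overrides touched above share one small pattern: the identifier is appended only when it is known. A self-contained sketch of that pattern (a simplified stand-in; the real class derives from `FileOrBucketNotFound` in `b2sdk.exception`):

```python
class FileNotPresent(Exception):
    """Simplified stand-in for b2sdk.exception.FileNotPresent."""

    def __init__(self, file_id_or_name=None):
        super().__init__()
        self.file_id_or_name = file_id_or_name

    def __str__(self):  # overridden to retain message across prev versions
        # Append ": <name>" only when an identifier was supplied.
        return "File not present%s" % (
            ': ' + self.file_id_or_name if self.file_id_or_name else ""
        )


print(FileNotPresent())            # File not present
print(FileNotPresent("demo.txt"))  # File not present: demo.txt
```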
4 changes: 2 additions & 2 deletions b2sdk/transfer/emerge/executor.py
@@ -323,7 +323,7 @@ def _find_unfinished_file_by_plan_id(

if file_retention != file_.file_retention:
# if `file_.file_retention` is UNKNOWN then we skip - lib user can still
- # pass UKNOWN file_retention here - but raw_api/server won't allow it
+ # pass UNKNOWN file_retention here - but raw_api/server won't allow it
# and we don't check it here
continue
finished_parts = {}
@@ -385,7 +385,7 @@ def _match_unfinished_file_if_possible(

if file_retention != file_.file_retention:
# if `file_.file_retention` is UNKNOWN then we skip - lib user can still
- # pass UKNOWN file_retention here - but raw_api/server won't allow it
+ # pass UNKNOWN file_retention here - but raw_api/server won't allow it
# and we don't check it here
continue
files_match = True
14 changes: 7 additions & 7 deletions b2sdk/transfer/emerge/planner/planner.py
@@ -166,7 +166,7 @@ def _get_emerge_parts(self, intent_fragments_iterator):
# if this is a copy intent and we want to copy it server-side, then we have to
# flush the whole upload buffer we accumulated so far, but OTOH we may decide that we just want to
# append it to upload buffer (see complete, untrivial logic below) and then maybe
- # flush some upload parts from upload bufffer (if there is enough in the buffer)
+ # flush some upload parts from upload buffer (if there is enough in the buffer)

current_len = current_end - upload_buffer.end_offset
# should we flush the upload buffer or do we have to add a chunk of the copy first?
@@ -291,7 +291,7 @@ def _buff_split(self, upload_buffer):
if tail_buffer.length < self.recommended_upload_part_size + self.min_part_size:
# `EmergePlanner_buff_partition` can split in such way that tail part
# can be smaller than `min_part_size` - to avoid unnecessary download of possible
- # incoming copy intent, we don't split futher
+ # incoming copy intent, we don't split further
yield tail_buffer
return
head_buff, tail_buffer = self._buff_partition(tail_buffer)
@@ -300,7 +300,7 @@
def _buff_partition(self, upload_buffer):
""" Split upload buffer to two parts (smaller upload buffers).
- In result left part cannot be splitted more, and nothing can be assumed about right part.
+ In result left part cannot be split more, and nothing can be assumed about right part.
:rtype tuple(b2sdk.transfer.emerge.planner.planner.UploadBuffer,
b2sdk.transfer.emerge.planner.planner.UploadBuffer):
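The splitting rule in `_buff_split` and `_buff_partition` above can be illustrated with a stand-alone sketch (a hypothetical simplification working on lengths only, not the actual b2sdk buffer objects):

```python
def buff_partition(total, part_size):
    # Left part needs no further splitting; nothing is assumed about the right part.
    left = min(total, part_size)
    return left, total - left


def buff_split(total, part_size, min_part_size):
    # Yield part lengths; keep a slightly larger tail rather than
    # emitting a final part shorter than min_part_size.
    while True:
        if total < part_size + min_part_size:
            yield total
            return
        left, total = buff_partition(total, part_size)
        yield left


print(list(buff_split(25, 10, 3)))  # [10, 10, 5]
print(list(buff_split(12, 10, 3)))  # [12] -- no undersized 2-byte tail is produced
```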
@@ -331,7 +331,7 @@ def _select_intent_fragments(self, write_intent_iterator):
would be merged again by higher level iterator that produces emerge parts, but
in principle this merging can happen here. Not merging it is a code design decision
to make this function easier to implement and also it would allow yielding emerge parts
- a bit quickier.
+ a bit quicker.
"""

# `protected_intent_length` for upload state is 0, so it would generate at most single intent fragment
@@ -384,7 +384,7 @@ def _select_intent_fragments(self, write_intent_iterator):
def _merge_intent_fragments(self, start_offset, upload_intents, copy_intents):
""" Select "competing" upload and copy fragments.
- Upload and copy fragments may overlap so we nedd to choose right one
+ Upload and copy fragments may overlap so we need to choose right one
to use - copy fragments are prioritized unless this fragment is unprotected
(we use "protection" as an abstract for "short copy" fragments - meaning upload
fragments have higher priority than "short copy")
@@ -475,7 +475,7 @@ def state_update(self, last_sent_offset, incoming_offset):
would not be added to this intents state. It would yield a state of this stream
of intents (like copy or upload) from ``last_sent_offset`` to ``incoming_offset``.
So here happens the first stage of solving overlapping intents selection - but
- write intent iterator can be splitted to multiple substreams (like copy and upload)
+ write intent iterator can be split to multiple substreams (like copy and upload)
so additional stage is required to cover this.
"""
if self._current_intent is not None:
@@ -558,7 +558,7 @@ def _is_current_intent_protected(self):
we need to know for fragment if it is a "small copy" or not. In result of solving
overlapping intents selection there might be a situation when original intent was not
a small copy, but in effect it will be used only partially and in effect it may be a "small copy".
- Algorithm attempts to aviod using smaller fragments than ``protected_intent_length`` but
+ Algorithm attempts to avoid using smaller fragments than ``protected_intent_length`` but
sometimes it may be impossible. So if this function returns ``False`` it means
that used length of this intent is smaller than ``protected_intent_length`` and the algorithm
was unable to avoid this.
2 changes: 1 addition & 1 deletion b2sdk/transfer/outbound/large_file_upload_state.py
@@ -43,7 +43,7 @@ def set_error(self, message):

def has_error(self):
"""
- Check whether an error occured.
+ Check whether an error occurred.
:rtype: bool
"""
4 changes: 2 additions & 2 deletions doc/markup-test.rst
@@ -100,13 +100,13 @@ In other words, if you pin your dependencies to

>=4.5.6;<5.0.0

- .. note:: b2sdk.*._something and b2sdk.*.*._something, having a name which begins with an underscore, are NOT considred public interface.
+ .. note:: b2sdk.*._something and b2sdk.*.*._something, having a name which begins with an underscore, are NOT considered public interface.


Protected
~~~~~~~~~

- Things which sometimes might be necssary to use that are NOT considered public interface (and may change in a non-major version):
+ Things which sometimes might be necessary to use that are NOT considered public interface (and may change in a non-major version):
* B2Session
* B2RawHTTPApi
* B2Http
2 changes: 1 addition & 1 deletion doc/source/advanced.rst
@@ -318,7 +318,7 @@ In order to continue a simple upload session, **b2sdk** checks for any available

To support automatic continuation, some advanced methods create a plan before starting copy/upload operations, saving the hash of that plan in ``file_info`` for increased reliability.

- If that is not available, ``large_file_id`` can be extracted via callback during the operation start. It can then be passed into the subsequent call to continue the same task, though the responsibility for passing the exact same input is then on the user of the function. Please see :ref:`advanced method support table <advanced_methods_support_table>` to see where automatic continuation is supported. ``large_file_id`` can also be passed if automatic continuation is available in order to avoid issues where multiple matchin upload sessions are matching the transfer.
+ If that is not available, ``large_file_id`` can be extracted via callback during the operation start. It can then be passed into the subsequent call to continue the same task, though the responsibility for passing the exact same input is then on the user of the function. Please see :ref:`advanced method support table <advanced_methods_support_table>` to see where automatic continuation is supported. ``large_file_id`` can also be passed if automatic continuation is available in order to avoid issues where multiple matching upload sessions are matching the transfer.


Continuation of create/concantenate
2 changes: 1 addition & 1 deletion test/unit/bucket/test_bucket.py
@@ -1448,7 +1448,7 @@ def test_v1_return_types(self):
def test_download_file_version(self):
self.file_version.download().save(self.bytes_io)
assert self.bytes_io.getvalue() == self.DATA.encode()
- # self._verify preforms different checks based on apiver,
+ # self._verify performs different checks based on apiver,
# but this is a new feature so it works the same on v2, v1 and v0

def test_download_by_id_no_progress(self):
