
Commit 555f355

fixed bert indenting (#875)
* fixed bert indenting
* fixed indentation
* fixed spacing
1 parent 49564bc · commit 555f355

File tree

2 files changed: +7 −10 lines changed


transformer_lens/BertNextSentencePrediction.py (+4 −6)

```diff
@@ -72,11 +72,10 @@ def to_tokens(
         sides or prepend_bos.
         Args:
             input: List[str]]: The input to tokenize.
-            move_to_device (bool): Whether to move the output tensor of tokens to the device the
-                model lives on. Defaults to True
+            move_to_device (bool): Whether to move the output tensor of tokens to the device the model lives on. Defaults to True
             truncate (bool): If the output tokens are too long, whether to truncate the output
-                tokens to the model's max context window. Does nothing for shorter inputs.
-                Defaults to True.
+                tokens to the model's max context window. Does nothing for shorter inputs. Defaults to
+                True.
         """
 
         if len(input) != 2:
@@ -143,8 +142,7 @@ def forward(
 
         Args:
             input: The input to process. Can be one of:
-                - List[str]: A list of two strings representing the two sentences NSP
-                  should be performed on
+                - List[str]: A list of two strings representing the two sentences NSP should be performed on
                 - torch.Tensor: Input tokens as integers with shape (batch, position)
             return_type: Optional[str]: The type of output to return. Can be one of:
                 - None: Return nothing, don't calculate logits
```
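Taken together, the two hunks document the NSP call surface: `to_tokens` expects a list of exactly two strings (the trailing context shows `if len(input) != 2:`), and `forward` accepts either that pair or a pre-tokenized tensor. A minimal usage sketch under those documented signatures; the constructor call and the `"logits"` return type are assumptions, not shown in this diff:

```python
# Minimal sketch, not the repository's own example. The module path comes
# from the diff above; the constructor signature and the "logits"
# return_type are assumptions based on the documented parameters.
from transformer_lens import HookedEncoder
from transformer_lens.BertNextSentencePrediction import BertNextSentencePrediction

encoder = HookedEncoder.from_pretrained("bert-base-cased")  # assumed checkpoint
nsp = BertNextSentencePrediction(encoder)  # assumed: wraps a HookedEncoder

# NSP operates on exactly two sentences; the hunk's trailing context
# (`if len(input) != 2:`) guards this in to_tokens.
pair = [
    "The bank raised interest rates.",
    "Borrowing became more expensive.",
]

# move_to_device and truncate both default to True per the docstring.
tokens = nsp.to_tokens(pair, move_to_device=True, truncate=True)

# return_type=None runs the model without returning logits; "logits" is
# assumed to be the value that returns the NSP classification logits.
logits = nsp.forward(pair, return_type="logits")
```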

transformer_lens/HookedEncoder.py (+3 −4)

```diff
@@ -102,11 +102,10 @@ def to_tokens(
         sides or prepend_bos.
         Args:
             input (Union[str, List[str]]): The input to tokenize.
-            move_to_device (bool): Whether to move the output tensor of tokens to the device the
-                model lives on. Defaults to True
+            move_to_device (bool): Whether to move the output tensor of tokens to the device the model lives on. Defaults to True
             truncate (bool): If the output tokens are too long, whether to truncate the output
-                tokens to the model's max context window. Does nothing for shorter inputs.
-                Defaults to True.
+                tokens to the model's max context window. Does nothing for shorter inputs. Defaults to
+                True.
         """
 
         assert self.tokenizer is not None, "Cannot use to_tokens without a tokenizer"
```
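`HookedEncoder` exposes the same `to_tokens` signature directly, accepting either a single string or a list of strings per the `Union[str, List[str]]` annotation. A short sketch of the documented call; the checkpoint name and the return layout are assumptions, only the parameters and their defaults come from the diff:

```python
# Minimal sketch of HookedEncoder.to_tokens as documented above; the
# checkpoint name and the shape of the return value are assumptions.
from transformer_lens import HookedEncoder

encoder = HookedEncoder.from_pretrained("bert-base-cased")

# move_to_device=True places the token tensor on the model's device;
# truncate=True clips inputs longer than the model's max context window
# and does nothing for shorter inputs.
token_data = encoder.to_tokens(
    "TransformerLens supports BERT-style encoders.",
    move_to_device=True,
    truncate=True,
)
```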
