Commit: clean code
zzhhjjj committed Apr 26, 2024
1 parent 5f025dc commit 24cfbbd
Showing 3 changed files with 5 additions and 8 deletions.
4 changes: 2 additions & 2 deletions — run_generate.py

@@ -167,7 +167,7 @@ def main():
     "The future of AI is",
     "Passage: Daniel went back to the garden. Mary travelled to the kitchen. Sandra journeyed to the kitchen. Sandra went to the hallway. John went to the bedroom. Mary went back to the garden. Where is Mary?\nAnswer:",
     "def fib(n)",
-    "This film was probably inspired by Godzilla",
+    'Here is an extract from a webpage: "Have you ever experienced heel pain after a heavy physical activity, or even right after a long period of standing? If you regard this as something usual and normal, then think again. Miscalled as heel pain, plantar fasciitis causes these frequent mild pains experienced in the soles of the feet. It is the inflammation and enlargement the plantar fascia tissue that is located in the heels of the feet, stretching to the base of the toes. This tissue is responsible for absorbing shock in the feet and for supporting the arches. It also plays a vital role in foot movements during walking and standing. Many factors such as excessive walking, standing, and running trigger heel pain and plantar fasciitis. A sudden increase in intensity of activities, increase in weight, and abrupt change of footwear also cause the swelling of the ligament. Non-supportive footwear lacking arch cushions and improper and worn out running or training can also lead to the problem. It is also most evident among those". Write an extensive and detailed course unit suitable for a textbook targeted at college students, related to the given extract, within the context of "Medicine". Do not just list concepts, but develop each one in detail before moving to the next, as we prioritize depth of understanding and comprehensive exploration of the subject matter over breadth. Focus on: - Rigor: Ensure in-depth coverage of the concepts/sections. - Engagement: Write with an academic, professional and engaging tone that captivates interest. - Application: Incorporate specific, practical examples, such as proofs in calculus or critical dates and figures in history. Do not include a title or an introduction, simply write the content without headlines and introductory phrases. Do not use images.',
     "Advancements in technology will lead to",
     "Tomorrow's world is shaped by",
 ]
@@ -180,7 +180,7 @@ def main():
     parallel_context=parallel_context,
     max_new_tokens=args.max_new_tokens,
     max_micro_batch_size=2,
-    generation_config=GenerationArgs(sampler="top_k", top_k=50, use_cache=False, n_samples=2),
+    generation_config=GenerationArgs(sampler="greedy", use_cache=True),
     tokenizer_config=TokenizerConfig(max_input_length=None),
     is_bench=os.environ.get("USE_BENCH", "0") == "1",
 )
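The generation_config change swaps top-k sampling (with n_samples=2 and the KV cache disabled) for deterministic greedy decoding with caching. As an illustrative sketch of the two selection strategies over a logits vector — not nanotron's implementation — greedy always takes the argmax, while top-k samples among only the k highest-scoring tokens:

```python
import math
import random

def greedy_pick(logits):
    # Greedy decoding: always take the highest-scoring token index.
    return max(range(len(logits)), key=lambda i: logits[i])

def top_k_pick(logits, k, rng=random):
    # Top-k sampling: keep the k highest logits, softmax-weight them, sample.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    weights = [math.exp(logits[i]) for i in top]
    return rng.choices(top, weights=weights, k=1)[0]

logits = [0.1, 2.5, 0.3, 1.7]
print(greedy_pick(logits))                 # -> 1 (the max logit)
print(top_k_pick(logits, k=2) in {1, 3})   # True: only the two best are candidates
```

Greedy output is reproducible run to run, which is what you want when run_generate.py serves as a smoke test; the top-k/n_samples setup looks like a leftover from exercising the multi-sample path.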
8 changes: 3 additions & 5 deletions — src/nanotron/generation/decode.py

@@ -194,11 +194,9 @@ def decode_text(
     if generation_config and generation_config.n_samples:
         if sampler_type != SamplerType.TOP_P and sampler_type != SamplerType.TOP_K:
             raise ValueError("Only support n_samples for TOP_P and TOP_K sampler")
-        new_input_iter = []
-        for input in input_iter:
-            for _ in range(generation_config.n_samples):
-                new_input_iter.append(GenerationInput(text=input.text))
-        input_iter = new_input_iter
+        input_iter = [
+            GenerationInput(text=input.text) for input in input_iter for _ in range(generation_config.n_samples)
+        ]
 
     # That's annoying but I need this as soon as there's a change communication "cross"
     pipeline_state = PipelineEvalBatchState()
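The decode.py change collapses an explicit accumulator loop into a nested list comprehension; both forms emit n_samples copies of each prompt, preserving input order. A self-contained sketch of the equivalence, using a stand-in GenerationInput dataclass rather than nanotron's own:

```python
from dataclasses import dataclass

@dataclass
class GenerationInput:  # stand-in for nanotron's GenerationInput
    text: str

def replicate(input_iter, n_samples):
    # One fresh copy of each input per requested sample, preserving order:
    # equivalent to the old append-in-a-double-loop version.
    return [
        GenerationInput(text=inp.text) for inp in input_iter for _ in range(n_samples)
    ]

prompts = [GenerationInput("a"), GenerationInput("b")]
print([g.text for g in replicate(prompts, 2)])  # ['a', 'a', 'b', 'b']
```

In a nested comprehension the `for` clauses read left to right like nested loops, so the outer iteration over inputs comes first and the inner repetition second.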
1 change: 0 additions & 1 deletion — src/nanotron/trainer.py

@@ -656,7 +656,6 @@ def init_model(self) -> Union[NanotronModel, DistributedDataParallel]:
             rank=0,
         )
     else:
-        # Rather than indirectly setting max_position_embeddings to tokens.sequence_length, we should directly assert their equality.
         assert (
             self.config.tokens.sequence_length == self.model_config.max_position_embeddings
         ), "The tokenizer's sequence length does not match the model's maximum position embeddings."
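The deleted trainer.py comment merely restated the assert's intent: fail fast when the configured sequence length disagrees with the model's positional capacity, instead of silently overriding one value with the other. A minimal standalone sketch of that check (check_lengths is a hypothetical helper, not a nanotron API):

```python
def check_lengths(sequence_length: int, max_position_embeddings: int) -> None:
    # Fail fast instead of silently clamping one value to the other.
    assert sequence_length == max_position_embeddings, (
        "The tokenizer's sequence length does not match the model's "
        "maximum position embeddings."
    )

check_lengths(2048, 2048)  # passes silently; a mismatch raises AssertionError
```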
