
Commit: Fixed some grammatical and spelling errors (#10595)

ShorthillsAI authored Sep 15, 2023
1 parent 5e50b89 · commit f9f1340
Showing 3 changed files with 5 additions and 5 deletions.
@@ -8,7 +8,7 @@ Head to [Integrations](/docs/integrations/memory/) for documentation on built-in
 :::
 
 One of the core utility classes underpinning most (if not all) memory modules is the `ChatMessageHistory` class.
-This is a super lightweight wrapper which provides convenience methods for saving HumanMessages, AIMessages, and then fetching them all.
+This is a super lightweight wrapper that provides convenience methods for saving HumanMessages, AIMessages, and then fetching them all.
 
 You may want to use this class directly if you are managing memory outside of a chain.
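The class the changed line describes can be used standalone. A minimal sketch (editor's illustration, not part of this commit; assumes the `langchain.memory` import path in use at the time):

```python
from langchain.memory import ChatMessageHistory

# A standalone message history -- no chain required.
history = ChatMessageHistory()

# Convenience methods for saving HumanMessages and AIMessages...
history.add_user_message("Hi! What can you do?")
history.add_ai_message("I can answer questions about these docs.")

# ...and for fetching them all back as message objects.
print(history.messages)
```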
docs/extras/integrations/providers/predictionguard.mdx (6 changes: 3 additions & 3 deletions)
@@ -5,7 +5,7 @@ It is broken into two parts: installation and setup, and then references to spec
 
 ## Installation and Setup
 - Install the Python SDK with `pip install predictionguard`
-- Get an Prediction Guard access token (as described [here](https://docs.predictionguard.com/)) and set it as an environment variable (`PREDICTIONGUARD_TOKEN`)
+- Get a Prediction Guard access token (as described [here](https://docs.predictionguard.com/)) and set it as an environment variable (`PREDICTIONGUARD_TOKEN`)
 
 ## LLM Wrapper

@@ -49,7 +49,7 @@ Context: EVERY comment, DM + email suggestion has led us to this EXCITING announ
 Exclusive Candle Box - $80
 Monthly Candle Box - $45 (NEW!)
 Scent of The Month Box - $28 (NEW!)
-Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉
+Head to stories to get ALL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉
 Query: {query}
@@ -97,4 +97,4 @@ llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
 question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
 
 llm_chain.predict(question=question)
-```
+```
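For orientation, the `pgllm` object referenced in the hunk above is the Prediction Guard LLM wrapper. A minimal sketch of constructing it (editor's illustration; the model name is a placeholder, and the import path assumes the langchain version current at this commit):

```python
import os

from langchain.llms import PredictionGuard

# The wrapper reads the access token from this environment variable,
# as described in the setup section above.
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"

# "MPT-7B-Instruct" is a placeholder; choose a model from the Prediction Guard docs.
pgllm = PredictionGuard(model="MPT-7B-Instruct")

print(pgllm("Tell me a joke"))
```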
libs/experimental/langchain_experimental/smart_llm/base.py (2 changes: 1 addition & 1 deletion)
@@ -225,7 +225,7 @@ def get_prompt_strings(
 (
 HumanMessagePromptTemplate,
 "You are a resolved tasked with 1) finding which of "
-f"the {self.n_ideas} anwer options the researcher thought was "
+f"the {self.n_ideas} answer options the researcher thought was "
 "best,2) improving that answer and 3) printing the answer in full. "
 "Don't output anything for step 1 or 2, only the full answer in 3. "
 "Let's work this out in a step by step way to be sure we have "
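The changed string is part of the resolver prompt inside `SmartLLMChain`, which generates `n_ideas` candidate answers, critiques them, and resolves to a final one. A minimal usage sketch (editor's illustration; assumes the `langchain_experimental` API at this commit, an `OPENAI_API_KEY` in the environment, and an arbitrary `n_ideas=3`):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain_experimental.smart_llm import SmartLLMChain

# The question is baked into the prompt, so the chain takes no inputs.
prompt = PromptTemplate.from_template(
    "I have a 12 liter jug and a 6 liter jug. How do I measure exactly 6 liters?"
)

# n_ideas controls how many candidate answers the resolver step compares.
chain = SmartLLMChain(llm=ChatOpenAI(), prompt=prompt, n_ideas=3, verbose=True)
print(chain.run({}))
```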
