
feat: allow malformed json parsing #292

Open · wants to merge 2 commits into main
Conversation


@TwistingTwists TwistingTwists commented Feb 10, 2025

addresses #227

This PR aims at more robust parsing of LLM text into JSON, which in turn makes extracting the target struct more reliable.

There are many ways the JSON returned by an LLM can be malformed:

  1. JSON wrapped in bare backtick fences:
let response = r#"
    ```
    {
        "name": "Bob",
        "age": 42,
        "address": {
            "street": "789 Pine Rd",
            "city": "Metropolis",
            "country": "USA",
            "coordinates": {"lat": 40.7128, "lng": -74.0060}
        },
        "hobbies": [
            {"name": "Cooking", "years_active": 5, "proficiency": "intermediate"},
            {"name": "Cycling", "years_active": 10, "proficiency": "expert"}
        ]
    }
    ```
    "#;

  2. The same response wrapped in backtick fences carrying a `json` language tag.
  3. Introductory and trailing text around the JSON:
let response = r#"
    Here is your json response:
    
    {
        "name": "Bob",
        "age": 42,
        "address": {
            "street": "789 Pine Rd",
            "city": "Metropolis",
            "country": "USA",
            "coordinates": {"lat": 40.7128, "lng": -74.0060}
        },
        "hobbies": [
            {"name": "Cooking", "years_active": 5, "proficiency": "intermediate"},
            {"name": "Cycling", "years_active": 10, "proficiency": "expert"}
        ]
    }
    
    You can use this now.
    "#;

The current approach, `serde_json::from_str`, does not cover these cases. Parsing with `json_partial` handles all of them and still yields the target struct.
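To make the difference concrete, here is a minimal sketch of the kind of cleanup involved, assuming only `serde` and `serde_json`. The `parse_lenient` helper and its strategy (take the span between the first `{` and the last `}`) are illustrative only; `json_partial` itself is considerably more careful than this.

```rust
use serde::de::DeserializeOwned;
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Person {
    name: String,
    age: u32,
}

// Illustrative helper only (not json_partial's implementation): try strict
// parsing first, then fall back to the span between the first '{' and the
// last '}', which drops backtick fences, a json tag, and surrounding prose.
fn parse_lenient<T: DeserializeOwned>(raw: &str) -> Option<T> {
    if let Ok(value) = serde_json::from_str(raw) {
        return Some(value);
    }
    let start = raw.find('{')?;
    let end = raw.rfind('}')?;
    serde_json::from_str(&raw[start..=end]).ok()
}

fn main() {
    let response = "Here is your json response:\n{ \"name\": \"Bob\", \"age\": 42 }\nYou can use this now.";
    let person: Person = parse_lenient(response).expect("recoverable JSON");
    println!("{person:?}");
}
```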


P.S. I am the author of json_partial

More tests related to parsing JSON with json_partial can be seen at https://github.com/TwistingTwists/aiworker/blob/36eedcc280a624c3f30cfd5fc712a522d8dbb938/examples/structured-jsonish/src/main.rs#L117-L261

- allows parsing of malformed JSON returned by LLMs
@cvauclair added this to the 2025-02-24 milestone on Feb 10, 2025
@0xMochan (Contributor) left a comment

Hey! We really appreciate your PR, and your crate looks very useful for LLM applications! For rig specifically, it's important that we are careful about which dependencies we use, as we strive to stay lean (and are continuing to audit our current set of deps!).

For this PR specifically, I think more flexible JSON parsing for LLMs is a great idea, but it's something we want to implement and handle within the rig repo itself. We are also looking at adding better support for structured outputs, since several providers now have direct ways of ensuring valid JSON and even schema-conforming output!

@TwistingTwists (Author)

Thanks for the feedback, @0xMochan.

I appreciate the 'low dependency' concern.

Here are my thoughts:

  1. I could raise another PR to include the code of jsonish crate in rig itself. That way, you don't have another dependency.
  2. Larger models claim to give structured outputs, but smaller and cheaper models cannot follow the instructions accurately. Having worked with LLMs for the past year or so, this is recurring feedback I have had for the libraries I have used.

Here is the scenario that bites often enough, in my experience:

I have a pipeline using a large model and everything works fine. The moment I switch to a smaller model, things go haywire.

  3. Would it be possible for you to make the extractor a trait? That way, I can write a rig-json-partial-extractor and it doesn't have to be included in rig itself.

People can use the default extractor or any other crate; a rough sketch of the idea follows below.
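For illustration, here is a rough sketch of what such a trait could look like. The names (`Extractor`, `ExtractError`, `SerdeJsonExtractor`) are hypothetical and not rig's actual API; the point is only that rig could ship a strict default while other crates supply lenient implementations.

```rust
use serde::de::DeserializeOwned;

// Hypothetical error type for the sketch below.
#[derive(Debug)]
pub struct ExtractError(pub String);

// Hypothetical trait: an extractor turns raw LLM text into a typed value.
// rig could provide a default serde_json-based implementation, and crates
// like rig-json-partial-extractor could provide more lenient ones.
pub trait Extractor {
    fn extract<T: DeserializeOwned>(&self, raw: &str) -> Result<T, ExtractError>;
}

// Default implementation: strict serde_json parsing.
pub struct SerdeJsonExtractor;

impl Extractor for SerdeJsonExtractor {
    fn extract<T: DeserializeOwned>(&self, raw: &str) -> Result<T, ExtractError> {
        serde_json::from_str(raw).map_err(|e| ExtractError(e.to_string()))
    }
}
```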

Let me know your thoughts.

@0xMochan (Contributor)

> I could raise another PR to include the code of jsonish crate in rig itself. That way, you don't have another dependency.

This is interesting; I'll bring it up with the team. I don't want the norm to be us usurping crates from the community, but there might be some way we can co-maintain or something. From our POV, it's a big ask for rig users to add a new dep to their tree, and it's our responsibility as maintainers to ensure we are diligent with every required dep we add (and optional ones too!).

> Larger models claim to give structured outputs, but smaller and cheaper models cannot follow the instructions accurately. Having worked with LLMs for the past year or so, this is recurring feedback I have had for the libraries I have used.

Correct, and I also agree that smaller models are going to be a big deal in larger, multi-agent systems, the kind of system where rig could be extremely proficient!

> Would it be possible for you to make the extractor a trait? That way, I can write a rig-json-partial-extractor and it doesn't have to be included in rig itself.

There was actually a sizable attempt to revamp extractors so they are better integrated into the agent stack. I had a StructuredPrompt and a StructuredChat trait defined to allow you to dynamically cast the output of an agent. You would do:

```rust
// Roughly
let agent = client.agent().cast::<Vec<MyType>>().build();
let output: Vec<MyType> = agent.prompt("...");
```

There were some issues we didn't resolve before I started work on the more urgent #199, and now I'd like to come back to that so extractors feel a lot more at "home". There's also the case of structured outputs not being configurable; for that I'm inclined to look into #149 (Client traits) as a way for providers to super-power agents with extra, defined behavior.
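A hypothetical sketch of the shape such a trait could take; the name mirrors the one mentioned above, but the signature is illustrative rather than the actual, unmerged rig code.

```rust
use serde::de::DeserializeOwned;

// Hypothetical signature only; the real, unmerged traits may differ.
// The idea: the target type is fixed when the agent is built (via cast),
// so prompting returns that type directly instead of raw text.
#[allow(async_fn_in_trait)]
pub trait StructuredPrompt<T: DeserializeOwned> {
    type Error;

    async fn prompt(&self, input: &str) -> Result<T, Self::Error>;
}
```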

@TwistingTwists (Author)

Thanks @0xMochan for the detailed explanations.

Let me know of any updates in the future. Tag me in the comments :)
