
Bug with async runtime when using defer #2

Open
mortenhaahr opened this issue Dec 20, 2024 · 1 comment
@mortenhaahr (Collaborator)

I have found a bug in the TC that I believe lies somewhere within the async runtime.
It is not urgent for Madrid, but it is quite significant for our work with dynamic properties.

Reproduce:

The bug can be reproduced by running the defer example program on the main branch (fbfb769) with the async runtime:
cargo run -- --input-file examples/defer.input examples/defer.lola

Expected output:

The expected output can be found by running the queuing runtime:
cargo run -- --input-file examples/defer.input examples/defer.lola --runtime queuing

Output:

z[0] = Unknown
z[1] = Int(2)
z[2] = Int(3)
...
z[13] = Int(14)
z[14] = Int(15)

Actual output:

From running with the async runtime:

z[0] = Unknown
z[1] = Int(1)
z[2] = Int(2)
z[3] = Int(3)
...
z[14] = Int(14)
z[15] = Int(15)
^C

Explanation:

There are two bugs when running with the async runtime.

  1. The program does not terminate as expected when the input stream is emptied (hence the ^C in the actual output).
  2. The produced outputs are slightly wrong. Notice the difference between the two outputs at z[1], and notice that z[15] is present in the async output.
    • In the async output, it is as if the z-stream uses the x[0] value, despite having already produced an Unknown value at that timestep.
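One plausible mechanism for this shape of output (each value one step early, plus one extra at the end) is the eval stream being constructed against a subcontext position that has not yet moved past the sample consumed by the Unknown timestep. The following is a hypothetical stdlib mock, not the actual TC API; all names and values are illustrative:

```rust
// Hypothetical mock of a subcontext over recorded input samples.
struct SubContext {
    xs: Vec<i32>,
    pos: usize,
}

impl SubContext {
    // Stands in for evaluating the deferred expression "x + 1"
    // over the remaining samples from the current position.
    fn eval_from_here(&self) -> Vec<i32> {
        self.xs[self.pos..].iter().map(|x| x + 1).collect()
    }
    fn advance(&mut self) {
        self.pos += 1;
    }
}

fn main() {
    let mut ctx = SubContext { xs: vec![1, 2, 3], pos: 0 };

    // t0: the property is still Unknown, so we yield Unknown and
    // consume the sample for this timestep.
    ctx.advance();

    // t1: the property arrives; evaluation should start at x[1].
    assert_eq!(ctx.eval_from_here(), vec![3, 4]);

    // Had the eval stream been built while pos was still 0, it would
    // start at x[0], producing each value one step early plus one
    // extra value at the end: the same shape as the async output above.
    let stale = SubContext { xs: vec![1, 2, 3], pos: 0 };
    assert_eq!(stale.eval_from_here(), vec![2, 3, 4]);
}
```

Whether the real runtime mis-orders the advance in this way is unverified; the mock only shows that such an ordering reproduces the observed symptom.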

Further digging:

For the first bug, I have identified that it was introduced in commit be5c41b.
This is the commit where we started using stream! instead of unfold (see references below).
However, even at that commit, the second bug is present.

One difference between the two implementations: the unfold version yields until the prop_stream terminates, whereas the stream! implementation yields from the eval_output_stream. (I believe this is technically a small bug in the old implementation.)

I tried implementing a version with the following while-let loop to make them more similar:
while let (Some(eval_res), Some(_)) = join!(eval_output_stream.next(), prop_stream.next()) {

This terminates the program as expected, but oddly enough it makes z contain only 14 samples at termination instead of 16,
i.e., one too few compared to the queuing runtime instead of one too many.
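The one-too-few behaviour is consistent with lockstep polling semantics: when both streams are advanced together, the loop exits as soon as either side returns None, dropping any value the longer side still holds. A synchronous stdlib sketch of that effect, with iterators standing in for the async streams (the counts here are illustrative, not the exact run above):

```rust
fn main() {
    // The eval side still has one value left when the prop side ends.
    let eval_values: Vec<i32> = (1..=16).collect(); // 16 items
    let prop_samples: Vec<&str> = vec!["sample"; 15]; // 15 items

    // Lockstep consumption, analogous to
    // `while let (Some(eval_res), Some(_)) = join!(..)`:
    // the loop exits as soon as either side is exhausted, so the final
    // eval value is never yielded even though it was produced.
    let yielded: Vec<i32> = eval_values
        .iter()
        .zip(prop_samples.iter())
        .map(|(e, _)| *e)
        .collect();

    assert_eq!(yielded.len(), 15); // one fewer than the eval side produced
}
```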

References:

stream! implementation:

pub fn defer(
    ctx: &dyn StreamContext<Value>,
    mut prop_stream: OutputStream<Value>,
    history_length: usize,
) -> OutputStream<Value> {
    /* Subcontext with current values only */
    let subcontext = ctx.subcontext(history_length);

    Box::pin(stream! {
        let mut eval_output_stream: Option<OutputStream<Value>> = None;

        // Yield Unknown until we have a value to evaluate, then evaluate it
        while let Some(current) = prop_stream.next().await {
            match current {
                Value::Str(defer_s) => {
                    // We have a string to evaluate so do so
                    let defer_parse = &mut defer_s.as_str();
                    let expr = lola_expression.parse_next(defer_parse)
                        .expect("Invalid eval str");
                    eval_output_stream = Some(UntimedLolaSemantics::to_async_stream(expr, subcontext.deref()));
                    break;
                }
                Value::Unknown => {
                    // Consume a sample from the subcontext but return Unknown (aka. Waiting)
                    subcontext.advance();
                    yield Value::Unknown;
                }
                _ => panic!("Invalid defer property type {:?}", current)
            }
        }
        let mut eval_output_stream = eval_output_stream.expect("No eval stream");

        // Yield the saved value until the inner stream is done
        while let Some(eval_res) = eval_output_stream.next().await {
            subcontext.advance();
            yield eval_res;
        }
    })
}

unfold implementation:

pub fn defer(
    ctx: &dyn StreamContext<Value>,
    prop_stream: OutputStream<Value>,
    history_length: usize,
) -> OutputStream<Value> {
    /* Subcontext with current values only */
    let subcontext = ctx.subcontext(history_length);
    /*unfold() creates a Stream from a seed value.*/
    Box::pin(stream::unfold(
        (subcontext, prop_stream, None::<Value>),
        |(subcontext, mut x, saved)| async move {
            /* x.next() returns None if we are done unfolding. Return in that case.*/
            let current = x.next().await?;
            /* If we have a saved state then use that otherwise use current */
            let defer_str = saved.unwrap_or_else(|| current);

            match defer_str {
                Value::Str(defer_s) => {
                    let defer_parse = &mut defer_s.as_str();
                    let expr = match lola_expression.parse_next(defer_parse) {
                        Ok(expr) => expr,
                        Err(_) => unimplemented!("Invalid eval str"),
                    };
                    let mut es = UntimedLolaSemantics::to_async_stream(expr, subcontext.deref());
                    let eval_res = es.next().await?;
                    subcontext.advance();
                    return Some((eval_res, (subcontext, x, Some(Value::Str(defer_s)))));
                }
                Value::Unknown => {
                    // Consume a sample from the subcontext but return Unknown (aka. Waiting)
                    subcontext.advance();
                    Some((Value::Unknown, (subcontext, x, None)))
                }
                _ => panic!("We did not have memory and defer_str was not a Str"),
            }
        },
    ))
}
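As a side note for readers unfamiliar with unfold: it builds a stream by threading a seed state through a closure, yielding one item per step and terminating when the closure returns None. A minimal synchronous analogy using std::iter::from_fn (values are illustrative):

```rust
fn main() {
    // Carry a state (the seed) through a closure that yields one item
    // per step; returning None ends the stream, as in stream::unfold.
    let mut state = 0;
    let items: Vec<i32> = std::iter::from_fn(move || {
        if state < 3 {
            state += 1;
            Some(state) // item for this step; state is carried forward
        } else {
            None // no more items
        }
    })
    .collect();
    assert_eq!(items, vec![1, 2, 3]);
}
```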
@mortenhaahr mortenhaahr added the bug Something isn't working label Dec 20, 2024
@mortenhaahr (Collaborator, Author)
A somewhat similar example that fails with both defer and eval, but only on the async runtime:
Input:

0: x = 0
   e = "x + 1"

Spec: (note, eval can be replaced with defer for similar behavior)

in x
in e
out z
z = update(eval(e), 1)

Expected output (can be replicated with queuing runtime):

z[0] = Int(1)
z[1] = Int(1)
z[2] = Int(1)
...

Actual output:

Model: LOLASpecification { input_vars: [VarName("x"), VarName("e")], output_vars: [VarName("z")], exprs: {VarName("z"): Update(Eval(Var(VarName("e"))), Val(Int(1)))}, type_annotations: {} }
z[0] = Int(1)
z[1] = Int(1)
z[2] = Int(1)
...
z[22] = Int(1)
^C

For some reason, it hangs after z[22].

@twright
