Contributing back from fork #6

Open · sehe wants to merge 29 commits into master from std-over-boost
Conversation

@sehe commented Jan 26, 2021

This is my invitation to accept my forked changes.

The fork builds on top of the "regular" PR #5.

If you are not interested, please feel free to close this PR; I just felt bad forking without extending this invitation.


This generally modernizes the code, reduces allocations and thread contention, prefers std::* over boost::* facilities (e.g. shared_ptr), and enhances error handling.

The wire protocol is 100% compatible, the tests pass, and the documentation has been updated.

One stress test saw the old msghub take >6GiB of RAM:

[screenshot: RAM usage of the old msghub, >6GiB]

as opposed to <200MiB with the new version:

[screenshot: RAM usage of the new version, <200MiB]

sehe added 11 commits January 20, 2021 00:55
This can be important. Also, realize that destruction order is the reverse
of construction order.
Note that there is missing error handling, as the TODO already indicated.
Avoid non-standard code to allow portability.

Made the headers into a named struct with members. Reworked most _data
accesses to use the `headers()` accessor.
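For illustration, a minimal sketch of what such a named header struct and accessor can look like (the `topiclen`/`bodylen` names come from the later scatter/gather commit; the types and class layout are assumptions, not the real code):

```c++
#include <cstdint>

// Illustrative sketch only -- field types and class layout are assumed.
class hubmessage_sketch {
  public:
    struct headers_t {
        uint16_t topiclen;   // length of the topic
        uint16_t bodylen;    // length of the message body
    };

    headers_t&       headers()       { return headers_; }
    headers_t const& headers() const { return headers_; }

  private:
    headers_t headers_{};    // previously addressed through the raw _data block
};
```
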
Adds join to ~msghub_impl
Adds a work_ object to avoid prematurely exiting io_service
Graceful shutdown requires closing the hubconnection and any acceptor if
applicable. Only then will a join() complete.

Actually, this needs a bit more work, because one does not by default want
to cut off operations in the middle of sending output.

I suggest giving `do_close` a bool flag `forced` to indicate whether
pending write operations (operations for which we own the initiative)
are to be forcibly interrupted.

Existing locations to invoke `do_close` in error conditions shall supply
`forced = true`. Other cases should default to `forced = false` so as to
make sure that simple client programs will work as expected:

```c++
    msghub msghub(io_service);
    msghub.connect("localhost", 1334);
    msghub.publish("Publish", "new message");
    //msghub.join(); // implied
```

Here the message should be expected to be delivered, regardless of
timing. That is, of course, barring connectivity issues, in which case
`do_close(forced = true)` should happen because of the error condition.
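A minimal sketch of the suggested `forced` flag, assuming a connection class with a socket, a write queue, and a `closing_` flag (all names and members here are illustrative, not the actual implementation):

```c++
#include <boost/asio.hpp>
#include <deque>
#include <string>

// Illustrative sketch only -- not the actual hubconnection implementation.
struct connection_sketch {
    explicit connection_sketch(boost::asio::io_context& io) : socket_(io) {}

    boost::asio::ip::tcp::socket socket_;
    std::deque<std::string>      write_queue_; // placeholder for queued messages
    bool                         closing_ = false;

    void do_close(bool forced) {
        boost::system::error_code ignored;
        if (forced || write_queue_.empty()) {
            socket_.close(ignored); // error path (or nothing pending): close now
        } else {
            closing_ = true;        // graceful path: let the write chain drain;
                                    // its completion handler closes when empty
        }
    }
};
```
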
sehe force-pushed the std-over-boost branch 2 times, most recently from a8f6c3b to e03e1af on January 26, 2021 01:49
sehe added 16 commits February 9, 2021 23:44
To use:

    cmake .
    make
    make test

Run the example in separate terminals:

    ./examples/server
    ./examples/client
Somehow the runner was segfaulting with the CMake steps. Rather than find
out why, I switched to the default runner main with automatic test
suites/cases.
Dropped the mutex in favor of a strand. This means no locking in
single-threaded situations, and composite operations are optimized.
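For illustration, a self-contained sketch of the general pattern (not the library's actual classes): handlers posted through a strand are serialized, so the shared queue needs no mutex, and on a single-threaded executor the strand adds no locking at all.

```c++
#include <boost/asio.hpp>
#include <deque>
#include <string>

// Illustrative only: a write queue guarded by a strand instead of a mutex.
struct queued_writer {
    explicit queued_writer(boost::asio::any_io_executor ex)
        : strand_(boost::asio::make_strand(ex)) {}

    void enqueue(std::string msg) {
        // post() runs the lambda on the strand, so queue_ is never touched
        // concurrently, even when enqueue() is called from multiple threads
        boost::asio::post(strand_, [this, m = std::move(msg)]() mutable {
            queue_.push_back(std::move(m));
        });
    }

    boost::asio::strand<boost::asio::any_io_executor> strand_;
    std::deque<std::string> queue_; // only ever accessed from the strand
};
```
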
 1. Msghub no longer tries to meddle with threads.

    This fixes many issues (like a failure to handle exceptions
    emanating from IO handlers or a general loss of control over io
    service lifetimes).

    join() is now stop(), because it will not block. Instead, a typical
    pattern would be:

    ```c++
    {
        boost::asio::thread_pool io(3);
        msghub hub(io.get_executor());

        // ... stuff, potentially adding clients and publisher

        hub.stop(); // Make sure acceptors stop and that client connections
                    // and publishers are requested to drain their
                    // write queues

        io.join(); // actually await all extant operations
    }
    ```

    To run on a single thread:

    ```c++
        boost::asio::io_context io;
        msghub msghub(io.get_executor());
        msghub.connect("localhost", 1334);
        msghub.publish("Publish", "message");

        io.run_for(1s);
    ```

  2. `hubconnection` and `hubclient` replaced explicit locking with strands

  3. `test_subscribe` no longer uses `unit_test_monitor_t` to do timed
  execution, because IO contexts have `run_for(1s)` now.
std::string_view, std::span

Since span is C++20, we "abstract" it behind `charbuf`/`const_charbuf`
so that we can provide a minimal replacement for older compilers.
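A hypothetical sketch of how such a compile-time selection can look (the library's actual `charbuf`/`const_charbuf` definitions may differ):

```c++
// Hypothetical sketch -- the real definitions in msghublib may differ.
#if __cplusplus >= 202002L
  #include <span>
  namespace msghublib {
      using charbuf       = std::span<char>;
      using const_charbuf = std::span<char const>;
  }
#else
  #include <cstddef>
  namespace msghublib {
      // Minimal pointer+size stand-in for pre-C++20 compilers
      // (charbuf would be defined analogously for mutable data).
      struct const_charbuf {
          char const* data_{};
          std::size_t size_{};
          char const* data() const { return data_; }
          std::size_t size() const { return size_; }
      };
  }
#endif
```
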
This commit

 - deletes > 100 lines of code
 - moves all buffer calculations inside hubmessage
 - hubmessage no longer uses a union (or similar tricks)
 - instead it uses scatter/gather IO:
   ```c++
    auto on_the_wire() const {
        return std::vector {
            boost::asio::buffer(&headers_, sizeof(headers_)),
            boost::asio::buffer(payload_, headers_.topiclen + headers_.bodylen),
        };
    }
   ```

All tests still pass and the on-the-wire format is 100% compatible.

The messagesize verification (that previously had a bug) is now
natural:

   ```c++
   if (topic.size() + msg.size() > payload_.max_size()) {
       throw std::length_error("messagesize");
   }
   ```
This prevents memcpy-ing huge swathes of memory, at the cost of
allocating twice for messages of ~242 bytes in topic/body.
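For context, a gathered write built from `on_the_wire()` could look like the sketch below (the socket and handler are illustrative); the header and payload travel as two buffers and are never copied into one contiguous block first:

```c++
// Sketch only: msg must stay alive until the completion handler runs.
boost::asio::async_write(socket, msg.on_the_wire(),
    [](boost::system::error_code ec, std::size_t /*bytes_transferred*/) {
        // handle completion or errors here
    });
```
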
  msghub, hub, hubmessage -> msghublib
  hubclient, hubconnection, msghub_impl, ihub -> msghublib::detail

  msghub::span is an alias for std::span or msghublib::detail::span,
  depending on compiler support
This gives the compiler more opportunities to inline/optimize
The old situation had easy-to-ignore boolean return values.

The new interface carries rich information, is harder to ignore, and is
convenient to use correctly (not providing an error_code will raise a
system_error exception).
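As an illustration of that pattern (free functions with assumed signatures, not the library's exact API): an error_code overload reports failures out-of-band, and a throwing overload forwards to it and raises `system_error` when no error_code is provided.

```c++
#include <boost/system/system_error.hpp>
#include <cstdint>
#include <string>

namespace sketch {
    // error_code overload: reports failures through ec instead of throwing
    void connect(std::string const& host, uint16_t port,
                 boost::system::error_code& ec) {
        // ... connection logic stores any failure into ec ...
        (void)host; (void)port;
        ec.clear(); // pretend success for this sketch
    }

    // throwing overload: forwards to the error_code overload and raises on failure
    void connect(std::string const& host, uint16_t port) {
        boost::system::error_code ec;
        connect(host, port, ec);
        if (ec)
            throw boost::system::system_error(ec, "connect");
    }
}
```
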
sehe force-pushed the std-over-boost branch 3 times, most recently from 7052c14 to 77d4586 on February 10, 2021 00:18
Based on Ubuntu 20.04

Usage:

    docker build . -f containers/std-over-boost -t msghub:tester
    docker run --rm -ti msghub:tester