Since all Client_session::async_connect() (x3) and Native_socket_stream::async_connect() are, locally, instant, turned them into synchronous, non-blocking sync_connect()s. #77

Merged: 14 commits, Feb 29, 2024
27 changes: 14 additions & 13 deletions src/doc/manual/b-api_overview.dox.txt
@@ -89,15 +89,17 @@ Having obtained a `Session`, the application can open transport channels (and, i
> Session session(nullptr, APP_B_AS_CLI, // This is us (connector).
> APP_A_AS_SRV, // The opposing application (acceptor).
> ...);
> session.async_connect([...](const auto& err_code) { if (!err_code) { ...`session` is ready.... } });
> // ^-- asio-style API. Non-blocking/synchronous API also available (integrate directly with poll(), epoll_...(), etc.).
> Error_code err_code;
> session.sync_connect(&err_code); // Synchronous, non-blocking.
> if (!err_code) { ...`session` is ready.... } else { ...opposing server not active/listening... }
>
> // Server side:
> Session_server session_srv(nullptr, APP_A_AS_SRV, // This is us (acceptor).
> { { APP_B_AS_CLI.m_name, APP_B_AS_CLI } }); // These are potential opposing apps.
> Session session;
> session_srv.async_accept(&session,
> [...](const auto& err_code) { if (!err_code) { ...`session` is ~ready.... } });
> // ^-- asio-style API. Non-blocking/synchronous API also available (integrate directly with poll(), epoll_...(), etc.).
>
> // NOTE: Upon opening session, capabilities of `session` on either side are **exactly the same**.
> // Client/server status matters only when establishing the IPC conversation;
@@ -111,17 +113,15 @@ Having obtained a `Session`, the application can open transport channels (and, i
> // Client side: Example: Expect 1 client-requested channel; N server-requested channels.
> Session::Channels my_init_channels(1);
> Session::Channels their_init_channels;
> session.async_connect(..., &my_init_channels, ..., &their_init_channels,
> [...](const auto& err_code)
> Error_code err_code;
> session.sync_connect(..., &my_init_channels, ..., &their_init_channels, &err_code); // Synchronous, non-blocking.
> if (!err_code)
> {
> if (!err_code)
> {
> // `session` is ready, but we might not even care that much because:
> // my_init_channels[0] = The 1 pre-opened Session::Channel_obj we requested.
> // their_init_channels = vector<Session::Channel_obj> = The N pre-opened channels opposing side requested.
> // Sessions are mostly a way to establish channels after all, and we've got ours!
> }
> });
> // `session` is ready, but we might not even care that much because:
> // my_init_channels[0] = The 1 pre-opened Session::Channel_obj we requested.
> // their_init_channels = vector<Session::Channel_obj> = The N pre-opened channels opposing side requested.
> // Sessions are mostly a way to establish channels after all, and we've got ours!
> }
> // Server side: Example: Expect <cmd line arg> server-requested channels; N client-requested channels.
> Session::Channels my_init_channels(lexical_cast<size_t>(argv[2]));
> Session::Channels their_init_channels;
@@ -257,7 +257,8 @@ All transport APIs at this layer, as well as the structured layer (see below), have
> const auto SRV_ADDR = util::Shared_name::ct("sharedSockNameSrv");
> // ^-- Client and server need to agree on this, and it must not conflict. ipc::session would take care of this.
> Native_socket_stream sock(nullptr, "cool_a_nickname");
> sock.async_connect(SRV_ADDR, [...](const auto& err_code) { ... });
> Error_code err_code;
> sock.sync_connect(SRV_ADDR, &err_code); // Synchronous, non-blocking.
>
> // Client/server: server. A dedicated server-acceptor class exists for this.
>
6 changes: 1 addition & 5 deletions src/doc/manual/d-async_loop.dox.txt
@@ -241,10 +241,6 @@ The exceptions:
- (**) Not being perf-sensitive, internally the `sync_io::Native_socket_stream_acceptor` is actually built around an async-I/O `Native_socket_stream_acceptor`. This makes no difference to the user: the APIs work regardless of how it is implemented internally. That said, one cannot construct one from the other or vice versa.
- (*) Not being perf-sensitive, all the `Session` and `Session_server` types are written primarily in async-I/O fashion. However, for those wishing to more easily integrate a `poll()`-y event loop with session-connect (client) and session-accept (server) operations, a `sync_io` API is available. Use `Client_session_adapter` to adapt *any* `Client_session`-like type (namely the templates listed there) -- simply supply the latter type as the template arg. Ditto `Server_session_adapter` for *any* `Server_session`-like type. Lastly `Session_server_adapter` adapts *any* (of the 3) `Session_server`-like types.

In addition:
- (+) As one can construct a `Native_socket_stream` from a `sync_io::Native_socket_stream` core (again -- true of many other types as shown), one can *also* (in sane conditions) "eject" the core and use it by itself. This is `ipc::transport::Native_socket_stream::release()`.
- As of this writing this operation is only available for that type (although for consistency we could add it to others). Rationale: `Native_socket_stream` can optionally be created in NULL state so as to then `Native_socket_stream::async_connect()` itself into PEER state. This might be most conveniently done via the async-I/O API. However, having accomplished this, it may *then* be desirable to bundle it into a `sync_io`-pattern `Channel`; or otherwise give it to some API that wishes to use the lighter-weight `sync_io`-pattern core. By using `.release()` one can easily and performantly "downgrade" to a `sync_io::Native_socket_stream`.

Lastly: for generic programming each relevant type features a couple of alias members:
- `Sync_io_obj`: In an `X`: this is `sync_io::X`. In a `sync_io::X`: this is empty type `Null_peer`.
- `Async_io_obj`: Conversely in a `sync_io::X`: this is `X`. In an `X`: this is empty type `Null_peer`.
@@ -264,7 +260,7 @@ As of this writing these are such cases:
In the latter case: the generator guy (whether vanilla/async-I/O `Session_server` or `sync_io::Session_server_adapter<Session_server>`) generates an object of the same API-type. I.e., async-I/O guy generates async-I/O guys; `sync_io` guy generates `sync_io` guys. Rationale: These are not perf-critical, and we reasoned that if one went to the trouble of wanting to use a `sync_io` server, then they'll want it to generate `sync_io` sessions to integrate into the same, or similarly built, event loop.

@par
In the former three cases: the generator guy (whether async-I/O-pattern or not!) generates `sync_io` cores. I.e., acceptor fills-out a `sync_io` socket-stream; open-channel method/passive-open handler/session-async-accept/session-async-connect each fills-out a `sync_io` channel object. The subsumer guy (whether async-I/O-pattern or not!) subsumes a `sync_io` core. I.e., a structured-channel ctor takes over a `sync_io` channel (which may have in fact been generated by `Session::open_channel()`... synergy!). Rationale: `sync_io` cores are lighter-weight to generate and pass around, and one can always very easily then "upgrade" them to async-I/O objects as needed:
In the former three cases: the generator guy (whether async-I/O-pattern or not!) generates `sync_io` cores. I.e., acceptor fills-out a `sync_io` socket-stream; open-channel method/passive-open handler/session-async-accept/session-sync-connect each fills-out a `sync_io` channel object. The subsumer guy (whether async-I/O-pattern or not!) subsumes a `sync_io` core. I.e., a structured-channel ctor takes over a `sync_io` channel (which may have in fact been generated by `Session::open_channel()`... synergy!). Rationale: `sync_io` cores are lighter-weight to generate and pass around, and one can always very easily then "upgrade" them to async-I/O objects as needed:
- Use ipc::transport::Native_socket_stream ctor that takes a `sync_io::Native_socket_stream&&`.
- Use ipc::transport::Channel::async_io_obj().

47 changes: 19 additions & 28 deletions src/doc/manual/e-session_setup.dox.txt
@@ -107,15 +107,15 @@ Lifecycle of process versus lifecycle of session: Overview
The IPC-related lifecycle of any session-client process is particularly simple:
-# Start.
-# Create the @ref universe_desc "IPC universe description".
-# Construct @link ipc::session::Client_session Client_session C@endlink (ideally SHM-backed extension thereof); and `C.async_connect()` to the server.
-# Construct @link ipc::session::Client_session Client_session C@endlink (ideally SHM-backed extension thereof); and `C.sync_connect()` to the server.
-# If this fails, server is likely inactive; hence sleep perhaps a second; then retry&lowast;&lowast;. Repeat until success.
-# Engage in IPC via now-open `Session C` until (gracefully) told to exit (such as via SIGINT/SIGTERM)&lowast;; or:
-# Is informed by `Session C` on-error handler that the session was closed by the opposing side; therefore:
-# Destroy `C`.
-# Go back to step 3.

&lowast; - In this case destroy `C` and exit process.<br>
&lowast;&lowast; - See ipc::session::Client_session::async_connect() doc note regarding dealing with inactive opposing server.
&lowast;&lowast; - See @link ipc::session::Client_session_mv::sync_connect() Client_session::sync_connect() Reference doc@endlink note regarding dealing with inactive opposing server.

Thus a typical session-client is, as far as IPC is concerned, always either trying to open a session or is engaging in IPC via exactly one open session; and it only stops doing the latter if it itself exits entirely; or the session is closed by the opposing side. There is, in production, no known good reason to end a session otherwise nor to create simultaneous 2+ `Client_session`s in one process.

@@ -170,40 +170,31 @@ So now we can construct our `Client_session` (which we named just `Session` sinc
session(..., // Logger.
APP_B_AS_CLI, // Part of the IPC universe description: Our own info.
APP_A_AS_SRV, // Part of the IPC universe description: The opposing server's info.
...); // On-error handler (discussed separately). This is relevant only after async_connect() success.
...); // On-error handler (discussed separately). This is relevant only after sync_connect() success.
~~~

Now we can attempt connection via `async_connect()`. There are 2 forms, the simple and the advanced. Ironically, due to its ability to pre-open channels, the advanced form is in most cases the *easier* one to use all-in-all -- it makes it unnecessary to do annoying possibly-async channel-opening later -- but we aren't talking about channel opening yet, so that part is not relevant; hence for now we'll go with the simpler API. (@ref chan_open gets into all that.)
Now we can attempt connection via `sync_connect()`. There are 2 forms, the simple and the advanced. Ironically, due to its ability to pre-open channels, the advanced form is in most cases the *easier* one to use all-in-all -- it makes it unnecessary to do annoying possibly-async channel-opening later -- but we aren't talking about channel opening yet, so that part is not relevant; hence for now we'll go with the simpler API. (@ref chan_open gets into all that.)

Using it is simple:

~~~
// Thread U.
session.async_connect([...](const Error_code& err_code)
Error_code err_code;
session.sync_connect(&err_code);
if (err_code)
{
if (err_code == ipc::session::error::Code::S_OBJECT_SHUTDOWN_ABORTED_COMPLETION_HANDLER)
{
return;
}
post([..., err_code]()
{
// Thread U.
if (err_code)
{
// async_connect() failed. Assuming everything is configured okay, this would usually only happen
// if the opposing server is currently inactive. Therefore it's not a great idea to immediately
// async_connect() again. A reasonable plan is to schedule another attempt in 250-1000ms.

// ...;
return;
}
// else: Success!

// Operate on `session` in here, long-term, until it is hosed. Once it is hosed, probably repeat this
// entire procedure.
go_do_ipc_yay(...);
});
});
// sync_connect() failed. Assuming everything is configured okay, this would usually only happen
// if the opposing server is currently inactive. Therefore it's not a great idea to immediately
// sync_connect() again. A reasonable plan is to schedule another attempt in 250-5000ms.

// ...;
return;
}
// else: Success!

// Operate on `session` in here, long-term, until it is hosed. Once it is hosed, probably repeat this
// entire procedure.
go_do_ipc_yay(...);
~~~

`Session_server` and `Server_session` setup
52 changes: 22 additions & 30 deletions src/doc/manual/f-chan_open.dox.txt
@@ -88,7 +88,7 @@ That said please remember the following limitation:
In practice this should be more than sufficient for all but the wildest scenarios (in fact exactly 1 such segment should be plenty), as we'll informally show below.

### Opening init-channels ###
Simply: you can optionally have N channels available as soon as the session becomes available in opened state (i.e., upon `Client_session::async_connect()` success and `Session_server::async_accept()` success + `Session_server::init_handlers()` sync return). Then you can immediately begin to use them on either side!
Simply: you can optionally have N channels available as soon as the session becomes available in opened state (i.e., upon `Client_session::sync_connect()` sync success and `Session_server::async_accept()` success + `Session_server::init_handlers()` sync return). Then you can immediately begin to use them on either side!

The question is, of course: how many is N, and subsequently which one to use for what purpose. Naturally the two sides must agree on both aspects, else chaos ensues. The answer can range from
- very simple (N=1!); to
@@ -98,12 +98,12 @@ The question is, of course: how many is N, and subsequently which one to use for
This requires large amounts of flexibility despite involving zero asynchronicity. What it does *not* require is back-and-forth negotiation. (If that is truly required, you'll need to consider @ref on_demand "opening on-demand channels".)

So here's how it works in the client->server direction:
- Client side uses 2nd overload of @link ipc::session::Client_session::async_connect() Client_session::async_connect()@endlink, specifying how many channels -- call it C -- it wants opened *on behalf of the client*. C can be zero. Plus, optionally, via the same overload:
- Client side uses 2nd overload of @link ipc::session::Client_session_mv::sync_connect() Client_session::sync_connect()@endlink, specifying how many channels -- call it C -- it wants opened *on behalf of the client*. C can be zero. Plus, optionally, via the same overload:
- client side specifies via a **client-behalf channel-open metadatum** Mc any other information it deems worthwhile -- especially having to do with the nature of the C channels it is mandating be init-opened.
- As a result:
- C **client-behalf init-channels** are made available to server side via 2nd overload of @link ipc::session::Session_server::async_accept() Session_server::async_accept()@endlink.
- **Client-behalf channel-open metadatum** Mc is made available, optionally, alongside those C init-channels.
- The same C client-behalf init-channels are made available to client side via `async_connect()` itself, locally.
- The same C client-behalf init-channels are made available to client side via `sync_connect()` itself, locally.

The server->client direction is similar but reversed; the data transmitted server->client being specified via 2nd @link ipc::session::Session_server::async_accept() Session_server::async_accept()@endlink overload and, in summary, comprising:
- S **server-behalf init-channels** and
Expand All @@ -129,7 +129,7 @@ The metadata requires a capnp input file whose resulting capnp-generated .c++ fi
}
~~~

We are now ready to make the more-complex `async_connect()` call. This is a different overload than the one used in the simpler example in @ref session_setup.
We are now ready to make the more-complex `sync_connect()` call. This is a different overload than the one used in the simpler example in @ref session_setup.

~~~
namespace my_meta_app
@@ -147,32 +147,24 @@ We are now ready to make the more-complex `async_connect()` call. This is a dif
// Expected channel-count is specified via the .size() of this vector<Channel>, filled with empty `Channel`s.
Session::Channels init_channels(boost::lexical_cast<size_t>(argv[1])); // <-- ATTN: runtime config.

session.async_connect(mdt, // Client-to-server metadata.
&init_channels, // Out-arg for cli->srv init-channels as well as in-arg for how many we want.
nullptr, nullptr, // In our example we expect no srv->cli metadata nor srv->cli init-channels.
[...](const Error_code& err_code)
Error_code err_code;
session.sync_connect(mdt, // Client-to-server metadata.
&init_channels, // Out-arg for cli->srv init-channels as well as in-arg for how many we want.
nullptr, nullptr, // In our example we expect no srv->cli metadata nor srv->cli init-channels.
&err_code);
> if (err_code)
{
if (err_code != ipc::session::error::Code::S_OBJECT_SHUTDOWN_ABORTED_COMPLETION_HANDLER)
{
return;
}
post([..., err_code]()
{
if (err_code)
{
// ...
return;
}
// else

// `session` is open!
// `init_channels` are open! Probably, since we specified num_structured_channels to server, we would
// now upgrade that many of leading `init_channels` to a struc::Channel each (exercise left to reader).

// Operate on `session` (etc.) in here, long-term, until it is hosed.
go_do_ipc_yay(...);
});
});
// ...
return;
}
// else:

// `session` is open!
// `init_channels` are open! Probably, since we specified num_structured_channels to server, we would
// now upgrade that many of leading `init_channels` to a struc::Channel each (exercise left to reader).

// Operate on `session` (etc.) in here, long-term, until it is hosed.
go_do_ipc_yay(...);
} // namespace my_meta_app
~~~

@@ -249,7 +241,7 @@ While either side *can* accept passive-opens, it will only do so if indeed a pas
- Session-server: It is an argument to the @link ipc::session::Server_session::init_handlers() Server_session::init_handlers()@endlink method overload. Again, in our earlier example code we chose the other overload thus disabling server-side passive-opening; by adding an arg we would've indicated otherwise.

This may seem asymmetrical, contrary to the spirit of the allegedly-side-agnostic `Session` concept, but that is not so:
- `Session_client` ctor is invoked before the session is "opened," as `async_connect()` has not been called let alone succeeded.
- `Session_client` ctor is invoked before the session is "opened," as `sync_connect()` has not been called let alone succeeded.
- `Session_server::init_handlers()` is invoked just before the session is opened, as `init_handlers()` *is* the step that promotes the session to "opened" state.

---