Custom vtable API #567

Draft: wants to merge 5 commits into master

Conversation

@HyeonuPark

This is my suggestion for constructing Bytes with a custom vtable. Since it's not a small modification, I want to share my direction with the maintainers before writing more docs, bikeshedding, etc. You can consider this PR an executable RFC.

Related issues: #359 #437 #526

The base idea is to expose a trait that consists purely of associated functions. Adding an associated function with a default impl to an existing trait is not a breaking change. Default impls being mandatory can be a problem, but it's an inherent problem of exposing a custom vtable.
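
As a rough illustration of that point (names here are hypothetical, not the PR's API): a trait made up purely of associated functions acts like a hand-written vtable, and a later release can add another associated function with a default body without breaking existing impls.

    // Hypothetical sketch, not the PR's API: a trait of associated functions only.
    pub trait RawOps {
        fn describe() -> &'static str;

        // Added later; the default body keeps pre-existing impls compiling.
        fn version() -> u32 {
            1
        }
    }

    struct MyOps;

    impl RawOps for MyOps {
        fn describe() -> &'static str {
            "my ops"
        }
    }

    fn main() {
        assert_eq!(MyOps::describe(), "my ops");
        assert_eq!(MyOps::version(), 1);
    }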

At this point this PR consists of 5 commits. They're independent enough that I think each deserves its own PR once consensus is reached; it's possible that some of them will advance while others are rejected. The first 2 commits improve the current vtable signatures, which is hard to do after stabilizing them. The next 2 commits implement the custom vtable API. The last one implements downcasting.

The first commit renames the vtable entry to_vec to into_vec to match its semantics. The changes to its signature also reduce atomic operations, though performance wasn't a goal here; I hardly expect it to actually improve performance.

The second commit adds a will_truncate function to the vtable. Currently promotable buffers can't discard bytes from the end without promotion, so Bytes::truncate specializes them by comparing the vtable address against the two promotable vtable statics. That doesn't seem like a good idea with an open set of vtables.

The third commit implements the BytesImpl trait and the Bytes::with_impl constructor. It would be the main commit of this PR.

The fourth commit accounts for the majority of this PR's diff, but all it does is move functions into impl blocks. It should not add or change any behavior.

The fifth commit implements downcasting. I think round-tripping between custom buffers and Bytes is the next step people would expect. Potentially try_unsplit could also be built on top of it.

Thanks for reading. Any feedback is welcome.

@HyeonuPark (Author)

Fixed minor CI issues.

  • Replace mem::take with mem::replace
  • Replace AtomicPtr::get_mut with with_mut for the loom test

@ipoupaille

There is another related request: #558

@Darksonn (Contributor)

The implementation seems ok at a cursory glance. To me the main question is whether we want this or not, and if so, how should the public API of it look.

@HyeonuPark (Author)

To clarify, I need this feature to run hyper with zero-copy networking. hyper uses types from the http crate, which uses the Bytes type everywhere; this means HTTP headers are copied at least once unless the OS gives you the incoming bytes as a Bytes. But to use io_uring optimally you need to pre-register buffers and keep reusing them over and over again, which is not possible with the current Bytes interface as it deallocates the buffer after the last use. With AF_XDP buffer reuse is mandatory, and it has more restrictions, like all the IO buffers being equally sized slices of a single large pre-registered allocation.

With custom vtables for Bytes, low-level IO wrapper crates can provide their own buffer types with the proper restrictions, and users of those crates can convert them into Bytes. That's why implementing BytesImpl requires tons of unsafe, but using such types is totally safe with this PR.

I'm still not sure about the methods of the BytesImpl trait. Currently they're mostly copied as-is from the existing Vtable struct, with some fixes.

@rrichardson (Contributor)

@HyeonuPark - This looks great. One concern I have is to ensure that 3rd party container implementors have everything they need to convert their containers to and from Bytes. Other than the clone method in the BytesImpl trait, I think that they should be able to. I'll try to create an example/test that demonstrates this.

Regarding the doc comments, I am working on a branch against your branch with documentation that is more closely aligned with Tokio's conventions and is clippy-compliant. I'll make a PR to your repo and you're welcome to merge it in if you want.

@seanmonstar - Is there a way that we can put this implementation in a feature gate, or maybe even a major version RC so that we can incorporate it into projects for better exposure and vetting?

unsafe fn from_bytes_parts(data: &mut AtomicPtr<()>, ptr: *const u8, len: usize) -> Self;

/// Returns new `Bytes` based on the current parts.
unsafe fn clone(data: &AtomicPtr<()>, ptr: *const u8, len: usize) -> Bytes;
Contributor

@HyeonuPark - Is there a case in which one would need to clone bytes using the components?
If so, is there another way to go about this?
A third party container implementor (say, Mmap https://docs.rs/memmap/latest/memmap/struct.Mmap.html) isn't going to be able to construct a Bytes instance, because they won't have access to the internals of Bytes.

Author

With the current design, cloning bytes should go through the component, since cloning mechanisms like reference counting are implemented by the component. For example, it should do things like Arc::increment_strong_count() inside. Currently Bytes ships with a number of builtin components:

  • A Static component backed by &'static [u8] that does nothing on clone.
  • A Shared component that increases the refcount on clone.
  • A Promotable component that initially stores bytes without an Arc and is promoted to the Shared component on the first clone.

And BytesMut has a number of components with similar but slightly different strategies. The promotable one is the only reason we have &AtomicPtr<()> rather than *const (). Is it necessary? I'm not sure, but one of the goals of this PR was to not change existing behaviors.

For the mmap part, I built my own buffer type from aligned slices of a single giant mmap region. The first few bytes of each slice are treated as a header, and one of its fields is used as the refcount since I don't want a separate allocation for it. This refcount is modified on clone and drop.
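
To make the "cloning goes through the component" idea concrete, here is a std-only sketch (not the PR's code) of what a refcount-based clone/drop pair for a shared component could look like, with the opaque data pointer standing in for Bytes' data field:

    use std::sync::Arc;

    // `data` is assumed to be an `Arc<Vec<u8>>` that was turned into a raw pointer.
    unsafe fn shared_clone(data: *const ()) -> *const () {
        // Bump the refcount without taking ownership of the Arc.
        unsafe { Arc::increment_strong_count(data as *const Vec<u8>) };
        data
    }

    unsafe fn shared_drop(data: *const ()) {
        // Reconstruct the Arc so the refcount is decremented (and the buffer freed at zero).
        unsafe { drop(Arc::from_raw(data as *const Vec<u8>)) };
    }

    fn main() {
        let data = Arc::into_raw(Arc::new(vec![1u8, 2, 3])) as *const ();
        let cloned = unsafe { shared_clone(data) };
        unsafe {
            shared_drop(cloned);
            shared_drop(data);
        }
    }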

Contributor

I was referring to the fact that clone returns Bytes. A 3rd party implementation isn't going to be able to construct Bytes using the parts, because they don't have access to the internals of Bytes.

We would have to provide some sort of public Bytes::from_parts, but I don't think that's the best approach.

Better would be for the trait not to specify that the implementation returns Bytes. Instead, the consumer of the clone function (a Bytes instance) would receive the new parts and know how to construct them.
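
A sketch of that alternative shape (hypothetical trait and signature, not the PR's code): the implementation hands back raw parts and Bytes itself reassembles them, so implementors never touch Bytes internals.

    use std::sync::atomic::AtomicPtr;

    pub unsafe trait BytesImplAlt {
        /// Returns (data, ptr, len) for a new handle to the same underlying buffer;
        /// the caller (Bytes) turns these parts back into a Bytes value.
        unsafe fn clone(data: &AtomicPtr<()>, ptr: *const u8, len: usize)
            -> (AtomicPtr<()>, *const u8, usize);
    }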

/// All implementations must fulfill the following requirements:
/// - They are cheaply cloneable and thereby shareable between an unlimited amount
/// of components, for example by modifying a reference count.
/// - Instances can be sliced to refer to a subset of the original buffer.
Contributor

The trait and its methods all require a # Safety section in order to comply with the Repo's clippy config.

Author

Just to say, this PR itself wasn't intended to be merged as-is, mostly because I was afraid it's too big to merge at once. I planned to split it into multiple PRs once it got enough support. Also, I want comments from the original author, since the original logic is untouched and, well, it's the # Safety section of a public trait's documentation.

I checked your PR against this branch. It looks good to me except for the default clone impl, for the reason I described in the comment above. The default "do nothing on clone" strategy would only be valid for &'static [u8] buffers, which are covered by the builtin impl.

Contributor

Yeah. That default impl was sort of a placeholder.

I think we need to come up with a layered strategy, though.
We should offer a default refcounted implementation for those that:

  1. Don't care about copy-on-write optimizations
  2. Don't need special rules for cloning

}
unsafe impl BytesImpl for StaticImpl {
    fn into_bytes_parts(this: Self) -> (AtomicPtr<()>, *const u8, usize) {
        let mut bytes = mem::ManuallyDrop::new(Bytes::from_static(this.0));
Contributor

into_bytes_parts is used to construct a Bytes instance via with_impl, correct?
So this constructs a Bytes instance in order to decompose the Bytes instance so that we can construct a Bytes instance?
That is a tad confusing.

Do we even need to have a StaticImpl?
Since from_static "manually" constructs a Vtable and the Bytes struct, it could just use free-function versions of into_bytes_parts, clone, drop, and into_vec.

Author

I agree things are confusing with StaticImpl, but that's because Bytes::from_static() is a const fn, so we can't use traits within it on stable. And I tried to DRY as much as possible elsewhere. Maybe it would be better to have the right dependencies between functions even if that duplicates some logic?

StaticImpl would be needed to allow downcasting Bytes to &'static [u8] without reinventing the wheel. The actual API for it is not included. I'm not sure which is better: adding Bytes::try_into_static(), impl TryFrom<Bytes> for &'static [u8], or exposing SharedImpl publicly.

Contributor

Ah, I didn't notice the const. I think further decomposing the logic into utility functions and then duplicating the minimum amount is the best approach.

... or exposing SharedImpl to public.

I think the solution here is "layers". We offer some default implementations, similar to SharedImpl, as a lower-level public API so that people can compose their own higher-level implementations of BytesImpl.

///
/// Useful if you want to construct `Bytes` from your own buffer implementation.
#[inline]
pub fn with_impl<T: BytesImpl>(bytes_impl: T) -> Bytes {
Contributor

I think the name from_impl might be clearer, since it consumes T and converts it to Bytes.

Author

That makes sense. I agree on it.

@rrichardson (Contributor)

rrichardson commented Jan 12, 2023

Thinking more about fn clone in the impl... maybe there should be a default implementation of clone, so that the implementor doesn't have to implement their own refcounting.

Or maybe we could implement BytesImpl for Arc<T> where T meets certain criteria?

Edit: I think that most basic buffer types could just leverage the SharedImpl type to do the heavy lifting. It'd be nice to wrap all of that in some slightly more ergonomic types.

@rrichardson (Contributor)

One more question: Should we specify that from_bytes_parts should panic if refcount > 1, or should it return a Result/Option?

@rrichardson (Contributor)

For the AtomicPtr bits, we might need to re-export the relevant types from loom::sync::atomic or core::sync::atomic to be safe... and AtomicMut.

I was able to make a working impl of BytesImpl for mmap::Mmap rather easily.
I just needed to add a from_parts method to Bytes (see below).

All that said, I would definitely consider this the low-level, error-prone, and certainly not DRY approach to incorporating a buffer type into a Bytes container.

For basic buffer types (both mutable and immutable), we should be able to make a much simpler interface. clone, into_vec, and from_bytes_parts could have default implementations. We'd just need a simple intermediate wrapper type like SharedImpl.

I also think that downcast_impl and from_bytes_parts can be a footgun. If you convert a Bytes instance back into T while there are still clones of the Bytes around, you create a scenario in which two different objects are responsible for dropping the buffer owned by T. It can only work correctly if T itself has its own internal refcounting that BytesImpl uses, but if it did, why would it need Bytes?

      pub unsafe fn from_parts<T: BytesImpl>(data: &AtomicPtr<()>, ptr: *const u8, len: usize) -> Self {
          let dptr = data.load(Ordering::Acquire);

          Bytes {
              ptr,
              len,
              data: AtomicPtr::new(dptr.cast()),
              vtable: &Vtable {
                  type_id: TypeId::of::<T>,
                  clone: T::clone,
                  will_truncate: T::will_truncate,
                  into_vec: T::into_vec,
                  drop: T::drop,
              },
          }
      }

@HyeonuPark (Author)

HyeonuPark commented Jan 15, 2023

It would be nice to make SharedImpl generic over impl AsRef<[u8]>, not just Vec<u8>. Note that it should be the one from bytes_mut.rs, since the one with the same name in bytes.rs is heavily optimized for Box<[u8]>. Though I'm not sure it's that beneficial, since if you have a custom buffer type you may want to implement the refcount directly on it. That's not too hard when you have the std::sync::Arc and bytes::Bytes code as working examples and you don't need to handle weak handles.

refcount > 1 on from_bytes_parts is expected when downcasting. For example, if you make a fresh struct MyBuffer(Arc<[u8]>, Range<usize>);, convert it into Bytes, clone it twice, and downcast it back, its refcount will be 3 in from_bytes_parts.
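
A std-only illustration of that refcount arithmetic, with ordinary clones standing in for Bytes::clone (no bytes APIs involved):

    use std::ops::Range;
    use std::sync::Arc;

    #[derive(Clone)]
    struct MyBuffer(Arc<[u8]>, Range<usize>);

    fn main() {
        let original = MyBuffer(Arc::from(&b"hello"[..]), 0..5);
        let a = original.clone(); // stands in for the first Bytes::clone
        let b = original.clone(); // and the second
        // Three handles share one buffer, so the count seen at downcast time is 3.
        assert_eq!(Arc::strong_count(&original.0), 3);
        drop((a, b));
        assert_eq!(Arc::strong_count(&original.0), 1);
    }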

I haven't considered exposing the loom AtomicPtr. Maybe we need to newtype it, like tokio::time::Instant?

I also think that downcast_impl and from_bytes_parts can be a footgun. If you convert a Bytes instance back into T while there are still clones of the Bytes around, you create a scenario in which two different objects are responsible for dropping the buffer owned by T. It can only work correctly if T itself has its own internal refcounting that BytesImpl uses, but if it did, why would it need Bytes?

I really should have documented it more: multiple Bytes cloned from the same Bytes don't share the same instance of T: BytesImpl. It's like having #[derive(Clone)] struct Wrapper(Arc<[u8]>); multiple clones of it share the [u8], but each has its own instance of Arc<[u8]>.

The reason clone and drop exist as their own trait methods, rather than being implied by a supertrait and automatic drop glue, is to allow users to create their own StaticImpl and PromotableImpl. It's possible to make a helper struct and trait that removes those methods and all that AtomicPtr and just uses &self / &mut self instead. Or maybe it would be better to make that helper trait the sole public API, reducing API complexity a lot at the cost of custom static/promotable buffers.

Edit: StaticImpl is still possible with that API. It can't be made in a const context, but that's impossible without const traits anyway. So the question is all about custom promotable buffers.

@rrichardson (Contributor)

I really should have documented it more: multiple Bytes cloned from the same Bytes don't share the same instance of T: BytesImpl. It's like having #[derive(Clone)] struct Wrapper(Arc<[u8]>); multiple clones of it share the [u8], but each has its own instance of Arc<[u8]>.

Right. When you convert one of the instances of Bytes back to T, but other instances of Bytes remain, you now have more than one instance that owns the inner buffer.

Let's say you have:

struct MyBuf {
    inner: Box<[u8]>, 
}

When we convert MyBuf to a Bytes instance, the new instance takes control of the [u8] from inner. When we clone the Bytes instance, we still have only 1 copy of [u8]. This is good.

Now let's convert one of those Bytes instances back into MyBuf. We break down the Bytes instance into its parts, then reconstruct MyBuf, giving ownership of the [u8] back to MyBuf. So it controls the lifetime of the inner [u8].

However, there is still an instance of Bytes, and its copy of SharedImpl also owns the [u8].
So we have a dangling pointer if either MyBuf or that instance of Bytes is destroyed.


We need to find a way to design the API such that it is impossible to convert from Bytes to T if the refcount is > 1, similar to the design of Rc and RefCell. Also, we would need to disallow conversion from Bytes to BytesMut if refcount > 1. We can use the same underlying mechanism to enforce both invariants.

If we just built and published the "Inner" Container States API: Shared, SharedMut, Promotable, PromotableMut, Static, StaticMut (or whatever we end up needing to cover all use cases), then we control the transitions that are possible from each state, using a typestate.

The 3rd party implementors would be able to easily adapt their implementation without having to worry about the internals of memory management, alignment, thread-safety, etc.

Perhaps all of those types could implement the same trait, e.g. ManagedBuffer; then Bytes and BytesMut could just store a dyn ManagedBuffer and we wouldn't need to manage the VTable manually. Is this sort of what you were referring to when you said:

It's possible to make helper struct and trait to remove those methods and all that AtomicPtr and just use &self / &mut self instead.
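
For what it's worth, a rough sketch of that dyn-based shape (hypothetical trait and method set, purely illustrative):

    // Object-safe trait standing in for the "ManagedBuffer" idea; a Bytes-like handle
    // would store a trait object instead of a hand-rolled vtable.
    pub trait ManagedBuffer: Send + Sync + 'static {
        fn as_slice(&self) -> &[u8];
        fn clone_buf(&self) -> Box<dyn ManagedBuffer>;
    }

    pub struct BytesLike {
        buf: Box<dyn ManagedBuffer>,
    }

    impl BytesLike {
        pub fn as_slice(&self) -> &[u8] {
            self.buf.as_slice()
        }
    }

    impl Clone for BytesLike {
        fn clone(&self) -> Self {
            BytesLike { buf: self.buf.clone_buf() }
        }
    }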

@rrichardson (Contributor)

rrichardson commented Jan 17, 2023

At a high level, we'd need a 4d matrix of functionality. These wouldn't necessarily all be separate buffer types, but we need to ensure that we can handle the use-cases.

(de)Allocation Style:

  • Global Allocator (Global Free)
  • 3rd Party Managed (User Supplied Drop)
  • Static (Null Drop)

Mutability:

  • Immutable
  • Mutable

Immutable -> Mutable Conversion Method

  • Copy on Write: Copy Buffer to Convert
  • Mutate Self: Assert Refcount == 1 to Convert

Clone Operation:

  • Naive/Optimized: Convert into Refcounted and Increment
  • Refcounted: Just Increment
  • Static: No refcounting needed.

I think that we'd also need to add a method to the vtable to accommodate the potential of converting to mutable.
It'd need to be able to communicate whether or not it could mutate without copying, and then perform the actual conversion to mutable.

Converting from Mutable to immutable shouldn't need to access the VTable.. I don't think..

@rrichardson (Contributor)

rrichardson commented Jan 17, 2023

I'm going to attempt a version that formalizes this lower-level API and also uses more of a typestate scheme and enum-based dispatch, which, among other things, is going to be faster than any vtable (not that the current impl is an actual pointer-based vtable). It should be quite a bit safer, too. Certain state transitions will be inexpressible.

@HyeonuPark (Author)

HyeonuPark commented Jan 18, 2023

Let's say you have:

struct MyBuf {
inner: Box<[u8]>,
}

When we convert MyBuf to a Bytes instance, the new instance takes control of the [u8] from inner. When we clone the Bytes instance, we still have only 1 copy of [u8]. This is good.

MyBuf should not be converted to a Bytes, since it doesn't satisfy the requirements for Bytes implementations.

All Bytes implementations must fulfill the following requirements:

  • They are cheaply cloneable and thereby shareable between an unlimited amount of components, for example by modifying a reference count.
  • Instances can be sliced to refer to a subset of the original buffer.

Bytes doesn't have a builtin refcounting mechanism. SharedImpl provides one, and PromotableImpl relies on it by storing it in one of its (logical) enum variants. If MyBuf is converted to Bytes, then Bytes::clone() should clone the stored MyBuf itself. Note that I used struct MyBuffer(Arc<[u8]>, Range<usize>); as an example: it provides cheap cloning via refcounting and subslicing on itself.
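
A sketch of what "cheap cloning via refcounting and subslicing on itself" could look like for that MyBuffer example (std only; the method names are mine):

    use std::ops::Range;
    use std::sync::Arc;

    #[derive(Clone)]
    struct MyBuffer(Arc<[u8]>, Range<usize>);

    impl MyBuffer {
        fn as_slice(&self) -> &[u8] {
            &self.0[self.1.clone()]
        }

        /// Narrows the view without copying; the Arc keeps the backing storage alive.
        fn slice(&self, sub: Range<usize>) -> MyBuffer {
            let start = self.1.start + sub.start;
            let end = self.1.start + sub.end;
            assert!(end <= self.1.end);
            MyBuffer(self.0.clone(), start..end)
        }
    }

    fn main() {
        let buf = MyBuffer(Arc::from(&b"hello world"[..]), 0..11);
        let word = buf.slice(6..11);
        assert_eq!(word.as_slice(), b"world");
    }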

Now let's convert one of those Bytes instances back into MyBuf. We break down the Bytes instance into its parts, then reconstruct MyBuf, giving ownership of the [u8] back to MyBuf. So it controls the lifetime of the inner [u8].

However, there is still an instance of Bytes, and its copy of SharedImpl also owns the [u8].
So we have a dangling pointer if either MyBuf or that instance of Bytes is destroyed.

Maybe you want to convert SharedImpl<MyBuf> into Bytes instead? SharedImpl already provides Arc<Vec<u8>>-ish functionality, and it's desirable to exploit that but replace the storage type. But in this case downcasting will return SharedImpl<MyBuf>, not MyBuf, and like with Arc<MyBuf> you should not be able to take ownership of the inner type while another shared handle to it is still alive.

We need to find a way to design the API such that it is impossible to convert from Bytes to T if the refcount is > 1, similar to the design of Rc and RefCell. Also, we would need to disallow conversion from Bytes to BytesMut if refcount > 1. We can use the same underlying mechanism to enforce both invariants.

BytesMut can have refcount > 1 by ensuring each instance owns a disjoint slice of the shared storage. I don't think it's possible to support un-freezing Bytes back to BytesMut without changing BytesMut's structure. But disallowing refcount > 1 means BytesMut::split() allocates every time.

At this point I think those promotable impls are special enough that 3rd party promotables are not something we need. I think their purpose, besides historical reasons, is to make round trips between Vec<u8> and Bytes as cheap as possible for interoperability. If we remove that from the custom vtable's goals, the BytesImpl trait can be something like this:

unsafe trait BytesImpl: Clone + 'static where Vec<u8>: From<Self> {
    fn into_bytes_parts(this: Self) -> (*mut (), *const u8, usize);
    unsafe fn from_bytes_parts(data: *mut (), ptr: *const u8, len: usize) -> Self;
}
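
For illustration only, here is a self-contained sketch: the reduced trait copied from above, plus a hypothetical impl for an Arc-backed buffer. Arc<Vec<u8>> is used instead of Arc<[u8]> so the opaque data pointer stays thin; nothing here exists in the bytes crate.

    use std::ops::Range;
    use std::sync::Arc;

    unsafe trait BytesImpl: Clone + 'static
    where
        Vec<u8>: From<Self>,
    {
        fn into_bytes_parts(this: Self) -> (*mut (), *const u8, usize);
        unsafe fn from_bytes_parts(data: *mut (), ptr: *const u8, len: usize) -> Self;
    }

    #[derive(Clone)]
    struct MyBuffer(Arc<Vec<u8>>, Range<usize>);

    impl From<MyBuffer> for Vec<u8> {
        fn from(buf: MyBuffer) -> Vec<u8> {
            buf.0[buf.1.clone()].to_vec()
        }
    }

    unsafe impl BytesImpl for MyBuffer {
        fn into_bytes_parts(this: Self) -> (*mut (), *const u8, usize) {
            let len = this.1.len();
            let ptr = this.0[this.1.clone()].as_ptr();
            // The Arc becomes the opaque `data` pointer; its refcount is the sharing mechanism.
            let data = Arc::into_raw(this.0) as *mut ();
            (data, ptr, len)
        }

        unsafe fn from_bytes_parts(data: *mut (), ptr: *const u8, len: usize) -> Self {
            let arc = unsafe { Arc::from_raw(data as *const Vec<u8>) };
            let start = unsafe { ptr.offset_from(arc.as_ptr()) } as usize;
            MyBuffer(arc, start..start + len)
        }
    }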

I don't think we need an abstraction for mutable conversion. BytesMut doesn't seem meant to be a unified mutable buffer type with custom implementations. Each BytesImpl implementation can handle it after downcasting.

@HyeonuPark (Author)

It wasn't my intention to make a generic SharedImpl a first-class citizen. If we decide to make it, it would be a helper type on top of the BytesImpl API for users who don't want to reinvent the whole refcounting machinery on their own, since we already have one. It at least doesn't cover my own use case.

Let me describe my own use case. First the user creates a context. The context mmaps a large region of memory, does some mlock stuff, splits it into aligned chunks, and pushes them all into its own queue. Each chunk stores an inline refcount and a pointer to the context in its first few bytes. The user acquires a new mutable buffer from the context by popping the queue, writes into it, and freezes it to construct a clonable, sliceable, immutable buffer type. On drop with refcount 0 it pushes itself back into the context queue. And I want to make this immutable buffer a BytesImpl.
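
Very roughly, the chunk layout described above might look like this (the field names and header shape are my guesses, not the actual code):

    use std::mem::size_of;
    use std::sync::atomic::AtomicUsize;

    // Each aligned chunk starts with an inline header; the payload follows in the
    // same pre-registered allocation, so no separate refcount allocation is needed.
    #[repr(C)]
    struct ChunkHeader {
        refcount: AtomicUsize,
        context: *const (), // points back at the owning context so the chunk can re-queue itself
    }

    fn main() {
        println!("payload starts at byte offset {}", size_of::<ChunkHeader>());
    }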

I don't think that the users of MMap and others care if they get an optimization that skips refcounting. They're converting their buffer to Bytes instance expecting it to get the Atomic refcounting baggage.

I'm not sure about that. Users of a single-threaded runtime or a runtime-per-thread setup may want a hybrid refcount and optimistically use an unsynchronized field on the original thread. Will it actually boost performance? I don't know, but I don't want to prevent it. Also, my own use case is an example of managing the refcount internally so there's no separate allocation for it. If users don't want to handle the atomic refcounting baggage, they can use helper libraries, like a generic SharedImpl.

It is perfectly okay for the result of try_downcast() to be Err(Error("refcount > 1")) similar to arc/rc/refcell.

Semantically it's perfectly ok, but is it useful? One of the major use cases of BytesMut is to reduce allocations when handling unsized data. To do so, each allocation should be large enough to host about 4 to a dozen expected-size BytesMut handles, which means they rarely have refcount 1.

@rrichardson (Contributor)

rrichardson commented Jan 18, 2023

It wasn't my intention to make a generic SharedImpl a first-class citizen. If we decide to make it, it would be a helper type on top of the BytesImpl API for users who don't want to reinvent the whole refcounting machinery on their own, since we already have one. It at least doesn't cover my own use case.

I was hoping that we could offer some intermediate types to make it easier for implementors to integrate with Bytes, but I think you're right, it's not possible in most cases. Later, we could offer a public version that is similar to SharedImpl to make it easy for people with trivial buffer types (that can be freed with the global allocator). It should not be in scope right now.

After thinking about it more, I think the fundamental dilemma of 3rd party integration comes down to the drop() impl. We'll need to call the correct drop method in order to return the buffer back to its pool or whatnot. This means we can't lose the type, or we need to be able to reconstruct it easily on drop. This, IMO, is the single best reason to make the BytesImpl trait public, and most other bits private (as private as possible).

You also make a great case regarding refcounting baggage, etc. That should also be up to the implementor.

I think I'm finally convinced that your design is the best approach.

That said, there are a few things we still need to nail down..

It is perfectly okay for the result of try_downcast() to be Err(Error("refcount > 1")) similar to arc/rc/refcell.

Semantically it's perfectly ok, but is it useful? One of the major use cases of BytesMut is to reduce allocations when handling unsized data. To do so, each allocation should be large enough to host about 4 to a dozen expected-size BytesMut handles, which means they rarely have refcount 1.

The use-case for try_unfreeze is almost always to reduce allocations in the BytesMut::with_capacity or from_vec_mut case.

  • They construct a BytesMut that performs its own allocation.
  • They copy their data into it.
  • Pass it to their socket::write call (which drops its copy when it's done).
  • Now they still have this allocated buffer in their (single) copy of Bytes. Why allocate a new one? Just convert that Bytes instance back into BytesMut and re-use it. There are 4 or 5 issues that requested that the Bytes -> BytesMut functionality be re-created after it was removed. (A sketch of this round trip follows below.)
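
A sketch of that round trip against today's bytes API; the recovery step at the end is the hypothetical part this thread is asking for.

    use bytes::{Bytes, BytesMut};

    fn send(_frame: Bytes) {
        // Stand-in for the socket write; its clone of the Bytes is dropped when it finishes.
    }

    fn main() {
        let mut buf = BytesMut::with_capacity(4096); // one allocation up front
        buf.extend_from_slice(b"hello");
        let frame: Bytes = buf.freeze();
        send(frame.clone());
        // `frame` is now the only handle. The hypothetical next step would recover the
        // unique allocation for reuse instead of allocating again, e.g. something like
        // `let buf: BytesMut = frame.try_into_mut().unwrap();`
        drop(frame);
    }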

Understanding that it is a draft, I'd say that a final implementation should include the following: (in order of priority)

  • Any method that could cause an allocation should take a force: bool parameter, and return the newly constructed type or an Err(Error::WouldAllocate)
    • The allocation requirement could be due to refcount > 1, or it requires some sort of conversion, etc.
  • The BytesImpl trait should have a method for Immutable -> Mutable conversion. (try_mut ?)
  • Don't let types convert themselves in place (like Promotable)
    • We can improve the efficiency, safety, and clarity of Promotable* by actually converting into SharedImpl on mutate/clone. (the BytesImpl contract will have to change slightly to reflect this)
    • This means that we would have to alter methods in the BytesImpl trait to accommodate the alteration of types.
      • will_truncate
      • clone
      • to_mut(?)
    • This should also help with Bytes <--> BytesMut conversions in 3rd party implementations.
  • Module Structure: It should change a bit to conform to API layering best practices.
    • The trait would be consumed by Bytes/Mut. So Bytes and Bytes mute would be at the top of the layers, and the trait would live below.
    • The trait impls would be siblings of Bytes/Mut, they could theoretically reference the Bytes/Mut types, but I'm not sure if they'd need to.
    • This means that BytesImpl and implementors would know nothing about the Bytes/Mut types.
  • Naming:
    • rename BytesImpl into SharedBuf or ManagedBuf or something more descriptive than BytesImpl
    • "Impl" usually has a specific meaning in Rust (and other languages) and I don't think that this qualifies

Edit: It would also be great to add support for try_unsplit as per #287. This relates back to your use case, actually: if you split a buffer into a bunch of mutable buffers to pass to multiple writers, and they're all done writing, you may want to recycle the buffer. try_unsplit was also explicitly mentioned in the scope of @carllerche's meta ticket for VTable exposure.

@rrichardson (Contributor)

You mention the downcasting approach for Bytes -> BytesMut conversion. I guess we should try to implement that for SharedImpl and see how it works.

@rrichardson (Contributor)

I made a PR with my proposed module and naming changes: HyeonuPark#2

I will make an additional PR following that with some takes at implementing try_resize and try_into_mut to support the unfreeze and unsplit methods.

@rrichardson (Contributor)

One other side effect of this module refactoring is that it makes it very simple to put any allocations behind a feature flag.
