
Choose Non-moving Policy based on features #1308


Open: wants to merge 15 commits into base: master
9 changes: 6 additions & 3 deletions Cargo.toml
@@ -224,14 +224,17 @@ malloc_jemalloc = ["dep:jemalloc-sys"]
# is not compiled in default builds.
malloc_native_mimalloc = []

# If there are more groups, they should be inserted above this line
# Group:end

# Group:marksweepallocation
# default is native allocator with lazy sweeping
eager_sweeping = []
# Use library malloc as the freelist allocator for mark sweep. This will make mark sweep slower. As malloc may return addresses outside our
# normal heap range, we will have to use chunk-based SFT table. Turning on this feature will use a different SFT map implementation on 64bits,
# and will affect all the plans in the build. Please be aware of the consequence, and this is only meant to be experimental use.
malloc_mark_sweep = []

# Group:nonmovingspace
immortal_as_nonmoving = []
Member Author:

This is a bit awkward. Ideally we should have 3 features, immix_as_nonmoving, immortal_as_nonmoving and marksweep_as_nonmoving, and use immix_as_nonmoving as a default feature. But we need to ensure that exactly one of those 3 features is enabled for every build. The same applies to the mark sweep features above, which should be 3 features, lazy_sweeping, eager_sweeping and malloc_mark_sweep, of which exactly one must be chosen.

So basically we need mutually exclusive features, and we need to make them mandatory, so that exactly one of them is always enabled. We cannot express this with cargo. Our test script tries to achieve this when it covers builds with different features, but it only supports optional mutually exclusive features. So for now, we use the default without any feature to control it, and make the other two features optional and mutually exclusive.

I would like to do this refactoring in a separate PR.
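The compromise described above can be sketched as follows. This is a hypothetical helper, not mmtk-core API: it only illustrates how "no feature" falls through to the default policy while the two opt-in features override it, so exactly one policy is active in any build.

```rust
// Hypothetical sketch: resolving the non-moving policy from feature flags.
// No feature enabled means the default (Immix); each opt-in feature
// overrides it, and the compile_error! trick (discussed below in this
// thread) would forbid enabling both at once.
pub fn nonmoving_policy() -> &'static str {
    if cfg!(feature = "immortal_as_nonmoving") {
        "ImmortalSpace"
    } else if cfg!(feature = "marksweep_as_nonmoving") {
        "MarkSweepSpace"
    } else {
        "ImmixSpace" // default: no controlling feature needed
    }
}
```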

Collaborator:
Default features can be disabled (see https://doc.rust-lang.org/cargo/reference/features.html#the-default-feature). mmtk-core currently has one default feature: builtin_env_logger, and VM bindings that use their own logging frameworks (such as OpenJDK) could disable default features and wire Rust logs into their own.

If the user disables default features, none of immix_as_nonmoving, immortal_as_nonmoving or marksweep_as_nonmoving will be enabled. So we need to consider the case where there is no non-moving space.

Actually, I think the default should be "no non-moving space". Not all VMs need one: neither OpenJDK, JikesRVM nor Ruby does. We can make all of them optional, and use the trick described in https://doc.rust-lang.org/cargo/reference/features.html#mutually-exclusive-features to reject multiple non-moving spaces.
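The mutually-exclusive-features trick linked above boils down to pair-wise compile_error! guards. A minimal sketch, assuming the three hypothetical feature names from this thread (these are not existing mmtk-core features):

```rust
// Sketch of the Cargo book's mutually-exclusive-features trick. With at
// most one of the (hypothetical) features enabled, the guards are inert;
// enabling two of them aborts compilation with a clear message.
#[cfg(all(feature = "immix_as_nonmoving", feature = "immortal_as_nonmoving"))]
compile_error!("immix_as_nonmoving and immortal_as_nonmoving are mutually exclusive");
#[cfg(all(feature = "immix_as_nonmoving", feature = "marksweep_as_nonmoving"))]
compile_error!("immix_as_nonmoving and marksweep_as_nonmoving are mutually exclusive");
#[cfg(all(feature = "immortal_as_nonmoving", feature = "marksweep_as_nonmoving"))]
compile_error!("immortal_as_nonmoving and marksweep_as_nonmoving are mutually exclusive");

// Number of non-moving features enabled; zero means "no non-moving space".
pub const NONMOVING_FEATURES: usize = cfg!(feature = "immix_as_nonmoving") as usize
    + cfg!(feature = "immortal_as_nonmoving") as usize
    + cfg!(feature = "marksweep_as_nonmoving") as usize;
```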

marksweep_as_nonmoving = []

# If there are more groups, they should be inserted above this line
# Group:end
6 changes: 6 additions & 0 deletions docs/userguide/src/tutorial/code/mygc_semispace/global.rs
@@ -128,6 +128,12 @@ impl<VM: VMBinding> Plan for MyGC<VM> {
}
// ANCHOR_END: release

// ANCHOR: end_of_gc
fn end_of_gc(&mut self, tls: VMWorkerThread) {
self.common.end_of_gc(tls);
}
// ANCHOR_END: end_of_gc

// Modify
// ANCHOR: plan_get_collection_reserve
fn get_collection_reserved_pages(&self) -> usize {
8 changes: 8 additions & 0 deletions docs/userguide/src/tutorial/mygc/ss/collection.md
@@ -157,6 +157,14 @@ with `&mygc_mutator_release`. This function will be called at the release stage
allocation semantics to the new tospace. When the mutator threads resume, any new allocations for
`Default` will then go to the new tospace.

### End of GC

Find the method `end_of_gc()` in `mygc/global.rs`, and call the common plan's `end_of_gc()` from it.

```rust
{{#include ../../code/mygc_semispace/global.rs:end_of_gc}}
```

## ProcessEdgesWork for MyGC

[`ProcessEdgesWork`](https://docs.mmtk.io/api/mmtk/scheduler/gc_work/trait.ProcessEdgesWork.html)
6 changes: 3 additions & 3 deletions src/plan/generational/copying/global.rs
@@ -114,9 +114,9 @@ impl<VM: VMBinding> Plan for GenCopy<VM> {
}
}

fn end_of_gc(&mut self, _tls: VMWorkerThread) {
self.gen
.set_next_gc_full_heap(CommonGenPlan::should_next_gc_be_full_heap(self));
fn end_of_gc(&mut self, tls: VMWorkerThread) {
let next_gc_full_heap = CommonGenPlan::should_next_gc_be_full_heap(self);
self.gen.end_of_gc(tls, next_gc_full_heap);
}

fn get_collection_reserved_pages(&self) -> usize {
9 changes: 6 additions & 3 deletions src/plan/generational/copying/mutator.rs
@@ -3,7 +3,8 @@ use super::GenCopy;
use crate::plan::barriers::ObjectBarrier;
use crate::plan::generational::barrier::GenObjectBarrierSemantics;
use crate::plan::generational::create_gen_space_mapping;
use crate::plan::mutator_context::unreachable_prepare_func;
use crate::plan::mutator_context::common_prepare_func;
use crate::plan::mutator_context::common_release_func;
use crate::plan::mutator_context::Mutator;
use crate::plan::mutator_context::MutatorBuilder;
use crate::plan::mutator_context::MutatorConfig;
@@ -13,7 +14,7 @@ use crate::util::{VMMutatorThread, VMWorkerThread};
use crate::vm::VMBinding;
use crate::MMTK;

pub fn gencopy_mutator_release<VM: VMBinding>(mutator: &mut Mutator<VM>, _tls: VMWorkerThread) {
pub fn gencopy_mutator_release<VM: VMBinding>(mutator: &mut Mutator<VM>, tls: VMWorkerThread) {
// reset nursery allocator
let bump_allocator = unsafe {
mutator
@@ -23,6 +24,8 @@ pub fn gencopy_mutator_release<VM: VMBinding>(mutator: &mut Mutator<VM>, _tls: V
.downcast_mut::<BumpAllocator<VM>>()
.unwrap();
bump_allocator.reset();

common_release_func(mutator, tls);
}

pub fn create_gencopy_mutator<VM: VMBinding>(
@@ -36,7 +39,7 @@ pub fn create_gencopy_mutator<VM: VMBinding>(
mmtk.get_plan(),
&gencopy.gen.nursery,
)),
prepare_func: &unreachable_prepare_func,
prepare_func: &common_prepare_func,
release_func: &gencopy_mutator_release,
};

5 changes: 5 additions & 0 deletions src/plan/generational/global.rs
@@ -78,6 +78,11 @@ impl<VM: VMBinding> CommonGenPlan<VM> {
self.nursery.release();
}

pub fn end_of_gc(&mut self, tls: VMWorkerThread, next_gc_full_heap: bool) {
self.set_next_gc_full_heap(next_gc_full_heap);
self.common.end_of_gc(tls);
}

/// Independent of how many pages remain in the page budget (a function of heap size), we must
/// ensure we never exhaust virtual memory. Therefore we must never let the nursery grow to the
/// extent that it can't be copied into the mature space.
8 changes: 4 additions & 4 deletions src/plan/generational/immix/global.rs
@@ -131,7 +131,7 @@ impl<VM: VMBinding> Plan for GenImmix<VM> {
if full_heap {
self.immix_space.prepare(
full_heap,
crate::policy::immix::defrag::StatsForDefrag::new(self),
Some(crate::policy::immix::defrag::StatsForDefrag::new(self)),
);
}
}
@@ -146,9 +146,9 @@ impl<VM: VMBinding> Plan for GenImmix<VM> {
.store(full_heap, Ordering::Relaxed);
}

fn end_of_gc(&mut self, _tls: VMWorkerThread) {
self.gen
.set_next_gc_full_heap(CommonGenPlan::should_next_gc_be_full_heap(self));
fn end_of_gc(&mut self, tls: VMWorkerThread) {
let next_gc_full_heap = CommonGenPlan::should_next_gc_be_full_heap(self);
self.gen.end_of_gc(tls, next_gc_full_heap);

let did_defrag = self.immix_space.end_of_gc();
self.last_gc_was_defrag.store(did_defrag, Ordering::Relaxed);
9 changes: 6 additions & 3 deletions src/plan/generational/immix/mutator.rs
@@ -3,7 +3,8 @@ use crate::plan::barriers::ObjectBarrier;
use crate::plan::generational::barrier::GenObjectBarrierSemantics;
use crate::plan::generational::create_gen_space_mapping;
use crate::plan::generational::immix::GenImmix;
use crate::plan::mutator_context::unreachable_prepare_func;
use crate::plan::mutator_context::common_prepare_func;
use crate::plan::mutator_context::common_release_func;
use crate::plan::mutator_context::Mutator;
use crate::plan::mutator_context::MutatorBuilder;
use crate::plan::mutator_context::MutatorConfig;
@@ -13,7 +14,7 @@ use crate::util::{VMMutatorThread, VMWorkerThread};
use crate::vm::VMBinding;
use crate::MMTK;

pub fn genimmix_mutator_release<VM: VMBinding>(mutator: &mut Mutator<VM>, _tls: VMWorkerThread) {
pub fn genimmix_mutator_release<VM: VMBinding>(mutator: &mut Mutator<VM>, tls: VMWorkerThread) {
// reset nursery allocator
let bump_allocator = unsafe {
mutator
@@ -23,6 +24,8 @@ pub fn genimmix_mutator_release<VM: VMBinding>(mutator: &mut Mutator<VM>, _tls:
.downcast_mut::<BumpAllocator<VM>>()
.unwrap();
bump_allocator.reset();

common_release_func(mutator, tls);
}

pub fn create_genimmix_mutator<VM: VMBinding>(
@@ -36,7 +39,7 @@ pub fn create_genimmix_mutator<VM: VMBinding>(
mmtk.get_plan(),
&genimmix.gen.nursery,
)),
prepare_func: &unreachable_prepare_func,
prepare_func: &common_prepare_func,
release_func: &genimmix_mutator_release,
};

1 change: 0 additions & 1 deletion src/plan/generational/mod.rs
@@ -50,7 +50,6 @@ pub const GEN_CONSTRAINTS: PlanConstraints = PlanConstraints {
may_trace_duplicate_edges: ACTIVE_BARRIER.equals(BarrierSelector::ObjectBarrier),
max_non_los_default_alloc_bytes:
crate::plan::plan_constraints::MAX_NON_LOS_ALLOC_BYTES_COPYING_PLAN,
needs_prepare_mutator: false,
..PlanConstraints::default()
};

98 changes: 86 additions & 12 deletions src/plan/global.rs
@@ -197,7 +197,7 @@ pub trait Plan: 'static + HasSpaces + Sync + Downcast {

/// Inform the plan about the end of a GC. It is guaranteed that there is no further work for this GC.
/// This is invoked once per GC by one worker thread. `tls` is the worker thread that executes this method.
fn end_of_gc(&mut self, _tls: VMWorkerThread) {}
fn end_of_gc(&mut self, _tls: VMWorkerThread);

/// Notify the plan that an emergency collection will happen. The plan should try to free as much memory as possible.
/// The default implementation will force a full heap collection for generational plans.
@@ -511,6 +511,10 @@ impl<VM: VMBinding> BasePlan<VM> {
self.vm_space.release();
}

pub fn end_of_gc(&mut self, _tls: VMWorkerThread) {
// Do nothing here. None of the spaces needs end_of_gc.
}

pub(crate) fn collection_required<P: Plan>(&self, plan: &P, space_full: bool) -> bool {
let stress_force_gc =
crate::util::heap::gc_trigger::GCTrigger::<VM>::should_do_stress_gc_inner(
@@ -542,6 +546,17 @@
}
}

cfg_if::cfg_if! {
// Use immortal or mark sweep as the non moving space if the features are enabled. Otherwise use Immix.
if #[cfg(feature = "immortal_as_nonmoving")] {
pub type NonMovingSpace<VM> = crate::policy::immortalspace::ImmortalSpace<VM>;
} else if #[cfg(feature = "marksweep_as_nonmoving")] {
pub type NonMovingSpace<VM> = crate::policy::marksweepspace::native_ms::MarkSweepSpace<VM>;
} else {
pub type NonMovingSpace<VM> = crate::policy::immix::ImmixSpace<VM>;
}
}

/**
CommonPlan is for representing state and features used by _many_ plans, but that are not fundamental to _all_ plans. Examples include the Large Object Space and an Immortal space. Features that are fundamental to _all_ plans must be included in BasePlan.
*/
@@ -551,9 +566,12 @@ pub struct CommonPlan<VM: VMBinding> {
pub immortal: ImmortalSpace<VM>,
#[space]
pub los: LargeObjectSpace<VM>,
// TODO: We should use a marksweep space for nonmoving.
#[space]
pub nonmoving: ImmortalSpace<VM>,
#[cfg_attr(
not(any(feature = "immortal_as_nonmoving", feature = "marksweep_as_nonmoving")),
post_scan
)] // Immix space needs post_scan
pub nonmoving: NonMovingSpace<VM>,
#[parent]
pub base: BasePlan<VM>,
}
@@ -571,12 +589,7 @@ impl<VM: VMBinding> CommonPlan<VM> {
args.get_space_args("los", true, false, VMRequest::discontiguous()),
false,
),
nonmoving: ImmortalSpace::new(args.get_space_args(
"nonmoving",
true,
false,
VMRequest::discontiguous(),
)),
nonmoving: Self::new_nonmoving_space(&mut args),
base: BasePlan::new(args),
}
}
@@ -591,17 +604,22 @@ impl<VM: VMBinding> CommonPlan<VM> {
pub fn prepare(&mut self, tls: VMWorkerThread, full_heap: bool) {
self.immortal.prepare();
self.los.prepare(full_heap);
self.nonmoving.prepare();
self.prepare_nonmoving_space(full_heap);
self.base.prepare(tls, full_heap)
}

pub fn release(&mut self, tls: VMWorkerThread, full_heap: bool) {
self.immortal.release();
self.los.release(full_heap);
self.nonmoving.release();
self.release_nonmoving_space(full_heap);
self.base.release(tls, full_heap)
}

pub fn end_of_gc(&mut self, tls: VMWorkerThread) {
self.end_of_gc_nonmoving_space();
self.base.end_of_gc(tls);
}

pub fn get_immortal(&self) -> &ImmortalSpace<VM> {
&self.immortal
}
@@ -610,9 +628,65 @@ impl<VM: VMBinding> CommonPlan<VM> {
&self.los
}

pub fn get_nonmoving(&self) -> &ImmortalSpace<VM> {
pub fn get_nonmoving(&self) -> &NonMovingSpace<VM> {
&self.nonmoving
}

fn new_nonmoving_space(args: &mut CreateSpecificPlanArgs<VM>) -> NonMovingSpace<VM> {
Collaborator: And we can use cfg_if in these functions, too.

let space_args = args.get_space_args("nonmoving", true, false, VMRequest::discontiguous());
cfg_if::cfg_if! {
if #[cfg(any(feature = "immortal_as_nonmoving", feature = "marksweep_as_nonmoving"))] {
NonMovingSpace::new(space_args)
} else {
// Immix requires extra args.
NonMovingSpace::new(
space_args,
crate::policy::immix::ImmixSpaceArgs {
unlog_object_when_traced: false,
#[cfg(feature = "vo_bit")]
mixed_age: false,
never_move_objects: true,
},
)
}
}
}

fn prepare_nonmoving_space(&mut self, _full_heap: bool) {
cfg_if::cfg_if! {
if #[cfg(feature = "immortal_as_nonmoving")] {
self.nonmoving.prepare();
} else if #[cfg(feature = "marksweep_as_nonmoving")] {
self.nonmoving.prepare(_full_heap);
} else {
self.nonmoving.prepare(_full_heap, None);
}
}
}

fn release_nonmoving_space(&mut self, _full_heap: bool) {
cfg_if::cfg_if! {
if #[cfg(feature = "immortal_as_nonmoving")] {
self.nonmoving.release();
} else if #[cfg(feature = "marksweep_as_nonmoving")] {
self.nonmoving.release();
} else {
self.nonmoving.release(_full_heap);
}
}
}

fn end_of_gc_nonmoving_space(&mut self) {
cfg_if::cfg_if! {
if #[cfg(feature = "immortal_as_nonmoving")] {
// Nothing we need to do for immortal space.
} else if #[cfg(feature = "marksweep_as_nonmoving")] {
self.nonmoving.end_of_gc();
} else {
self.nonmoving.end_of_gc();
}
}
}
}

use crate::policy::gc_work::TraceKind;
6 changes: 3 additions & 3 deletions src/plan/immix/global.rs
@@ -42,7 +42,6 @@ pub const IMMIX_CONSTRAINTS: PlanConstraints = PlanConstraints {
moves_objects: !cfg!(feature = "immix_non_moving"),
// Max immix object size is half of a block.
max_non_los_default_alloc_bytes: crate::policy::immix::MAX_IMMIX_OBJECT_SIZE,
needs_prepare_mutator: false,
..PlanConstraints::default()
};

@@ -88,7 +87,7 @@ impl<VM: VMBinding> Plan for Immix<VM> {
self.common.prepare(tls, true);
self.immix_space.prepare(
true,
crate::policy::immix::defrag::StatsForDefrag::new(self),
Some(crate::policy::immix::defrag::StatsForDefrag::new(self)),
);
}

@@ -98,9 +97,10 @@ impl<VM: VMBinding> Plan for Immix<VM> {
self.immix_space.release(true);
}

fn end_of_gc(&mut self, _tls: VMWorkerThread) {
fn end_of_gc(&mut self, tls: VMWorkerThread) {
self.last_gc_was_defrag
.store(self.immix_space.end_of_gc(), Ordering::Relaxed);
self.common.end_of_gc(tls);
}

fn current_gc_may_move_object(&self) -> bool {