
refactor: Unify methods of guest memory creation #5013

Merged (8 commits, Feb 3, 2025)

Conversation

roypat (Contributor) commented Jan 27, 2025

In this day and age, Firecracker theoretically supports 4 different ways of backing guest memory:

  1. Normal MAP_ANONYMOUS | MAP_PRIVATE memory
  2. memfd backed memory, mapped as shared
  3. direct mapping of a snapshot file
  4. MAP_ANONYMOUS again, but this time the regions are described by the snapshot file.

We have 3 different functions for creating these different backing stores, which then call each other and vm_memory's APIs.

In light of #4522, which will add yet another way of backing virtual machine guests, this was starting to make my head hurt a bit.

Clean this up by consolidating these into just one function (GuestMemoryExtensions::create) that can be called with a description of the memory regions, plus an enum argument stating how each region should be backed.
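The shape of the consolidated API can be sketched as follows. This is an illustrative sketch only, not Firecracker's actual code: the enum name `Backing`, its variants, and the `mmap_flags` helper are all assumptions made for the example; the real entry point is `GuestMemoryExtensions::create`.

```rust
/// Hypothetical enum describing how a guest memory region is backed.
/// One `create` entry point matching on this replaces the three
/// separate constructor functions.
#[derive(Debug, Clone, Copy)]
enum Backing {
    /// MAP_ANONYMOUS | MAP_PRIVATE memory
    Anonymous,
    /// memfd-backed memory, mapped as shared
    Memfd,
    /// direct shared mapping of a snapshot file
    SnapshotFile,
    /// MAP_ANONYMOUS again, contents loaded from a snapshot file
    AnonymousFromSnapshot,
}

/// Returns the mmap flag combination a region with this backing would use.
fn mmap_flags(backing: Backing) -> &'static str {
    match backing {
        Backing::Anonymous | Backing::AnonymousFromSnapshot => "MAP_ANONYMOUS | MAP_PRIVATE",
        Backing::Memfd | Backing::SnapshotFile => "MAP_SHARED",
    }
}

fn main() {
    for b in [
        Backing::Anonymous,
        Backing::Memfd,
        Backing::SnapshotFile,
        Backing::AnonymousFromSnapshot,
    ] {
        println!("{:?} -> {}", b, mmap_flags(b));
    }
}
```

The point of the design is that the backing decision becomes data (an enum variant per region) rather than control flow spread across several functions that call each other.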

License Acceptance

By submitting this pull request, I confirm that my contribution is made under
the terms of the Apache 2.0 license. For more information on following Developer
Certificate of Origin and signing off your commits, please check
CONTRIBUTING.md.

PR Checklist

  • I have read and understand CONTRIBUTING.md.
  • I have run tools/devtool checkstyle to verify that the PR passes the
    automated style checks.
  • I have described what is done in these changes, why they are needed, and
    how they are solving the problem in a clear and encompassing way.
  • I have updated any relevant documentation (both in code and in the docs)
    in the PR.
  • I have mentioned all user-facing changes in CHANGELOG.md.
  • If a specific issue led to this PR, this PR closes the issue.
  • When making API changes, I have followed the
    Runbook for Firecracker API changes.
  • I have tested all new and changed functionalities in unit tests and/or
    integration tests.
  • I have linked an issue to every new TODO.

  • This functionality cannot be added in rust-vmm.

@roypat roypat changed the title refactor: Unify methods of guest memory creation. refactor: Unify methods of guest memory creation Jan 27, 2025
@roypat roypat added the Status: Awaiting review Indicates that a pull request is ready to be reviewed label Jan 27, 2025
@roypat roypat force-pushed the memory-cleanup branch 2 times, most recently from 93e03be to 8a20ec7 Compare January 27, 2025 16:54
Resolved review threads (outdated): src/vmm/src/vstate/memory.rs (two threads), src/vmm/src/persist.rs
@roypat roypat force-pushed the memory-cleanup branch 2 times, most recently from 89efde4 to cdb2603 Compare January 28, 2025 16:04

codecov bot commented Jan 28, 2025

Codecov Report

Attention: Patch coverage is 83.52941% with 14 lines in your changes missing coverage. Please review.

Project coverage is 83.16%. Comparing base (43247e4) to head (33dfa3f).
Report is 6 commits behind head on main.

Files with missing lines Patch % Lines
src/vmm/src/vstate/memory.rs 83.33% 10 Missing ⚠️
src/vmm/src/persist.rs 76.47% 4 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #5013      +/-   ##
==========================================
+ Coverage   83.14%   83.16%   +0.02%     
==========================================
  Files         245      245              
  Lines       26699    26647      -52     
==========================================
- Hits        22198    22161      -37     
+ Misses       4501     4486      -15     
Flag Coverage Δ
5.10-c5n.metal 83.61% <83.52%> (+<0.01%) ⬆️
5.10-m5n.metal 83.60% <83.52%> (+<0.01%) ⬆️
5.10-m6a.metal 82.80% <83.52%> (-0.01%) ⬇️
5.10-m6g.metal 79.59% <83.52%> (+0.02%) ⬆️
5.10-m6i.metal 83.59% <83.52%> (+0.01%) ⬆️
5.10-m7g.metal 79.59% <83.52%> (+0.02%) ⬆️
6.1-c5n.metal 83.61% <83.52%> (+<0.01%) ⬆️
6.1-m5n.metal 83.59% <83.52%> (-0.01%) ⬇️
6.1-m6a.metal 82.80% <83.52%> (+<0.01%) ⬆️
6.1-m6g.metal 79.59% <83.52%> (+0.02%) ⬆️
6.1-m6i.metal 83.58% <83.52%> (+<0.01%) ⬆️
6.1-m7g.metal 79.59% <83.52%> (+0.02%) ⬆️


@roypat roypat marked this pull request as draft January 29, 2025 17:58
@roypat roypat marked this pull request as ready for review January 30, 2025 10:01
@roypat roypat requested a review from kalyazin January 30, 2025 10:01
@roypat roypat marked this pull request as draft January 30, 2025 10:59
In addition to being unused, it was also wrong: it only updated the
flag on KVM's side, but kept Firecracker's tracking disabled.

Signed-off-by: Patrick Roy <[email protected]>
We already know about dirty page tracking inside that function, based on
whether the memory regions have a bitmap associated with them or not. So
drop passing this information in again via a parameter, which saves us
quite a bit of plumbing.

Suggested-by: Nikita Kalyazin <[email protected]>
Signed-off-by: Patrick Roy <[email protected]>
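
The idea behind dropping the parameter can be sketched as below. This is a minimal illustration with made-up types (`Region` and `dirty_tracking_enabled` are assumptions for the example, not Firecracker's structs): whether dirty page tracking is on can be read off the regions themselves, because tracked regions carry a bitmap.

```rust
/// Simplified stand-in for a guest memory region.
struct Region {
    /// Some(_) iff dirty page tracking is enabled for this region.
    dirty_bitmap: Option<Vec<u64>>,
}

/// Tracking is considered enabled when any region carries a bitmap,
/// so no separate boolean parameter needs to be plumbed through.
fn dirty_tracking_enabled(regions: &[Region]) -> bool {
    regions.iter().any(|r| r.dirty_bitmap.is_some())
}

fn main() {
    let tracked = vec![Region { dirty_bitmap: Some(vec![0u64; 4]) }];
    let untracked = vec![Region { dirty_bitmap: None }];
    println!("{} {}", dirty_tracking_enabled(&tracked), dirty_tracking_enabled(&untracked));
}
```
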
In practice, the way we wrote our snapshot files has always been to just
write all regions in order. This means that the offset of a region is
simply the sum of the sizes of the preceding regions. The new
`GuestMemoryMmap::create` code already computes the offsets for mapping
the memory file this way, so drop the explicit calculation at snapshot
creation time (as the calculated value isn't used by the restoration
anymore).

Do not bump the snapshot version number, because we already did so since
the last release.

Signed-off-by: Patrick Roy <[email protected]>
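
The offset rule described in this commit message can be sketched in a few lines. The function name `region_offsets` is an assumption for illustration, not the actual Firecracker code:

```rust
/// Regions are written to the snapshot file back to back, so each
/// region's offset is the running sum of the preceding regions' sizes.
fn region_offsets(sizes: &[u64]) -> Vec<u64> {
    let mut offset = 0u64;
    sizes
        .iter()
        .map(|&size| {
            let cur = offset;
            offset += size;
            cur
        })
        .collect()
}

fn main() {
    // Regions of 4 KiB, 8 KiB, 4 KiB land at offsets 0x0, 0x1000, 0x3000.
    println!("{:x?}", region_offsets(&[0x1000, 0x2000, 0x1000]));
}
```

Because the restore path recomputes these offsets the same way, storing them explicitly in the snapshot is redundant.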
We forgot to include this in the 1.9.0 changelog. Let's retroactively do
it.

Signed-off-by: Patrick Roy <[email protected]>
Some tests in memory.rs were running effectively the same test scenario
twice, the only difference being the state of dirty page tracking. Just
use a loop over the two boolean values here to avoid the copy-paste.

Also remove a leftover test that was referring to "guard pages", but
actually only repeated one of the dirty page tracking blocks. Guard
pages were removed in 71cf036.

Signed-off-by: Patrick Roy <[email protected]>
In this day and age, Firecracker theoretically supports 4 different ways
of backing guest memory:

1. Normal MAP_ANONYMOUS | MAP_PRIVATE memory
2. memfd backed memory, mapped as shared
3. direct mapping of a snapshot file
4. MAP_ANONYMOUS again, but this time the regions are described by the
   snapshot file.

We have 3 different functions for creating these different backing
stores, which then call each other and vm_memory's APIs. Clean this up
by consolidating these into just one function that can be called with
generic memory backing options, plus 3 wrappers for the three actually
used ways of backing memory.

For this, hoist up the hugepages/file-based restore incompatibility
check, as with a dedicated function for dealing with the "snapshot
restored by mapping file" case, this function simply will not take a
huge pages argument, so we have to check this somewhere else.

Signed-off-by: Patrick Roy <[email protected]>
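
The hoisted incompatibility check can be sketched as below. This is a hypothetical illustration (the name `validate_restore` and its signature are assumptions): since the dedicated "restore by mapping the snapshot file" function no longer takes a huge pages argument, the incompatible combination has to be rejected earlier, before dispatching.

```rust
/// Reject the incompatible combination up front: a snapshot file cannot
/// be directly mapped into huge-page-backed guest memory.
fn validate_restore(huge_pages: bool, map_snapshot_file: bool) -> Result<(), String> {
    if huge_pages && map_snapshot_file {
        return Err("huge pages cannot be combined with mapping the snapshot file".to_string());
    }
    Ok(())
}

fn main() {
    println!("{:?}", validate_restore(true, false));
    println!("{:?}", validate_restore(true, true));
}
```
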
@roypat roypat marked this pull request as ready for review January 31, 2025 10:29
Resolved review thread: CHANGELOG.md
@roypat roypat requested a review from bchalios January 31, 2025 12:14
Resolved review thread (outdated): src/vmm/src/vstate/memory.rs
@roypat roypat merged commit 1bb9d18 into firecracker-microvm:main Feb 3, 2025
6 of 7 checks passed
3 participants