memory management: Add virtual heap memory allocators #7276
Conversation
tip: add a pre-commit hook to your local git repo and run checkpatch at every commit. Sometimes it's annoying, but it helps.
I knew there were checkpatch issues and I intended to fix them before converting this to a full PR. I made this draft so we can discuss the architectural approach, not the cosmetics.
More review is coming. This is a "classic"; later I'll focus on the main code.
ANYWAY - please use checkpatch even in draft PRs. Please add comments.
It makes a review - any review, even of a draft - easier and faster.
Force-pushed from d3d92e8 to bcbcba0
looks like not all comments have been addressed yet.
Force-pushed from 803c646 to 67134e3
Force-pushed from 17b94f3 to c26d7fe
Force-pushed from cd3c39b to 2d09086
Tested with asserts on. @lyakh I want to talk with @lgirdwood offline to be sure about the fix for multi-threading, but it is good to test if you want to do so.
Force-pushed from 45074a9 to 46e7938
so have you cleaned up all the asserts? None are triggering any more?
@lyakh I ran my basic scenarios: create heap / alloc / alloc / free / free / free heap. It worked with asserts on.
Still not a complete review - just your changes from the previous version, and I haven't finished reviewing the previous version, so I'll have to do that. But please also fix the gcc compilation breakage, e.g. https://github.com/thesofproject/sof/actions/runs/5659675012/job/15333903191?pr=7276
case MEM_REG_ATTR_SHARED_HEAP:
	vm_heaps[i].memory_caps = SOF_MEM_CAPS_RAM | SOF_MEM_CAPS_HP;
	struct vmh_heap *new_heap =
		rzalloc(SOF_MEM_ZONE_RUNTIME_SHARED, 0, SOF_MEM_CAPS_RAM, sizeof(*new_heap));
this is now an "uncached allocation" - is this on purpose? All the allocations are now uncached, I thought it was an important aspect of this allocator to use cached access? Or you're converting addresses somewhere? Haven't checked yet
Yes, it was needed to change the addresses of the spinlocks in the bitarrays to uncached, and I decided not to split the allocation into parts of the struct. The struct data we are talking about here should not generate big overhead, and the changes only apply to information about the heap and its allocations. The buffer itself is still provided by Zephyr defines; the pointers allocated from this buffer (using this API) and this structure that saves heap info are separate.
 */
size_t bitarray_size = SOF_DIV_ROUND_UP(
	SOF_DIV_ROUND_UP((uint64_t)cfg->block_bundles_table[i].number_of_blocks, 8),
	sizeof(uint32_t));
can we just make it
size_t bitarray_size = SOF_DIV_ROUND_UP(cfg->block_bundles_table[i].number_of_blocks, 8 * sizeof(uint32_t));
because if .number_of_blocks + 31 > 0xffffffff, I don't think this will work anyway.
I would like to stay as true as possible to the size calculation that already exists in Zephyr. It is complicated as is; I would not like to convert it any further, since this will be easier for people to understand when they cross-reference it with the sys_bitarray implementation.
	return NULL;
/* Only operations on the same core are allowed */
if (heap->core_id != cpu_get_id())
	return NULL;
please, also check thread-locality and thread context
Disagree with handling anything about threads here. It should be under the management code that handles the threads.
sorry, don't understand what you mean, what code that handles the threads? Is this code handling multiple cores? This is a function that has its limitations: it should run only on a specific core and only in specific thread context, so it should check all the conditions.
As far as I understand it, this should not be handling threads in any capacity; it is the management code that does it. @mmaka1 didn't we agree upon that in the syncs?
 */
block_size =
	1 << heap->physical_blocks_allocators[mem_block_iterator]->blk_sz_shift;
block_count = SOF_DIV_ROUND_UP((uint64_t)alloc_size, (uint64_t)block_size);
please, check the definition of SOF_DIV_ROUND_UP()
- it's doing a division, the only way it can overflow (which is presumably what you're protecting against by typecasting to 64 bits), is from the addition operation. And we really shouldn't be allocating anything where alloc_size + block_size - 1 > 0xffffffff
So what really is your point here? Do you want to remove the protection? I know that we will not be allocating sizes like that, but who knows what we will use it for later. I talked it over with @tlissows and we came to the conclusion that if it does not hurt anyone and makes this code safer, then it should be there.
I wouldn't use 64-bit types. Type-casts are generally a rather crude instrument and should be avoided where possible, and here I don't see a need for them, and it causes a significant overhead - a 64-bit division.
And I want it to be safe this way. The overhead seems negligible. And casts are tools to be used.
This version passed the previous version of my basic test in #7688 - that's already good. I then went ahead and added a call to
i.e. the allocator fails to allocate memory in the 5 blocks of the heap - which is wrong, because there seems to be plenty of free memory in all of them - this is the first allocation attempt, and having failed those 5 blocks it attempts to use the 6th block, which is apparently not present, so it hits a NULL dereference.
Force-pushed from 46e7938 to 8d27ebf
The new version fixed the crash. Now the new version of #7688 produces these errors:
i.e.
 */
struct sys_bitarray *allocation_sizes_bitarray =
	rzalloc(SOF_MEM_ZONE_RUNTIME_SHARED,
		0, SOF_MEM_CAPS_RAM, sizeof(sys_bitarray_t));
we have been discussing ways to reduce the number of heap allocations in SOF, and here we have an example of a single object creation process using multiple such allocations. Maybe we could calculate a size and allocate the whole memory in 1 go. Can be a follow-up optimisation
Yeah, I was exploring that idea myself. Agreed, as a follow-up.
	}
	rfree(allocator);
}
rfree(new_heap);
so for each new heap we do 5 * block-number + 1 heap allocations... That's really too many
Agreed that this could be followed up later.
 * First: 1111111100000000 allocate another block 1111111100000000
 * Second:1110000000000000 and another two blocks 1110001000000000
 * We are still able to pinpoint allocations size
 */
in principle we need 3 values, i.e. 2 bits per block: free, used-single-or-last, used-linked-to-next. So in theory we'd need an array of 2-bit values, but that would be difficult to work with
You have two bits per block now. One bit is taken from the mem_block bitarray, which records whether the block is allocated or not, and the second bitarray is kept in the heap struct and records whether the block is linked to the next one. Aggregating both of these arrays, you end up with all the information you need. I think I explained it clearly in the in-code comments.
yes, sorry, it's me who didn't explain it clearly in my comment. I just meant that those two bitarrays actually represent an array of 2-bit fields, just in a slightly cumbersome and non-obvious way.
So what is the point of this comment? Do you want any changes here? I do not understand.
 */
phys_aligned_ptr = ALIGN_DOWN((uintptr_t)ptr, CONFIG_MM_DRV_PAGE_SIZE);
phys_aligned_alloc_end = ALIGN_UP((uintptr_t)ptr
				  + alloc_size, CONFIG_MM_DRV_PAGE_SIZE);
same line, please
If I'm right about ptr needing to be an array instead, then this is wrong too, and the following code as well. I'll stop here.
Putting it on the same line produces warnings in checkpatch.
Did I miss some comment here? Because I do not understand what you should be right about here.
	continue;
/* Try span alloc on first available mem_block for non span
 * check if block size is sufficient.
 */
what are use-cases when we must have contiguous physical pages? DMA? Or would DMA also use virtual addresses and thus only care about virtual contiguity?
This has nothing to do with contiguous physical pages.
This is firstly checking whether we have virtual blocks of sufficient size by allocating on memblocks. If the allocation was successful, we allocate physical pages if needed.
And the answer is: we do not need it, and I do not force anything like that here.
ah, ok, so we just map them into a contiguous address region, but physical pages can be discontiguous. Maybe it would be good to clarify that somewhere in a comment
When this gets in, one of the next steps will be documenting it. I think adding anything about span physical mapping would only cause confusion here, since there is a clear distinction in wording: allocation / mapping.
		     CONFIG_MM_DRV_PAGE_SIZE, SYS_MM_MEM_PERM_RW) != 0) {
	ptrs_to_map[i] = (uintptr_t)NULL;
	goto fail;
}
can we make a function vmh_is_page_mapped() and just call it for each physical page? Eventually for contiguous allocators you can just check the first and the last page, but that can be optimised later.
I would refactor it after the implementation of Zephyr physical memory counting - that will simplify this implementation and make it much shorter.
Force-pushed from 8d27ebf to 5ed4d5c
Add virtual heap allocators that allocate on proper heaps and map physical memory where necessary. Add a free function that frees up the virtual heap memory and unmaps physical memory when possible. The virtual heap allocator allows using virtual memory as a base for allocating and booking memory. Physical memory banks will be mapped when needed, allowing for greater flexibility with mapping. Signed-off-by: Jakub Dabek <[email protected]>
Force-pushed from 5ed4d5c to 02076a4
@lyakh I was unable to reproduce the issue you presented with my latest changes. I haven't explicitly fixed any bugs, but changes were made nonetheless.
ok, we can extend the UTs after merge and as clients are added.
SOFCI TEST
Add virtual heap allocators that allocate on proper heaps and map physical memory where necessary.
Add a free function that frees up the virtual heap memory and unmaps physical memory when possible.
Some relevant links to understand context
https://thesofproject.github.io/latest/architectures/firmware/sof-zephyr/rtos_layer/memory_management/heap_sharing.html
https://thesofproject.github.io/latest/architectures/firmware/sof-zephyr/rtos_layer/memory_management/index.html
https://github.com/orgs/thesofproject/discussions/7514
Added a presentation that outlines the intent of the commit, crafted at the start of this commit's lifecycle.
virtual heaps.pdf
Summarizing the potential benefits that this API introduces: