
ace: zephyr: alloc: Use virtual memory heap for buffers #9155

Merged: 2 commits merged from the vmh-alloc branch into thesofproject:main on Dec 3, 2024

Conversation

@softwarecki (Collaborator) commented May 22, 2024

The buffer allocation method for the ace platform has been changed. Buffers are now allocated using the virtual memory heap, which consists of a set of buffers with predefined sizes.

Dividing memory into buffers of predefined sizes is necessary for the proper operation of the allocator that creates a buffer whose size is given as a range (buffer_alloc_range #9145), a functionality used for deep buffering.

PR needed before merging this PR:

@@ -36,7 +36,7 @@
  * either be spanned on specifically configured heap or have
  * individual configs with bigger block sizes.
  */
-#define MAX_MEMORY_ALLOCATORS_COUNT 8
+#define MAX_MEMORY_ALLOCATORS_COUNT 10
Review comment from a Member:

Should this now come from Kconfig if CONFIG_VIRTUAL_HEAP == y ?
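
As a rough illustration of that suggestion, the existing define could be guarded by the CONFIG_VIRTUAL_HEAP symbol named above (a sketch only, using the two values from the diff; whether the count itself should become a dedicated Kconfig option is left open here):

```c
/* Sketch: choose the allocator-table size depending on CONFIG_VIRTUAL_HEAP.
 * Alternatively the whole value could be promoted to a Kconfig option.
 */
#ifdef CONFIG_VIRTUAL_HEAP
#define MAX_MEMORY_ALLOCATORS_COUNT	10
#else
#define MAX_MEMORY_ALLOCATORS_COUNT	8
#endif
```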

zephyr/lib/alloc.c (outdated review thread, resolved)
@lyakh (Collaborator) commented Jun 4, 2024

> Dividing memory into buffers of predefined sizes is necessary for the proper operation of the allocator that creates a buffer whose size is given as a range

@softwarecki sorry, could you explain why it's necessary? Your buffer size as a range PR is separate, so it also works with the standard Zephyr heap allocator, right?

@softwarecki (Collaborator, Author)

@lyakh In a situation where a preferred buffer size given by the IPC is too large to be allocated, the size of the buffer is reduced step by step until allocation succeeds. Using this mechanism on a regular allocator will allocate all available memory. As a result, it will not be possible to allocate subsequent buffers due to the lack of available memory. In a situation where memory has been divided beforehand into buffers of specific sizes, allocating the largest possible buffer leaves the remaining buffers available.
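
A minimal sketch of that step-down idea, assuming for simplicity that the size is halved after each failed attempt (the actual buffer_alloc_range logic from #9145 may step differently, and malloc() merely stands in for whichever heap or VMH allocator is in use):

```c
#include <stddef.h>
#include <stdlib.h>

/* Try the preferred size first and shrink towards the minimum until an
 * allocation succeeds; return NULL only if even the minimum size fails.
 */
static void *alloc_range(size_t minimum, size_t preferred, size_t *obtained)
{
	size_t size = preferred;

	while (size >= minimum) {
		void *buf = malloc(size);	/* stand-in allocator */

		if (buf != NULL) {
			*obtained = size;
			return buf;
		}
		if (size == minimum)
			break;
		size = (size / 2 > minimum) ? size / 2 : minimum;
	}
	return NULL;
}
```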

@lyakh (Collaborator) commented Jun 5, 2024

> @lyakh In a situation where a preferred buffer size given by the IPC is too large to be allocated, the size of the buffer is reduced step by step until allocation succeeds. Using this mechanism on a regular allocator will allocate all available memory. As a result, it will not be possible to allocate subsequent buffers due to the lack of available memory. In a situation where memory has been divided beforehand into buffers of specific sizes, allocating the largest possible buffer leaves the remaining buffers available.

@softwarecki sorry, still don't understand. You have your range allocator. First you request a buffer of size min=16k, max=128k. Say, we have more than 128k available, so we allocate the maximum. With both allocators, right? Then comes another request for the same size range. Now we don't have another 128k available any more, but we do have 16k, so we allocate that - with either allocator. What's the difference? Why are you saying that "on a regular allocator will allocate all available memory?" Why should one allocation take all the available memory? Confused.

@softwarecki (Collaborator, Author)

@lyakh Let me explain. For example, we want to make two allocations (min=16k, preferred=128k) but we only have 96k available. In the case of a classic allocation using sys_heap, the first allocation scrapes up all the available memory and the next allocation fails. If we have previously divided this space into three buffers of 32k each, both allocations will succeed and there will still be something left. The difference can be seen when the preferred size exceeds the available memory.
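
A rough trace of this example (the exact step sequence of the fallback is illustrative only):

```c
/*
 * Two requests, each min = 16k, preferred = 128k; 96k of memory free.
 *
 * Regular sys_heap:
 *   request 1: 128k fails, the size is reduced until 96k succeeds
 *              -> the whole heap is consumed
 *   request 2: nothing >= 16k is left -> allocation fails
 *
 * VMH with three predefined 32k buffers:
 *   request 1: 128k and 64k fail, 32k succeeds -> two 32k buffers remain
 *   request 2: 32k succeeds                    -> one 32k buffer remains
 */
```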

@lyakh (Collaborator) commented Jun 6, 2024

> @lyakh Let me explain. For example, we want to make two allocations (min=16k, preferred=128k) but we only have 96k available. In the case of a classic allocation using sys_heap, the first allocation scrapes up all the available memory and the next allocation fails. If we have previously divided this space into three buffers of 32k each, both allocations will succeed and there will still be something left. The difference can be seen when the preferred size exceeds the available memory.

@softwarecki ok, I see, but TBH this seems to be a rather artificial example to me. In this "lucky" situation - yes, that would be the case. But in many other cases the VMH would fail too. In fact, doesn't VMH add fragmentation? E.g. if you allocate buffers of 24 or 48k and you use 32 or 64k VMH buffers for them?

@softwarecki force-pushed the vmh-alloc branch 3 times, most recently from 5d8f11d to 80bb55f on June 12, 2024 13:14
@softwarecki marked this pull request as ready for review on June 12, 2024 13:15
@iganakov (Contributor) left a comment:

Looks ok to me

@softwarecki force-pushed the vmh-alloc branch 2 times, most recently from 36132d4 to ce05dc9 on July 5, 2024 10:36
@lgirdwood (Member) left a comment:

@softwarecki I'm good for this today, but long term we should move all our memory logic into Zephyr, i.e. the SOF application would be a client of Zephyr memory management APIs.

@lgirdwood (Member)

@softwarecki can you check internal CI? Thanks!

@lgirdwood (Member)

> @softwarecki I'm good for this today, but long term we should move all our memory logic into Zephyr, i.e. the SOF application would be a client of Zephyr memory management APIs.

@nashif @teburd @andyross fwiw, we are good to take this into SOF atm, but can we plan initial Zephyr support for virtual memory in the next merge window? I assume we are still pending some dependencies for virtual memory heaps in Zephyr?
@kv2019i topic for the next Zephyr/SOF call?

@softwarecki (Collaborator, Author)

@lgirdwood
In my opinion, the current implementation places overly restrictive requirements on the memory-division configuration. This is partly due to the use of mem_blocks (the buffer size must be a power of two) as well as design assumptions (the size of a buffer bundle must be aligned to the page size).

Last year I proposed my allocator for Zephyr (zephyrproject-rtos/zephyr#54714) with predefined buffers, but it was met with criticism that it did not use Zephyr's existing allocators as a base. It supports any division configuration and does not use dynamic memory allocation. Virtual memory support (page mapping) was supposed to be added in the next layer, but it turned out that colleagues were already working on a similar solution, so I did not continue the work.
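
For reference, the power-of-two requirement mentioned above comes from Zephyr's sys_mem_blocks allocator. A hypothetical pool definition showing the shape of such a configuration (names and sizes are examples only, not the actual SOF VMH setup):

```c
#include <zephyr/sys/mem_blocks.h>

/* Hypothetical pool: 8 buffers of 32 KiB each, 4 KiB (page) aligned.
 * sys_mem_blocks requires the block size to be a power of two.
 */
SYS_MEM_BLOCKS_DEFINE(example_buffers_32k, 32 * 1024, 8, 4096);

static void *example_buffer_get(void)
{
	void *block;

	/* sys_mem_blocks_alloc() returns 0 on success */
	if (sys_mem_blocks_alloc(&example_buffers_32k, 1, &block) != 0) {
		return NULL;
	}

	return block;
}
```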

@lgirdwood (Member)

> @lgirdwood In my opinion, the current implementation places overly restrictive requirements on the memory-division configuration. This is partly due to the use of mem_blocks (the buffer size must be a power of two) as well as design assumptions (the size of a buffer bundle must be aligned to the page size).
>
> Last year I proposed my allocator for Zephyr (zephyrproject-rtos/zephyr#54714) with predefined buffers, but it was met with criticism that it did not use Zephyr's existing allocators as a base. It supports any division configuration and does not use dynamic memory allocation. Virtual memory support (page mapping) was supposed to be added in the next layer, but it turned out that colleagues were already working on a similar solution, so I did not continue the work.

ok, we can keep it here until everything can be aligned with the Zephyr folks and migrated in the future.

@lgirdwood (Member)

@softwarecki can you check internal CI and confirm? We could split the PR here if it's only one patch that's causing the failure.

@lgirdwood (Member)

@softwarecki still DNM?

@softwarecki marked this pull request as draft on September 2, 2024 16:00
@kv2019i (Collaborator) commented Sep 6, 2024

Release reminder - one week to v2.11-rc1.

@kv2019i modified the milestones: v2.11, v2.12 on Sep 13, 2024
@lgirdwood (Member)

@softwarecki any ETA? Stable v2.11 is now forked, so there is less risk now.

The current design constraints of the vmh allocator dictate that a buffer size must be a power of two and that each group of buffers must have a size that is a multiple of the page size. The current configuration of memory regions provides 1 MB of virtual space for buffers. The combination of these factors currently makes it impossible to allocate a bigger buffer for the kbp. It is therefore necessary to increase the number of memory pools from which the kbp allocates the necessary buffer.

Signed-off-by: Adrian Warecki <[email protected]>
Use the virtual memory heap to allocate buffers for ace platforms. The new buffer allocation mechanism uses buffers of predefined sizes. Each core has a dedicated region of the virtual address space from which buffers are allocated.

Signed-off-by: Adrian Warecki <[email protected]>
@softwarecki marked this pull request as ready for review on November 28, 2024 10:37
@softwarecki removed the "DNM Do Not Merge tag" label on Nov 28, 2024
@softwarecki (Collaborator, Author)

@kv2019i @lgirdwood I think it's good to merge now.

@lgirdwood (Member)

> @kv2019i @lgirdwood I think it's good to merge now.

Ack - just waiting on CI now.

@kv2019i (Collaborator) commented Dec 3, 2024

SOFCI TEST

@kv2019i (Collaborator) commented Dec 3, 2024

The sof-docs failure and the Intel LNL failures are all known and tracked in https://github.com/thesofproject/sof/issues?q=is%3Aissue+is%3Aopen+label%3A%22Known+PR+Failures%22+

@kv2019i merged commit fceac69 into thesofproject:main on Dec 3, 2024
81 of 88 checks passed