Message ID | 20200818080415.7531-1-hyesoo.yu@samsung.com (mailing list archive) |
---|---|
Headers |
From: Hyesoo Yu <hyesoo.yu@samsung.com>
To: sumit.semwal@linaro.org
Cc: minchan@kernel.org, akpm@linux-foundation.org, iamjoonsoo.kim@lge.com, joaodias@google.com, linux-mm@kvack.org, pullip.cho@samsung.com, surenb@google.com, vbabka@suse.cz, afd@ti.com, benjamin.gaignard@linaro.org, lmark@codeaurora.org, labbott@redhat.com, Brian.Starkey@arm.com, john.stultz@linaro.org, christian.koenig@amd.com, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org, robh+dt@kernel.org, devicetree@vger.kernel.org, Hyesoo Yu <hyesoo.yu@samsung.com>
Subject: [PATCH 0/3] Chunk Heap Support on DMA-HEAP
Date: Tue, 18 Aug 2020 17:04:12 +0900
Message-Id: <20200818080415.7531-1-hyesoo.yu@samsung.com>
X-Mailer: git-send-email 2.27.0
List-ID: <linux-media.vger.kernel.org> |
Series | Chunk Heap Support on DMA-HEAP | |
Message
Hyesoo Yu
Aug. 18, 2020, 8:04 a.m. UTC
This patch series introduces a new DMA heap, the chunk heap. The heap is needed for special HW that requires bulk allocation of fixed high-order pages. For example, a 64MB dma-buf is made up of 1024 fixed order-4 pages.

The chunk heap uses alloc_pages_bulk to allocate the high-order pages:
https://lore.kernel.org/linux-mm/20200814173131.2803002-1-minchan@kernel.org

The chunk heap is registered by device tree with an alignment and the memory node of a contiguous memory allocator (CMA). The alignment defines the chunk page size; for example, an alignment of 0x1_0000 means a chunk page size of 64KB. The phandle to the memory node indicates the CMA region. If the device node doesn't have a CMA region, registration of the chunk heap fails.

The patchset includes the following:
- export the dma-heap API so kernel modules can register dma heaps.
- add the chunk heap implementation.
- a device tree binding document for registering the chunk heap.

Hyesoo Yu (3):
  dma-buf: add missing EXPORT_SYMBOL_GPL() for dma heaps
  dma-buf: heaps: add chunk heap to dmabuf heaps
  dma-heap: Devicetree binding for chunk heap

 .../devicetree/bindings/dma-buf/chunk_heap.yaml |  46 +++++
 drivers/dma-buf/dma-heap.c                      |   2 +
 drivers/dma-buf/heaps/Kconfig                   |   9 +
 drivers/dma-buf/heaps/Makefile                  |   1 +
 drivers/dma-buf/heaps/chunk_heap.c              | 222 +++++++++++++++++++++
 drivers/dma-buf/heaps/heap-helpers.c            |   2 +
 6 files changed, 282 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml
 create mode 100644 drivers/dma-buf/heaps/chunk_heap.c
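As a sketch of how such a registration might look in a device tree (the compatible strings, node names, and property names below are illustrative assumptions, not the exact schema from the binding patch), a 256MB CMA region could be tied to a chunk heap with 64KB chunk pages like this:

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <1>;
	ranges;

	chunk_memory: chunk_memory {
		/* CMA region backing the chunk heap */
		compatible = "shared-dma-pool";
		reusable;
		size = <0x10000000>;
	};
};

chunk_default_heap: chunk_default_heap {
	compatible = "samsung,dma-heap-chunk";
	memory-region = <&chunk_memory>;
	alignment = <0x10000>;	/* chunk page size: 64KB (order-4) */
};
```

Per the cover letter, leaving out the memory-region phandle would make heap registration fail, since the chunk heap only allocates from CMA.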
Comments
Hi,

On Tue, Aug 18, 2020 at 05:04:12PM +0900, Hyesoo Yu wrote:
> These patch series to introduce a new dma heap, chunk heap.
> That heap is needed for special HW that requires bulk allocation of
> fixed high order pages. For example, 64MB dma-buf pages are made up
> to fixed order-4 pages * 1024.
>
> The chunk heap uses alloc_pages_bulk to allocate high order page.
> https://lore.kernel.org/linux-mm/20200814173131.2803002-1-minchan@kernel.org
>
> The chunk heap is registered by device tree with alignment and memory node
> of contiguous memory allocator(CMA). Alignment defines chunk page size.
> For example, alignment 0x1_0000 means chunk page size is 64KB.
> The phandle to memory node indicates contiguous memory allocator(CMA).
> If device node doesn't have cma, the registration of chunk heap fails.

This reminds me of an ion heap developed at Arm several years ago:
https://git.linaro.org/landing-teams/working/arm/kernel.git/tree/drivers/staging/android/ion/ion_compound_page.c

Some more descriptive text here:
https://github.com/ARM-software/CPA

It maintains a pool of high-order pages with a worker thread to
attempt compaction and allocation to keep the pool filled, with high
and low watermarks to trigger freeing/allocating of chunks.
It implements a shrinker to allow the system to reclaim the pool under
high memory pressure.

Is maintaining a pool something you considered? From the
alloc_pages_bulk thread it sounds like you want to allocate 300M at a
time, so I expect if you tuned the pool size to match that it could
work quite well.

That implementation isn't using a CMA region, but a similar approach
could definitely be applied.

Thanks,
-Brian

> The patchset includes the following:
> - export dma-heap API to register kernel module dma heap.
> - add chunk heap implementation.
> - document of device tree to register chunk heap
>
> Hyesoo Yu (3):
>   dma-buf: add missing EXPORT_SYMBOL_GPL() for dma heaps
>   dma-buf: heaps: add chunk heap to dmabuf heaps
>   dma-heap: Devicetree binding for chunk heap
>
>  .../devicetree/bindings/dma-buf/chunk_heap.yaml |  46 +++++
>  drivers/dma-buf/dma-heap.c                      |   2 +
>  drivers/dma-buf/heaps/Kconfig                   |   9 +
>  drivers/dma-buf/heaps/Makefile                  |   1 +
>  drivers/dma-buf/heaps/chunk_heap.c              | 222 +++++++++++++++++++++
>  drivers/dma-buf/heaps/heap-helpers.c            |   2 +
>  6 files changed, 282 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml
>  create mode 100644 drivers/dma-buf/heaps/chunk_heap.c
>
> --
> 2.7.4
>
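The high/low watermark scheme described for the CPA pool can be sketched in plain C as a userspace model (the function and field names here are illustrative assumptions, not CPA's actual code): the worker thread refills a drained pool back up to the high mark, and the shrinker releases anything above it under pressure.

```c
#include <stddef.h>

/*
 * Userspace model of a watermarked chunk pool: a background worker
 * refills the pool when it drains below lo_mark, and a shrinker-style
 * callback trims it when it grows past hi_mark.
 */
struct chunk_pool {
	size_t nr_chunks; /* high-order chunks currently held */
	size_t lo_mark;   /* refill when nr_chunks < lo_mark */
	size_t hi_mark;   /* trim when nr_chunks > hi_mark */
};

/* Worker thread policy: how many chunks to allocate to reach hi_mark. */
static size_t pool_refill_count(const struct chunk_pool *p)
{
	return p->nr_chunks < p->lo_mark ? p->hi_mark - p->nr_chunks : 0;
}

/* Shrinker policy: how many chunks can be released under pressure. */
static size_t pool_trim_count(const struct chunk_pool *p)
{
	return p->nr_chunks > p->hi_mark ? p->nr_chunks - p->hi_mark : 0;
}
```

Tuning hi_mark to match the expected burst (the 300M figure mentioned above) is what would let most allocations be served straight from the pool.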
On Tue, Aug 18, 2020 at 12:45 AM Hyesoo Yu <hyesoo.yu@samsung.com> wrote:
>
> These patch series to introduce a new dma heap, chunk heap.
> That heap is needed for special HW that requires bulk allocation of
> fixed high order pages. For example, 64MB dma-buf pages are made up
> to fixed order-4 pages * 1024.
>
> The chunk heap uses alloc_pages_bulk to allocate high order page.
> https://lore.kernel.org/linux-mm/20200814173131.2803002-1-minchan@kernel.org
>
> The chunk heap is registered by device tree with alignment and memory node
> of contiguous memory allocator(CMA). Alignment defines chunk page size.
> For example, alignment 0x1_0000 means chunk page size is 64KB.
> The phandle to memory node indicates contiguous memory allocator(CMA).
> If device node doesn't have cma, the registration of chunk heap fails.
>
> The patchset includes the following:
> - export dma-heap API to register kernel module dma heap.
> - add chunk heap implementation.
> - document of device tree to register chunk heap
>
> Hyesoo Yu (3):
>   dma-buf: add missing EXPORT_SYMBOL_GPL() for dma heaps
>   dma-buf: heaps: add chunk heap to dmabuf heaps
>   dma-heap: Devicetree binding for chunk heap

Hey! Thanks so much for sending this out! I'm really excited to see
these heaps be submitted and reviewed on the list!

The first general concern I have with your series is that it adds a dt
binding for the chunk heap, which we've gotten a fair amount of
pushback on. A possible alternative might be something like what
Kunihiko Hayashi proposed for non-default CMA heaps:
https://lore.kernel.org/lkml/1594948208-4739-1-git-send-email-hayashi.kunihiko@socionext.com/

This approach would instead allow a driver to register a CMA area with
the chunk heap implementation. However (and this was the catch with
Kunihiko Hayashi's patch), this requires that the driver also be
upstream, as we need an in-tree user of such code.
Also, it might be good to provide some further rationale on why this
heap is beneficial over the existing CMA heap? In general, focusing the
commit messages more on why we might want the patch, rather than on what
the patch does, is helpful. "Special hardware" that doesn't have
upstream drivers isn't very compelling for most maintainers.

That said, I'm very excited to see these sorts of submissions, as I
know lots of vendors have historically had very custom out-of-tree ION
heaps, and I think it would be a great benefit to the community to
better understand the experience vendors have in optimizing performance
on their devices, so we can create good common solutions upstream. So I
look forward to your insights on future revisions of this patch series!

thanks
-john
On Tue, Aug 18, 2020 at 11:55:57AM +0100, Brian Starkey wrote:
> Hi,
>
> On Tue, Aug 18, 2020 at 05:04:12PM +0900, Hyesoo Yu wrote:
> > These patch series to introduce a new dma heap, chunk heap.
> > That heap is needed for special HW that requires bulk allocation of
> > fixed high order pages. For example, 64MB dma-buf pages are made up
> > to fixed order-4 pages * 1024.
> >
> > The chunk heap uses alloc_pages_bulk to allocate high order page.
> > https://lore.kernel.org/linux-mm/20200814173131.2803002-1-minchan@kernel.org
> >
> > The chunk heap is registered by device tree with alignment and memory node
> > of contiguous memory allocator(CMA). Alignment defines chunk page size.
> > For example, alignment 0x1_0000 means chunk page size is 64KB.
> > The phandle to memory node indicates contiguous memory allocator(CMA).
> > If device node doesn't have cma, the registration of chunk heap fails.
>
> This reminds me of an ion heap developed at Arm several years ago:
> https://git.linaro.org/landing-teams/working/arm/kernel.git/tree/drivers/staging/android/ion/ion_compound_page.c
>
> Some more descriptive text here:
> https://github.com/ARM-software/CPA
>
> It maintains a pool of high-order pages with a worker thread to
> attempt compaction and allocation to keep the pool filled, with high
> and low watermarks to trigger freeing/allocating of chunks.
> It implements a shrinker to allow the system to reclaim the pool under
> high memory pressure.
>
> Is maintaining a pool something you considered? From the
> alloc_pages_bulk thread it sounds like you want to allocate 300M at a
> time, so I expect if you tuned the pool size to match that it could
> work quite well.
>
> That implementation isn't using a CMA region, but a similar approach
> could definitely be applied.
>

I have seriously considered CPA in our product but we developed our own
because of the pool in CPA.

The high-order pages are required by some specific users like the
Netflix app. Moreover, the required number of bytes is dramatically
increasing because of high-resolution videos and displays these days.

Gathering lots of free high-order pages in the background during
run-time means reserving that amount of pages from the entire available
system memory. Moreover, the gathered pages are soon reclaimed whenever
the system is suffering from memory pressure (i.e. camera recording,
heavy games).

So we had to consider allocating hundreds of megabytes at a time. Of
course we don't allocate all buffers by a single call to
alloc_pages_bulk(). But still a buffer is very large. A single frame of
8K HDR video needs 95MB (7680*4320*2*1.5). Even a single frame of 4K
HDR video needs 24MB, and 4K HDR is now popular on Netflix, YouTube and
Google Play video.

> Thanks,
> -Brian

Thank you!

KyongHo
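The frame-size arithmetic above can be checked with a small standalone calculation. This sketch assumes the factors implied by the 7680*4320*2*1.5 formula in the message (2 bytes per sample, times 3/2 for 4:2:0 chroma subsampling); the helper name is made up for illustration.

```c
#include <stdint.h>

/*
 * Bytes for one HDR video frame at the given resolution:
 * width * height luma samples, 2 bytes per sample, times 3/2
 * for the 4:2:0 chroma planes -- i.e. w * h * 2 * 1.5.
 */
static uint64_t hdr420_frame_bytes(uint64_t width, uint64_t height)
{
	return width * height * 2 * 3 / 2;
}
```

For 8K (7680x4320) this gives 99,532,800 bytes, about 95MiB per frame; for 4K (3840x2160) it gives 24,883,200 bytes, about 24MiB, matching the figures quoted above.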
Hi KyongHo,

On Wed, Aug 19, 2020 at 12:46:26PM +0900, Cho KyongHo wrote:
> I have seriously considered CPA in our product but we developed our own
> because of the pool in CPA.

Oh good, I'm glad you considered it :-)

> The high-order pages are required by some specific users like Netflix
> app. Moreover required number of bytes are dramatically increasing
> because of high resolution videos and displays in these days.
>
> Gathering lots of free high-order pages in the background during
> run-time means reserving that amount of pages from the entire available
> system memory. Moreover the gathered pages are soon reclaimed whenever
> the system is suffering from memory pressure (i.e. camera recording,
> heavy games).

Aren't these two things in contradiction? If they're easily reclaimed
then they aren't "reserved" in any detrimental way. And if you don't
want them to be reclaimed, then you need them to be reserved...

The approach you have here assigns the chunk of memory as a reserved
CMA region which the kernel is going to try not to use too - similar
to the CPA pool.

I suppose it's a balance depending on how much you're willing to wait
for migration on the allocation path. CPA has the potential to get you
faster allocations, but the downside is you need to make it a little
more "greedy".

Cheers,
-Brian
Hi Brian,

On Wed, Aug 19, 2020 at 02:22:04PM +0100, Brian Starkey wrote:
> Hi KyongHo,
>
> On Wed, Aug 19, 2020 at 12:46:26PM +0900, Cho KyongHo wrote:
> > I have seriously considered CPA in our product but we developed our own
> > because of the pool in CPA.
>
> Oh good, I'm glad you considered it :-)
>
> > The high-order pages are required by some specific users like Netflix
> > app. Moreover required number of bytes are dramatically increasing
> > because of high resolution videos and displays in these days.
> >
> > Gathering lots of free high-order pages in the background during
> > run-time means reserving that amount of pages from the entire available
> > system memory. Moreover the gathered pages are soon reclaimed whenever
> > the system is suffering from memory pressure (i.e. camera recording,
> > heavy games).
>
> Aren't these two things in contradiction? If they're easily reclaimed
> then they aren't "reserved" in any detrimental way. And if you don't
> want them to be reclaimed, then you need them to be reserved...
>
> The approach you have here assigns the chunk of memory as a reserved
> CMA region which the kernel is going to try not to use too - similar
> to the CPA pool.
>
> I suppose it's a balance depending on how much you're willing to wait
> for migration on the allocation path. CPA has the potential to get you
> faster allocations, but the downside is you need to make it a little
> more "greedy".
>

I understand why you see it as a contradiction, but I don't think it
is. The kernel page allocator now prefers free pages in CMA when
allocating movable pages, since this commit:
https://lore.kernel.org/linux-mm/CAAmzW4P6+3O_RLvgy_QOKD4iXw+Hk3HE7Toc4Ky7kvQbCozCeA@mail.gmail.com/

We are trying to reduce unused pages to improve performance, so unused
pages in a pool should be easily reclaimed. That is why we do not
secure free pages in a special pool for a specific usecase. Instead, we
have tried to reduce the performance bottlenecks in page migration when
allocating a large amount of memory at the moment it is needed.
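The allocation pattern under discussion can be modeled in a short userspace sketch: cover a large buffer with fixed order-4 chunks, requesting them in batches the way a bulk allocator would. This is purely illustrative; malloc stands in for alloc_pages_bulk/CMA, and the function names are assumptions.

```c
#include <stdlib.h>
#include <stddef.h>

#define PAGE_SIZE   4096UL
#define CHUNK_ORDER 4 /* order-4 chunk: 16 pages = 64KB */
#define CHUNK_SIZE  (PAGE_SIZE << CHUNK_ORDER)

/* Number of fixed-size chunks needed to cover a buffer of `bytes`. */
static size_t chunks_for_buffer(size_t bytes)
{
	return (bytes + CHUNK_SIZE - 1) / CHUNK_SIZE;
}

/*
 * Model of the allocation loop: grab chunks in batches (as a bulk
 * allocator would) until the buffer is fully covered. Returns the
 * number of chunks obtained; error handling is omitted for brevity.
 */
static size_t alloc_buffer_chunks(size_t bytes, size_t batch, void **chunks)
{
	size_t need = chunks_for_buffer(bytes), got = 0;

	while (got < need) {
		size_t n = (need - got < batch) ? need - got : batch;
		for (size_t i = 0; i < n; i++)
			chunks[got + i] = malloc(CHUNK_SIZE);
		got += n;
	}
	return got;
}
```

With 64KB chunks, the 64MB dma-buf from the cover letter works out to exactly 1024 chunks, which is why reducing per-chunk migration cost matters at these sizes.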