From patchwork Tue Nov 17 18:19:32 2020
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 69017
From: Minchan Kim
To: Andrew Morton
Cc: LKML, linux-mm, hyesoo.yu@samsung.com, willy@infradead.org,
 david@redhat.com, iamjoonsoo.kim@lge.com, vbabka@suse.cz, surenb@google.com,
 pullip.cho@samsung.com, joaodias@google.com, hridya@google.com,
 sumit.semwal@linaro.org, john.stultz@linaro.org, Brian.Starkey@arm.com,
 linux-media@vger.kernel.org, devicetree@vger.kernel.org, robh@kernel.org,
 christian.koenig@amd.com, linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH 1/4] mm: introduce cma_alloc_bulk API
Date: Tue, 17 Nov 2020 10:19:32 -0800
Message-Id: <20201117181935.3613581-2-minchan@kernel.org>
In-Reply-To:
<20201117181935.3613581-1-minchan@kernel.org>
References: <20201117181935.3613581-1-minchan@kernel.org>

There is a need for special HW that requires bulk allocation of
high-order pages, for example, a minimum of 4800 order-4 pages;
sometimes it requires more. One option to meet the requirement is to
reserve a 300M CMA area and request the whole 300M as one contiguous
allocation. However, that doesn't work if even a single page in the
range is long-term pinned, directly or indirectly. The other option is
to repeatedly ask for a higher-order size (e.g., 2M) than the requested
order (64K) until the driver has gathered the necessary amount of
memory. This approach makes the allocation very slow because cma_alloc
itself is slow, and it can get stuck on one pageblock if it encounters
an unmigratable page.

To solve the issue, this patch introduces cma_alloc_bulk.

  int cma_alloc_bulk(struct cma *cma, unsigned int align, gfp_t gfp_mask,
                     unsigned int order, size_t nr_requests,
                     struct page **page_array, size_t *nr_allocated);

Most parameters are the same as for cma_alloc, but it additionally
takes an array in which to store the allocated pages. What differs from
cma_alloc is that it skips a pageblock containing an unmovable page
without waiting or stopping, so the API continues to scan other
pageblocks for pages of the requested order. cma_alloc_bulk is a
best-effort approach: unlike cma_alloc, it skips pageblocks that hold
unmovable pages, trading allocation success ratio for speed. Thus, the
API takes a gfp_t so callers can pass __GFP_NORETRY, which is
propagated down to alloc_contig_range to skip the expensive operations
that would otherwise raise the CMA allocation success ratio (e.g.,
migration retries, PCP and LRU draining per pageblock). If a caller
couldn't allocate enough pages with __GFP_NORETRY, it can call again
without __GFP_NORETRY to raise the success ratio, if it is willing to
pay the extra overhead.
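As an illustration of the intended calling pattern (a hypothetical
driver sketch, not part of this patch; the cma handle, alignment, and
counts are made up for the example):

    struct page *pages[4800];
    size_t got = 0, total = 0;
    int ret;

    /* Fast pass: skip migration retries and per-pageblock draining. */
    ret = cma_alloc_bulk(cma, 4, __GFP_NORETRY, 4, ARRAY_SIZE(pages),
                         pages, &got);
    total = got;
    if (ret && total < ARRAY_SIZE(pages)) {
        /* Slow pass: pay the full overhead for the remainder. */
        ret = cma_alloc_bulk(cma, 4, 0, 4, ARRAY_SIZE(pages) - total,
                             pages + total, &got);
        total += got;
    }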
Signed-off-by: Minchan Kim
---
 include/linux/cma.h            |   5 ++
 include/linux/page-isolation.h |   1 +
 mm/cma.c                       | 126 +++++++++++++++++++++++++++++++--
 mm/page_alloc.c                |  19 +++--
 mm/page_isolation.c            |   3 +-
 5 files changed, 141 insertions(+), 13 deletions(-)

diff --git a/include/linux/cma.h b/include/linux/cma.h
index 217999c8a762..2fc8d2b7cf99 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -46,6 +46,11 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
                                        struct cma **res_cma);
 extern struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
                              bool no_warn);
+
+extern int cma_alloc_bulk(struct cma *cma, unsigned int align, gfp_t gfp_mask,
+                       unsigned int order, size_t nr_requests,
+                       struct page **page_array, size_t *nr_allocated);
+
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned int count);
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 572458016331..0e105dce2a15 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -32,6 +32,7 @@ static inline bool is_migrate_isolate(int migratetype)

 #define MEMORY_OFFLINE 0x1
 #define REPORT_FAILURE 0x2
+#define SKIP_PCP_DRAIN 0x4

 struct page *has_unmovable_pages(struct zone *zone, struct page *page,
                                 int migratetype, int flags);
diff --git a/mm/cma.c b/mm/cma.c
index 3692a34e2353..7c11ec2dc04c 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -32,6 +32,7 @@
 #include <linux/highmem.h>
 #include <linux/io.h>
 #include <linux/kmemleak.h>
+#include <linux/swap.h>
 #include <trace/events/cma.h>

 #include "cma.h"
@@ -397,6 +398,14 @@ static void cma_debug_show_areas(struct cma *cma)
 static inline void cma_debug_show_areas(struct cma *cma) { }
 #endif

+static void reset_page_kasan_tag(struct page *page, int count)
+{
+       int i;
+
+       for (i = 0; i < count; i++)
+               page_kasan_tag_reset(page + i);
+}
+
 /**
  * cma_alloc() - allocate pages from contiguous area
  * @cma:   Contiguous memory region for which the allocation is performed.
@@ -414,7 +423,6 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
        unsigned long pfn = -1;
        unsigned long start = 0;
        unsigned long bitmap_maxno, bitmap_no, bitmap_count;
-       size_t i;
        struct page *page = NULL;
        int ret = -ENOMEM;
@@ -478,10 +486,8 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
         * blocks being marked with different tags. Reset the tags to ignore
         * those page blocks.
         */
-       if (page) {
-               for (i = 0; i < count; i++)
-                       page_kasan_tag_reset(page + i);
-       }
+       if (page)
+               reset_page_kasan_tag(page, count);

        if (ret && !no_warn) {
                pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n",
@@ -493,6 +499,116 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
        return page;
 }

+/*
+ * cma_alloc_bulk() - allocate high-order bulk pages from contiguous area with
+ *             best effort. It will usually be used for a private @cma.
+ *
+ * @cma:       contiguous memory region for which the allocation is performed.
+ * @align:     requested alignment of pages (in PAGE_SIZE order).
+ * @gfp_mask:  memory allocation flags
+ * @order:     requested page order
+ * @nr_requests: the number of 2^order pages requested to be allocated as input
+ * @page_array: array to store the allocated pages (must have space for at
+ *             least nr_requests entries)
+ * @nr_allocated: the number of 2^order pages allocated as output
+ *
+ * This function tries to allocate up to @nr_requests @order pages on a
+ * specific contiguous memory area. If @gfp_mask has __GFP_NORETRY, it will
+ * avoid costly functions that would increase the allocation success ratio,
+ * so it will be fast but might return fewer than the requested number of
+ * pages. The user could retry with !__GFP_NORETRY if that is needed.
+ *
+ * Return: 0 only if all pages requested by @nr_requests are allocated.
+ * Otherwise, it returns a negative error code.
+ *
+ * Note: Regardless of success/failure, the user should check @nr_allocated
+ * to see how many @order pages were allocated and free those pages when
+ * they are no longer needed.
+ */
+int cma_alloc_bulk(struct cma *cma, unsigned int align, gfp_t gfp_mask,
+                       unsigned int order, size_t nr_requests,
+                       struct page **page_array, size_t *nr_allocated)
+{
+       int ret = 0;
+       size_t i = 0;
+       unsigned long nr_pages_needed = nr_requests * (1 << order);
+       unsigned long nr_chunk_pages, nr_pages;
+       unsigned long mask, offset;
+       unsigned long pfn = -1;
+       unsigned long start = 0;
+       unsigned long bitmap_maxno, bitmap_no, bitmap_count;
+       struct page *page = NULL;
+       gfp_t gfp = GFP_KERNEL|__GFP_NOWARN|gfp_mask;
+
+       *nr_allocated = 0;
+       if (!cma || !cma->count || !cma->bitmap || !page_array)
+               return -EINVAL;
+
+       if (!nr_pages_needed)
+               return 0;
+
+       nr_chunk_pages = 1 << max_t(unsigned int, order, pageblock_order);
+
+       mask = cma_bitmap_aligned_mask(cma, align);
+       offset = cma_bitmap_aligned_offset(cma, align);
+       bitmap_maxno = cma_bitmap_maxno(cma);
+
+       lru_add_drain_all();
+       drain_all_pages(NULL);
+
+       while (nr_pages_needed) {
+               nr_pages = min(nr_chunk_pages, nr_pages_needed);
+
+               bitmap_count = cma_bitmap_pages_to_bits(cma, nr_pages);
+               mutex_lock(&cma->lock);
+               bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
+                               bitmap_maxno, start, bitmap_count, mask,
+                               offset);
+               if (bitmap_no >= bitmap_maxno) {
+                       mutex_unlock(&cma->lock);
+                       break;
+               }
+               bitmap_set(cma->bitmap, bitmap_no, bitmap_count);
+               /*
+                * It's safe to drop the lock here. If the migration fails,
+                * cma_clear_bitmap will take the lock again and unmark it.
+                */
+               mutex_unlock(&cma->lock);
+
+               pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
+               ret = alloc_contig_range(pfn, pfn + nr_pages, MIGRATE_CMA, gfp);
+               if (ret) {
+                       cma_clear_bitmap(cma, pfn, nr_pages);
+                       if (ret != -EBUSY)
+                               break;
+
+                       /* continue to search next block */
+                       start = (pfn + nr_pages - cma->base_pfn) >>
+                                               cma->order_per_bit;
+                       continue;
+               }
+
+               page = pfn_to_page(pfn);
+               while (nr_pages) {
+                       page_array[i++] = page;
+                       reset_page_kasan_tag(page, 1 << order);
+                       page += 1 << order;
+                       nr_pages -= 1 << order;
+                       nr_pages_needed -= 1 << order;
+               }
+
+               start = bitmap_no + bitmap_count;
+       }
+
+       *nr_allocated = i;
+
+       if (!ret && nr_pages_needed)
+               ret = -EBUSY;
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(cma_alloc_bulk);
+
 /**
  * cma_release() - release allocated pages
  * @cma:   Contiguous memory region for which the allocation is performed.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f84b7eea39ec..097cc83097bb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8404,7 +8404,8 @@ static unsigned long pfn_max_align_up(unsigned long pfn)

 /* [start, end) must belong to a single zone. */
 static int __alloc_contig_migrate_range(struct compact_control *cc,
-                                       unsigned long start, unsigned long end)
+                                       unsigned long start, unsigned long end,
+                                       unsigned int max_tries)
 {
        /* This function is based on compact_zone() from compaction.c. */
        unsigned int nr_reclaimed;
@@ -8432,7 +8433,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
                                break;
                        }
                        tries = 0;
-               } else if (++tries == 5) {
+               } else if (++tries == max_tries) {
                        ret = ret < 0 ? ret : -EBUSY;
                        break;
                }
@@ -8478,6 +8479,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
        unsigned long outer_start, outer_end;
        unsigned int order;
        int ret = 0;
+       bool no_retry = gfp_mask & __GFP_NORETRY;

        struct compact_control cc = {
                .nr_migratepages = 0,
@@ -8516,7 +8518,8 @@ int alloc_contig_range(unsigned long start, unsigned long end,
         */

        ret = start_isolate_page_range(pfn_max_align_down(start),
-                                      pfn_max_align_up(end), migratetype, 0);
+                                      pfn_max_align_up(end), migratetype,
+                                      no_retry ? SKIP_PCP_DRAIN : 0);
        if (ret)
                return ret;
@@ -8530,7 +8533,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
         * allocated.  So, if we fall through be sure to clear ret so that
         * -EBUSY is not accidentally used or returned to caller.
         */
-       ret = __alloc_contig_migrate_range(&cc, start, end);
+       ret = __alloc_contig_migrate_range(&cc, start, end, no_retry ? 1 : 5);
        if (ret && ret != -EBUSY)
                goto done;
        ret = 0;
@@ -8552,7 +8555,8 @@ int alloc_contig_range(unsigned long start, unsigned long end,
         * isolated thus they won't get removed from buddy.
         */

-       lru_add_drain_all();
+       if (!no_retry)
+               lru_add_drain_all();

        order = 0;
        outer_start = start;
@@ -8579,8 +8583,9 @@ int alloc_contig_range(unsigned long start, unsigned long end,

        /* Make sure the range is really isolated. */
        if (test_pages_isolated(outer_start, end, 0)) {
-               pr_info_ratelimited("%s: [%lx, %lx) PFNs busy\n",
-                       __func__, outer_start, end);
+               if (!no_retry)
+                       pr_info_ratelimited("%s: [%lx, %lx) PFNs busy\n",
+                               __func__, outer_start, end);
                ret = -EBUSY;
                goto done;
        }
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index abbf42214485..31b1dcc1a395 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -49,7 +49,8 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
                __mod_zone_freepage_state(zone, -nr_pages, mt);
        spin_unlock_irqrestore(&zone->lock, flags);

-       drain_all_pages(zone);
+       if (!(isol_flags & SKIP_PCP_DRAIN))
+               drain_all_pages(zone);

        return 0;
 }

From patchwork Tue Nov 17 18:19:33 2020
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 69018
From: Minchan Kim
To: Andrew Morton
Subject: [PATCH 2/4] dma-buf: add export symbol for dma-heap
Date: Tue, 17 Nov 2020 10:19:33 -0800
Message-Id: <20201117181935.3613581-3-minchan@kernel.org>
In-Reply-To: <20201117181935.3613581-1-minchan@kernel.org>
References: <20201117181935.3613581-1-minchan@kernel.org>

From: Hyesoo Yu

Heaps can be added as modules, so some functions need to be exported so
that a module can register a dma-heap. A modular dma-heap may also use
a CMA area for its allocations and frees, but the CMA-related functions
are not exported at the moment. Export them for the following patches.
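For reference, this is roughly what the exports enable (a minimal
sketch under assumed names; my_heap_allocate, my_cma, and my_priv are
hypothetical and not part of this series):

    static const struct dma_heap_ops my_heap_ops = {
        .allocate = my_heap_allocate,   /* hypothetical callback */
    };

    static int __init my_heap_module_init(void)
    {
        struct dma_heap_export_info exp_info = {
            .name = cma_get_name(my_cma),   /* needs the cma export */
            .ops  = &my_heap_ops,
            .priv = my_priv,
        };
        /* only callable from a module once dma_heap_add is exported */
        struct dma_heap *heap = dma_heap_add(&exp_info);

        return PTR_ERR_OR_ZERO(heap);
    }
    module_init(my_heap_module_init);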
Signed-off-by: Hyesoo Yu
Signed-off-by: Minchan Kim
---
 drivers/dma-buf/dma-heap.c | 2 ++
 mm/cma.c                   | 3 +++
 2 files changed, 5 insertions(+)

diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index afd22c9dbdcf..cc6339cbca09 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -189,6 +189,7 @@ void *dma_heap_get_drvdata(struct dma_heap *heap)
 {
        return heap->priv;
 }
+EXPORT_SYMBOL_GPL(dma_heap_get_drvdata);

 struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
 {
@@ -272,6 +273,7 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
        kfree(heap);
        return err_ret;
 }
+EXPORT_SYMBOL_GPL(dma_heap_add);

 static char *dma_heap_devnode(struct device *dev, umode_t *mode)
 {
diff --git a/mm/cma.c b/mm/cma.c
index 7c11ec2dc04c..87834e2966fa 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -54,6 +54,7 @@ const char *cma_get_name(const struct cma *cma)
 {
        return cma->name;
 }
+EXPORT_SYMBOL_GPL(cma_get_name);

 static unsigned long cma_bitmap_aligned_mask(const struct cma *cma,
                                             unsigned int align_order)
@@ -498,6 +499,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
        pr_debug("%s(): returned %p\n", __func__, page);
        return page;
 }
+EXPORT_SYMBOL_GPL(cma_alloc);

 /*
  * cma_alloc_bulk() - allocate high-order bulk pages from contiguous area with
@@ -641,6 +643,7 @@ bool cma_release(struct cma *cma, const struct page *pages, unsigned int count)

        return true;
 }
+EXPORT_SYMBOL_GPL(cma_release);

 int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data)
 {

From patchwork Tue Nov 17 18:19:34 2020
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 69019
From: Minchan Kim
To: Andrew Morton
Subject: [PATCH 3/4] dma-buf: heaps: add chunk heap to dmabuf heaps
Date: Tue, 17 Nov 2020 10:19:34 -0800
Message-Id: <20201117181935.3613581-4-minchan@kernel.org>
In-Reply-To: <20201117181935.3613581-1-minchan@kernel.org>
References: <20201117181935.3613581-1-minchan@kernel.org>

From: Hyesoo Yu

This patch adds a chunk heap that allocates buffers made up of a list
of fixed-size chunks taken from a CMA area. The chunk heap doesn't use
the heap-helpers, even though they could remove duplicated code,
because the heap-helpers are in the process of being deprecated. [1]

[1] https://lore.kernel.org/patchwork/patch/1336002

Signed-off-by: Hyesoo Yu
Signed-off-by: Minchan Kim
---
 drivers/dma-buf/heaps/Kconfig      |   9 +
 drivers/dma-buf/heaps/Makefile     |   1 +
 drivers/dma-buf/heaps/chunk_heap.c | 458 +++++++++++++++++++++++++++++
 3 files changed, 468 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/chunk_heap.c

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c4226..9cc5366b8f5e 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,12 @@ config DMABUF_HEAPS_CMA
          Choose this option to enable dma-buf CMA heap. This heap is backed
          by the Contiguous Memory Allocator (CMA). If your system has these
          regions, you should say Y here.
+
+config DMABUF_HEAPS_CHUNK
+       tristate "DMA-BUF CHUNK Heap"
+       depends on DMABUF_HEAPS && DMA_CMA
+       help
+         Choose this option to enable the dma-buf CHUNK heap. This heap is
+         backed by the Contiguous Memory Allocator (CMA) and allocates
+         buffers arranged into a list of fixed-size chunks taken from CMA.
+         The chunk size is configured when the heap is created.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 6e54cdec3da0..3b2a09869fd8 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -2,3 +2,4 @@
 obj-y += heap-helpers.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_CHUNK) += chunk_heap.o
diff --git a/drivers/dma-buf/heaps/chunk_heap.c b/drivers/dma-buf/heaps/chunk_heap.c
new file mode 100644
index 000000000000..427594f56e18
--- /dev/null
+++ b/drivers/dma-buf/heaps/chunk_heap.c
@@ -0,0 +1,458 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ION Memory Allocator chunk heap exporter
+ *
+ * Copyright (c) 2020 Samsung Electronics Co., Ltd.
+ * Author: Hyesoo Yu <hyesoo.yu@samsung.com> for Samsung Electronics.
+ */
+
+/* Note: the include targets were stripped by the list archive; the
+ * headers below are reconstructed from what the code uses. */
+#include <linux/cma.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+struct chunk_heap {
+       struct dma_heap *heap;
+       unsigned int order;
+       struct cma *cma;
+};
+
+struct chunk_heap_buffer {
+       struct chunk_heap *heap;
+       struct list_head attachments;
+       struct mutex lock;
+       struct sg_table sg_table;
+       unsigned long len;
+       int vmap_cnt;
+       void *vaddr;
+};
+
+struct chunk_heap_attachment {
+       struct device *dev;
+       struct sg_table *table;
+       struct list_head list;
+       bool mapped;
+};
+
+static struct sg_table *dup_sg_table(struct sg_table *table)
+{
+       struct sg_table *new_table;
+       int ret, i;
+       struct scatterlist *sg, *new_sg;
+
+       new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
+       if (!new_table)
+               return ERR_PTR(-ENOMEM);
+
+       ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL);
+       if (ret) {
+               kfree(new_table);
+               return ERR_PTR(-ENOMEM);
+       }
+
+       new_sg = new_table->sgl;
+       for_each_sgtable_sg(table, sg, i) {
+               sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset);
+               new_sg = sg_next(new_sg);
+       }
+
+       return new_table;
+}
+
+static int chunk_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+       struct chunk_heap_buffer *buffer = dmabuf->priv;
+       struct chunk_heap_attachment *a;
+       struct sg_table *table;
+
+       a = kzalloc(sizeof(*a), GFP_KERNEL);
+       if (!a)
+               return -ENOMEM;
+
+       table = dup_sg_table(&buffer->sg_table);
+       if (IS_ERR(table)) {
+               kfree(a);
+               return -ENOMEM;
+       }
+
+       a->table = table;
+       a->dev = attachment->dev;
+       INIT_LIST_HEAD(&a->list);
+       a->mapped = false;
+
+       attachment->priv = a;
+
+       mutex_lock(&buffer->lock);
+       list_add(&a->list, &buffer->attachments);
+       mutex_unlock(&buffer->lock);
+
+       return 0;
+}
+
+static void chunk_heap_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+       struct chunk_heap_buffer *buffer = dmabuf->priv;
+       struct chunk_heap_attachment *a = attachment->priv;
+
+       mutex_lock(&buffer->lock);
+       list_del(&a->list);
+       mutex_unlock(&buffer->lock);
+
+       sg_free_table(a->table);
+       kfree(a->table);
+       kfree(a);
+}
+
+static struct sg_table *chunk_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+                                              enum dma_data_direction direction)
+{
+       struct chunk_heap_attachment *a = attachment->priv;
+       struct sg_table *table = a->table;
+       int ret;
+
+       ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+       if (ret)
+               return ERR_PTR(ret);
+
+       a->mapped = true;
+       return table;
+}
+
+static void chunk_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+                                    struct sg_table *table,
+                                    enum dma_data_direction direction)
+{
+       struct chunk_heap_attachment *a = attachment->priv;
+
+       a->mapped = false;
+       dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int chunk_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+                                              enum dma_data_direction direction)
+{
+       struct chunk_heap_buffer *buffer = dmabuf->priv;
+       struct chunk_heap_attachment *a;
+
+       mutex_lock(&buffer->lock);
+
+       if (buffer->vmap_cnt)
+               invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+       list_for_each_entry(a, &buffer->attachments, list) {
+               if (!a->mapped)
+                       continue;
+               dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
+       }
+       mutex_unlock(&buffer->lock);
+
+       return 0;
+}
+
+static int chunk_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+                                            enum dma_data_direction direction)
+{
+       struct chunk_heap_buffer *buffer = dmabuf->priv;
+       struct chunk_heap_attachment *a;
+
+       mutex_lock(&buffer->lock);
+
+       if (buffer->vmap_cnt)
+               flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+       list_for_each_entry(a, &buffer->attachments, list) {
+               if (!a->mapped)
+                       continue;
+               dma_sync_sgtable_for_device(a->dev, a->table, direction);
+       }
+       mutex_unlock(&buffer->lock);
+
+       return 0;
+}
+
+static int chunk_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+       struct chunk_heap_buffer *buffer = dmabuf->priv;
+       struct sg_table *table = &buffer->sg_table;
+       unsigned long addr = vma->vm_start;
+       struct sg_page_iter piter;
+       int ret;
+
+       for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
+               struct page *page = sg_page_iter_page(&piter);
+
+               ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
+                                     vma->vm_page_prot);
+               if (ret)
+                       return ret;
+               addr += PAGE_SIZE;
+               if (addr >= vma->vm_end)
+                       return 0;
+       }
+       return 0;
+}
+
+static void *chunk_heap_do_vmap(struct chunk_heap_buffer *buffer)
+{
+       struct sg_table *table = &buffer->sg_table;
+       int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
+       struct page **pages = vmalloc(sizeof(struct page *) * npages);
+       struct page **tmp = pages;
+       struct sg_page_iter piter;
+       void *vaddr;
+
+       if (!pages)
+               return ERR_PTR(-ENOMEM);
+
+       for_each_sgtable_page(table, &piter, 0) {
+               WARN_ON(tmp - pages >= npages);
+               *tmp++ = sg_page_iter_page(&piter);
+       }
+
+       vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
+       vfree(pages);
+
+       if (!vaddr)
+               return ERR_PTR(-ENOMEM);
+
+       return vaddr;
+}
+
+static int chunk_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+       struct chunk_heap_buffer *buffer = dmabuf->priv;
+       int ret = 0;
+       void *vaddr;
+
+       mutex_lock(&buffer->lock);
+       if (buffer->vmap_cnt) {
+               vaddr = buffer->vaddr;
+               goto done;
+       }
+
+       vaddr = chunk_heap_do_vmap(buffer);
+       if (IS_ERR(vaddr)) {
+               ret = PTR_ERR(vaddr);
+               goto err;
+       }
+
+       buffer->vaddr = vaddr;
+done:
+       buffer->vmap_cnt++;
+       dma_buf_map_set_vaddr(map, vaddr);
+err:
+       mutex_unlock(&buffer->lock);
+
+       return ret;
+}
+
+static void chunk_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+       struct chunk_heap_buffer *buffer = dmabuf->priv;
+
+       mutex_lock(&buffer->lock);
+       if (!--buffer->vmap_cnt) {
+               vunmap(buffer->vaddr);
+               buffer->vaddr = NULL;
+       }
+       mutex_unlock(&buffer->lock);
+}
+
+static void chunk_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+       struct chunk_heap_buffer *buffer = dmabuf->priv;
+       struct chunk_heap *chunk_heap = buffer->heap;
+       struct sg_table *table;
+       struct scatterlist *sg;
+       int i;
+
+       table = &buffer->sg_table;
+       for_each_sgtable_sg(table, sg, i)
+               cma_release(chunk_heap->cma, sg_page(sg), 1 << chunk_heap->order);
+       sg_free_table(table);
+       kfree(buffer);
+}
+
+static const struct dma_buf_ops chunk_heap_buf_ops = {
+       .attach = chunk_heap_attach,
+       .detach = chunk_heap_detach,
+       .map_dma_buf = chunk_heap_map_dma_buf,
+       .unmap_dma_buf = chunk_heap_unmap_dma_buf,
+       .begin_cpu_access = chunk_heap_dma_buf_begin_cpu_access,
+       .end_cpu_access = chunk_heap_dma_buf_end_cpu_access,
+       .mmap = chunk_heap_mmap,
+       .vmap = chunk_heap_vmap,
+       .vunmap = chunk_heap_vunmap,
+       .release = chunk_heap_dma_buf_release,
+};
+
+static int chunk_heap_allocate(struct dma_heap *heap, unsigned long len,
+                              unsigned long fd_flags, unsigned long heap_flags)
+{
+       struct chunk_heap *chunk_heap = dma_heap_get_drvdata(heap);
+       struct chunk_heap_buffer *buffer;
+       DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+       struct dma_buf *dmabuf;
+       struct sg_table *table;
+       struct scatterlist *sg;
+       struct page **pages;
+       unsigned int chunk_size = PAGE_SIZE << chunk_heap->order;
+       unsigned int count, alloced = 0;
+       unsigned int num_retry = 5;
+       int ret = -ENOMEM;
+       pgoff_t pg;
+
+       buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+       if (!buffer)
+               return ret;
+
+       INIT_LIST_HEAD(&buffer->attachments);
+       mutex_init(&buffer->lock);
+       buffer->heap = chunk_heap;
+       buffer->len = ALIGN(len, chunk_size);
+       count = buffer->len / chunk_size;
+
+       pages = kvmalloc_array(count, sizeof(*pages), GFP_KERNEL);
+       if (!pages)
+               goto err_pages;
+
+       while (num_retry--) {
+               size_t nr_pages;
+
+               ret = cma_alloc_bulk(chunk_heap->cma, chunk_heap->order,
+                                    num_retry ? __GFP_NORETRY : 0,
+                                    chunk_heap->order, count - alloced,
+                                    pages + alloced, &nr_pages);
+               alloced += nr_pages;
+               if (alloced == count)
+                       break;
+               if (ret != -EBUSY)
+                       break;
+       }
+       if (ret < 0)
+               goto err_alloc;
+
+       table = &buffer->sg_table;
+       if (sg_alloc_table(table, count, GFP_KERNEL))
+               goto err_alloc;
+
+       sg = table->sgl;
+       for (pg = 0; pg < count; pg++) {
+               sg_set_page(sg, pages[pg], chunk_size, 0);
+               sg = sg_next(sg);
+       }
+
+       exp_info.ops = &chunk_heap_buf_ops;
+       exp_info.size = buffer->len;
+       exp_info.flags = fd_flags;
+       exp_info.priv = buffer;
+       dmabuf = dma_buf_export(&exp_info);
+       if (IS_ERR(dmabuf)) {
+               ret = PTR_ERR(dmabuf);
+               goto err_export;
+       }
+       kvfree(pages);
+
+       ret = dma_buf_fd(dmabuf, fd_flags);
+       if (ret < 0) {
+               dma_buf_put(dmabuf);
+               return ret;
+       }
+
+       return 0;
+
+err_export:
+       sg_free_table(table);
+err_alloc:
+       for (pg = 0; pg < alloced; pg++)
+               cma_release(chunk_heap->cma, pages[pg], 1 << chunk_heap->order);
+       kvfree(pages);
+err_pages:
+       kfree(buffer);
+
+       return ret;
+}
+
+static void rmem_remove_callback(void *p)
+{
+       of_reserved_mem_device_release((struct device *)p);
+}
+
+static const struct dma_heap_ops chunk_heap_ops = {
+       .allocate = chunk_heap_allocate,
+};
+
+static int chunk_heap_probe(struct platform_device *pdev)
+{
+       struct chunk_heap *chunk_heap;
+       struct dma_heap_export_info exp_info;
+       unsigned int alignment;
+       int ret;
+
+       ret = of_reserved_mem_device_init(&pdev->dev);
+       if (ret || !pdev->dev.cma_area) {
+               dev_err(&pdev->dev, "The CMA reserved area is not assigned (ret %d)", ret);
+               return -EINVAL;
+       }
+
+       ret = devm_add_action(&pdev->dev, rmem_remove_callback, &pdev->dev);
+       if (ret) {
+               of_reserved_mem_device_release(&pdev->dev);
+               return ret;
+       }
+
+       chunk_heap = devm_kzalloc(&pdev->dev, sizeof(*chunk_heap), GFP_KERNEL);
+       if (!chunk_heap)
+               return -ENOMEM;
+
+       if (of_property_read_u32(pdev->dev.of_node, "alignment", &alignment))
+               chunk_heap->order = 0;
+       else
+               chunk_heap->order = get_order(alignment);
+
+       chunk_heap->cma = pdev->dev.cma_area;
+
+       exp_info.name = cma_get_name(pdev->dev.cma_area);
+       exp_info.ops = &chunk_heap_ops;
+       exp_info.priv = chunk_heap;
+
+       chunk_heap->heap = dma_heap_add(&exp_info);
+       if (IS_ERR(chunk_heap->heap))
+               return PTR_ERR(chunk_heap->heap);
+
+       return 0;
+}
+
+static const struct of_device_id chunk_heap_of_match[] = {
+       { .compatible = "dma_heap,chunk", },
+       { },
+};
+MODULE_DEVICE_TABLE(of, chunk_heap_of_match);
+
+static struct platform_driver chunk_heap_driver = {
+       .driver = {
+               .name = "chunk_heap",
+               .of_match_table = chunk_heap_of_match,
+       },
+       .probe = chunk_heap_probe,
+};
+
+static int __init chunk_heap_init(void)
+{
+       return platform_driver_register(&chunk_heap_driver);
+}
+module_init(chunk_heap_init);
+MODULE_DESCRIPTION("DMA-BUF Chunk Heap");
+MODULE_LICENSE("GPL v2");

From patchwork Tue Nov 17 18:19:35 2020
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 69020
From: Minchan Kim
To: Andrew Morton
Subject: [PATCH 4/4] dma-heap: Devicetree binding for chunk heap
Date: Tue, 17 Nov 2020 10:19:35 -0800
Message-Id: <20201117181935.3613581-5-minchan@kernel.org>
In-Reply-To: <20201117181935.3613581-1-minchan@kernel.org>
References: <20201117181935.3613581-1-minchan@kernel.org>

From: Hyesoo Yu

Document the devicetree binding for the chunk heap on the dma-heap
framework.

Signed-off-by: Hyesoo Yu
Signed-off-by: Minchan Kim
---
 .../bindings/dma-buf/chunk_heap.yaml          | 52 +++++++++++++++++++
 1 file changed, 52 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml

diff --git a/Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml b/Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml
new file mode 100644
index 000000000000..f382bee02778
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma-buf/chunk_heap.yaml
@@ -0,0 +1,52 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/dma-buf/chunk_heap.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Device tree binding for chunk heap on the dma-heap framework
+
+maintainers:
+  - Sumit Semwal
+
+description: |
+  The chunk heap is backed by the Contiguous Memory Allocator (CMA) and
+  allocates buffers made up of a list of fixed-size chunks taken from
+  CMA. Chunk sizes are configured when the heaps are created.
+
+properties:
+  compatible:
+    enum:
+      - dma_heap,chunk
+
+  memory-region:
+    maxItems: 1
+
+  alignment:
+    maxItems: 1
+
+required:
+  - compatible
+  - memory-region
+  - alignment
+
+additionalProperties: false
+
+examples:
+  - |
+    reserved-memory {
+        #address-cells = <2>;
+        #size-cells = <1>;
+
+        chunk_memory: chunk_memory {
+            compatible = "shared-dma-pool";
+            reusable;
+            size = <0x10000000>;
+        };
+    };
+
+    chunk_default_heap: chunk_default_heap {
+        compatible = "dma_heap,chunk";
+        memory-region = <&chunk_memory>;
+        alignment = <0x10000>;
+    };
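For completeness, once the chunk heap above has probed, userspace
allocates from it through the generic dma-heap ioctl. A hedged sketch
follows; the heap name under /dev/dma_heap comes from the CMA area's
name, so the exact path below is an assumption for this example:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/dma-heap.h>

    static int alloc_from_chunk_heap(size_t len)
    {
        struct dma_heap_allocation_data data = {
            .len = len,
            .fd_flags = O_RDWR | O_CLOEXEC,
        };
        /* assumed device path, derived from the CMA area name above */
        int heap_fd = open("/dev/dma_heap/chunk_memory", O_RDONLY | O_CLOEXEC);

        if (heap_fd < 0)
            return -1;
        if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data) < 0) {
            close(heap_fd);
            return -1;
        }
        close(heap_fd);
        return data.fd;  /* dma-buf fd backed by fixed-size CMA chunks */
    }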