From patchwork Sat Oct 3 04:02:51 2020
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 67689
From: John Stultz
To: lkml
Cc: John Stultz, Sumit Semwal, Liam Mark, Laura Abbott, Brian Starkey,
 Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz,
 Chris Goldsworthy, Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser,
 James Jones, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH v3 1/7] dma-buf: system_heap: Rework system heap to use sgtables instead of pagelists
Date: Sat, 3 Oct 2020 04:02:51 +0000
Message-Id: <20201003040257.62768-2-john.stultz@linaro.org>
In-Reply-To: <20201003040257.62768-1-john.stultz@linaro.org>
References: <20201003040257.62768-1-john.stultz@linaro.org>
In preparation for some patches to optimize the system heap code, rework
the dmabuf exporter to use sgtables rather than page lists for tracking
the associated pages. This will allow for large-order page allocations,
as well as more efficient page pooling.

In doing so, the system heap stops using the heap-helpers logic, which
sadly is not quite as generic as I was hoping it to be, so this patch
adds heap-specific implementations of the dma_buf_ops function handlers.

Cc: Sumit Semwal
Cc: Liam Mark
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Chris Goldsworthy
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
v2:
* Fix locking issue and an unused return value
  Reported-by: kernel test robot
  Reported-by: Julia Lawall
* Make system_heap_buf_ops static
  Reported-by: kernel test robot
v3:
* Use the new sgtable mapping functions, as
  Suggested-by: Daniel Mentz
---
 drivers/dma-buf/heaps/system_heap.c | 344 ++++++++++++++++++++++++----
 1 file changed, 298 insertions(+), 46 deletions(-)
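For context (not part of the patch itself): user space reaches this
exporter through the dma-heap character device. A minimal allocation
sketch, assuming the standard dma-heap UAPI from <linux/dma-heap.h> and
the /dev/dma_heap/system node this driver registers:

#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>

/* Returns a dma-buf fd backed by the sgtable built below, or -1. */
static int system_heap_alloc(size_t len)
{
	struct dma_heap_allocation_data data = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap_fd = open("/dev/dma_heap/system", O_RDONLY);
	int ret;

	if (heap_fd < 0)
		return -1;
	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);
	return ret < 0 ? -1 : (int)data.fd;
}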
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 0bf688e3c023..00ed107b3b76 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -3,7 +3,11 @@
  * DMABUF System heap exporter
  *
  * Copyright (C) 2011 Google, Inc.
- * Copyright (C) 2019 Linaro Ltd.
+ * Copyright (C) 2019, 2020 Linaro Ltd.
+ *
+ * Portions based off of Andrew Davis' SRAM heap:
+ * Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com/
+ *	Andrew F. Davis
  */
 
 #include
@@ -15,72 +19,321 @@
 #include
 #include
 #include
-#include
-#include
-
-#include "heap-helpers.h"
+#include
 
 struct dma_heap *sys_heap;
 
-static void system_heap_free(struct heap_helper_buffer *buffer)
+struct system_heap_buffer {
+	struct dma_heap *heap;
+	struct list_head attachments;
+	struct mutex lock;
+	unsigned long len;
+	struct sg_table sg_table;
+	int vmap_cnt;
+	void *vaddr;
+};
+
+struct dma_heap_attachment {
+	struct device *dev;
+	struct sg_table *table;
+	struct list_head list;
+};
+
+static struct sg_table *dup_sg_table(struct sg_table *table)
 {
-	pgoff_t pg;
+	struct sg_table *new_table;
+	int ret, i;
+	struct scatterlist *sg, *new_sg;
+
+	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
+	if (!new_table)
+		return ERR_PTR(-ENOMEM);
+
+	ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL);
+	if (ret) {
+		kfree(new_table);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	new_sg = new_table->sgl;
+	for_each_sgtable_sg(table, sg, i) {
+		sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset);
+		new_sg = sg_next(new_sg);
+	}
+
+	return new_table;
+}
+
+static int system_heap_attach(struct dma_buf *dmabuf,
+			      struct dma_buf_attachment *attachment)
+{
+	struct system_heap_buffer *buffer = dmabuf->priv;
+	struct dma_heap_attachment *a;
+	struct sg_table *table;
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	table = dup_sg_table(&buffer->sg_table);
+	if (IS_ERR(table)) {
+		kfree(a);
+		return -ENOMEM;
+	}
+
+	a->table = table;
+	a->dev = attachment->dev;
+	INIT_LIST_HEAD(&a->list);
+
+	attachment->priv = a;
+
+	mutex_lock(&buffer->lock);
+	list_add(&a->list, &buffer->attachments);
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void system_heap_detach(struct dma_buf *dmabuf,
+			       struct dma_buf_attachment *attachment)
+{
+	struct system_heap_buffer *buffer = dmabuf->priv;
+	struct dma_heap_attachment *a = attachment->priv;
+
+	mutex_lock(&buffer->lock);
+	list_del(&a->list);
+	mutex_unlock(&buffer->lock);
+
+	sg_free_table(a->table);
+	kfree(a->table);
+	kfree(a);
+}
+
+static struct sg_table *system_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+						enum dma_data_direction direction)
+{
+	struct dma_heap_attachment *a = attachment->priv;
+	struct sg_table *table = a->table;
+	int ret;
+
+	ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+	if (ret)
+		return ERR_PTR(ret);
+
+	return table;
+}
+
+static void system_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+				      struct sg_table *table,
+				      enum dma_data_direction direction)
+{
+	dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int system_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+						enum dma_data_direction direction)
+{
+	struct system_heap_buffer *buffer = dmabuf->priv;
+	struct dma_heap_attachment *a;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
 
-	for (pg = 0; pg < buffer->pagecount; pg++)
-		__free_page(buffer->pages[pg]);
-	kfree(buffer->pages);
+static int system_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+					      enum dma_data_direction direction)
+{
+	struct system_heap_buffer *buffer = dmabuf->priv;
+	struct dma_heap_attachment *a;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sgtable_for_device(a->dev, a->table, direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct system_heap_buffer *buffer = dmabuf->priv;
+	struct sg_table *table = &buffer->sg_table;
+	unsigned long addr = vma->vm_start;
+	struct sg_page_iter piter;
+	int ret;
+
+	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
+		struct page *page = sg_page_iter_page(&piter);
+
+		ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
+				      vma->vm_page_prot);
+		if (ret)
+			return ret;
+		addr += PAGE_SIZE;
+		if (addr >= vma->vm_end)
+			return 0;
+	}
+	return 0;
+}
+
+static void *system_heap_do_vmap(struct system_heap_buffer *buffer)
+{
+	struct sg_table *table = &buffer->sg_table;
+	int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
+	struct page **pages = vmalloc(sizeof(struct page *) * npages);
+	struct page **tmp = pages;
+	struct sg_page_iter piter;
+	void *vaddr;
+
+	if (!pages)
+		return ERR_PTR(-ENOMEM);
+
+	for_each_sgtable_page(table, &piter, 0) {
+		WARN_ON(tmp - pages >= npages);
+		*tmp++ = sg_page_iter_page(&piter);
+	}
+
+	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
+	vfree(pages);
+
+	if (!vaddr)
+		return ERR_PTR(-ENOMEM);
+
+	return vaddr;
+}
+
+static void *system_heap_vmap(struct dma_buf *dmabuf)
+{
+	struct system_heap_buffer *buffer = dmabuf->priv;
+	void *vaddr;
+
+	mutex_lock(&buffer->lock);
+	if (buffer->vmap_cnt) {
+		buffer->vmap_cnt++;
+		vaddr = buffer->vaddr;
+		goto out;
+	}
+
+	vaddr = system_heap_do_vmap(buffer);
+	if (IS_ERR(vaddr))
+		goto out;
+
+	buffer->vaddr = vaddr;
+	buffer->vmap_cnt++;
+out:
+	mutex_unlock(&buffer->lock);
+
+	return vaddr;
+}
+
+static void system_heap_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+	struct system_heap_buffer *buffer = dmabuf->priv;
+
+	mutex_lock(&buffer->lock);
+	if (!--buffer->vmap_cnt) {
+		vunmap(buffer->vaddr);
+		buffer->vaddr = NULL;
+	}
+	mutex_unlock(&buffer->lock);
+}
+
+static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct system_heap_buffer *buffer = dmabuf->priv;
+	struct sg_table *table;
+	struct scatterlist *sg;
+	int i;
+
+	table = &buffer->sg_table;
+	for_each_sgtable_sg(table, sg, i)
+		__free_page(sg_page(sg));
+	sg_free_table(table);
 	kfree(buffer);
 }
 
+static const struct dma_buf_ops system_heap_buf_ops = {
+	.attach = system_heap_attach,
+	.detach = system_heap_detach,
+	.map_dma_buf = system_heap_map_dma_buf,
+	.unmap_dma_buf = system_heap_unmap_dma_buf,
+	.begin_cpu_access = system_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access = system_heap_dma_buf_end_cpu_access,
+	.mmap = system_heap_mmap,
+	.vmap = system_heap_vmap,
+	.vunmap = system_heap_vunmap,
+	.release = system_heap_dma_buf_release,
+};
+
 static int system_heap_allocate(struct dma_heap *heap,
 				unsigned long len,
 				unsigned long fd_flags,
 				unsigned long heap_flags)
 {
-	struct heap_helper_buffer *helper_buffer;
+	struct system_heap_buffer *buffer;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 	struct dma_buf *dmabuf;
-	int ret = -ENOMEM;
+	struct sg_table *table;
+	struct scatterlist *sg;
+	pgoff_t pagecount;
 	pgoff_t pg;
+	int i, ret = -ENOMEM;
 
-	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
-	if (!helper_buffer)
+	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+	if (!buffer)
 		return -ENOMEM;
 
-	init_heap_helper_buffer(helper_buffer, system_heap_free);
-	helper_buffer->heap = heap;
-	helper_buffer->size = len;
-
-	helper_buffer->pagecount = len / PAGE_SIZE;
-	helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
-					     sizeof(*helper_buffer->pages),
-					     GFP_KERNEL);
-	if (!helper_buffer->pages) {
-		ret = -ENOMEM;
-		goto err0;
-	}
+	INIT_LIST_HEAD(&buffer->attachments);
+	mutex_init(&buffer->lock);
+	buffer->heap = heap;
+	buffer->len = len;
 
-	for (pg = 0; pg < helper_buffer->pagecount; pg++) {
+	table = &buffer->sg_table;
+	pagecount = len / PAGE_SIZE;
+	if (sg_alloc_table(table, pagecount, GFP_KERNEL))
+		goto free_buffer;
+
+	sg = table->sgl;
+	for (pg = 0; pg < pagecount; pg++) {
+		struct page *page;
 		/*
 		 * Avoid trying to allocate memory if the process
-		 * has been killed by by SIGKILL
+		 * has been killed by SIGKILL
 		 */
 		if (fatal_signal_pending(current))
-			goto err1;
-
-		helper_buffer->pages[pg] = alloc_page(GFP_KERNEL | __GFP_ZERO);
-		if (!helper_buffer->pages[pg])
-			goto err1;
+			goto free_pages;
+		page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+		if (!page)
+			goto free_pages;
+		sg_set_page(sg, page, page_size(page), 0);
+		sg = sg_next(sg);
 	}
 
 	/* create the dmabuf */
-	dmabuf = heap_helper_export_dmabuf(helper_buffer, fd_flags);
+	exp_info.ops = &system_heap_buf_ops;
+	exp_info.size = buffer->len;
+	exp_info.flags = fd_flags;
+	exp_info.priv = buffer;
+	dmabuf = dma_buf_export(&exp_info);
 	if (IS_ERR(dmabuf)) {
 		ret = PTR_ERR(dmabuf);
-		goto err1;
+		goto free_pages;
 	}
 
-	helper_buffer->dmabuf = dmabuf;
-
 	ret = dma_buf_fd(dmabuf, fd_flags);
 	if (ret < 0) {
 		dma_buf_put(dmabuf);
@@ -90,12 +343,12 @@ static int system_heap_allocate(struct dma_heap *heap,
 
 	return ret;
 
-err1:
-	while (pg > 0)
-		__free_page(helper_buffer->pages[--pg]);
-	kfree(helper_buffer->pages);
-err0:
-	kfree(helper_buffer);
+free_pages:
+	for_each_sgtable_sg(table, sg, i)
+		__free_page(sg_page(sg));
+	sg_free_table(table);
+free_buffer:
+	kfree(buffer);
 
 	return ret;
 }
@@ -107,7 +360,6 @@ static const struct dma_heap_ops system_heap_ops = {
 static int system_heap_create(void)
 {
 	struct dma_heap_export_info exp_info;
-	int ret = 0;
 
 	exp_info.name = "system";
 	exp_info.ops = &system_heap_ops;
@@ -115,9 +367,9 @@ static int system_heap_create(void)
 
 	sys_heap = dma_heap_add(&exp_info);
 	if (IS_ERR(sys_heap))
-		ret = PTR_ERR(sys_heap);
+		return PTR_ERR(sys_heap);
 
-	return ret;
+	return 0;
 }
 module_init(system_heap_create);
 MODULE_LICENSE("GPL v2");
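A rough kernel-side consumer sketch (not from this series; the function
name is hypothetical and error handling is trimmed) showing the path
that lands in the attach/map ops above. Because each attachment gets its
own sg_table copy from dup_sg_table(), two devices mapping the same
buffer never clobber each other's DMA addresses:

#include <linux/dma-buf.h>

static struct sg_table *map_buf_for_dev(struct dma_buf *dmabuf,
					struct device *dev)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	attach = dma_buf_attach(dmabuf, dev);	/* -> system_heap_attach() */
	if (IS_ERR(attach))
		return ERR_CAST(attach);

	/* -> system_heap_map_dma_buf(), which calls dma_map_sgtable() */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt))
		dma_buf_detach(dmabuf, attach);
	/* caller must keep 'attach' around to unmap/detach; trimmed here */
	return sgt;
}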
From patchwork Sat Oct 3 04:02:52 2020
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 67690
From: John Stultz
To: lkml
Cc: John Stultz, Sumit Semwal, Liam Mark, Laura Abbott, Brian Starkey,
 Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz,
 Chris Goldsworthy, Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser,
 James Jones, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH v3 2/7] dma-buf: heaps: Move heap-helper logic into the cma_heap implementation
Date: Sat, 3 Oct 2020 04:02:52 +0000
Message-Id: <20201003040257.62768-3-john.stultz@linaro.org>
In-Reply-To: <20201003040257.62768-1-john.stultz@linaro.org>
References: <20201003040257.62768-1-john.stultz@linaro.org>

Since the heap-helpers logic ended up not being as generic as hoped,
move the heap-helpers dma_buf_ops implementations into the cma_heap
directly. This will allow us to remove the heap-helpers code in a
following patch.
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Chris Goldsworthy
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
v2:
* Fix unused return value and locking issue
  Reported-by: kernel test robot
  Reported-by: Julia Lawall
* Make cma_heap_buf_ops static, suggested by kernel test robot
* Fix uninitialized return in cma
  Reported-by: kernel test robot
* Minor cleanups
v3:
* Use the new sgtable mapping functions, as
  Suggested-by: Daniel Mentz
---
 drivers/dma-buf/heaps/cma_heap.c | 317 ++++++++++++++++++++++++-----
 1 file changed, 267 insertions(+), 50 deletions(-)

diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 626cf7fd033a..366963b94c72 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -2,76 +2,291 @@
 /*
  * DMABUF CMA heap exporter
  *
- * Copyright (C) 2012, 2019 Linaro Ltd.
+ * Copyright (C) 2012, 2019, 2020 Linaro Ltd.
  * Author: for ST-Ericsson.
+ *
+ * Also utilizing parts of Andrew Davis' SRAM heap:
+ * Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com/
+ *	Andrew F. Davis
  */
-
 #include
-#include
 #include
-#include
 #include
+#include
+#include
 #include
-#include
 #include
+#include
+#include
 #include
-#include
 #include
-#include
+#include
-#include "heap-helpers.h"
 
 struct cma_heap {
 	struct dma_heap *heap;
 	struct cma *cma;
 };
 
-static void cma_heap_free(struct heap_helper_buffer *buffer)
+struct cma_heap_buffer {
+	struct cma_heap *heap;
+	struct list_head attachments;
+	struct mutex lock;
+	unsigned long len;
+	struct page *cma_pages;
+	struct page **pages;
+	pgoff_t pagecount;
+	int vmap_cnt;
+	void *vaddr;
+};
+
+struct dma_heap_attachment {
+	struct device *dev;
+	struct sg_table table;
+	struct list_head list;
+};
+
+static int cma_heap_attach(struct dma_buf *dmabuf,
+			   struct dma_buf_attachment *attachment)
 {
-	struct cma_heap *cma_heap = dma_heap_get_drvdata(buffer->heap);
-	unsigned long nr_pages = buffer->pagecount;
-	struct page *cma_pages = buffer->priv_virt;
+	struct cma_heap_buffer *buffer = dmabuf->priv;
+	struct dma_heap_attachment *a;
+	int ret;
 
-	/* free page list */
-	kfree(buffer->pages);
-	/* release memory */
-	cma_release(cma_heap->cma, cma_pages, nr_pages);
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
+					buffer->pagecount, 0,
+					buffer->pagecount << PAGE_SHIFT,
+					GFP_KERNEL);
+	if (ret) {
+		kfree(a);
+		return ret;
+	}
+
+	a->dev = attachment->dev;
+	INIT_LIST_HEAD(&a->list);
+
+	attachment->priv = a;
+
+	mutex_lock(&buffer->lock);
+	list_add(&a->list, &buffer->attachments);
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void cma_heap_detach(struct dma_buf *dmabuf,
+			    struct dma_buf_attachment *attachment)
+{
+	struct cma_heap_buffer *buffer = dmabuf->priv;
+	struct dma_heap_attachment *a = attachment->priv;
+
+	mutex_lock(&buffer->lock);
+	list_del(&a->list);
+	mutex_unlock(&buffer->lock);
+
+	sg_free_table(&a->table);
+	kfree(a);
+}
+
+static struct sg_table *cma_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+					     enum dma_data_direction direction)
+{
+	struct dma_heap_attachment *a = attachment->priv;
+	struct sg_table *table = &a->table;
+	int ret;
+
+	ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+	if (ret)
+		return ERR_PTR(-ENOMEM);
+	return table;
+}
+
+static void cma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+				   struct sg_table *table,
+				   enum dma_data_direction direction)
+{
+	dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int cma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+					     enum dma_data_direction direction)
+{
+	struct cma_heap_buffer *buffer = dmabuf->priv;
+	struct dma_heap_attachment *a;
+
+	if (buffer->vmap_cnt)
+		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	mutex_lock(&buffer->lock);
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static int cma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+					   enum dma_data_direction direction)
+{
+	struct cma_heap_buffer *buffer = dmabuf->priv;
+	struct dma_heap_attachment *a;
+
+	if (buffer->vmap_cnt)
+		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	mutex_lock(&buffer->lock);
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sgtable_for_device(a->dev, &a->table, direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static vm_fault_t cma_heap_vm_fault(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct cma_heap_buffer *buffer = vma->vm_private_data;
+
+	if (vmf->pgoff > buffer->pagecount)
+		return VM_FAULT_SIGBUS;
+
+	vmf->page = buffer->pages[vmf->pgoff];
+	get_page(vmf->page);
+
+	return 0;
+}
+
+static const struct vm_operations_struct dma_heap_vm_ops = {
+	.fault = cma_heap_vm_fault,
+};
+
+static int cma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct cma_heap_buffer *buffer = dmabuf->priv;
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+		return -EINVAL;
+
+	vma->vm_ops = &dma_heap_vm_ops;
+	vma->vm_private_data = buffer;
+
+	return 0;
+}
+
+static void *cma_heap_do_vmap(struct cma_heap_buffer *buffer)
+{
+	void *vaddr;
+
+	vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
+	if (!vaddr)
+		return ERR_PTR(-ENOMEM);
+
+	return vaddr;
+}
+
+static void *cma_heap_vmap(struct dma_buf *dmabuf)
+{
+	struct cma_heap_buffer *buffer = dmabuf->priv;
+	void *vaddr;
+
+	mutex_lock(&buffer->lock);
+	if (buffer->vmap_cnt) {
+		buffer->vmap_cnt++;
+		vaddr = buffer->vaddr;
+		goto out;
+	}
+
+	vaddr = cma_heap_do_vmap(buffer);
+	if (IS_ERR(vaddr))
+		goto out;
+
+	buffer->vaddr = vaddr;
+	buffer->vmap_cnt++;
+out:
+	mutex_unlock(&buffer->lock);
+
+	return vaddr;
+}
+
+static void cma_heap_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+	struct cma_heap_buffer *buffer = dmabuf->priv;
+
+	mutex_lock(&buffer->lock);
+	if (!--buffer->vmap_cnt) {
+		vunmap(buffer->vaddr);
+		buffer->vaddr = NULL;
+	}
+	mutex_unlock(&buffer->lock);
+}
+
+static void cma_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct cma_heap_buffer *buffer = dmabuf->priv;
+	struct cma_heap *cma_heap = buffer->heap;
+
+	if (buffer->vmap_cnt > 0) {
+		WARN(1, "%s: buffer still mapped in the kernel\n", __func__);
+		vunmap(buffer->vaddr);
+	}
+
+	cma_release(cma_heap->cma, buffer->cma_pages, buffer->pagecount);
 	kfree(buffer);
 }
 
-/* dmabuf heap CMA operations functions */
+static const struct dma_buf_ops cma_heap_buf_ops = {
+	.attach = cma_heap_attach,
+	.detach = cma_heap_detach,
+	.map_dma_buf = cma_heap_map_dma_buf,
+	.unmap_dma_buf = cma_heap_unmap_dma_buf,
+	.begin_cpu_access = cma_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access = cma_heap_dma_buf_end_cpu_access,
+	.mmap = cma_heap_mmap,
+	.vmap = cma_heap_vmap,
+	.vunmap = cma_heap_vunmap,
+	.release = cma_heap_dma_buf_release,
+};
+
 static int cma_heap_allocate(struct dma_heap *heap,
-			     unsigned long len,
-			     unsigned long fd_flags,
-			     unsigned long heap_flags)
+			     unsigned long len,
+			     unsigned long fd_flags,
+			     unsigned long heap_flags)
 {
 	struct cma_heap *cma_heap = dma_heap_get_drvdata(heap);
-	struct heap_helper_buffer *helper_buffer;
-	struct page *cma_pages;
+	struct cma_heap_buffer *buffer;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 	size_t size = PAGE_ALIGN(len);
-	unsigned long nr_pages = size >> PAGE_SHIFT;
+	pgoff_t pagecount = size >> PAGE_SHIFT;
 	unsigned long align = get_order(size);
+	struct page *cma_pages;
 	struct dma_buf *dmabuf;
 	int ret = -ENOMEM;
 	pgoff_t pg;
 
-	if (align > CONFIG_CMA_ALIGNMENT)
-		align = CONFIG_CMA_ALIGNMENT;
-
-	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
-	if (!helper_buffer)
+	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+	if (!buffer)
 		return -ENOMEM;
 
-	init_heap_helper_buffer(helper_buffer, cma_heap_free);
-	helper_buffer->heap = heap;
-	helper_buffer->size = len;
+	INIT_LIST_HEAD(&buffer->attachments);
+	mutex_init(&buffer->lock);
+	buffer->len = size;
+
+	if (align > CONFIG_CMA_ALIGNMENT)
+		align = CONFIG_CMA_ALIGNMENT;
 
-	cma_pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
+	cma_pages = cma_alloc(cma_heap->cma, pagecount, align, false);
 	if (!cma_pages)
-		goto free_buf;
+		goto free_buffer;
 
+	/* Clear the cma pages */
 	if (PageHighMem(cma_pages)) {
-		unsigned long nr_clear_pages = nr_pages;
+		unsigned long nr_clear_pages = pagecount;
 		struct page *page = cma_pages;
 
 		while (nr_clear_pages > 0) {
@@ -85,7 +300,6 @@ static int cma_heap_allocate(struct dma_heap *heap,
 			 */
 			if (fatal_signal_pending(current))
 				goto free_cma;
-
 			page++;
 			nr_clear_pages--;
 		}
@@ -93,28 +307,30 @@ static int cma_heap_allocate(struct dma_heap *heap,
 		memset(page_address(cma_pages), 0, size);
 	}
 
-	helper_buffer->pagecount = nr_pages;
-	helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
-					     sizeof(*helper_buffer->pages),
-					     GFP_KERNEL);
-	if (!helper_buffer->pages) {
+	buffer->pages = kmalloc_array(pagecount, sizeof(*buffer->pages), GFP_KERNEL);
+	if (!buffer->pages) {
 		ret = -ENOMEM;
 		goto free_cma;
 	}
 
-	for (pg = 0; pg < helper_buffer->pagecount; pg++)
-		helper_buffer->pages[pg] = &cma_pages[pg];
+	for (pg = 0; pg < pagecount; pg++)
+		buffer->pages[pg] = &cma_pages[pg];
+
+	buffer->cma_pages = cma_pages;
+	buffer->heap = cma_heap;
+	buffer->pagecount = pagecount;
 
 	/* create the dmabuf */
-	dmabuf = heap_helper_export_dmabuf(helper_buffer, fd_flags);
+	exp_info.ops = &cma_heap_buf_ops;
+	exp_info.size = buffer->len;
+	exp_info.flags = fd_flags;
+	exp_info.priv = buffer;
+	dmabuf = dma_buf_export(&exp_info);
 	if (IS_ERR(dmabuf)) {
 		ret = PTR_ERR(dmabuf);
 		goto free_pages;
 	}
 
-	helper_buffer->dmabuf = dmabuf;
-	helper_buffer->priv_virt = cma_pages;
-
 	ret = dma_buf_fd(dmabuf, fd_flags);
 	if (ret < 0) {
 		dma_buf_put(dmabuf);
@@ -125,11 +341,12 @@ static int cma_heap_allocate(struct dma_heap *heap,
 	return ret;
 
 free_pages:
-	kfree(helper_buffer->pages);
+	kfree(buffer->pages);
 free_cma:
-	cma_release(cma_heap->cma, cma_pages, nr_pages);
-free_buf:
-	kfree(helper_buffer);
+	cma_release(cma_heap->cma, cma_pages, pagecount);
+free_buffer:
+	kfree(buffer);
+
 	return ret;
 }
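On the user-space side, CPU access to a buffer from this heap goes
through the fault handler above rather than a single up-front remap; a
minimal sketch (not part of the patch; buf_fd is assumed to be a dma-buf
fd returned by cma_heap_allocate()):

#include <stddef.h>
#include <sys/mman.h>

/* Pages are inserted on demand by cma_heap_vm_fault(). MAP_SHARED is
 * required: cma_heap_mmap() rejects mappings without VM_SHARED/VM_MAYSHARE.
 */
static void *map_dmabuf(int buf_fd, size_t len)
{
	return mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, buf_fd, 0);
}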
From patchwork Sat Oct 3 04:02:53 2020
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 67691
From: John Stultz
To: lkml
Cc: John Stultz, Sumit Semwal, Liam Mark, Laura Abbott, Brian Starkey,
 Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz,
 Chris Goldsworthy, Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser,
 James Jones, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH v3 3/7] dma-buf: heaps: Remove heap-helpers code
Date: Sat, 3 Oct 2020 04:02:53 +0000
Message-Id: <20201003040257.62768-4-john.stultz@linaro.org>
In-Reply-To: <20201003040257.62768-1-john.stultz@linaro.org>
References: <20201003040257.62768-1-john.stultz@linaro.org>

The heap-helpers code was not as generic as initially hoped
and it is now not being used, so remove it from the tree.

Cc: Sumit Semwal
Cc: Liam Mark
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Chris Goldsworthy
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
 drivers/dma-buf/heaps/Makefile       |   1 -
 drivers/dma-buf/heaps/heap-helpers.c | 271 ---------------------------
 drivers/dma-buf/heaps/heap-helpers.h |  53 ------
 3 files changed, 325 deletions(-)
 delete mode 100644 drivers/dma-buf/heaps/heap-helpers.c
 delete mode 100644 drivers/dma-buf/heaps/heap-helpers.h

diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 6e54cdec3da0..974467791032 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,4 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
-obj-y += heap-helpers.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
diff --git a/drivers/dma-buf/heaps/heap-helpers.c b/drivers/dma-buf/heaps/heap-helpers.c
deleted file mode 100644
index 9f964ca3f59c..000000000000
--- a/drivers/dma-buf/heaps/heap-helpers.c
+++ /dev/null
@@ -1,271 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-#include "heap-helpers.h"
-
-void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
-			     void (*free)(struct heap_helper_buffer *))
-{
-	buffer->priv_virt = NULL;
-	mutex_init(&buffer->lock);
-	buffer->vmap_cnt = 0;
-	buffer->vaddr = NULL;
-	buffer->pagecount = 0;
-	buffer->pages = NULL;
-	INIT_LIST_HEAD(&buffer->attachments);
-	buffer->free = free;
-}
-
-struct dma_buf *heap_helper_export_dmabuf(struct heap_helper_buffer *buffer,
-					  int fd_flags)
-{
-	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
-
-	exp_info.ops = &heap_helper_ops;
-	exp_info.size = buffer->size;
-	exp_info.flags = fd_flags;
-	exp_info.priv = buffer;
-
-	return dma_buf_export(&exp_info);
-}
-
-static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
-{
-	void *vaddr;
-
-	vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
-	if (!vaddr)
-		return ERR_PTR(-ENOMEM);
-
-	return vaddr;
-}
-
-static void dma_heap_buffer_destroy(struct heap_helper_buffer *buffer)
-{
-	if (buffer->vmap_cnt > 0) {
-		WARN(1, "%s: buffer still mapped in the kernel\n", __func__);
-		vunmap(buffer->vaddr);
-	}
-
-	buffer->free(buffer);
-}
-
-static void *dma_heap_buffer_vmap_get(struct heap_helper_buffer *buffer)
-{
-	void *vaddr;
-
-	if (buffer->vmap_cnt) {
-		buffer->vmap_cnt++;
-		return buffer->vaddr;
-	}
-	vaddr = dma_heap_map_kernel(buffer);
-	if (IS_ERR(vaddr))
-		return vaddr;
-	buffer->vaddr = vaddr;
-	buffer->vmap_cnt++;
-	return vaddr;
-}
-
-static void dma_heap_buffer_vmap_put(struct heap_helper_buffer *buffer)
-{
-	if (!--buffer->vmap_cnt) {
-		vunmap(buffer->vaddr);
-		buffer->vaddr = NULL;
-	}
-}
-
-struct dma_heaps_attachment {
-	struct device *dev;
-	struct sg_table table;
-	struct list_head list;
-};
-
-static int dma_heap_attach(struct dma_buf *dmabuf,
-			   struct dma_buf_attachment *attachment)
-{
-	struct dma_heaps_attachment *a;
-	struct heap_helper_buffer *buffer = dmabuf->priv;
-	int ret;
-
-	a = kzalloc(sizeof(*a), GFP_KERNEL);
-	if (!a)
-		return -ENOMEM;
-
-	ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
-					buffer->pagecount, 0,
-					buffer->pagecount << PAGE_SHIFT,
-					GFP_KERNEL);
-	if (ret) {
-		kfree(a);
-		return ret;
-	}
-
-	a->dev = attachment->dev;
-	INIT_LIST_HEAD(&a->list);
-
-	attachment->priv = a;
-
-	mutex_lock(&buffer->lock);
-	list_add(&a->list, &buffer->attachments);
-	mutex_unlock(&buffer->lock);
-
-	return 0;
-}
-
-static void dma_heap_detach(struct dma_buf *dmabuf,
-			    struct dma_buf_attachment *attachment)
-{
-	struct dma_heaps_attachment *a = attachment->priv;
-	struct heap_helper_buffer *buffer = dmabuf->priv;
-
-	mutex_lock(&buffer->lock);
-	list_del(&a->list);
-	mutex_unlock(&buffer->lock);
-
-	sg_free_table(&a->table);
-	kfree(a);
-}
-
-static
-struct sg_table *dma_heap_map_dma_buf(struct dma_buf_attachment *attachment,
-				      enum dma_data_direction direction)
-{
-	struct dma_heaps_attachment *a = attachment->priv;
-	struct sg_table *table;
-
-	table = &a->table;
-
-	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
-			direction))
-		table = ERR_PTR(-ENOMEM);
-	return table;
-}
-
-static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
-				   struct sg_table *table,
-				   enum dma_data_direction direction)
-{
-	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
-}
-
-static vm_fault_t dma_heap_vm_fault(struct vm_fault *vmf)
-{
-	struct vm_area_struct *vma = vmf->vma;
-	struct heap_helper_buffer *buffer = vma->vm_private_data;
-
-	if (vmf->pgoff > buffer->pagecount)
-		return VM_FAULT_SIGBUS;
-
-	vmf->page = buffer->pages[vmf->pgoff];
-	get_page(vmf->page);
-
-	return 0;
-}
-
-static const struct vm_operations_struct dma_heap_vm_ops = {
-	.fault = dma_heap_vm_fault,
-};
-
-static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
-{
-	struct heap_helper_buffer *buffer = dmabuf->priv;
-
-	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
-		return -EINVAL;
-
-	vma->vm_ops = &dma_heap_vm_ops;
-	vma->vm_private_data = buffer;
-
-	return 0;
-}
-
-static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
-{
-	struct heap_helper_buffer *buffer = dmabuf->priv;
-
-	dma_heap_buffer_destroy(buffer);
-}
-
-static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
-					     enum dma_data_direction direction)
-{
-	struct heap_helper_buffer *buffer = dmabuf->priv;
-	struct dma_heaps_attachment *a;
-	int ret = 0;
-
-	mutex_lock(&buffer->lock);
-
-	if (buffer->vmap_cnt)
-		invalidate_kernel_vmap_range(buffer->vaddr, buffer->size);
-
-	list_for_each_entry(a, &buffer->attachments, list) {
-		dma_sync_sg_for_cpu(a->dev, a->table.sgl, a->table.nents,
-				    direction);
-	}
-	mutex_unlock(&buffer->lock);
-
-	return ret;
-}
-
-static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
-					   enum dma_data_direction direction)
-{
-	struct heap_helper_buffer *buffer = dmabuf->priv;
-	struct dma_heaps_attachment *a;
-
-	mutex_lock(&buffer->lock);
-
-	if (buffer->vmap_cnt)
-		flush_kernel_vmap_range(buffer->vaddr, buffer->size);
-
-	list_for_each_entry(a, &buffer->attachments, list) {
-		dma_sync_sg_for_device(a->dev, a->table.sgl, a->table.nents,
-				       direction);
-	}
-	mutex_unlock(&buffer->lock);
-
-	return 0;
-}
-
-static void *dma_heap_dma_buf_vmap(struct dma_buf *dmabuf)
-{
-	struct heap_helper_buffer *buffer = dmabuf->priv;
-	void *vaddr;
-
-	mutex_lock(&buffer->lock);
-	vaddr = dma_heap_buffer_vmap_get(buffer);
-	mutex_unlock(&buffer->lock);
-
-	return vaddr;
-}
-
-static void dma_heap_dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
-{
-	struct heap_helper_buffer *buffer = dmabuf->priv;
-
-	mutex_lock(&buffer->lock);
-	dma_heap_buffer_vmap_put(buffer);
-	mutex_unlock(&buffer->lock);
-}
-
-const struct dma_buf_ops heap_helper_ops = {
-	.map_dma_buf = dma_heap_map_dma_buf,
-	.unmap_dma_buf = dma_heap_unmap_dma_buf,
-	.mmap = dma_heap_mmap,
-	.release = dma_heap_dma_buf_release,
-	.attach = dma_heap_attach,
-	.detach = dma_heap_detach,
-	.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
-	.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
-	.vmap = dma_heap_dma_buf_vmap,
-	.vunmap = dma_heap_dma_buf_vunmap,
-};
diff --git a/drivers/dma-buf/heaps/heap-helpers.h b/drivers/dma-buf/heaps/heap-helpers.h
deleted file mode 100644
index 805d2df88024..000000000000
--- a/drivers/dma-buf/heaps/heap-helpers.h
+++ /dev/null
@@ -1,53 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * DMABUF Heaps helper code
- *
- * Copyright (C) 2011 Google, Inc.
- * Copyright (C) 2019 Linaro Ltd.
- */
-
-#ifndef _HEAP_HELPERS_H
-#define _HEAP_HELPERS_H
-
-#include
-#include
-
-/**
- * struct heap_helper_buffer - helper buffer metadata
- * @heap:		back pointer to the heap the buffer came from
- * @dmabuf:		backing dma-buf for this buffer
- * @size:		size of the buffer
- * @priv_virt		pointer to heap specific private value
- * @lock		mutext to protect the data in this structure
- * @vmap_cnt		count of vmap references on the buffer
- * @vaddr		vmap'ed virtual address
- * @pagecount		number of pages in the buffer
- * @pages		list of page pointers
- * @attachments	list of device attachments
- *
- * @free		heap callback to free the buffer
- */
-struct heap_helper_buffer {
-	struct dma_heap *heap;
-	struct dma_buf *dmabuf;
-	size_t size;
-
-	void *priv_virt;
-	struct mutex lock;
-	int vmap_cnt;
-	void *vaddr;
-	pgoff_t pagecount;
-	struct page **pages;
-	struct list_head attachments;
-
-	void (*free)(struct heap_helper_buffer *buffer);
-};
-
-void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
-			     void (*free)(struct heap_helper_buffer *));
-
-struct dma_buf *heap_helper_export_dmabuf(struct heap_helper_buffer *buffer,
-					  int fd_flags);
-
-extern const struct dma_buf_ops heap_helper_ops;
-#endif /* _HEAP_HELPERS_H */
From patchwork Sat Oct 3 04:02:54 2020
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 67695
From: John Stultz
To: lkml
Cc: John Stultz, Sumit Semwal, Liam Mark, Laura Abbott, Brian Starkey,
 Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz,
 Chris Goldsworthy, Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser,
 James Jones, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH v3 4/7] dma-buf: heaps: Skip sync if not mapped
Date: Sat, 3 Oct 2020 04:02:54 +0000
Message-Id: <20201003040257.62768-5-john.stultz@linaro.org>
In-Reply-To: <20201003040257.62768-1-john.stultz@linaro.org>
References: <20201003040257.62768-1-john.stultz@linaro.org>

This patch is basically a port of Ørjan Eide's similar patch for ION:
https://lore.kernel.org/lkml/20200414134629.54567-1-orjan.eide@arm.com/

Only sync the sg-list of a dma-buf heap attachment when the attachment
is actually mapped on the device.

dma-bufs may be synced at any time. They can be reached from user space
via DMA_BUF_IOCTL_SYNC, so there are no guarantees from callers on when
syncs may be attempted, and dma_buf_end_cpu_access() and
dma_buf_begin_cpu_access() may not be paired.

Since the sg_list's dma_address isn't set up until the buffer is used
on the device, and dma_map_sg() is called on it, the dma_address will
be NULL if sync is attempted on the dma-buf before it's mapped on a
device.

Before v5.0 (commit 55897af63091 ("dma-direct: merge swiotlb_dma_ops
into the dma_direct code")) this was a problem, as the dma-api (at
least the swiotlb_dma_ops on arm64) would use the potentially invalid
dma_address. How that failed depended on how the device handled
physical address 0: if 0 was a valid address to physical RAM, that page
would get flushed a lot while the actual pages in the buffer would not
get synced correctly, whereas if 0 was an invalid physical address it
could cause a fault and trigger a crash.
In v5.0 this was incidentally fixed by commit 55897af63091 ("dma-direct:
merge swiotlb_dma_ops into the dma_direct code"), as this moved the
dma-api to use the page pointer in the sg_list, and (for Ion buffers at
least) this will always be valid if the sg_list exists at all.

But the issue was reintroduced in v5.3 by commit 449fa54d6815
("dma-direct: correct the physical addr in
dma_direct_sync_sg_for_cpu/device"), which moved the dma-api back to
the old behaviour of picking the dma_address, which may be invalid.

The dma-buf core doesn't ensure that the buffer is mapped on the
device, and thus has a valid sg_list, before calling the exporter's
begin_cpu_access.

Logic and commit message originally by: Ørjan Eide

Cc: Sumit Semwal
Cc: Liam Mark
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Chris Goldsworthy
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
 drivers/dma-buf/heaps/cma_heap.c    | 10 ++++++++++
 drivers/dma-buf/heaps/system_heap.c | 10 ++++++++++
 2 files changed, 20 insertions(+)

diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 366963b94c72..fece9d7739ae 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -44,6 +44,7 @@ struct dma_heap_attachment {
 	struct device *dev;
 	struct sg_table table;
 	struct list_head list;
+	bool mapped;
 };
 
 static int cma_heap_attach(struct dma_buf *dmabuf,
@@ -68,6 +69,7 @@ static int cma_heap_attach(struct dma_buf *dmabuf,
 
 	a->dev = attachment->dev;
 	INIT_LIST_HEAD(&a->list);
+	a->mapped = false;
 
 	attachment->priv = a;
 
@@ -102,6 +104,7 @@ static struct sg_table *cma_heap_map_dma_buf(struct dma_buf_attachme
 	ret = dma_map_sgtable(attachment->dev, table, direction, 0);
 	if (ret)
 		return ERR_PTR(-ENOMEM);
+	a->mapped = true;
 	return table;
 }
 
@@ -109,6 +112,9 @@ static void cma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
 				   struct sg_table *table,
 				   enum dma_data_direction direction)
 {
+	struct dma_heap_attachment *a = attachment->priv;
+
+	a->mapped = false;
 	dma_unmap_sgtable(attachment->dev, table, direction, 0);
 }
 
@@ -123,6 +129,8 @@ static int cma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
 		dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
 	}
 	mutex_unlock(&buffer->lock);
@@ -141,6 +149,8 @@ static int cma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
 		dma_sync_sgtable_for_device(a->dev, &a->table, direction);
 	}
 	mutex_unlock(&buffer->lock);
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 00ed107b3b76..ef8d47e5a7ff 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -37,6 +37,7 @@ struct dma_heap_attachment {
 	struct device *dev;
 	struct sg_table *table;
 	struct list_head list;
+	bool mapped;
 };
 
 static struct sg_table *dup_sg_table(struct sg_table *table)
@@ -84,6 +85,7 @@ static int system_heap_attach(struct dma_buf *dmabuf,
 	a->table = table;
 	a->dev = attachment->dev;
 	INIT_LIST_HEAD(&a->list);
+	a->mapped = false;
 
 	attachment->priv = a;
 
@@ -120,6 +122,7 @@ static struct sg_table *system_heap_map_dma_buf(struct dma_buf_attac
 	if (ret)
 		return ERR_PTR(ret);
 
+	a->mapped = true;
 	return table;
 }
 
@@ -127,6 +130,9 @@ static void system_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
 				      struct sg_table *table,
 				      enum dma_data_direction direction)
 {
+	struct dma_heap_attachment *a = attachment->priv;
+
+	a->mapped = false;
 	dma_unmap_sgtable(attachment->dev, table, direction, 0);
 }
 
@@ -142,6 +148,8 @@ static int system_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
 
 	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
 		dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
 	}
 	mutex_unlock(&buffer->lock);
@@ -161,6 +169,8 @@ static int system_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
 
 	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
 		dma_sync_sgtable_for_device(a->dev, a->table, direction);
 	}
 	mutex_unlock(&buffer->lock);
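The user-space pattern that motivates the a->mapped check is the
cache-maintenance bracket below (a sketch, using the DMA_BUF_SYNC_*
UAPI from <linux/dma-buf.h>); these ioctls can legitimately arrive
before any device has mapped the buffer, which is exactly when the
heaps now skip the sync:

#include <sys/ioctl.h>
#include <linux/dma-buf.h>

static int with_cpu_access(int buf_fd, void (*touch)(void *), void *addr)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
	};

	if (ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync) < 0)
		return -1;
	touch(addr);			/* CPU reads/writes the buffer */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
	return ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);
}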
From patchwork Sat Oct 3 04:02:55 2020
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 67694
From: John Stultz
To: lkml
Cc: John Stultz, Sumit Semwal, Liam Mark, Laura Abbott, Brian Starkey,
 Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz,
 Chris Goldsworthy, Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser,
 James Jones, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH v3 5/7] dma-buf: system_heap: Allocate higher order pages if available
Date: Sat, 3 Oct 2020 04:02:55 +0000
Message-Id: <20201003040257.62768-6-john.stultz@linaro.org>
In-Reply-To: <20201003040257.62768-1-john.stultz@linaro.org>
References: <20201003040257.62768-1-john.stultz@linaro.org>

While the system heap can return non-contiguous pages, try to allocate
larger-order pages if possible. This will allow slight performance
gains and make implementing page pooling easier.

Cc: Sumit Semwal
Cc: Liam Mark
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Chris Goldsworthy
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
v3:
* Use page_size() rather than opencoding it
---
 drivers/dma-buf/heaps/system_heap.c | 83 ++++++++++++++++++++++-------
 1 file changed, 65 insertions(+), 18 deletions(-)

diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index ef8d47e5a7ff..2b8d4b6abacb 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -40,6 +40,14 @@ struct dma_heap_attachment {
 	bool mapped;
 };
 
+#define HIGH_ORDER_GFP  (((GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN \
+				| __GFP_NORETRY) & ~__GFP_RECLAIM) \
+				| __GFP_COMP)
+#define LOW_ORDER_GFP (GFP_HIGHUSER | __GFP_ZERO | __GFP_COMP)
+static gfp_t order_flags[] = {HIGH_ORDER_GFP, LOW_ORDER_GFP, LOW_ORDER_GFP};
+static const unsigned int orders[] = {8, 4, 0};
+#define NUM_ORDERS ARRAY_SIZE(orders)
+
 static struct sg_table *dup_sg_table(struct sg_table *table)
 {
 	struct sg_table *new_table;
@@ -270,8 +278,11 @@ static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
 	int i;
 
 	table = &buffer->sg_table;
-	for_each_sgtable_sg(table, sg, i)
-		__free_page(sg_page(sg));
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		struct page *page = sg_page(sg);
+
+		__free_pages(page, compound_order(page));
+	}
 	sg_free_table(table);
 	kfree(buffer);
 }
@@ -289,6 +300,26 @@ static const struct dma_buf_ops system_heap_buf_ops = {
 	.release = system_heap_dma_buf_release,
 };
 
+static struct page *alloc_largest_available(unsigned long size,
+					    unsigned int max_order)
+{
+	struct page *page;
+	int i;
+
+	for (i = 0; i < NUM_ORDERS; i++) {
+		if (size < (PAGE_SIZE << orders[i]))
+			continue;
+		if (max_order < orders[i])
+			continue;
+
+		page = alloc_pages(order_flags[i], orders[i]);
+		if (!page)
+			continue;
+		return page;
+	}
+	return NULL;
+}
+
 static int system_heap_allocate(struct dma_heap *heap,
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index ef8d47e5a7ff..2b8d4b6abacb 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -40,6 +40,14 @@ struct dma_heap_attachment {
 	bool mapped;
 };

+#define HIGH_ORDER_GFP  (((GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN \
+				| __GFP_NORETRY) & ~__GFP_RECLAIM) \
+				| __GFP_COMP)
+#define LOW_ORDER_GFP (GFP_HIGHUSER | __GFP_ZERO | __GFP_COMP)
+static gfp_t order_flags[] = {HIGH_ORDER_GFP, LOW_ORDER_GFP, LOW_ORDER_GFP};
+static const unsigned int orders[] = {8, 4, 0};
+#define NUM_ORDERS ARRAY_SIZE(orders)
+
 static struct sg_table *dup_sg_table(struct sg_table *table)
 {
 	struct sg_table *new_table;
@@ -270,8 +278,11 @@ static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
 	int i;

 	table = &buffer->sg_table;
-	for_each_sgtable_sg(table, sg, i)
-		__free_page(sg_page(sg));
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		struct page *page = sg_page(sg);
+
+		__free_pages(page, compound_order(page));
+	}
 	sg_free_table(table);
 	kfree(buffer);
 }
@@ -289,6 +300,26 @@ static const struct dma_buf_ops system_heap_buf_ops = {
 	.release = system_heap_dma_buf_release,
 };

+static struct page *alloc_largest_available(unsigned long size,
+					    unsigned int max_order)
+{
+	struct page *page;
+	int i;
+
+	for (i = 0; i < NUM_ORDERS; i++) {
+		if (size < (PAGE_SIZE << orders[i]))
+			continue;
+		if (max_order < orders[i])
+			continue;
+
+		page = alloc_pages(order_flags[i], orders[i]);
+		if (!page)
+			continue;
+		return page;
+	}
+	return NULL;
+}
+
 static int system_heap_allocate(struct dma_heap *heap,
 				unsigned long len,
 				unsigned long fd_flags,
@@ -296,11 +327,13 @@ static int system_heap_allocate(struct dma_heap *heap,
 {
 	struct system_heap_buffer *buffer;
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	unsigned long size_remaining = len;
+	unsigned int max_order = orders[0];
 	struct dma_buf *dmabuf;
 	struct sg_table *table;
 	struct scatterlist *sg;
-	pgoff_t pagecount;
-	pgoff_t pg;
+	struct list_head pages;
+	struct page *page, *tmp_page;
 	int i, ret = -ENOMEM;

 	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
@@ -312,25 +345,35 @@ static int system_heap_allocate(struct dma_heap *heap,
 	buffer->heap = heap;
 	buffer->len = len;

-	table = &buffer->sg_table;
-	pagecount = len / PAGE_SIZE;
-	if (sg_alloc_table(table, pagecount, GFP_KERNEL))
-		goto free_buffer;
-
-	sg = table->sgl;
-	for (pg = 0; pg < pagecount; pg++) {
-		struct page *page;
+	INIT_LIST_HEAD(&pages);
+	i = 0;
+	while (size_remaining > 0) {
 		/*
 		 * Avoid trying to allocate memory if the process
 		 * has been killed by SIGKILL
 		 */
 		if (fatal_signal_pending(current))
-			goto free_pages;
-		page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+			goto free_buffer;
+
+		page = alloc_largest_available(size_remaining, max_order);
 		if (!page)
-			goto free_pages;
+			goto free_buffer;
+
+		list_add_tail(&page->lru, &pages);
+		size_remaining -= page_size(page);
+		max_order = compound_order(page);
+		i++;
+	}
+
+	table = &buffer->sg_table;
+	if (sg_alloc_table(table, i, GFP_KERNEL))
+		goto free_buffer;
+
+	sg = table->sgl;
+	list_for_each_entry_safe(page, tmp_page, &pages, lru) {
 		sg_set_page(sg, page, page_size(page), 0);
 		sg = sg_next(sg);
+		list_del(&page->lru);
 	}

 	/* create the dmabuf */
@@ -350,14 +393,18 @@ static int system_heap_allocate(struct dma_heap *heap,
 		/* just return, as put will call release and that will free */
 		return ret;
 	}
-
 	return ret;

 free_pages:
-	for_each_sgtable_sg(table, sg, i)
-		__free_page(sg_page(sg));
+	for_each_sgtable_sg(table, sg, i) {
+		struct page *p = sg_page(sg);
+
+		__free_pages(p, compound_order(p));
+	}
 	sg_free_table(table);
 free_buffer:
+	list_for_each_entry_safe(page, tmp_page, &pages, lru)
+		__free_pages(page, compound_order(page));
 	kfree(buffer);

 	return ret;
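The error paths above are worth a second look: which label the code jumps to
depends on who owns the pages at that point. Before sg_set_page() runs they
sit on the local list (free_buffer); afterwards they belong to the sg_table
(free_pages). A user-space model of the same two-label unwind (illustrative
names, malloc standing in for the page allocator):

#include <stdio.h>
#include <stdlib.h>

#define NCHUNKS 4

static int build_table(int fail_after_table)
{
	void *staging[NCHUNKS] = { NULL };	/* stands in for the page list */
	void **table;
	int i;

	for (i = 0; i < NCHUNKS; i++) {
		staging[i] = malloc(64);
		if (!staging[i])
			goto free_buffer;	/* chunks still on the staging list */
	}

	table = calloc(NCHUNKS, sizeof(*table));
	if (!table)
		goto free_buffer;
	for (i = 0; i < NCHUNKS; i++) {		/* ownership moves to the table */
		table[i] = staging[i];
		staging[i] = NULL;
	}
	if (fail_after_table)
		goto free_pages;		/* chunks now live in the table */

	for (i = 0; i < NCHUNKS; i++)
		free(table[i]);
	free(table);
	return 0;

free_pages:
	for (i = 0; i < NCHUNKS; i++)
		free(table[i]);
	free(table);
free_buffer:
	for (i = 0; i < NCHUNKS; i++)
		free(staging[i]);		/* free(NULL) is a no-op */
	return -1;
}

int main(void)
{
	printf("%d %d\n", build_table(0), build_table(1));
	return 0;
}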
From patchwork Sat Oct 3 04:02:56 2020
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 67692
From: John Stultz
To: lkml
Subject: [PATCH v3 6/7] dma-buf: dma-heap: Keep track of the heap device struct
Date: Sat, 3 Oct 2020 04:02:56 +0000
Message-Id: <20201003040257.62768-7-john.stultz@linaro.org>
In-Reply-To: <20201003040257.62768-1-john.stultz@linaro.org>
References: <20201003040257.62768-1-john.stultz@linaro.org>

Keep track of the heap device struct. This will be useful for
special DMA allocations and actions.

Cc: Sumit Semwal
Cc: Liam Mark
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Chris Goldsworthy
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
 drivers/dma-buf/dma-heap.c | 33 +++++++++++++++++++++++++--------
 include/linux/dma-heap.h   |  9 +++++++++
 2 files changed, 34 insertions(+), 8 deletions(-)

diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index afd22c9dbdcf..72c746755d89 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -30,6 +30,7 @@
  * @heap_devt		heap device node
  * @list		list head connecting to list of heaps
  * @heap_cdev		heap char device
+ * @heap_dev		heap device struct
  *
  * Represents a heap of memory from which buffers can be made.
  */
@@ -40,6 +41,7 @@ struct dma_heap {
 	dev_t heap_devt;
 	struct list_head list;
 	struct cdev heap_cdev;
+	struct device *heap_dev;
 };

 static LIST_HEAD(heap_list);
@@ -190,10 +192,21 @@ void *dma_heap_get_drvdata(struct dma_heap *heap)
 	return heap->priv;
 }

+/**
+ * dma_heap_get_dev() - get device struct for the heap
+ * @heap: DMA-Heap to retrieve device struct from
+ *
+ * Returns:
+ * The device struct for the heap.
+ */
+struct device *dma_heap_get_dev(struct dma_heap *heap)
+{
+	return heap->heap_dev;
+}
+
 struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
 {
 	struct dma_heap *heap, *h, *err_ret;
-	struct device *dev_ret;
 	unsigned int minor;
 	int ret;

@@ -247,16 +260,20 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
 		goto err1;
 	}

-	dev_ret = device_create(dma_heap_class,
-				NULL,
-				heap->heap_devt,
-				NULL,
-				heap->name);
-	if (IS_ERR(dev_ret)) {
+	heap->heap_dev = device_create(dma_heap_class,
+				       NULL,
+				       heap->heap_devt,
+				       NULL,
+				       heap->name);
+	if (IS_ERR(heap->heap_dev)) {
 		pr_err("dma_heap: Unable to create device\n");
-		err_ret = ERR_CAST(dev_ret);
+		err_ret = ERR_CAST(heap->heap_dev);
 		goto err2;
 	}
+
+	/* Make sure it doesn't disappear on us */
+	heap->heap_dev = get_device(heap->heap_dev);
+
 	/* Add heap to the list */
 	mutex_lock(&heap_list_lock);
 	list_add(&heap->list, &heap_list);
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 454e354d1ffb..82857e096910 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -50,6 +50,15 @@ struct dma_heap_export_info {
  */
 void *dma_heap_get_drvdata(struct dma_heap *heap);

+/**
+ * dma_heap_get_dev() - get device struct for the heap
+ * @heap: DMA-Heap to retrieve device struct from
+ *
+ * Returns:
+ * The device struct for the heap.
+ */
+struct device *dma_heap_get_dev(struct dma_heap *heap);
+
 /**
  * dma_heap_add - adds a heap to dmabuf heaps
  * @exp_info: information needed to register this heap
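Patch 7 below is the first consumer of this helper: with a device struct of
its own, a heap can issue one-off DMA mapping calls against itself. A hedged,
illustrative fragment of that idea, not code from this series (error handling
omitted, not a complete driver):

/* Illustrative fragment: flush a buffer's pages via the heap's own device. */
#include <linux/dma-heap.h>
#include <linux/dma-mapping.h>

static void example_flush(struct dma_heap *heap, struct sg_table *table)
{
	struct device *dev = dma_heap_get_dev(heap);	/* helper added above */

	/* Mapping without DMA_ATTR_SKIP_CPU_SYNC implies a CPU cache flush. */
	if (!dma_map_sgtable(dev, table, DMA_BIDIRECTIONAL, 0))
		dma_unmap_sgtable(dev, table, DMA_BIDIRECTIONAL, 0);
}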
From patchwork Sat Oct 3 04:02:57 2020
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 67693
From: John Stultz
To: lkml
Subject: [PATCH v3 7/7] dma-buf: system_heap: Add a system-uncached heap re-using the system heap
Date: Sat, 3 Oct 2020 04:02:57 +0000
Message-Id: <20201003040257.62768-8-john.stultz@linaro.org>
In-Reply-To: <20201003040257.62768-1-john.stultz@linaro.org>
References: <20201003040257.62768-1-john.stultz@linaro.org>

This adds a heap that allocates non-contiguous buffers that are
marked as write-combined, so they are not cached by the CPU.

This is useful, as most graphics buffers are usually not touched by
the CPU or only written into once by the CPU. So when mapping the
buffer over and over between devices, we can skip the CPU syncing,
which saves a lot of cache management overhead, greatly improving
performance.

For folks using ION, there was an ION_FLAG_CACHED flag, which
signaled if the returned buffer should be CPU cacheable or not.
With DMA-BUF heaps, we do not yet have such a flag, and by default
the current heaps (system and cma) produce CPU cacheable buffers.
So for folks transitioning from ION to DMA-BUF heaps, this fills
in some of that missing functionality.

There has been a suggestion to make this functionality a flag
(DMAHEAP_FLAG_UNCACHED?) on the system heap, similar to how ION
used ION_FLAG_CACHED. But I want to make sure an _UNCACHED flag
would truly be a generic attribute across all heaps. So far that
has been unclear, so having it as a separate heap seems better
for now. (But I'm open to discussion on this point!)

This is a rework of earlier efforts to add an uncached system heap,
done utilizing the existing system heap and adding just a bit of
logic to handle the uncached case.

Feedback would be very welcome!

Many thanks to Liam Mark for his help to get this working.

Pending open-source users of this code include:
* AOSP HiKey960 gralloc:
  - https://android-review.googlesource.com/c/device/linaro/hikey/+/1399519
  - Visibly improves performance over the system heap
* AOSP Codec2 (possibly, needs more review):
  - https://android-review.googlesource.com/c/platform/frameworks/av/+/1360640/17/media/codec2/vndk/C2DmaBufAllocator.cpp#325

Cc: Sumit Semwal
Cc: Liam Mark
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Chris Goldsworthy
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
 drivers/dma-buf/heaps/system_heap.c | 87 ++++++++++++++++++++++++++---
 1 file changed, 79 insertions(+), 8 deletions(-)
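From user space the new heap would be exercised like any other DMA-BUF heap,
just through a different /dev node. A minimal sketch using the standard heaps
allocation ioctl (assumes a kernel with this series applied; error handling
kept to a minimum):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

int main(void)
{
	struct dma_heap_allocation_data data = {
		.len = 4096 * 16,	/* 64KB buffer */
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap_fd = open("/dev/dma_heap/system-uncached", O_RDONLY);

	if (heap_fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data) < 0) {
		perror("DMA_HEAP_IOCTL_ALLOC");
		close(heap_fd);
		return 1;
	}
	printf("got dma-buf fd %u\n", data.fd);
	close(data.fd);
	close(heap_fd);
	return 0;
}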
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 2b8d4b6abacb..952f1fd9dacf 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -22,6 +22,7 @@
 #include <linux/vmalloc.h>

 struct dma_heap *sys_heap;
+struct dma_heap *sys_uncached_heap;

 struct system_heap_buffer {
 	struct dma_heap *heap;
@@ -31,6 +32,8 @@ struct system_heap_buffer {
 	struct sg_table sg_table;
 	int vmap_cnt;
 	void *vaddr;
+
+	bool uncached;
 };

 struct dma_heap_attachment {
@@ -38,6 +41,8 @@ struct dma_heap_attachment {
 	struct sg_table *table;
 	struct list_head list;
 	bool mapped;
+
+	bool uncached;
 };

 #define HIGH_ORDER_GFP  (((GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN \
@@ -94,7 +99,7 @@ static int system_heap_attach(struct dma_buf *dmabuf,
 	a->dev = attachment->dev;
 	INIT_LIST_HEAD(&a->list);
 	a->mapped = false;
-
+	a->uncached = buffer->uncached;
 	attachment->priv = a;

 	mutex_lock(&buffer->lock);
@@ -124,9 +129,13 @@ static struct sg_table *system_heap_map_dma_buf(struct dma_buf_attachment *attachment,
 {
 	struct dma_heap_attachment *a = attachment->priv;
 	struct sg_table *table = a->table;
+	int attr = 0;
 	int ret;

-	ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+	if (a->uncached)
+		attr = DMA_ATTR_SKIP_CPU_SYNC;
+
+	ret = dma_map_sgtable(attachment->dev, table, direction, attr);
 	if (ret)
 		return ERR_PTR(ret);

@@ -139,9 +148,12 @@ static void system_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
 				      enum dma_data_direction direction)
 {
 	struct dma_heap_attachment *a = attachment->priv;
+	int attr = 0;

+	if (a->uncached)
+		attr = DMA_ATTR_SKIP_CPU_SYNC;
 	a->mapped = false;
-	dma_unmap_sgtable(attachment->dev, table, direction, 0);
+	dma_unmap_sgtable(attachment->dev, table, direction, attr);
 }

 static int system_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
@@ -150,6 +162,9 @@ static int system_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 	struct system_heap_buffer *buffer = dmabuf->priv;
 	struct dma_heap_attachment *a;

+	if (buffer->uncached)
+		return 0;
+
 	mutex_lock(&buffer->lock);

 	if (buffer->vmap_cnt)
@@ -171,6 +186,9 @@ static int system_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 	struct system_heap_buffer *buffer = dmabuf->priv;
 	struct dma_heap_attachment *a;

+	if (buffer->uncached)
+		return 0;
+
 	mutex_lock(&buffer->lock);

 	if (buffer->vmap_cnt)
@@ -194,6 +212,9 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 	struct sg_page_iter piter;
 	int ret;

+	if (buffer->uncached)
+		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+
 	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
 		struct page *page = sg_page_iter_page(&piter);

@@ -215,8 +236,12 @@ static void *system_heap_do_vmap(struct system_heap_buffer *buffer)
 	struct page **pages = vmalloc(sizeof(struct page *) * npages);
 	struct page **tmp = pages;
 	struct sg_page_iter piter;
+	pgprot_t pgprot = PAGE_KERNEL;
 	void *vaddr;

+	if (buffer->uncached)
+		pgprot = pgprot_writecombine(PAGE_KERNEL);
+
 	if (!pages)
 		return ERR_PTR(-ENOMEM);

@@ -225,7 +250,7 @@ static void *system_heap_do_vmap(struct system_heap_buffer *buffer)
 		*tmp++ = sg_page_iter_page(&piter);
 	}

-	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
+	vaddr = vmap(pages, npages, VM_MAP, pgprot);
 	vfree(pages);

 	if (!vaddr)
@@ -278,6 +303,10 @@ static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
 	int i;

 	table = &buffer->sg_table;
+	/* Unmap the uncached buffers from the heap device (pairs with the map on allocate) */
+	if (buffer->uncached)
+		dma_unmap_sgtable(dma_heap_get_dev(buffer->heap), table, DMA_BIDIRECTIONAL, 0);
+
 	for_each_sg(table->sgl, sg, table->nents, i) {
 		struct page *page = sg_page(sg);

@@ -320,10 +349,11 @@ static struct page *alloc_largest_available(unsigned long size,
 	return NULL;
 }

-static int system_heap_allocate(struct dma_heap *heap,
-				unsigned long len,
-				unsigned long fd_flags,
-				unsigned long heap_flags)
+static int system_heap_do_allocate(struct dma_heap *heap,
+				   unsigned long len,
+				   unsigned long fd_flags,
+				   unsigned long heap_flags,
+				   bool uncached)
 {
 	struct system_heap_buffer *buffer;
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
@@ -344,6 +374,7 @@ static int system_heap_do_allocate(struct dma_heap *heap,
 	mutex_init(&buffer->lock);
 	buffer->heap = heap;
 	buffer->len = len;
+	buffer->uncached = uncached;

 	INIT_LIST_HEAD(&pages);
 	i = 0;
@@ -393,6 +424,16 @@ static int system_heap_do_allocate(struct dma_heap *heap,
 		/* just return, as put will call release and that will free */
 		return ret;
 	}
+
+	/*
+	 * For uncached buffers, we need to initially flush cpu cache, since
+	 * the __GFP_ZERO on the allocation means the zeroing was done by the
+	 * cpu and thus it is likely cached. Map (and implicitly flush) it out
+	 * now so we don't get corruption later on.
+	 */
+	if (buffer->uncached)
+		dma_map_sgtable(dma_heap_get_dev(heap), table, DMA_BIDIRECTIONAL, 0);
+
 	return ret;

 free_pages:
@@ -410,10 +451,30 @@ static int system_heap_do_allocate(struct dma_heap *heap,
 	return ret;
 }

+static int system_heap_allocate(struct dma_heap *heap,
+				unsigned long len,
+				unsigned long fd_flags,
+				unsigned long heap_flags)
+{
+	return system_heap_do_allocate(heap, len, fd_flags, heap_flags, false);
+}
+
 static const struct dma_heap_ops system_heap_ops = {
 	.allocate = system_heap_allocate,
 };

+static int system_uncached_heap_allocate(struct dma_heap *heap,
+					 unsigned long len,
+					 unsigned long fd_flags,
+					 unsigned long heap_flags)
+{
+	return system_heap_do_allocate(heap, len, fd_flags, heap_flags, true);
+}
+
+static const struct dma_heap_ops system_uncached_heap_ops = {
+	.allocate = system_uncached_heap_allocate,
+};
+
 static int system_heap_create(void)
 {
 	struct dma_heap_export_info exp_info;
@@ -426,6 +487,16 @@ static int system_heap_create(void)
 	if (IS_ERR(sys_heap))
 		return PTR_ERR(sys_heap);

+	exp_info.name = "system-uncached";
+	exp_info.ops = &system_uncached_heap_ops;
+	exp_info.priv = NULL;
+
+	sys_uncached_heap = dma_heap_add(&exp_info);
+	if (IS_ERR(sys_uncached_heap))
+		return PTR_ERR(sys_uncached_heap);
+
+	dma_coerce_mask_and_coherent(dma_heap_get_dev(sys_uncached_heap), DMA_BIT_MASK(64));
+
 	return 0;
 }
 module_init(system_heap_create);
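Continuing the user-space sketch from earlier, CPU access to such a buffer
goes through a write-combined mapping, so the usual DMA_BUF_IOCTL_SYNC
bracketing should not be needed for device coherency (again an illustrative
fragment, under the same assumption of a kernel with this series applied):

/*
 * Fill the uncached dma-buf from the CPU. The mmap below is backed by
 * pgprot_writecombine() in the heap's mmap handler, so writes bypass
 * the CPU cache; sequential writes combine well, but reads are slow.
 */
#include <string.h>
#include <sys/mman.h>

static int fill_buffer(int dmabuf_fd, size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
		       dmabuf_fd, 0);

	if (p == MAP_FAILED)
		return -1;
	memset(p, 0xa5, len);
	return munmap(p, len);
}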