From patchwork Tue Feb 2 09:51:04 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 71231
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa, Ricardo Ribalda,
 Sergey Senozhatsky, iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-media@vger.kernel.org
Subject: [PATCH 1/7] dma-mapping: remove the {alloc,free}_noncoherent methods
Date: Tue, 2 Feb 2021 10:51:04 +0100
Message-Id: <20210202095110.1215346-2-hch@lst.de>
In-Reply-To: <20210202095110.1215346-1-hch@lst.de>
References: <20210202095110.1215346-1-hch@lst.de>

It turns out that allowing non-contiguous allocations here was a rather
bad idea, as we'll now need to define ways to get the pages for mmapping
or dma_buf sharing.  Revert this change and stick to the original
concept.  A different API for the use case of non-contiguous allocations
will be added back later.
Signed-off-by: Christoph Hellwig
---
 Documentation/core-api/dma-api.rst | 64 ++++++++++--------------------
 drivers/iommu/dma-iommu.c          | 30 --------------
 include/linux/dma-map-ops.h        |  5 ---
 include/linux/dma-mapping.h        | 17 ++++++--
 kernel/dma/mapping.c               | 40 -------------------
 5 files changed, 35 insertions(+), 121 deletions(-)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 75cb757bbff00a..e6d23f117308df 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -528,16 +528,14 @@ an I/O device, you should not be using this part of the API.

 ::

-	void *
-	dma_alloc_noncoherent(struct device *dev, size_t size,
-			dma_addr_t *dma_handle, enum dma_data_direction dir,
-			gfp_t gfp)
+	struct page *
+	dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
+			enum dma_data_direction dir, gfp_t gfp)

-This routine allocates a region of <size> bytes of consistent memory.  It
-returns a pointer to the allocated region (in the processor's virtual address
-space) or NULL if the allocation failed.  The returned memory may or may not
-be in the kernel direct mapping.  Drivers must not call virt_to_page on
-the returned memory region.
+This routine allocates a region of <size> bytes of non-coherent memory.  It
+returns a pointer to the first struct page for the region, or NULL if the
+allocation failed.  The resulting struct page can be used for everything a
+struct page is suitable for.

 It also returns a <dma_handle> which may be cast to an unsigned integer the
 same width as the bus and given to the device as the DMA address base of
@@ -558,51 +556,33 @@ reused.

 ::

 	void
-	dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
+	dma_free_pages(struct device *dev, size_t size, struct page *page,
 			dma_addr_t dma_handle, enum dma_data_direction dir)

-Free a region of memory previously allocated using dma_alloc_noncoherent().
-dev, size and dma_handle and dir must all be the same as those passed into
-dma_alloc_noncoherent().  cpu_addr must be the virtual address returned by
-dma_alloc_noncoherent().
+Free a region of memory previously allocated using dma_alloc_pages().
+dev, size, dma_handle and dir must all be the same as those passed into
+dma_alloc_pages().  page must be the pointer returned by dma_alloc_pages().

 ::

-	struct page *
-	dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
-			enum dma_data_direction dir, gfp_t gfp)
-
-This routine allocates a region of <size> bytes of non-coherent memory.  It
-returns a pointer to first struct page for the region, or NULL if the
-allocation failed.  The resulting struct page can be used for everything a
-struct page is suitable for.
-
-It also returns a <dma_handle> which may be cast to an unsigned integer the
-same width as the bus and given to the device as the DMA address base of
-the region.
-
-The dir parameter specified if data is read and/or written by the device,
-see dma_map_single() for details.
-
-The gfp parameter allows the caller to specify the ``GFP_`` flags (see
-kmalloc()) for the allocation, but rejects flags used to specify a memory
-zone such as GFP_DMA or GFP_HIGHMEM.
+	void *
+	dma_alloc_noncoherent(struct device *dev, size_t size,
+			dma_addr_t *dma_handle, enum dma_data_direction dir,
+			gfp_t gfp)

-Before giving the memory to the device, dma_sync_single_for_device() needs
-to be called, and before reading memory written by the device,
-dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
-reused.
+This routine is a convenient wrapper around dma_alloc_pages that returns the
+kernel virtual address for the allocated memory instead of the page structure.

 ::

 	void
-	dma_free_pages(struct device *dev, size_t size, struct page *page,
+	dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
 			dma_addr_t dma_handle, enum dma_data_direction dir)

-Free a region of memory previously allocated using dma_alloc_pages().
-dev, size and dma_handle and dir must all be the same as those passed into
-dma_alloc_noncoherent().  page must be the pointer returned by
-dma_alloc_pages().
+Free a region of memory previously allocated using dma_alloc_noncoherent().
+dev, size, dma_handle and dir must all be the same as those passed into
+dma_alloc_noncoherent().  cpu_addr must be the virtual address returned by
+dma_alloc_noncoherent().

 ::

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 4078358ed66ea8..255533faf90599 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1197,34 +1197,6 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 	return cpu_addr;
 }

-#ifdef CONFIG_DMA_REMAP
-static void *iommu_dma_alloc_noncoherent(struct device *dev, size_t size,
-		dma_addr_t *handle, enum dma_data_direction dir, gfp_t gfp)
-{
-	if (!gfpflags_allow_blocking(gfp)) {
-		struct page *page;
-
-		page = dma_common_alloc_pages(dev, size, handle, dir, gfp);
-		if (!page)
-			return NULL;
-		return page_address(page);
-	}
-
-	return iommu_dma_alloc_remap(dev, size, handle, gfp | __GFP_ZERO,
-			PAGE_KERNEL, 0);
-}
-
-static void iommu_dma_free_noncoherent(struct device *dev, size_t size,
-		void *cpu_addr, dma_addr_t handle, enum dma_data_direction dir)
-{
-	__iommu_dma_unmap(dev, handle, size);
-	__iommu_dma_free(dev, size, cpu_addr);
-}
-#else
-#define iommu_dma_alloc_noncoherent	NULL
-#define iommu_dma_free_noncoherent	NULL
-#endif /* CONFIG_DMA_REMAP */
-
 static int iommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
@@ -1295,8 +1267,6 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.free			= iommu_dma_free,
 	.alloc_pages		= dma_common_alloc_pages,
 	.free_pages		= dma_common_free_pages,
-	.alloc_noncoherent	= iommu_dma_alloc_noncoherent,
-	.free_noncoherent	= iommu_dma_free_noncoherent,
 	.mmap			= iommu_dma_mmap,
 	.get_sgtable		= iommu_dma_get_sgtable,
 	.map_page		= iommu_dma_map_page,
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 70fcd0f610ea48..11e02537b9e01b 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -22,11 +22,6 @@ struct dma_map_ops {
 			gfp_t gfp);
 	void (*free_pages)(struct device *dev, size_t size, struct page *vaddr,
 			dma_addr_t dma_handle, enum dma_data_direction dir);
-	void *(*alloc_noncoherent)(struct device *dev, size_t size,
-			dma_addr_t *dma_handle, enum dma_data_direction dir,
-			gfp_t gfp);
-	void (*free_noncoherent)(struct device *dev, size_t size, void *vaddr,
-			dma_addr_t dma_handle, enum dma_data_direction dir);
 	int (*mmap)(struct device *, struct vm_area_struct *,
 			void *, dma_addr_t, size_t, unsigned long attrs);
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 2e49996a8f391a..fbfa3f5abd9498 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -263,10 +263,19 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp);
 void dma_free_pages(struct device *dev, size_t size, struct page *page,
 		dma_addr_t dma_handle, enum dma_data_direction dir);
-void *dma_alloc_noncoherent(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp);
-void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
-		dma_addr_t dma_handle, enum dma_data_direction dir);
+
+static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
+{
+	struct page *page = dma_alloc_pages(dev, size, dma_handle, dir, gfp);
+	return page ? page_address(page) : NULL;
+}
+
+static inline void dma_free_noncoherent(struct device *dev, size_t size,
+		void *vaddr, dma_addr_t dma_handle, enum dma_data_direction dir)
+{
+	dma_free_pages(dev, size, virt_to_page(vaddr), dma_handle, dir);
+}

 static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index f87a89d086544b..68992e35c8c3a7 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -515,46 +515,6 @@ void dma_free_pages(struct device *dev, size_t size, struct page *page,
 }
 EXPORT_SYMBOL_GPL(dma_free_pages);

-void *dma_alloc_noncoherent(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
-{
-	const struct dma_map_ops *ops = get_dma_ops(dev);
-	void *vaddr;
-
-	if (!ops || !ops->alloc_noncoherent) {
-		struct page *page;
-
-		page = dma_alloc_pages(dev, size, dma_handle, dir, gfp);
-		if (!page)
-			return NULL;
-		return page_address(page);
-	}
-
-	size = PAGE_ALIGN(size);
-	vaddr = ops->alloc_noncoherent(dev, size, dma_handle, dir, gfp);
-	if (vaddr)
-		debug_dma_map_page(dev, virt_to_page(vaddr), 0, size, dir,
-				   *dma_handle);
-	return vaddr;
-}
-EXPORT_SYMBOL_GPL(dma_alloc_noncoherent);
-
-void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
-		dma_addr_t dma_handle, enum dma_data_direction dir)
-{
-	const struct dma_map_ops *ops = get_dma_ops(dev);
-
-	if (!ops || !ops->free_noncoherent) {
-		dma_free_pages(dev, size, virt_to_page(vaddr), dma_handle, dir);
-		return;
-	}
-
-	size = PAGE_ALIGN(size);
-	debug_dma_unmap_page(dev, dma_handle, size, dir);
-	ops->free_noncoherent(dev, size, vaddr, dma_handle, dir);
-}
-EXPORT_SYMBOL_GPL(dma_free_noncoherent);
-
 int dma_supported(struct device *dev, u64 mask)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
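A minimal driver-side sketch of the page-based allocator this patch settles
on (the function name, device pointer and size below are illustrative, not
part of the patch)::

	static int mydrv_alloc_buffer(struct device *dev, size_t size)
	{
		struct page *page;
		dma_addr_t dma;
		void *vaddr;

		page = dma_alloc_pages(dev, size, &dma, DMA_BIDIRECTIONAL,
				       GFP_KERNEL);
		if (!page)
			return -ENOMEM;

		/* The pages are in the kernel direct mapping, so this is legal: */
		vaddr = page_address(page);
		memset(vaddr, 0, size);

		/* Hand ownership to the device, as for streaming mappings. */
		dma_sync_single_for_device(dev, dma, size, DMA_BIDIRECTIONAL);
		/* ... program the device with "dma", wait for completion ... */
		dma_sync_single_for_cpu(dev, dma, size, DMA_BIDIRECTIONAL);

		dma_free_pages(dev, size, page, dma, DMA_BIDIRECTIONAL);
		return 0;
	}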
From patchwork Tue Feb 2 09:51:05 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 71232
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa, Ricardo Ribalda,
 Sergey Senozhatsky, iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-media@vger.kernel.org
Subject: [PATCH 2/7] dma-mapping: add a dma_mmap_pages helper
Date: Tue, 2 Feb 2021 10:51:05 +0100
Message-Id: <20210202095110.1215346-3-hch@lst.de>
In-Reply-To: <20210202095110.1215346-1-hch@lst.de>
References: <20210202095110.1215346-1-hch@lst.de>

Add a helper to map memory allocated using dma_alloc_pages into a user
address space, similar to the dma_mmap_attrs function for coherent
allocations.

Signed-off-by: Christoph Hellwig
---
 Documentation/core-api/dma-api.rst | 10 ++++++++++
 include/linux/dma-mapping.h        |  2 ++
 kernel/dma/mapping.c               | 13 +++++++++++++
 3 files changed, 25 insertions(+)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index e6d23f117308df..157a474ae54416 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -563,6 +563,16 @@ Free a region of memory previously allocated using dma_alloc_pages().
 dev, size, dma_handle and dir must all be the same as those passed into
 dma_alloc_pages().  page must be the pointer returned by dma_alloc_pages().

+::
+
+	int
+	dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
+		       size_t size, struct page *page)
+
+Map an allocation returned from dma_alloc_pages() into a user address space.
+dev and size must be the same as those passed into dma_alloc_pages().
+page must be the pointer returned by dma_alloc_pages().
+
 ::

 	void *
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index fbfa3f5abd9498..4977a748cb9483 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -263,6 +263,8 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp);
 void dma_free_pages(struct device *dev, size_t size, struct page *page,
 		dma_addr_t dma_handle, enum dma_data_direction dir);
+int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
+		size_t size, struct page *page);

 static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 68992e35c8c3a7..c1e515496c067b 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -515,6 +515,19 @@ void dma_free_pages(struct device *dev, size_t size, struct page *page,
 }
 EXPORT_SYMBOL_GPL(dma_free_pages);

+int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
+		size_t size, struct page *page)
+{
+	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+	if (vma->vm_pgoff >= count || vma_pages(vma) > count - vma->vm_pgoff)
+		return -ENXIO;
+	return remap_pfn_range(vma, vma->vm_start,
+			       page_to_pfn(page) + vma->vm_pgoff,
+			       vma_pages(vma) << PAGE_SHIFT, vma->vm_page_prot);
+}
+EXPORT_SYMBOL_GPL(dma_mmap_pages);
+
 int dma_supported(struct device *dev, u64 mask)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
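A sketch of the intended use from a driver's mmap file operation
(mydrv_buf and its fields are illustrative; only dma_mmap_pages() itself
comes from the patch)::

	static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct mydrv_buf *buf = file->private_data;

		/*
		 * dev, size and page must be the dma_alloc_pages()
		 * arguments/result; the helper rejects VMAs that do not
		 * fit inside the allocation.
		 */
		return dma_mmap_pages(buf->dev, vma, buf->size, buf->page);
	}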
From patchwork Tue Feb 2 09:51:06 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 71237
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa, Ricardo Ribalda,
 Sergey Senozhatsky, iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-media@vger.kernel.org
Subject: [PATCH 3/7] dma-mapping: refactor dma_{alloc,free}_pages
Date: Tue, 2 Feb 2021 10:51:06 +0100
Message-Id: <20210202095110.1215346-4-hch@lst.de>
In-Reply-To: <20210202095110.1215346-1-hch@lst.de>
References: <20210202095110.1215346-1-hch@lst.de>

Factor out internal versions without the dma_debug calls in preparation
for callers that will need different dma_debug calls.  Note that this
changes the dma_debug calls to be passed the non-page-aligned size
values, but as long as alloc and free agree on one variant we are fine.

Signed-off-by: Christoph Hellwig
---
 kernel/dma/mapping.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index c1e515496c067b..5e87dac6cc6d9a 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -475,11 +475,10 @@ void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
 }
 EXPORT_SYMBOL(dma_free_attrs);

-struct page *dma_alloc_pages(struct device *dev, size_t size,
+static struct page *__dma_alloc_pages(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
-	struct page *page;

 	if (WARN_ON_ONCE(!dev->coherent_dma_mask))
 		return NULL;
@@ -488,31 +487,41 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
 	size = PAGE_ALIGN(size);
 	if (dma_alloc_direct(dev, ops))
-		page = dma_direct_alloc_pages(dev, size, dma_handle, dir, gfp);
-	else if (ops->alloc_pages)
-		page = ops->alloc_pages(dev, size, dma_handle, dir, gfp);
-	else
+		return dma_direct_alloc_pages(dev, size, dma_handle, dir, gfp);
+	if (!ops->alloc_pages)
 		return NULL;
+	return ops->alloc_pages(dev, size, dma_handle, dir, gfp);
+}

-	debug_dma_map_page(dev, page, 0, size, dir, *dma_handle);
+struct page *dma_alloc_pages(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
+{
+	struct page *page = __dma_alloc_pages(dev, size, dma_handle, dir, gfp);
+	if (page)
+		debug_dma_map_page(dev, page, 0, size, dir, *dma_handle);
 	return page;
 }
 EXPORT_SYMBOL_GPL(dma_alloc_pages);

-void dma_free_pages(struct device *dev, size_t size, struct page *page,
+static void __dma_free_pages(struct device *dev, size_t size, struct page *page,
 		dma_addr_t dma_handle, enum dma_data_direction dir)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);

 	size = PAGE_ALIGN(size);
-	debug_dma_unmap_page(dev, dma_handle, size, dir);
-
 	if (dma_alloc_direct(dev, ops))
 		dma_direct_free_pages(dev, size, page, dma_handle, dir);
 	else if (ops->free_pages)
 		ops->free_pages(dev, size, page, dma_handle, dir);
 }
+
+void dma_free_pages(struct device *dev, size_t size, struct page *page,
+		dma_addr_t dma_handle, enum dma_data_direction dir)
+{
+	debug_dma_unmap_page(dev, dma_handle, size, dir);
+	__dma_free_pages(dev, size, page, dma_handle, dir);
+}
 EXPORT_SYMBOL_GPL(dma_free_pages);

 int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
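The user-visible effect is only on dma-debug bookkeeping: the exported
wrappers now record the caller's size rather than the page-aligned one.
A hedged sketch of the invariant (function and values are illustrative)::

	static void mydrv_debug_pairing_demo(struct device *dev)
	{
		size_t size = 3000;	/* deliberately not page aligned */
		struct page *page;
		dma_addr_t dma;

		page = dma_alloc_pages(dev, size, &dma, DMA_TO_DEVICE, GFP_KERNEL);
		if (!page)
			return;
		/* dma-debug recorded the caller's 3000 bytes on allocation ... */
		dma_free_pages(dev, size, page, dma, DMA_TO_DEVICE);
		/* ... and the same 3000 bytes on free, so the entries pair up. */
	}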
From patchwork Tue Feb 2 09:51:07 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 71234
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa, Ricardo Ribalda,
 Sergey Senozhatsky, iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-media@vger.kernel.org
Subject: [PATCH 4/7] dma-mapping: add a dma_alloc_noncontiguous API
Date: Tue, 2 Feb 2021 10:51:07 +0100
Message-Id: <20210202095110.1215346-5-hch@lst.de>
In-Reply-To: <20210202095110.1215346-1-hch@lst.de>
References: <20210202095110.1215346-1-hch@lst.de>

Add a new API that returns a potentially virtually non-contiguous
sg_table and a DMA address.  This API is only properly implemented for
dma-iommu and will simply return a contiguous chunk as a fallback.

The intent is that media drivers can use this API if either:

 - no kernel mapping or only temporary kernel mappings are required.
   That is as a better replacement for DMA_ATTR_NO_KERNEL_MAPPING
 - a kernel mapping is required for cached and DMA mapped pages, but
   the driver also needs the pages to e.g. map them to userspace.
In that sense it is a replacement for some aspects of the recently
removed and never fully implemented DMA_ATTR_NON_CONSISTENT.

Signed-off-by: Christoph Hellwig
---
 Documentation/core-api/dma-api.rst |  74 +++++++++++++++++++++
 include/linux/dma-map-ops.h        |  18 +++++
 include/linux/dma-mapping.h        |  31 +++++++++
 kernel/dma/mapping.c               | 103 +++++++++++++++++++++++++++++
 4 files changed, 226 insertions(+)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 157a474ae54416..e24b2447f4bfe6 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -594,6 +594,80 @@ dev, size, dma_handle and dir must all be the same as those passed into
 dma_alloc_noncoherent().  cpu_addr must be the virtual address returned by
 dma_alloc_noncoherent().

+::
+
+	struct sg_table *
+	dma_alloc_noncontiguous(struct device *dev, size_t size,
+				enum dma_data_direction dir, gfp_t gfp)
+
+This routine allocates <size> bytes of non-coherent and possibly
+non-contiguous memory.  It returns a pointer to a struct sg_table that
+describes the allocated and DMA mapped memory, or NULL if the allocation
+failed.  The resulting memory can be used for everything a struct page
+mapped into a scatterlist is suitable for.
+
+The returned sg_table is guaranteed to have a single DMA mapped segment as
+indicated by sgt->nents, but it might have multiple CPU side segments as
+indicated by sgt->orig_nents.
+
+The dir parameter specifies if data is read and/or written by the device,
+see dma_map_single() for details.
+
+The gfp parameter allows the caller to specify the ``GFP_`` flags (see
+kmalloc()) for the allocation, but rejects flags used to specify a memory
+zone such as GFP_DMA or GFP_HIGHMEM.
+
+Before giving the memory to the device, dma_sync_sgtable_for_device() needs
+to be called, and before reading memory written by the device,
+dma_sync_sgtable_for_cpu(), just like for streaming DMA mappings that are
+reused.
+
+::
+
+	void
+	dma_free_noncontiguous(struct device *dev, size_t size,
+			       struct sg_table *sgt,
+			       enum dma_data_direction dir)
+
+Free memory previously allocated using dma_alloc_noncontiguous().  dev, size,
+and dir must all be the same as those passed into dma_alloc_noncontiguous().
+sgt must be the pointer returned by dma_alloc_noncontiguous().
+
+::
+
+	void *
+	dma_vmap_noncontiguous(struct device *dev, size_t size,
+			       struct sg_table *sgt)
+
+Return a contiguous kernel mapping for an allocation returned from
+dma_alloc_noncontiguous().  dev and size must be the same as those passed into
+dma_alloc_noncontiguous().  sgt must be the pointer returned by
+dma_alloc_noncontiguous().
+
+Once a non-contiguous allocation is mapped using this function, the
+flush_kernel_vmap_range() and invalidate_kernel_vmap_range() APIs must be used
+to manage the coherency of the kernel mapping.
+
+::
+
+	void
+	dma_vunmap_noncontiguous(struct device *dev, void *vaddr)
+
+Unmap a kernel mapping returned by dma_vmap_noncontiguous().  dev must be the
+same as the one passed into dma_alloc_noncontiguous().  vaddr must be the
+pointer returned by dma_vmap_noncontiguous().
+
+::
+
+	int
+	dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
+			       size_t size, struct sg_table *sgt)
+
+Map an allocation returned from dma_alloc_noncontiguous() into a user address
+space.  dev and size must be the same as those passed into
+dma_alloc_noncontiguous().  sgt must be the pointer returned by
+dma_alloc_noncontiguous().
+
 ::

 	int
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 11e02537b9e01b..fe46a41130e662 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -22,6 +22,10 @@ struct dma_map_ops {
 			gfp_t gfp);
 	void (*free_pages)(struct device *dev, size_t size, struct page *vaddr,
 			dma_addr_t dma_handle, enum dma_data_direction dir);
+	struct sg_table *(*alloc_noncontiguous)(struct device *dev, size_t size,
+			enum dma_data_direction dir, gfp_t gfp);
+	void (*free_noncontiguous)(struct device *dev, size_t size,
+			struct sg_table *sgt, enum dma_data_direction dir);
 	int (*mmap)(struct device *, struct vm_area_struct *,
 			void *, dma_addr_t, size_t, unsigned long attrs);
@@ -198,6 +202,20 @@ static inline int dma_mmap_from_global_coherent(struct vm_area_struct *vma,
 }
 #endif /* CONFIG_DMA_DECLARE_COHERENT */

+/*
+ * This is the actual return value from the ->alloc_noncontiguous method.
+ * The users of the DMA API should only care about the sg_table, but to make
+ * the DMA-API internal vmapping and freeing easier we stash away the page
+ * array as well (except for the fallback case).  This can go away any time,
+ * e.g. when a vmap-variant that takes a scatterlist comes along.
+ */
+struct dma_sgt_handle {
+	struct sg_table sgt;
+	struct page **pages;
+};
+#define sgt_handle(sgt) \
+	container_of((sgt), struct dma_sgt_handle, sgt)
+
 int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs);
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 4977a748cb9483..6f4d34739f5cc6 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -144,6 +144,15 @@ u64 dma_get_required_mask(struct device *dev);
 size_t dma_max_mapping_size(struct device *dev);
 bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
 unsigned long dma_get_merge_boundary(struct device *dev);
+struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
+		enum dma_data_direction dir, gfp_t gfp);
+void dma_free_noncontiguous(struct device *dev, size_t size,
+		struct sg_table *sgt, enum dma_data_direction dir);
+void *dma_vmap_noncontiguous(struct device *dev, size_t size,
+		struct sg_table *sgt);
+void dma_vunmap_noncontiguous(struct device *dev, void *vaddr);
+int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
+		size_t size, struct sg_table *sgt);
 #else /* CONFIG_HAS_DMA */
 static inline dma_addr_t dma_map_page_attrs(struct device *dev,
 		struct page *page, size_t offset, size_t size,
@@ -257,6 +266,28 @@ static inline unsigned long dma_get_merge_boundary(struct device *dev)
 {
 	return 0;
 }
+static inline struct sg_table *dma_alloc_noncontiguous(struct device *dev,
+		size_t size, enum dma_data_direction dir, gfp_t gfp)
+{
+	return NULL;
+}
+static inline void dma_free_noncontiguous(struct device *dev, size_t size,
+		struct sg_table *sgt, enum dma_data_direction dir)
+{
+}
+static inline void *dma_vmap_noncontiguous(struct device *dev, size_t size,
+		struct sg_table *sgt)
+{
+	return NULL;
+}
+static inline void dma_vunmap_noncontiguous(struct device *dev, void *vaddr)
+{
+}
+static inline int dma_mmap_noncontiguous(struct device *dev,
+		struct vm_area_struct *vma, size_t size, struct sg_table *sgt)
+{
+	return -EINVAL;
+}
 #endif /* CONFIG_HAS_DMA */

 struct page *dma_alloc_pages(struct device *dev, size_t size,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 5e87dac6cc6d9a..5a62439ed483af 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -537,6 +537,109 @@ int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL_GPL(dma_mmap_pages);

+static struct sg_table *alloc_single_sgt(struct device *dev, size_t size,
+		enum dma_data_direction dir, gfp_t gfp)
+{
+	struct sg_table *sgt;
+	struct page *page;
+
+	sgt = kmalloc(sizeof(*sgt), gfp);
+	if (!sgt)
+		return NULL;
+	if (sg_alloc_table(sgt, 1, gfp))
+		goto out_free_sgt;
+	page = __dma_alloc_pages(dev, size, &sgt->sgl->dma_address, dir, gfp);
+	if (!page)
+		goto out_free_table;
+	sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
+	sg_dma_len(sgt->sgl) = sgt->sgl->length;
+	return sgt;
+out_free_table:
+	sg_free_table(sgt);
+out_free_sgt:
+	kfree(sgt);
+	return NULL;
+}
+
+struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
+		enum dma_data_direction dir, gfp_t gfp)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+	struct sg_table *sgt;
+
+	if (ops && ops->alloc_noncontiguous)
+		sgt = ops->alloc_noncontiguous(dev, size, dir, gfp);
+	else
+		sgt = alloc_single_sgt(dev, size, dir, gfp);
+
+	if (sgt) {
+		sgt->nents = 1;
+		debug_dma_map_sg(dev, sgt->sgl, sgt->orig_nents, 1, dir);
+	}
+	return sgt;
+}
+EXPORT_SYMBOL_GPL(dma_alloc_noncontiguous);
+
+static void free_single_sgt(struct device *dev, size_t size,
+		struct sg_table *sgt, enum dma_data_direction dir)
+{
+	__dma_free_pages(dev, size, sg_page(sgt->sgl), sgt->sgl->dma_address,
+			 dir);
+	sg_free_table(sgt);
+	kfree(sgt);
+}
+
+void dma_free_noncontiguous(struct device *dev, size_t size,
+		struct sg_table *sgt, enum dma_data_direction dir)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	debug_dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, dir);
+	if (ops && ops->free_noncontiguous)
+		ops->free_noncontiguous(dev, size, sgt, dir);
+	else
+		free_single_sgt(dev, size, sgt, dir);
+}
+EXPORT_SYMBOL_GPL(dma_free_noncontiguous);
+
+void *dma_vmap_noncontiguous(struct device *dev, size_t size,
+		struct sg_table *sgt)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+	if (ops && ops->alloc_noncontiguous)
+		return vmap(sgt_handle(sgt)->pages, count, VM_MAP, PAGE_KERNEL);
+	return page_address(sg_page(sgt->sgl));
+}
+EXPORT_SYMBOL_GPL(dma_vmap_noncontiguous);
+
+void dma_vunmap_noncontiguous(struct device *dev, void *vaddr)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (ops && ops->alloc_noncontiguous)
+		vunmap(vaddr);
+}
+EXPORT_SYMBOL_GPL(dma_vunmap_noncontiguous);
+
+int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
+		size_t size, struct sg_table *sgt)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (ops && ops->alloc_noncontiguous) {
+		unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+		if (vma->vm_pgoff >= count ||
+		    vma_pages(vma) > count - vma->vm_pgoff)
+			return -ENXIO;
+		return vm_map_pages(vma, sgt_handle(sgt)->pages, count);
+	}
+	return dma_mmap_pages(dev, vma, size, sg_page(sgt->sgl));
+}
+EXPORT_SYMBOL_GPL(dma_mmap_noncontiguous);
+
 int dma_supported(struct device *dev, u64 mask)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
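Putting the new calls together, a hedged end-to-end sketch for a
device-to-memory buffer (the function name, dev and size are illustrative)::

	static int mydrv_receive(struct device *dev, size_t size)
	{
		struct sg_table *sgt;
		void *vaddr;

		sgt = dma_alloc_noncontiguous(dev, size, DMA_FROM_DEVICE,
					      GFP_KERNEL);
		if (!sgt)
			return -ENOMEM;

		/* A single DMA segment is guaranteed, so a single base
		 * address (sgt->sgl->dma_address) can be programmed into
		 * the device. */

		vaddr = dma_vmap_noncontiguous(dev, size, sgt);
		if (!vaddr) {
			dma_free_noncontiguous(dev, size, sgt, DMA_FROM_DEVICE);
			return -ENOMEM;
		}

		/* After the device wrote the data, claim it for the CPU: */
		dma_sync_sgtable_for_cpu(dev, sgt, DMA_FROM_DEVICE);
		invalidate_kernel_vmap_range(vaddr, size);
		/* ... parse the buffer through vaddr ... */

		dma_vunmap_noncontiguous(dev, vaddr);
		dma_free_noncontiguous(dev, size, sgt, DMA_FROM_DEVICE);
		return 0;
	}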
From patchwork Tue Feb 2 09:51:08 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 71233
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa, Ricardo Ribalda,
 Sergey Senozhatsky, iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-media@vger.kernel.org
Subject: [PATCH 5/7] dma-iommu: refactor iommu_dma_alloc_remap
Date: Tue, 2 Feb 2021 10:51:08 +0100
Message-Id: <20210202095110.1215346-6-hch@lst.de>
In-Reply-To: <20210202095110.1215346-1-hch@lst.de>
References: <20210202095110.1215346-1-hch@lst.de>

Split out a new helper that only allocates an sg_table worth of memory
without mapping it into contiguous kernel address space.

Signed-off-by: Christoph Hellwig
---
 drivers/iommu/dma-iommu.c | 67 ++++++++++++++++++++-------------------
 1 file changed, 35 insertions(+), 32 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 255533faf90599..85cb004d7a44c6 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -661,23 +661,12 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
 	return pages;
 }

-/**
- * iommu_dma_alloc_remap - Allocate and map a buffer contiguous in IOVA space
- * @dev: Device to allocate memory for. Must be a real device
- *	 attached to an iommu_dma_domain
- * @size: Size of buffer in bytes
- * @dma_handle: Out argument for allocated DMA handle
- * @gfp: Allocation flags
- * @prot: pgprot_t to use for the remapped mapping
- * @attrs: DMA attributes for this allocation
- *
- * If @size is less than PAGE_SIZE, then a full CPU page will be allocated,
+/*
+ * If size is less than PAGE_SIZE, then a full CPU page will be allocated,
  * but an IOMMU which supports smaller pages might not map the whole thing.
- *
- * Return: Mapped virtual address, or NULL on failure.
  */
-static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t gfp, pgprot_t prot,
+static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
+		size_t size, struct sg_table *sgt, gfp_t gfp, pgprot_t prot,
 		unsigned long attrs)
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
@@ -687,11 +676,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 	int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
 	unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
 	struct page **pages;
-	struct sg_table sgt;
 	dma_addr_t iova;
-	void *vaddr;
-
-	*dma_handle = DMA_MAPPING_ERROR;

 	if (unlikely(iommu_dma_deferred_attach(dev, domain)))
 		return NULL;
@@ -717,38 +702,56 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 	if (!iova)
 		goto out_free_pages;

-	if (sg_alloc_table_from_pages(&sgt, pages, count, 0, size, GFP_KERNEL))
+	if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, GFP_KERNEL))
 		goto out_free_iova;

 	if (!(ioprot & IOMMU_CACHE)) {
 		struct scatterlist *sg;
 		int i;

-		for_each_sg(sgt.sgl, sg, sgt.orig_nents, i)
+		for_each_sg(sgt->sgl, sg, sgt->orig_nents, i)
 			arch_dma_prep_coherent(sg_page(sg), sg->length);
 	}

-	if (iommu_map_sg_atomic(domain, iova, sgt.sgl, sgt.orig_nents, ioprot)
+	if (iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot)
 			< size)
 		goto out_free_sg;

+	sgt->sgl->dma_address = iova;
+	return pages;
+
+out_free_sg:
+	sg_free_table(sgt);
+out_free_iova:
+	iommu_dma_free_iova(cookie, iova, size, NULL);
+out_free_pages:
+	__iommu_dma_free_pages(pages, count);
+	return NULL;
+}
+
+static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp, pgprot_t prot,
+		unsigned long attrs)
+{
+	struct page **pages;
+	struct sg_table sgt;
+	void *vaddr;
+
+	pages = __iommu_dma_alloc_noncontiguous(dev, size, &sgt, gfp, prot,
+						attrs);
+	if (!pages)
+		return NULL;
+	*dma_handle = sgt.sgl->dma_address;
+	sg_free_table(&sgt);
 	vaddr = dma_common_pages_remap(pages, size, prot,
 			__builtin_return_address(0));
 	if (!vaddr)
 		goto out_unmap;
-
-	*dma_handle = iova;
-	sg_free_table(&sgt);
 	return vaddr;

 out_unmap:
-	__iommu_dma_unmap(dev, iova, size);
-out_free_sg:
-	sg_free_table(&sgt);
-out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size, NULL);
-out_free_pages:
-	__iommu_dma_free_pages(pages, count);
+	__iommu_dma_unmap(dev, *dma_handle, size);
+	__iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
 	return NULL;
 }
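After the split, iommu_dma_alloc_remap() is simply "allocate and IOVA-map,
then vmap".  A simplified sketch of the composition, with error unwinding
elided (this mirrors the new code above rather than adding anything to it)::

	pages = __iommu_dma_alloc_noncontiguous(dev, size, &sgt, gfp, prot, attrs);
	if (!pages)
		return NULL;
	*dma_handle = sgt.sgl->dma_address;	/* keep the IOVA ...          */
	sg_free_table(&sgt);			/* ... but not the table      */
	return dma_common_pages_remap(pages, size, prot,
				      __builtin_return_address(0));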
From patchwork Tue Feb 2 09:51:09 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 71235
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa, Ricardo Ribalda,
 Sergey Senozhatsky, iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-media@vger.kernel.org
Subject: [PATCH 6/7] dma-iommu: implement ->alloc_noncontiguous
Date: Tue, 2 Feb 2021 10:51:09 +0100
Message-Id: <20210202095110.1215346-7-hch@lst.de>
In-Reply-To: <20210202095110.1215346-1-hch@lst.de>
References: <20210202095110.1215346-1-hch@lst.de>

Implement support for allocating a non-contiguous DMA region.
Signed-off-by: Christoph Hellwig
---
 drivers/iommu/dma-iommu.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 85cb004d7a44c6..4e0b170d38d57a 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -718,6 +718,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
 		goto out_free_sg;

 	sgt->sgl->dma_address = iova;
+	sgt->sgl->dma_length = size;
 	return pages;

 out_free_sg:
@@ -755,6 +756,36 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 	return NULL;
 }

+#ifdef CONFIG_DMA_REMAP
+static struct sg_table *iommu_dma_alloc_noncontiguous(struct device *dev,
+		size_t size, enum dma_data_direction dir, gfp_t gfp)
+{
+	struct dma_sgt_handle *sh;
+
+	sh = kmalloc(sizeof(*sh), gfp);
+	if (!sh)
+		return NULL;
+
+	sh->pages = __iommu_dma_alloc_noncontiguous(dev, size, &sh->sgt, gfp,
+						    PAGE_KERNEL, 0);
+	if (!sh->pages) {
+		kfree(sh);
+		return NULL;
+	}
+	return &sh->sgt;
+}
+
+static void iommu_dma_free_noncontiguous(struct device *dev, size_t size,
+		struct sg_table *sgt, enum dma_data_direction dir)
+{
+	struct dma_sgt_handle *sh = sgt_handle(sgt);
+
+	__iommu_dma_unmap(dev, sgt->sgl->dma_address, size);
+	__iommu_dma_free_pages(sh->pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
+	sg_free_table(&sh->sgt);
+}
+#endif /* CONFIG_DMA_REMAP */
+
 static void iommu_dma_sync_single_for_cpu(struct device *dev,
 		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
 {
@@ -1270,6 +1301,10 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.free			= iommu_dma_free,
 	.alloc_pages		= dma_common_alloc_pages,
 	.free_pages		= dma_common_free_pages,
+#ifdef CONFIG_DMA_REMAP
+	.alloc_noncontiguous	= iommu_dma_alloc_noncontiguous,
+	.free_noncontiguous	= iommu_dma_free_noncontiguous,
+#endif
 	.mmap			= iommu_dma_mmap,
 	.get_sgtable		= iommu_dma_get_sgtable,
 	.map_page		= iommu_dma_map_page,
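The free path relies on the dma_sgt_handle layout introduced in patch 4:
the core hands the very same sg_table pointer back to the method, and
container_of() recovers the backing page array.  A minimal sketch of that
recovery (mirroring iommu_dma_free_noncontiguous() above, nothing new)::

	struct dma_sgt_handle *sh = sgt_handle(sgt);

	/* sh->sgt is the embedded table, sh->pages the CPU page array */
	__iommu_dma_free_pages(sh->pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
	sg_free_table(&sh->sgt);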
From patchwork Tue Feb 2 09:51:10 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 71236
X-Patchwork-Delegate: laurent.pinchart@ideasonboard.com
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa, Ricardo Ribalda,
 Sergey Senozhatsky, iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-media@vger.kernel.org
Subject: [PATCH 7/7] media: uvcvideo: Use dma_alloc_noncontiguous API
Date: Tue, 2 Feb 2021 10:51:10 +0100
Message-Id: <20210202095110.1215346-8-hch@lst.de>
In-Reply-To: <20210202095110.1215346-1-hch@lst.de>
References: <20210202095110.1215346-1-hch@lst.de>

From: Ricardo Ribalda

On architectures where there is no coherent caching, such as ARM, use
the dma_alloc_noncontiguous API and manually handle the cache flushing
using dma_sync_sgtable().

With this patch, on the affected architectures, we can measure up to a
20x performance improvement in uvc_video_copy_data_work().

Eg: aarch64 with an external usb camera

NON_CONTIGUOUS
frames:  999
packets: 999
empty:   0 (0 %)
errors:  0
invalid: 0
pts: 0 early, 0 initial, 999 ok
scr: 0 count ok, 0 diff ok
sof: 2048 <= sof <= 0, freq 0.000 kHz
bytes 67034480 : duration 33303
FPS: 29.99
URB: 523446/4993 uS/qty: 104.836 avg 132.532 std 13.230 min 831.094 max (uS)
header: 76564/4993 uS/qty: 15.334 avg 15.229 std 3.438 min 186.875 max (uS)
latency: 468945/4992 uS/qty: 93.939 avg 132.577 std 9.531 min 824.010 max (uS)
decode: 54161/4993 uS/qty: 10.847 avg 6.313 std 1.614 min 111.458 max (uS)
raw decode speed: 9.931 Gbits/s
raw URB handling speed: 1.025 Gbits/s
throughput: 16.102 Mbits/s
URB decode CPU usage 0.162600 %

COHERENT
frames:  999
packets: 999
empty:   0 (0 %)
errors:  0
invalid: 0
pts: 0 early, 0 initial, 999 ok
scr: 0 count ok, 0 diff ok
sof: 2048 <= sof <= 0, freq 0.000 kHz
bytes 54683536 : duration 33302
FPS: 29.99
URB: 1478135/4000 uS/qty: 369.533 avg 390.357 std 22.968 min 3337.865 max (uS)
header: 79761/4000 uS/qty: 19.940 avg 18.495 std 1.875 min 336.719 max (uS)
latency: 281077/4000 uS/qty: 70.269 avg 83.102 std 5.104 min 735.000 max (uS)
decode: 1197057/4000 uS/qty: 299.264 avg 318.080 std 1.615 min 2806.667 max (uS)
raw decode speed: 365.470 Mbits/s
raw URB handling speed: 295.986 Mbits/s
throughput: 13.136 Mbits/s
URB decode CPU usage 3.594500 %

Signed-off-by: Ricardo Ribalda
Signed-off-by: Christoph Hellwig
---
 drivers/media/usb/uvc/uvc_video.c | 79 ++++++++++++++++++++++---------
 drivers/media/usb/uvc/uvcvideo.h  |  4 +-
 2 files changed, 60 insertions(+), 23 deletions(-)

diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
index a6a441d92b9488..0a7d287dc41528 100644
--- a/drivers/media/usb/uvc/uvc_video.c
+++ b/drivers/media/usb/uvc/uvc_video.c
@@ -6,11 +6,13 @@
  * Laurent Pinchart (laurent.pinchart@ideasonboard.com)
  */

+#include <linux/dma-mapping.h>
 #include <linux/kernel.h>
 #include <linux/list.h>
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/usb.h>
+#include <linux/highmem.h>
 #include <linux/videodev2.h>
 #include <linux/vmalloc.h>
 #include <linux/wait.h>
@@ -1097,6 +1099,26 @@ static int uvc_video_decode_start(struct uvc_streaming *stream,
 	return data[0];
 }

+static inline struct device *stream_to_dmadev(struct uvc_streaming *stream)
+{
+	return bus_to_hcd(stream->dev->udev->bus)->self.sysdev;
+}
+
+static void uvc_urb_dma_sync(struct uvc_urb *uvc_urb, bool for_device)
+{
+	struct device *dma_dev = stream_to_dmadev(uvc_urb->stream);
+
+	if (for_device) {
+		dma_sync_sgtable_for_device(dma_dev, uvc_urb->sgt,
+					    DMA_FROM_DEVICE);
+	} else {
+		dma_sync_sgtable_for_cpu(dma_dev, uvc_urb->sgt,
+					 DMA_FROM_DEVICE);
+		invalidate_kernel_vmap_range(uvc_urb->buffer,
+					     uvc_urb->stream->urb_size);
+	}
+}
+
 /*
  * uvc_video_decode_data_work: Asynchronous memcpy processing
  *
@@ -1118,6 +1140,8 @@ static void uvc_video_copy_data_work(struct work_struct *work)
 		uvc_queue_buffer_release(op->buf);
 	}

+	uvc_urb_dma_sync(uvc_urb, true);
+
 	ret = usb_submit_urb(uvc_urb->urb, GFP_KERNEL);
 	if (ret < 0)
 		uvc_printk(KERN_ERR, "Failed to resubmit video URB (%d).\n",
@@ -1539,10 +1563,12 @@ static void uvc_video_complete(struct urb *urb)
 	 * Process the URB headers, and optionally queue expensive memcpy tasks
 	 * to be deferred to a work queue.
 	 */
+	uvc_urb_dma_sync(uvc_urb, false);
 	stream->decode(uvc_urb, buf, buf_meta);

 	/* If no async work is needed, resubmit the URB immediately. */
 	if (!uvc_urb->async_operations) {
+		uvc_urb_dma_sync(uvc_urb, true);
 		ret = usb_submit_urb(uvc_urb->urb, GFP_ATOMIC);
 		if (ret < 0)
 			uvc_printk(KERN_ERR,
@@ -1559,24 +1585,46 @@ static void uvc_video_complete(struct urb *urb)
  */
 static void uvc_free_urb_buffers(struct uvc_streaming *stream)
 {
+	struct device *dma_dev = stream_to_dmadev(stream);
 	struct uvc_urb *uvc_urb;

 	for_each_uvc_urb(uvc_urb, stream) {
 		if (!uvc_urb->buffer)
 			continue;

-#ifndef CONFIG_DMA_NONCOHERENT
-		usb_free_coherent(stream->dev->udev, stream->urb_size,
-				  uvc_urb->buffer, uvc_urb->dma);
-#else
-		kfree(uvc_urb->buffer);
-#endif
+		dma_vunmap_noncontiguous(dma_dev, uvc_urb->buffer);
+		dma_free_noncontiguous(dma_dev, stream->urb_size, uvc_urb->sgt,
+				       DMA_FROM_DEVICE);
+
 		uvc_urb->buffer = NULL;
 	}

 	stream->urb_size = 0;
 }

+static bool uvc_alloc_urb_buffer(struct uvc_streaming *stream,
+		struct uvc_urb *uvc_urb, gfp_t gfp_flags)
+{
+	struct device *dma_dev = stream_to_dmadev(stream);
+
+	uvc_urb->sgt = dma_alloc_noncontiguous(dma_dev, stream->urb_size,
+					       DMA_FROM_DEVICE, gfp_flags);
+	if (!uvc_urb->sgt)
+		return false;
+	uvc_urb->dma = uvc_urb->sgt->sgl->dma_address;
+
+	uvc_urb->buffer = dma_vmap_noncontiguous(dma_dev, stream->urb_size,
+						 uvc_urb->sgt);
+	if (!uvc_urb->buffer) {
+		dma_free_noncontiguous(dma_dev, stream->urb_size,
+				       uvc_urb->sgt, DMA_FROM_DEVICE);
+		return false;
+	}
+
+	return true;
+}
+
 /*
  * Allocate transfer buffers. This function can be called with buffers
  * already allocated when resuming from suspend, in which case it will
@@ -1607,19 +1655,11 @@ static int uvc_alloc_urb_buffers(struct uvc_streaming *stream,

 	/* Retry allocations until one succeed. */
 	for (; npackets > 1; npackets /= 2) {
+		stream->urb_size = psize * npackets;
 		for (i = 0; i < UVC_URBS; ++i) {
 			struct uvc_urb *uvc_urb = &stream->uvc_urb[i];

-			stream->urb_size = psize * npackets;
-#ifndef CONFIG_DMA_NONCOHERENT
-			uvc_urb->buffer = usb_alloc_coherent(
-				stream->dev->udev, stream->urb_size,
-				gfp_flags | __GFP_NOWARN, &uvc_urb->dma);
-#else
-			uvc_urb->buffer =
-				kmalloc(stream->urb_size, gfp_flags | __GFP_NOWARN);
-#endif
-			if (!uvc_urb->buffer) {
+			if (!uvc_alloc_urb_buffer(stream, uvc_urb, gfp_flags)) {
 				uvc_free_urb_buffers(stream);
 				break;
 			}
@@ -1728,12 +1768,8 @@ static int uvc_init_video_isoc(struct uvc_streaming *stream,
 		urb->context = uvc_urb;
 		urb->pipe = usb_rcvisocpipe(stream->dev->udev,
 				ep->desc.bEndpointAddress);
-#ifndef CONFIG_DMA_NONCOHERENT
 		urb->transfer_flags = URB_ISO_ASAP | URB_NO_TRANSFER_DMA_MAP;
 		urb->transfer_dma = uvc_urb->dma;
-#else
-		urb->transfer_flags = URB_ISO_ASAP;
-#endif
 		urb->interval = ep->desc.bInterval;
 		urb->transfer_buffer = uvc_urb->buffer;
 		urb->complete = uvc_video_complete;
@@ -1793,10 +1829,8 @@ static int uvc_init_video_bulk(struct uvc_streaming *stream,

 		usb_fill_bulk_urb(urb, stream->dev->udev, pipe, uvc_urb->buffer,
 				  size, uvc_video_complete, uvc_urb);
-#ifndef CONFIG_DMA_NONCOHERENT
 		urb->transfer_flags = URB_NO_TRANSFER_DMA_MAP;
 		urb->transfer_dma = uvc_urb->dma;
-#endif

 		uvc_urb->urb = urb;
 	}
@@ -1891,6 +1925,7 @@ static int uvc_video_start_transfer(struct uvc_streaming *stream,

 	/* Submit the URBs. */
 	for_each_uvc_urb(uvc_urb, stream) {
+		uvc_urb_dma_sync(uvc_urb, true);
 		ret = usb_submit_urb(uvc_urb->urb, gfp_flags);
 		if (ret < 0) {
 			uvc_printk(KERN_ERR, "Failed to submit URB %u (%d).\n",
diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
index a3dfacf069c44d..a386114bd22999 100644
--- a/drivers/media/usb/uvc/uvcvideo.h
+++ b/drivers/media/usb/uvc/uvcvideo.h
@@ -521,7 +521,8 @@ struct uvc_copy_op {
  * @urb: the URB described by this context structure
  * @stream: UVC streaming context
  * @buffer: memory storage for the URB
- * @dma: DMA coherent addressing for the urb_buffer
+ * @dma: Allocated DMA handle
+ * @sgt: sg_table with the urb locations in memory
  * @async_operations: counter to indicate the number of copy operations
  * @copy_operations: work descriptors for asynchronous copy operations
  * @work: work queue entry for asynchronous decode
@@ -532,6 +533,7 @@ struct uvc_urb {
 	char *buffer;

 	dma_addr_t dma;
+	struct sg_table *sgt;

 	unsigned int async_operations;
 	struct uvc_copy_op copy_operations[UVC_MAX_PACKETS];
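The buffer ownership rule the driver now follows can be summarised in a
short hedged sketch (uvc_urb, its fields and stream_to_dmadev() are from
the patch; the two wrapper functions themselves are illustrative)::

	/* Before (re)submitting a URB: give the buffer to the device. */
	static void buffer_to_device(struct uvc_urb *uvc_urb)
	{
		dma_sync_sgtable_for_device(stream_to_dmadev(uvc_urb->stream),
					    uvc_urb->sgt, DMA_FROM_DEVICE);
	}

	/* On completion, before parsing: take the buffer back for the CPU,
	 * including the vmap alias created by dma_vmap_noncontiguous(). */
	static void buffer_to_cpu(struct uvc_urb *uvc_urb)
	{
		dma_sync_sgtable_for_cpu(stream_to_dmadev(uvc_urb->stream),
					 uvc_urb->sgt, DMA_FROM_DEVICE);
		invalidate_kernel_vmap_range(uvc_urb->buffer,
					     uvc_urb->stream->urb_size);
	}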