Message ID | 20240112092014.23999-5-yong.wu@mediatek.com (mailing list archive) |
---|---|
State | Not Applicable |
Headers |
From | Yong Wu <yong.wu@mediatek.com> |
To | Rob Herring <robh+dt@kernel.org>, Matthias Brugger <matthias.bgg@gmail.com>, christian.koenig@amd.com, Sumit Semwal <sumit.semwal@linaro.org> |
Cc | Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>, Conor Dooley <conor+dt@kernel.org>, Benjamin Gaignard <benjamin.gaignard@collabora.com>, Brian Starkey <Brian.Starkey@arm.com>, John Stultz <jstultz@google.com>, tjmercier@google.com, AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, Robin Murphy <robin.murphy@arm.com>, Vijayanand Jitta <quic_vjitta@quicinc.com>, Joakim Bech <joakim.bech@linaro.org>, Jeffrey Kardatzke <jkardatzke@google.com>, Pavel Machek <pavel@ucw.cz>, Simon Ser <contact@emersion.fr>, Pekka Paalanen <ppaalanen@gmail.com>, jianjiao.zeng@mediatek.com, kuohong.wang@mediatek.com, youlin.pei@mediatek.com |
Subject | [PATCH v4 4/7] dma-buf: heaps: restricted_heap: Add dma_ops |
Date | Fri, 12 Jan 2024 17:20:11 +0800 |
Message-ID | <20240112092014.23999-5-yong.wu@mediatek.com> |
In-Reply-To | <20240112092014.23999-1-yong.wu@mediatek.com> |
Series | dma-buf: heaps: Add restricted heap |
Commit Message
Yong Wu (吴勇)
Jan. 12, 2024, 9:20 a.m. UTC
Add the dma_ops for this restricted heap. For restricted buffers,
cache ops and mmap are not allowed, so return -EPERM for them.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
---
drivers/dma-buf/heaps/restricted_heap.c | 103 ++++++++++++++++++++++++
1 file changed, 103 insertions(+)
Comments
On Fri, Jan 12, 2024 at 05:20:11PM +0800, Yong Wu wrote:
> Add the dma_ops for this restricted heap. For restricted buffer,
> cache_ops/mmap are not allowed, thus return EPERM for them.
>
> [quoted patch snipped]
>
> +	ret = sg_alloc_table(table, 1, GFP_KERNEL);
> +	if (ret)
> +		goto err_free_sgt;
> +	sg_set_page(table->sgl, NULL, restricted_buf->size, 0);

So this is definitely broken and violating the dma-buf api rules. You
cannot let attach succeed and supply a dummy/invalid sg table.

Two options:

- Reject ->attach for all these buffers with -EBUSY and provide instead a
  private api for these secure buffers, similar to how virtio_dma_buf has
  private virtio-specific apis. This interface would need to be
  standardized across all arm TEE users, so that we don't have a
  disastrous proliferation of apis.

- Allow ->attach, but _only_ for drivers/devices which can access the
  secure buffer correctly, and only if you can put the right secure buffer
  address into the sg table directly. If dma to a secure buffer for a
  given struct device * will not work correctly (i.e. without data
  corruption), you _must_ reject the attach attempt with -EBUSY.

The 2nd approach would be my preferred one, if it's technically possible.

Also my understanding is that arm TEE is standardized, so I think we'll at
least want some acks from other soc people whether this will work for them
too.

Finally the usual drill:

- this also needs the driver side support, if there's any changes needed.
  Just the new heap isn't enough.

- and for drm you need open userspace for this. Doesn't have to be the
  full content protection decode pipeline; the drivers in drm that landed
  secure buffer support thus far enabled it using the
  EGL_EXT_protected_content extension using gl, which side steps all the
  complications around content decryption keys and support.

Cheers, Sima
On Fri, Jan 12, 2024 at 10:41:14AM +0100, Daniel Vetter wrote:
> On Fri, Jan 12, 2024 at 05:20:11PM +0800, Yong Wu wrote:
> > Add the dma_ops for this restricted heap. For restricted buffer,
> > cache_ops/mmap are not allowed, thus return EPERM for them.
> >
> > [quoted patch snipped]
>
> [previous review comments snipped]
>
> Finally the usual drill:
> - this also needs the driver side support, if there's any changes needed.
>   Just the new heap isn't enough.

Ok I quickly scrolled through your drm patches and that confirms that the
current dma-buf interface you're implementing is just completely breaking
the api. And you need to paper over that with all kinds of very icky
special-casing.

So definitely need to rethink the overall design between dma-buf heaps and
drivers here.
-Sima

> - and for drm you need open userspace for this. Doesn't have to be the
>   full content protection decode pipeline; the drivers in drm that landed
>   secure buffer support thus far enabled it using the
>   EGL_EXT_protected_content extension using gl, which side steps all the
>   complications around content decryption keys and support.
>
> Cheers, Sima

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Fri, 2024-01-12 at 10:49 +0100, Daniel Vetter wrote:
> On Fri, Jan 12, 2024 at 10:41:14AM +0100, Daniel Vetter wrote:
> > [quoted patch and review comments snipped]
>
> Ok I quickly scrolled through your drm patches and that confirms that
> the current dma-buf interface you're implementing is just completely
> breaking the api. And you need to paper over that with all kinds of
> very icky special-casing.
>
> So definitely need to rethink the overall design between dma-buf heaps
> and drivers here.

Hi,

Thanks very much for the review, and sorry for replying so late.

We reconstructed our TEE commands so that the kernel can obtain the
valid PA/pages; the sg operations can then run normally. I will send
the next version.

Thanks.

> -Sima
diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c
index 8c266a0f6192..ec4c63d2112d 100644
--- a/drivers/dma-buf/heaps/restricted_heap.c
+++ b/drivers/dma-buf/heaps/restricted_heap.c
@@ -12,6 +12,10 @@
 
 #include "restricted_heap.h"
 
+struct restricted_heap_attachment {
+	struct sg_table		*table;
+};
+
 static int
 restricted_heap_memory_allocate(struct restricted_heap *heap, struct restricted_buffer *buf)
 {
@@ -45,6 +49,104 @@ restricted_heap_memory_free(struct restricted_heap *heap, struct restricted_buff
 	ops->memory_free(heap, buf);
 }
 
+static int restricted_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct restricted_buffer *restricted_buf = dmabuf->priv;
+	struct restricted_heap_attachment *a;
+	struct sg_table *table;
+	int ret;
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	table = kzalloc(sizeof(*table), GFP_KERNEL);
+	if (!table) {
+		ret = -ENOMEM;
+		goto err_free_attach;
+	}
+
+	ret = sg_alloc_table(table, 1, GFP_KERNEL);
+	if (ret)
+		goto err_free_sgt;
+	sg_set_page(table->sgl, NULL, restricted_buf->size, 0);
+
+	a->table = table;
+	attachment->priv = a;
+
+	return 0;
+
+err_free_sgt:
+	kfree(table);
+err_free_attach:
+	kfree(a);
+	return ret;
+}
+
+static void restricted_heap_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct restricted_heap_attachment *a = attachment->priv;
+
+	sg_free_table(a->table);
+	kfree(a->table);
+	kfree(a);
+}
+
+static struct sg_table *
+restricted_heap_map_dma_buf(struct dma_buf_attachment *attachment, enum dma_data_direction direct)
+{
+	struct restricted_heap_attachment *a = attachment->priv;
+	struct sg_table *table = a->table;
+
+	return table;
+}
+
+static void
+restricted_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, struct sg_table *table,
+			      enum dma_data_direction direction)
+{
+	struct restricted_heap_attachment *a = attachment->priv;
+
+	WARN_ON(a->table != table);
+}
+
+static int
+restricted_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction direction)
+{
+	return -EPERM;
+}
+
+static int
+restricted_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction direction)
+{
+	return -EPERM;
+}
+
+static int restricted_heap_dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	return -EPERM;
+}
+
+static void restricted_heap_free(struct dma_buf *dmabuf)
+{
+	struct restricted_buffer *restricted_buf = dmabuf->priv;
+	struct restricted_heap *heap = dma_heap_get_drvdata(restricted_buf->heap);
+
+	restricted_heap_memory_free(heap, restricted_buf);
+	kfree(restricted_buf);
+}
+
+static const struct dma_buf_ops restricted_heap_buf_ops = {
+	.attach = restricted_heap_attach,
+	.detach = restricted_heap_detach,
+	.map_dma_buf = restricted_heap_map_dma_buf,
+	.unmap_dma_buf = restricted_heap_unmap_dma_buf,
+	.begin_cpu_access = restricted_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access = restricted_heap_dma_buf_end_cpu_access,
+	.mmap = restricted_heap_dma_buf_mmap,
+	.release = restricted_heap_free,
+};
+
 static struct dma_buf *
 restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 			 unsigned long fd_flags, unsigned long heap_flags)
@@ -66,6 +168,7 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 	if (ret)
 		goto err_free_buf;
 	exp_info.exp_name = dma_heap_get_name(heap);
+	exp_info.ops = &restricted_heap_buf_ops;
 	exp_info.size = restricted_buf->size;
 	exp_info.flags = fd_flags;
 	exp_info.priv = restricted_buf;