[v6,02/22] drm/gem: Move mapping of imported dma-bufs to drm_gem_mmap_obj()

Message ID 20220526235040.678984-3-dmitry.osipenko@collabora.com (mailing list archive)
State Not Applicable
Series Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

Commit Message

Dmitry Osipenko May 26, 2022, 11:50 p.m. UTC
Drivers that use the drm_gem_mmap() and drm_gem_mmap_obj() helpers don't
handle imported dma-bufs properly, which results in mapping something other
than the imported dma-buf. For example, on NVIDIA Tegra we get a hard
lockup when userspace writes to the memory mapping of a dma-buf that was
imported into Tegra's DRM GEM.

To fix this bug, move the mapping of imported dma-bufs into
drm_gem_mmap_obj(). Now mmapping of imported dma-bufs works properly for
all DRM drivers.
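
For illustration, the broken path can be reached from userspace roughly as
follows (a sketch, not taken from the original report; the PRIME import and
Tegra mmap ioctls are standard UAPI, but error handling, the origin of
dmabuf_fd and the exact header locations are omitted/assumed):

#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/drm.h>
#include <drm/tegra_drm.h>

static void *map_imported_dmabuf(int drm_fd, int dmabuf_fd, size_t size)
{
	struct drm_prime_handle prime = { .fd = dmabuf_fd };
	struct drm_tegra_gem_mmap gem_mmap = { 0 };

	/* Import the dma-buf into a Tegra DRM GEM handle. */
	ioctl(drm_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &prime);

	/* Look up the fake mmap offset for that handle. */
	gem_mmap.handle = prime.handle;
	ioctl(drm_fd, DRM_IOCTL_TEGRA_GEM_MMAP, &gem_mmap);

	/*
	 * Map the GEM object through the DRM fd. Without the fix, the
	 * returned mapping is not backed by the imported dma-buf, and
	 * writing to it hard-locks the machine on Tegra.
	 */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    drm_fd, gem_mmap.offset);
}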

Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem.c              | 3 +++
 drivers/gpu/drm/drm_gem_shmem_helper.c | 9 ---------
 drivers/gpu/drm/tegra/gem.c            | 4 ++++
 3 files changed, 7 insertions(+), 9 deletions(-)
  

Comments

Thomas Hellström (Intel) June 29, 2022, 6:40 a.m. UTC | #1
On 5/27/22 01:50, Dmitry Osipenko wrote:
> Drivers that use drm_gem_mmap() and drm_gem_mmap_obj() helpers don't
> handle imported dma-bufs properly, which results in mapping of something
> else than the imported dma-buf. For example, on NVIDIA Tegra we get a hard
> lockup when userspace writes to the memory mapping of a dma-buf that was
> imported into Tegra's DRM GEM.
>
> To fix this bug, move mapping of imported dma-bufs to drm_gem_mmap_obj().
> Now mmaping of imported dma-bufs works properly for all DRM drivers.
Same comment about Fixes: as in patch 1,
>
> Cc: stable@vger.kernel.org
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
>   drivers/gpu/drm/drm_gem.c              | 3 +++
>   drivers/gpu/drm/drm_gem_shmem_helper.c | 9 ---------
>   drivers/gpu/drm/tegra/gem.c            | 4 ++++
>   3 files changed, 7 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 86d670c71286..7c0b025508e4 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1038,6 +1038,9 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
>   	if (obj_size < vma->vm_end - vma->vm_start)
>   		return -EINVAL;
>   
> +	if (obj->import_attach)
> +		return dma_buf_mmap(obj->dma_buf, vma, 0);

If we start enabling mmapping of imported dma-bufs on a majority of
drivers in this way, how do we ensure that user-space is not blindly
using the object mmap without calling DMA_BUF_IOCTL_SYNC, which is
needed before and after CPU access of mmap'ed dma-bufs?
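
(For context, the sync sequence in question looks roughly like this on the
userspace side; a minimal sketch, assuming the dma-buf fd and the mapping
already exist:)

#include <string.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

static void cpu_write(int dmabuf_fd, void *map, const void *src, size_t len)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
	};

	/* Tell the exporter that CPU access is about to begin... */
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

	memcpy(map, src, len);	/* CPU access through the mapping */

	/* ...and that it has finished. */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
}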

I was under the impression (admittedly without looking) that the few 
drivers that actually called into dma_buf_mmap() had some private 
user-mode driver code in place that ensured this happened.

/Thomas


> +
>   	/* Take a ref for this mapping of the object, so that the fault
>   	 * handler can dereference the mmap offset's pointer to the object.
>   	 * This reference is cleaned up by the corresponding vm_close
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 8ad0e02991ca..6190f5018986 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -609,17 +609,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
>    */
>   int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma)
>   {
> -	struct drm_gem_object *obj = &shmem->base;
>   	int ret;
>   
> -	if (obj->import_attach) {
> -		/* Drop the reference drm_gem_mmap_obj() acquired.*/
> -		drm_gem_object_put(obj);
> -		vma->vm_private_data = NULL;
> -
> -		return dma_buf_mmap(obj->dma_buf, vma, 0);
> -	}
> -
>   	ret = drm_gem_shmem_get_pages(shmem);
>   	if (ret) {
>   		drm_gem_vm_close(vma);
> diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
> index 7c7dd84e6db8..f92aa20d63bb 100644
> --- a/drivers/gpu/drm/tegra/gem.c
> +++ b/drivers/gpu/drm/tegra/gem.c
> @@ -564,6 +564,10 @@ int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma)
>   {
>   	struct tegra_bo *bo = to_tegra_bo(gem);
>   
> +	/* imported dma-buf is mapped by drm_gem_mmap_obj() */
> +	if (gem->import_attach)
> +		return 0;
> +
>   	if (!bo->pages) {
>   		unsigned long vm_pgoff = vma->vm_pgoff;
>   		int err;
  
Dmitry Osipenko June 29, 2022, 8:22 a.m. UTC | #2
On 6/29/22 09:40, Thomas Hellström (Intel) wrote:
> 
> On 5/27/22 01:50, Dmitry Osipenko wrote:
>> Drivers that use drm_gem_mmap() and drm_gem_mmap_obj() helpers don't
>> handle imported dma-bufs properly, which results in mapping of something
>> else than the imported dma-buf. For example, on NVIDIA Tegra we get a
>> hard
>> lockup when userspace writes to the memory mapping of a dma-buf that was
>> imported into Tegra's DRM GEM.
>>
>> To fix this bug, move mapping of imported dma-bufs to drm_gem_mmap_obj().
>> Now mmaping of imported dma-bufs works properly for all DRM drivers.
> Same comment about Fixes: as in patch 1,
>>
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>> ---
>>   drivers/gpu/drm/drm_gem.c              | 3 +++
>>   drivers/gpu/drm/drm_gem_shmem_helper.c | 9 ---------
>>   drivers/gpu/drm/tegra/gem.c            | 4 ++++
>>   3 files changed, 7 insertions(+), 9 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>> index 86d670c71286..7c0b025508e4 100644
>> --- a/drivers/gpu/drm/drm_gem.c
>> +++ b/drivers/gpu/drm/drm_gem.c
>> @@ -1038,6 +1038,9 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj,
>> unsigned long obj_size,
>>       if (obj_size < vma->vm_end - vma->vm_start)
>>           return -EINVAL;
>>   +    if (obj->import_attach)
>> +        return dma_buf_mmap(obj->dma_buf, vma, 0);
> 
> If we start enabling mmaping of imported dma-bufs on a majority of
> drivers in this way, how do we ensure that user-space is not blindly
> using the object mmap without calling the needed DMA_BUF_IOCTL_SYNC
> which is needed before and after cpu access of mmap'ed dma-bufs?
> 
> I was under the impression (admittedly without looking) that the few
> drivers that actually called into dma_buf_mmap() had some private
> user-mode driver code in place that ensured this happened.

Since it's userspace that does the mapping, it should be the
responsibility of userspace to do all the necessary syncing. I'm not
sure whether anyone in userspace really needs to map imported dma-bufs
in practice. Nevertheless, this use-case is broken and should be fixed
by either allowing the mapping or prohibiting it.
  
Thomas Hellström (Intel) June 29, 2022, 8:43 a.m. UTC | #3
On 6/29/22 10:22, Dmitry Osipenko wrote:
> On 6/29/22 09:40, Thomas Hellström (Intel) wrote:
>> On 5/27/22 01:50, Dmitry Osipenko wrote:
>>> Drivers that use drm_gem_mmap() and drm_gem_mmap_obj() helpers don't
>>> handle imported dma-bufs properly, which results in mapping of something
>>> else than the imported dma-buf. For example, on NVIDIA Tegra we get a
>>> hard
>>> lockup when userspace writes to the memory mapping of a dma-buf that was
>>> imported into Tegra's DRM GEM.
>>>
>>> To fix this bug, move mapping of imported dma-bufs to drm_gem_mmap_obj().
>>> Now mmaping of imported dma-bufs works properly for all DRM drivers.
>> Same comment about Fixes: as in patch 1,
>>> Cc: stable@vger.kernel.org
>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>> ---
>>>    drivers/gpu/drm/drm_gem.c              | 3 +++
>>>    drivers/gpu/drm/drm_gem_shmem_helper.c | 9 ---------
>>>    drivers/gpu/drm/tegra/gem.c            | 4 ++++
>>>    3 files changed, 7 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>>> index 86d670c71286..7c0b025508e4 100644
>>> --- a/drivers/gpu/drm/drm_gem.c
>>> +++ b/drivers/gpu/drm/drm_gem.c
>>> @@ -1038,6 +1038,9 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj,
>>> unsigned long obj_size,
>>>        if (obj_size < vma->vm_end - vma->vm_start)
>>>            return -EINVAL;
>>>    +    if (obj->import_attach)
>>> +        return dma_buf_mmap(obj->dma_buf, vma, 0);
>> If we start enabling mmaping of imported dma-bufs on a majority of
>> drivers in this way, how do we ensure that user-space is not blindly
>> using the object mmap without calling the needed DMA_BUF_IOCTL_SYNC
>> which is needed before and after cpu access of mmap'ed dma-bufs?
>>
>> I was under the impression (admittedly without looking) that the few
>> drivers that actually called into dma_buf_mmap() had some private
>> user-mode driver code in place that ensured this happened.
> Since it's a userspace who does the mapping, then it should be a
> responsibility of userspace to do all the necessary syncing.

Sure, but nothing prohibits user-space from ignoring the syncing, thinking
"it works anyway" after testing only those drivers where the syncing is a
NOP. And by the time a driver that actually needs syncing is tested, it's
too late to fix all the broken user-space.

>   I'm not
> sure whether anyone in userspace really needs to map imported dma-bufs
> in practice. Nevertheless, this use-case is broken and should be fixed
> by either allowing to do the mapping or prohibiting it.
>
Then I'd vote for prohibiting it, at least for now. Moving forward, we
could perhaps revisit the dma-buf need for syncing, requiring those
drivers that actually need it to implement emulated coherent memory,
which can be done not too inefficiently (vmwgfx being one example).

/Thomas
  
Dmitry Osipenko June 29, 2022, 11:06 p.m. UTC | #4
On 6/29/22 11:43, Thomas Hellström (Intel) wrote:
> 
> On 6/29/22 10:22, Dmitry Osipenko wrote:
>> On 6/29/22 09:40, Thomas Hellström (Intel) wrote:
>>> On 5/27/22 01:50, Dmitry Osipenko wrote:
>>>> Drivers that use drm_gem_mmap() and drm_gem_mmap_obj() helpers don't
>>>> handle imported dma-bufs properly, which results in mapping of
>>>> something
>>>> else than the imported dma-buf. For example, on NVIDIA Tegra we get a
>>>> hard
>>>> lockup when userspace writes to the memory mapping of a dma-buf that
>>>> was
>>>> imported into Tegra's DRM GEM.
>>>>
>>>> To fix this bug, move mapping of imported dma-bufs to
>>>> drm_gem_mmap_obj().
>>>> Now mmaping of imported dma-bufs works properly for all DRM drivers.
>>> Same comment about Fixes: as in patch 1,
>>>> Cc: stable@vger.kernel.org
>>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>>> ---
>>>>    drivers/gpu/drm/drm_gem.c              | 3 +++
>>>>    drivers/gpu/drm/drm_gem_shmem_helper.c | 9 ---------
>>>>    drivers/gpu/drm/tegra/gem.c            | 4 ++++
>>>>    3 files changed, 7 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>>>> index 86d670c71286..7c0b025508e4 100644
>>>> --- a/drivers/gpu/drm/drm_gem.c
>>>> +++ b/drivers/gpu/drm/drm_gem.c
>>>> @@ -1038,6 +1038,9 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj,
>>>> unsigned long obj_size,
>>>>        if (obj_size < vma->vm_end - vma->vm_start)
>>>>            return -EINVAL;
>>>>    +    if (obj->import_attach)
>>>> +        return dma_buf_mmap(obj->dma_buf, vma, 0);
>>> If we start enabling mmaping of imported dma-bufs on a majority of
>>> drivers in this way, how do we ensure that user-space is not blindly
>>> using the object mmap without calling the needed DMA_BUF_IOCTL_SYNC
>>> which is needed before and after cpu access of mmap'ed dma-bufs?
>>>
>>> I was under the impression (admittedly without looking) that the few
>>> drivers that actually called into dma_buf_mmap() had some private
>>> user-mode driver code in place that ensured this happened.
>> Since it's a userspace who does the mapping, then it should be a
>> responsibility of userspace to do all the necessary syncing.
> 
> Sure, but nothing prohibits user-space to ignore the syncing thinking
> "It works anyway", testing those drivers where the syncing is a NOP. And
> when a driver that finally needs syncing is tested it's too late to fix
> all broken user-space.
> 
>>   I'm not
>> sure whether anyone in userspace really needs to map imported dma-bufs
>> in practice. Nevertheless, this use-case is broken and should be fixed
>> by either allowing to do the mapping or prohibiting it.
>>
> Then I'd vote for prohibiting it, at least for now. And for the future
> moving forward we could perhaps revisit the dma-buf need for syncing,
> requiring those drivers that actually need it to implement emulated
> coherent memory which can be done not too inefficiently (vmwgfx being
> one example).

Alright, I'll change it to prohibit the mapping. This indeed should be a
better option.
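
(Roughly speaking, instead of forwarding to dma_buf_mmap(), the helper
would then reject the mapping. A sketch of the direction only, not
necessarily the exact error code or form of the eventual patch:)

	/*
	 * In drm_gem_mmap_obj(): refuse to set up a GEM mmap for an
	 * imported dma-buf instead of redirecting it; userspace that
	 * really needs a CPU mapping can mmap() the dma-buf fd itself.
	 */
	if (obj->import_attach)
		return -EINVAL;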
  
Christian König July 4, 2022, 12:33 p.m. UTC | #5
Am 30.06.22 um 01:06 schrieb Dmitry Osipenko:
> On 6/29/22 11:43, Thomas Hellström (Intel) wrote:
>> On 6/29/22 10:22, Dmitry Osipenko wrote:
>>> On 6/29/22 09:40, Thomas Hellström (Intel) wrote:
>>>> On 5/27/22 01:50, Dmitry Osipenko wrote:
>>>>> Drivers that use drm_gem_mmap() and drm_gem_mmap_obj() helpers don't
>>>>> handle imported dma-bufs properly, which results in mapping of
>>>>> something
>>>>> else than the imported dma-buf. For example, on NVIDIA Tegra we get a
>>>>> hard
>>>>> lockup when userspace writes to the memory mapping of a dma-buf that
>>>>> was
>>>>> imported into Tegra's DRM GEM.
>>>>>
>>>>> To fix this bug, move mapping of imported dma-bufs to
>>>>> drm_gem_mmap_obj().
>>>>> Now mmaping of imported dma-bufs works properly for all DRM drivers.
>>>> Same comment about Fixes: as in patch 1,
>>>>> Cc: stable@vger.kernel.org
>>>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>>>> ---
>>>>>     drivers/gpu/drm/drm_gem.c              | 3 +++
>>>>>     drivers/gpu/drm/drm_gem_shmem_helper.c | 9 ---------
>>>>>     drivers/gpu/drm/tegra/gem.c            | 4 ++++
>>>>>     3 files changed, 7 insertions(+), 9 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>>>>> index 86d670c71286..7c0b025508e4 100644
>>>>> --- a/drivers/gpu/drm/drm_gem.c
>>>>> +++ b/drivers/gpu/drm/drm_gem.c
>>>>> @@ -1038,6 +1038,9 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj,
>>>>> unsigned long obj_size,
>>>>>         if (obj_size < vma->vm_end - vma->vm_start)
>>>>>             return -EINVAL;
>>>>>     +    if (obj->import_attach)
>>>>> +        return dma_buf_mmap(obj->dma_buf, vma, 0);
>>>> If we start enabling mmaping of imported dma-bufs on a majority of
>>>> drivers in this way, how do we ensure that user-space is not blindly
>>>> using the object mmap without calling the needed DMA_BUF_IOCTL_SYNC
>>>> which is needed before and after cpu access of mmap'ed dma-bufs?
>>>>
>>>> I was under the impression (admittedly without looking) that the few
>>>> drivers that actually called into dma_buf_mmap() had some private
>>>> user-mode driver code in place that ensured this happened.
>>> Since it's a userspace who does the mapping, then it should be a
>>> responsibility of userspace to do all the necessary syncing.
>> Sure, but nothing prohibits user-space to ignore the syncing thinking
>> "It works anyway", testing those drivers where the syncing is a NOP. And
>> when a driver that finally needs syncing is tested it's too late to fix
>> all broken user-space.
>>
>>>    I'm not
>>> sure whether anyone in userspace really needs to map imported dma-bufs
>>> in practice. Nevertheless, this use-case is broken and should be fixed
>>> by either allowing to do the mapping or prohibiting it.
>>>
>> Then I'd vote for prohibiting it, at least for now. And for the future
>> moving forward we could perhaps revisit the dma-buf need for syncing,
>> requiring those drivers that actually need it to implement emulated
>> coherent memory which can be done not too inefficiently (vmwgfx being
>> one example).
> Alright, I'll change it to prohibit the mapping. This indeed should be a
> better option.

Oh, yes please. But I would expect some people to start screaming.

Over time I've gotten tons of TTM patches because people illegally tried to
mmap() imported DMA-bufs in their drivers.

Anyway, this is probably the right thing to do and we can work on fixing
the fallout later on.

Regards,
Christian.
  
Dmitry Osipenko July 4, 2022, 10:44 p.m. UTC | #6
On 7/4/22 15:33, Christian König wrote:
> Am 30.06.22 um 01:06 schrieb Dmitry Osipenko:
>> On 6/29/22 11:43, Thomas Hellström (Intel) wrote:
>>> On 6/29/22 10:22, Dmitry Osipenko wrote:
>>>> On 6/29/22 09:40, Thomas Hellström (Intel) wrote:
>>>>> On 5/27/22 01:50, Dmitry Osipenko wrote:
>>>>>> Drivers that use drm_gem_mmap() and drm_gem_mmap_obj() helpers don't
>>>>>> handle imported dma-bufs properly, which results in mapping of
>>>>>> something
>>>>>> else than the imported dma-buf. For example, on NVIDIA Tegra we get a
>>>>>> hard
>>>>>> lockup when userspace writes to the memory mapping of a dma-buf that
>>>>>> was
>>>>>> imported into Tegra's DRM GEM.
>>>>>>
>>>>>> To fix this bug, move mapping of imported dma-bufs to
>>>>>> drm_gem_mmap_obj().
>>>>>> Now mmaping of imported dma-bufs works properly for all DRM drivers.
>>>>> Same comment about Fixes: as in patch 1,
>>>>>> Cc: stable@vger.kernel.org
>>>>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>>>>> ---
>>>>>>     drivers/gpu/drm/drm_gem.c              | 3 +++
>>>>>>     drivers/gpu/drm/drm_gem_shmem_helper.c | 9 ---------
>>>>>>     drivers/gpu/drm/tegra/gem.c            | 4 ++++
>>>>>>     3 files changed, 7 insertions(+), 9 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>>>>>> index 86d670c71286..7c0b025508e4 100644
>>>>>> --- a/drivers/gpu/drm/drm_gem.c
>>>>>> +++ b/drivers/gpu/drm/drm_gem.c
>>>>>> @@ -1038,6 +1038,9 @@ int drm_gem_mmap_obj(struct drm_gem_object
>>>>>> *obj,
>>>>>> unsigned long obj_size,
>>>>>>         if (obj_size < vma->vm_end - vma->vm_start)
>>>>>>             return -EINVAL;
>>>>>>     +    if (obj->import_attach)
>>>>>> +        return dma_buf_mmap(obj->dma_buf, vma, 0);
>>>>> If we start enabling mmaping of imported dma-bufs on a majority of
>>>>> drivers in this way, how do we ensure that user-space is not blindly
>>>>> using the object mmap without calling the needed DMA_BUF_IOCTL_SYNC
>>>>> which is needed before and after cpu access of mmap'ed dma-bufs?
>>>>>
>>>>> I was under the impression (admittedly without looking) that the few
>>>>> drivers that actually called into dma_buf_mmap() had some private
>>>>> user-mode driver code in place that ensured this happened.
>>>> Since it's a userspace who does the mapping, then it should be a
>>>> responsibility of userspace to do all the necessary syncing.
>>> Sure, but nothing prohibits user-space to ignore the syncing thinking
>>> "It works anyway", testing those drivers where the syncing is a NOP. And
>>> when a driver that finally needs syncing is tested it's too late to fix
>>> all broken user-space.
>>>
>>>>    I'm not
>>>> sure whether anyone in userspace really needs to map imported dma-bufs
>>>> in practice. Nevertheless, this use-case is broken and should be fixed
>>>> by either allowing to do the mapping or prohibiting it.
>>>>
>>> Then I'd vote for prohibiting it, at least for now. And for the future
>>> moving forward we could perhaps revisit the dma-buf need for syncing,
>>> requiring those drivers that actually need it to implement emulated
>>> coherent memory which can be done not too inefficiently (vmwgfx being
>>> one example).
>> Alright, I'll change it to prohibit the mapping. This indeed should be a
>> better option.
> 
> Oh, yes please. But I would expect that some people start screaming.
> 
> Over time I've got tons of TTM patches because people illegally tried to
> mmap() imported DMA-bufs in their driver.
> 
> Anyway this is probably the right thing to do and we can work on fixing
> the fallout later on.

I already sent out the patch [1] that prohibits the mapping. It would be
great if you all could take a look and give it an r-b, thanks in advance.

[1] https://patchwork.freedesktop.org/patch/492148/
  

Patch

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 86d670c71286..7c0b025508e4 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1038,6 +1038,9 @@  int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
 	if (obj_size < vma->vm_end - vma->vm_start)
 		return -EINVAL;
 
+	if (obj->import_attach)
+		return dma_buf_mmap(obj->dma_buf, vma, 0);
+
 	/* Take a ref for this mapping of the object, so that the fault
 	 * handler can dereference the mmap offset's pointer to the object.
 	 * This reference is cleaned up by the corresponding vm_close
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 8ad0e02991ca..6190f5018986 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -609,17 +609,8 @@  EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
  */
 int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma)
 {
-	struct drm_gem_object *obj = &shmem->base;
 	int ret;
 
-	if (obj->import_attach) {
-		/* Drop the reference drm_gem_mmap_obj() acquired.*/
-		drm_gem_object_put(obj);
-		vma->vm_private_data = NULL;
-
-		return dma_buf_mmap(obj->dma_buf, vma, 0);
-	}
-
 	ret = drm_gem_shmem_get_pages(shmem);
 	if (ret) {
 		drm_gem_vm_close(vma);
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index 7c7dd84e6db8..f92aa20d63bb 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -564,6 +564,10 @@  int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma)
 {
 	struct tegra_bo *bo = to_tegra_bo(gem);
 
+	/* imported dma-buf is mapped by drm_gem_mmap_obj() */
+	if (gem->import_attach)
+		return 0;
+
 	if (!bo->pages) {
 		unsigned long vm_pgoff = vma->vm_pgoff;
 		int err;