Message ID | 1691634304-2158-5-git-send-email-quic_vgarodia@quicinc.com (mailing list archive) |
---|---|
State | Accepted |
Delegated to: | Stanimir Varbanov |
Headers |
From: Vikash Garodia <quic_vgarodia@quicinc.com>
To: stanimir.k.varbanov@gmail.com, bryan.odonoghue@linaro.org, agross@kernel.org, andersson@kernel.org, konrad.dybcio@linaro.org, mchehab@kernel.org, hans.verkuil@cisco.com, tfiga@chromium.org
Cc: linux-media@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, stable@vger.kernel.org, Vikash Garodia <quic_vgarodia@quicinc.com>
Subject: [PATCH v2 4/4] venus: hfi_parser: Add check to keep the number of codecs within range
Date: Thu, 10 Aug 2023 07:55:04 +0530
Message-ID: <1691634304-2158-5-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1691634304-2158-1-git-send-email-quic_vgarodia@quicinc.com>
References: <1691634304-2158-1-git-send-email-quic_vgarodia@quicinc.com>
List-ID: <linux-media.vger.kernel.org> |
Series |
Venus driver fixes to avoid possible OOB accesses
Commit Message
Vikash Garodia
Aug. 10, 2023, 2:25 a.m. UTC
The supported codec bitmask is populated from the payload sent by the venus
firmware. There is a possible case when all the bits in the codec bitmask are
set. In such a case, the core caps for the decoder are filled and MAX_CODEC_NUM
is fully utilized. While subsequently filling the caps for the encoder, the
caps array can then be accessed beyond index 32, leading to an OOB write.
The fix counts the supported encoders and decoders. If the count exceeds the
maximum, accessing the caps array is skipped.
Cc: stable@vger.kernel.org
Fixes: 1a73374a04e5 ("media: venus: hfi_parser: add common capability parser")
Signed-off-by: Vikash Garodia <quic_vgarodia@quicinc.com>
---
drivers/media/platform/qcom/venus/hfi_parser.c | 3 +++
1 file changed, 3 insertions(+)
Comments
On 10/08/2023 03:25, Vikash Garodia wrote:
> +	if (hweight_long(core->dec_codecs) + hweight_long(core->enc_codecs) > MAX_CODEC_NUM)
> +		return;
> +

Shouldn't this be >= ?

struct hfi_plat_caps caps[MAX_CODEC_NUM];

---
bod
On 8/10/2023 5:03 PM, Bryan O'Donoghue wrote:
> On 10/08/2023 03:25, Vikash Garodia wrote:
>> +	if (hweight_long(core->dec_codecs) + hweight_long(core->enc_codecs) > MAX_CODEC_NUM)
>> +		return;
>> +
>
> Shouldn't this be >= ?

Not needed. Let's take a hypothetical case when core->dec_codecs has the
initial 16 bits (0-15) set and core->enc_codecs has the next 16 bits (16-31)
set. The bit count would be 32. The codec loop after this check would run on
caps array indices 0-31. I do not see a possibility of OOB access in this case.

>
> struct hfi_plat_caps caps[MAX_CODEC_NUM];
>
> ---
> bod
>
On 11/08/2023 07:04, Vikash Garodia wrote:
> On 8/10/2023 5:03 PM, Bryan O'Donoghue wrote:
>> [...]
>> Shouldn't this be >= ?
> Not needed. Lets take a hypothetical case when core->dec_codecs has initial 16
> (0-15) bits set and core->enc_codecs has next 16 bits (16-31) set. The bit count
> would be 32. The codec loop after this check would run on caps array index 0-31.
> I do not see a possibility for OOB access in this case.

Are you not doing a general defensive coding pass in this series ie

"[PATCH v2 2/4] venus: hfi: fix the check to handle session buffer requirement"

---
bod
On 8/11/2023 2:12 PM, Bryan O'Donoghue wrote:
> On 11/08/2023 07:04, Vikash Garodia wrote:
>> [...]
>
> Are you not doing a general defensive coding pass in this series ie
>
> "[PATCH v2 2/4] venus: hfi: fix the check to handle session buffer requirement"

In "PATCH v2 2/4", there is a possibility if the check does not consider "=".
Here in this patch, I do not see a possibility.

> ---
> bod
On 11/08/2023 09:49, Vikash Garodia wrote:
> On 8/11/2023 2:12 PM, Bryan O'Donoghue wrote:
>> [...]
>
> In "PATCH v2 2/4", there is a possibility if the check does not consider "=".
> Here in this patch, I do not see a possibility.

But surely hweight_long(core->dec_codecs) + hweight_long(core->enc_codecs) ==
MAX_CODEC_NUM is an invalid offset ?

---
bod
On 8/11/2023 4:11 PM, Bryan O'Donoghue wrote:
> On 11/08/2023 09:49, Vikash Garodia wrote:
>> [...]
>
> But surely hweight_long(core->dec_codecs) + hweight_long(core->enc_codecs) ==
> MAX_CODEC_NUM is an invalid offset ?

No, it isn't. Please run through the loop with the bitmasks adding up to 32 and
see if there is a possibility of OOB.

> ---
> bod
On 11/08/2023 17:02, Vikash Garodia wrote:
> On 8/11/2023 4:11 PM, Bryan O'Donoghue wrote:
>> [...]
>
> No, it isn't. Please run through the loop with the bitmasks added upto 32 and
> see if there is a possibility of OOB.

IDK Vikash, the logic here seems suspect.

We have two loops that check for up to 32 indexes per loop. Why not have a
capabilities index that can accommodate all 64 bits ?

Why is it valid to have 16 encoder bits and 16 decoder bits but invalid to have
16 encoder bits with 17 decoder bits ? While at the same time valid to have 0
encoder bits but 17 decoder bits ?

---
bod
On 8/12/2023 12:21 AM, Bryan O'Donoghue wrote:
> On 11/08/2023 17:02, Vikash Garodia wrote:
>> [...]
>
> IDK Vikash, the logic here seems suspect.
>
> We have two loops that check for up to 32 indexes per loop. Why not have a
> capabilities index that can accommodate all 64 bits ?

Max codecs supported can be 32, which is already a very high number. At most
the hardware supports 5-6 codecs, including both decoder and encoder. 64
indices would not be needed.

> Why is it valid to have 16 encoder bits and 16 decoder bits but invalid to have
> 16 encoder bits with 17 decoder bits ? While at the same time valid to have 0
> encoder bits but 17 decoder bits ?

The encoder and decoder counts should add up to at most 32. Any combination
that adds up to it would go through. For example, (17 dec + 15 enc) OR (32 dec
+ 0 enc) OR (0 dec + 32 enc) etc. are theoretically valid combinations, though
only a few decoders and encoders are actually supported by the hardware.

Regards,
Vikash
On 14/08/2023 07:34, Vikash Garodia wrote:
>> We have two loops that check for up to 32 indexes per loop. Why not have a
>> capabilities index that can accommodate all 64 bits ?
> Max codecs supported can be 32, which is also a very high number. At max the
> hardware supports 5-6 codecs, including both decoder and encoder. 64 indices is
> would not be needed.

But the bug you are fixing here is an overflow where we have received a full
range of 32 bits for each of decode and encode.

How is the right fix not to extend the storage to the maximum possible 2 x 32 ?
Or indeed why not constrain the input data to 32/2 for each encode/decode path ?

The bug here is that we can copy two arrays of size X into one array of size X.

Please consider expanding the size of the storage array to accommodate the full
size the protocol supports: 2 x 32.

---
bod
Hi Bryan,

On 8/14/2023 7:45 PM, Bryan O'Donoghue wrote:
> On 14/08/2023 07:34, Vikash Garodia wrote:
>> [...]
>
> But the bug you are fixing here is an overflow where we have received a full
> range 32 bit for each decode and encode.
>
> How is the right fix not to extend the storage to the maximum possible 2 x 32 ?
> Or indeed why not constrain the input data to 32/2 for each encode/decode path ?

At this point, we agree that there is little or no possibility of this being a
real use case, i.e. having 64 (or more than 32) codecs supported in video
hardware. There seems to be no value add in extending the cap array from 32 to
64, as anything beyond 32 itself indicates rogue firmware. The idea here is to
bail out gracefully when the firmware responds with such a data payload.

Again, let's think about constraining the data to 32/2. We have two 32-bit
masks for decoder and encoder. Malfunctioning firmware could still send a
payload with all bits enabled in those masks. Then the driver would need to add
the same check to avoid the memcpy in such a case.

> The bug here is that we can copy two arrays of size X into one array of size X.
>
> Please consider expanding the size of the storage array to accommodate the full
> size the protocol supports 2 x 32.

I see this as an alternate implementation of the existing handling. 64 indices
would never exist practically, so accommodating them only means storing the
data of an invalid response and gracefully closing the session.

Thanks,
Vikash
On 29/08/2023 09:00, Vikash Garodia wrote:
> Hi Bryan,
> [...]
> I see this as an alternate implementation to existing handling. 64 index would
> never exist practically, so accommodating it only implies to store the data for
> invalid response and gracefully close the session.

What's the contractual definition of "this many bits per encoder and decoder"
between firmware and APSS in that case ?

Where do we get the idea that 32/2 per encoder/decoder is valid but 32 per
encoder/decoder is invalid ?

At this moment in time 16 encoder/decoder bits would be equally invalid.

I suggest the right answer is to buffer the protocol data unit - PDU - maximum
as an RX, or constrain the maximum number of encoder/decoder bits based on the
HFI version, ie.

- Either constrain on the PDU, or
- Constrain on the known number of maximum bits per f/w version

---
bod
On 8/29/2023 5:29 PM, Bryan O'Donoghue wrote:
> On 29/08/2023 09:00, Vikash Garodia wrote:
>> [...]
>
> What's the contractual definition of "this many bits per encoder and decoder"
> between firmware and APSS in that case ?
>
> Where do we get the idea that 32/2 per encoder/decoder is valid but 32 per
> encoder decoder is invalid ?
>
> At this moment in time 16 encoder/decoder bits would be equally invalid.
>
> I suggest the right answer is to buffer the protocol data unit - PDU maximum as
> an RX or constrain the maximum number of encoder/decoder bits based on HFI version.
>
> ie.
>
> - Either constrain on the PDU or
> - Constrain on the known number of maximum bits per f/w version

Let me simply ask this - what benefit will we get from the above approaches
over the existing handling ?

Thanks,
Vikash

> ---
> bod
diff --git a/drivers/media/platform/qcom/venus/hfi_parser.c b/drivers/media/platform/qcom/venus/hfi_parser.c
index 9d6ba22..c438395 100644
--- a/drivers/media/platform/qcom/venus/hfi_parser.c
+++ b/drivers/media/platform/qcom/venus/hfi_parser.c
@@ -19,6 +19,9 @@ static void init_codecs(struct venus_core *core)
 	struct hfi_plat_caps *caps = core->caps, *cap;
 	unsigned long bit;
 
+	if (hweight_long(core->dec_codecs) + hweight_long(core->enc_codecs) > MAX_CODEC_NUM)
+		return;
+
 	for_each_set_bit(bit, &core->dec_codecs, MAX_CODEC_NUM) {
 		cap = &caps[core->codecs_count++];
 		cap->codec = BIT(bit);