
Study Shows That Generative AI Apps Fall Short On Privacy Standards





A study of 113 apps harnessing Generative AI shows that most of them fall short of data privacy standards, raising concerns about public acceptance of this fast-growing technology.


The research, by the Data Protection Excellence Centre, the research arm of Singapore-based risk and compliance firm Straits Interactive, unveiled “significant privacy concerns in Generative AI desktop applications, particularly among startups and individual developers.” (Generative AI is a broad label used to describe any type of AI that can create new text, images, video, audio, code or synthetic data.)


“This study highlights the pressing need for clarity and
regulatory compliance in the Generative AI app sphere,” Kevin
Shepherdson, CEO of Straits Interactive, said. “As organisations
and users increasingly embrace AI, their corporate and personal
data could be jeopardised by apps, many originating from startups
or developers unfamiliar with privacy mandates.”


The findings of the study come at a time when sectors including private banking and wealth management are being shaken up by AI technology in all its forms.


Conducted from May to July this year, the study focused on apps
primarily from North America (48 per cent) and the European Union
(20 per cent). Selection criteria included recommendations,
reviews, and advertisements. 


The apps were categorised as: core apps – industry leaders in the Generative AI sector; clone apps – typically built by startups or individual developers/developer teams; and combination apps – existing applications that have incorporated Generative AI functionalities.


Some 12 per cent of the apps, predominantly startups and
individual developers, lacked a published privacy policy. Of
those with published privacy policies, 69 per cent identified a
legal basis (such as consent and contract performance) for
processing personally identifiable information. Only half of the
apps meant for children considered age restrictions and aligned with child privacy standards such as the Children’s Online Privacy Protection Act (COPPA) in the US or the General Data Protection Regulation (GDPR) in the European Union.


Though 63 per cent cited the GDPR, only 32 per cent were apparently within the GDPR’s purview. Most of the apps, which are globally accessible, alluded to the GDPR without appearing to understand when it applies outside the EU. Of the cases where the GDPR did seem relevant, 48 per cent were compliant, with some overlooking its international data transfer requirements.


On data retention, a key concern because users often share proprietary or personal data with these apps, 35 per cent did not specify retention durations in their privacy policies, as the GDPR and other laws require.


Transparency about the use of AI in these apps was limited, the report said. Fewer than 10 per cent clearly disclosed their AI use or model sources. Of the 113 apps, 64 per cent remained ambiguous about their AI models, and only one clarified whether AI influences decisions about user data.


Apart from high-profile players such as OpenAI, Stability AI, and Hugging Face, which disclose their AI models, the remainder primarily relied on established AI APIs, such as those from OpenAI, or integrated multiple models, it said.


The study also shows a tendency among apps to collect more personally identifiable information from users than their primary function requires.


Lyn Boxall, a legal privacy specialist at Lyn Boxall LLC and a
member of the research team, added: “It’s significant that 63 per
cent of the apps reference the GDPR without understanding its
extraterritorial implications. Many developers seem to lean on
automated privacy notice generators rather than actually
understanding their app’s regulatory alignment.”


“With the EU AI Act on the horizon, the urgency for developers to
prioritise AI transparency and conform to both current and
emerging data protection norms cannot be overstated.”


