The dangers consumers face with Artificial Intelligence (AI) and what to do about it


World Consumer Rights Day is a United Nations-accredited campaign that has been celebrated annually on March 15 since 1983. This year’s theme is “Fair and Responsible AI for Consumers”.

This is an apt theme, as Artificial Intelligence (AI) has become extremely pervasive in our lives and society. The truth is that AI has been around for a long time, but only recently, with its growing use by everyday consumers, have we become aware of how integrated AI is in our lives and how dependent on it we have become.

It is only practical to admit that AI is here to stay. Since we cannot distance ourselves from it, it becomes vital to identify the dangers and problems that the public faces because of AI so that they can be managed or rectified.

Last year the term “scamdemic” was coined as our country experienced a major increase in scams, including locals being scammed and scammers operating in Malaysia to scam people overseas. The latest weapon in these scammers’ arsenal is AI-generated deepfakes. These deepfakes include the faked voice of someone the victim knows, either asking to borrow money for an emergency or claiming to have been kidnapped so that the victim transfers ransom money into the kidnappers’ (scammers’) bank account.

Another issue is the “hallucination” that some Large Language Models (LLMs) experience when answering a question asked by a real person. For context, ChatGPT, one of the most famous applications that put AI at the forefront of everyone’s minds, is built on an LLM, a specific category of generative AI. When someone asks an LLM a question that it cannot answer, it may invent an answer with bogus “facts” and can even give fake citations. This is dangerous not only for students who may be using applications similar to ChatGPT in lieu of a typical search engine for research, but also for the layperson looking up something medical, legal, and so on.

There is also a lack of transparency about what information an AI system collects, how it works and how it makes decisions. As with any other product or service, consumers have a right to know how a particular AI model works and how it influences them and the people around them.

Misinformation is another big issue. For instance, misinformation can be used to spread fear or even to sway voters in a certain direction. AI algorithms target certain individuals based on their search history, regardless of whether the information being pushed is factually sound. This is dangerous because the spread of misinformation can determine the outcome of things like the election of leaders.

Targeted advertising is a problem that has been around for quite a while. However, it is only now that consumers are more aware of AI that they realise targeted advertising is a serious violation of their privacy: vast quantities of their personal data are being collected and sold to third-party companies so that advertisements and products can be targeted at them.

Considering all the ways in which AI can harm consumers, we need solutions to counteract these dangers.

AI problems are not unique to Malaysia but are experienced worldwide. The EU has come up with an AI Act, legislation that will regulate AI models and applications based on the potential risk they pose to the public. Stricter rules will apply to riskier applications, and AI systems that pose an “unacceptable risk” will be banned altogether.

In January this year, the Malaysian government announced that it was in the process of establishing an AI governance framework and code of ethics, expected to be completed within the year. This code of ethics will then be used to create Malaysian AI regulations.

The Consumers’ Association of Penang (CAP) calls on the government to enact AI laws similar to the EU’s AI Act. For consumers to feel safe having AI in their lives, AI development companies need to be transparent about and accountable for their AI systems. Strict regulations are needed to ensure that human rights are protected.


Mohideen Abdul Kader
President
Consumers’ Association of Penang (CAP)

Press Statement, 15 March 2024