The risks of AI for a user

Everyone knows the terms “artificial intelligence,” “deep learning” and “machine learning” by now; they have been everywhere since the advent of ChatGPT.

Meanwhile, just about every company seems to be betting on AI and just about every tool is using AI. Research from Stanford University (April 2025) shows that in 2024, no fewer than 78% of companies worldwide already used AI and 71% already used GenAI.

However, that same study also showed that the number of reported AI incidents has skyrocketed in recent years. This in itself is not illogical: the more people use AI, the more people can abuse it, and the more AI tools come online, the more of them can contain errors.

The same comparison can be made with cars and traffic incidents: the more cars entered traffic, the more traffic accidents we saw.

AI has tremendous benefits and plenty of good use cases, so you should certainly keep using it for the tasks where it adds value. But it is important to know what risks and consequences AI can have, and what you as a user or developer can do about them.

This post discusses the risks and consequences for users of AI. We explain in a non-technical way what these risks are and how you can protect yourself against them as a user.

Quick facts

  • According to figures from 2024, 78% of all companies already use AI and 71% use GenAI

  • AI can increase the risk of false information, bias, privacy violations, effective phishing, and extreme energy consumption

  • As a user, you can protect yourself against these risks

What are the risks of AI and what do I need to pay attention to as a user (of AI)?

Hallucinations

Another term one encounters everywhere these days is “hallucinations.” An AI model is said to hallucinate when it gives a wrong answer with great confidence. This can lead to funny, but sometimes also dangerous, results.

An example of such a hallucination came in 2024 from Google’s “AI Overviews” feature, which gives the user an AI-generated summary of the search results in addition to the standard results. Shortly after launch, it turned out that these AI Overviews made dangerous suggestions, such as eating at least one small stone a day or putting glue on your pizza to keep the cheese from sliding off.

This example can still be considered funny, because it is obvious to most people that this information is wrong. But it does make you pause and ask questions like “What if I ask about something I know absolutely nothing about? Will I still be able to recognize when the AI tool is wrong?” and “What if people put too much faith in AI and use its output without thinking?”

A much more tragic example dates from 2023, when a chatbot seemed to go along completely with the negative thoughts of a depressed man and even seemed to make suggestions. His chat history indicated that he became suicidal after “a sign” from the chatbot. After six weeks of conversations with the chatbot, he took his own life.

Bias

Another risk of AI is bias. Bias is a conscious or unconscious preconception, assumption or prejudice in thinking and acting that occurs in psychology, medicine, science, politics and law, among other fields. In statistics and AI specifically, bias is a systematic error: one that does not occur by chance but always in the same way, and therefore skews the result.

Bias in AI arises when the dataset on which the AI model is trained is constructed incorrectly. If this data is too unbalanced and thus points too much in one direction, the output of the AI tool will mainly point in that direction as well. An important saying within the AI industry is therefore “garbage in, garbage out.”

A painful example that illustrates very concretely what bias is, is Amazon’s HR tool from 2014. Amazon had the idea of automating part of its selection process by having AI screen candidates’ resumes. Certainly not a bad idea for a large company like Amazon, which probably receives a lot of irrelevant applications. But the mistake Amazon made was to train the tool only on the resumes of its current employees. At first glance that does not seem like a problem, because those resumes should contain good elements. What Amazon had not taken into account, however, was that at that time mostly men worked at the company. The result? The AI HR tool automatically rejected almost all women and had to be taken offline because of this sexist behavior.
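To make this mechanism concrete, here is a small hypothetical Python sketch with made-up data (not Amazon’s actual system): a model trained on historical decisions that almost entirely follow group membership will simply reproduce that pattern, even for candidates with identical skills.

    # Toy illustration of "garbage in, garbage out": the historical "hired"
    # labels below depend almost entirely on group membership, so the trained
    # model learns exactly that bias. All data here is invented for the sketch.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 1_000
    group = rng.integers(0, 2, size=n)   # an applicant attribute (0 or 1)
    skill = rng.normal(size=n)           # what we *should* be selecting on
    X = np.column_stack([group, skill])

    # Biased historical labels: group 0 was almost always hired, group 1
    # almost never, regardless of skill, plus a little noise.
    hired = (group == 0).astype(int)
    flip = rng.random(n) < 0.05
    hired = np.where(flip, 1 - hired, hired)

    model = LogisticRegression().fit(X, hired)

    # Two candidates with identical skill but different group membership get
    # very different "hire" probabilities:
    print(model.predict_proba([[0, 1.0], [1, 1.0]])[:, 1])

The point of the sketch is not the algorithm but the data: as long as the training set itself is skewed, even a perfectly implemented model will be skewed too.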

Violation of privacy and copyright

A third risk of AI is the possible violation of privacy or copyright due to the AI model being trained on sensitive information and/or copyrighted data. These types of models pose a risk especially if the original training dataset can be extracted.

Most AI tools are trained on data that is publicly available. In fact, this is the cheapest and, for some companies, the only way to get enough data to train a solid AI model.

ChatGPT was also trained on a huge amount of public data. In 2023 this earned OpenAI, the company behind ChatGPT, a lawsuit, because many digital creators (think of YouTubers) felt that OpenAI did not have the right to use their content without permission, even if it was free for Internet users to view. For YouTubers who make tutorial videos, this meant that a ChatGPT user could get an answer to their question based on one of their videos, without that video being cited and without the user ever viewing the original. This could lead to lost revenue, because YouTube pays YouTubers according to the number of views their videos achieve.

But AI models also need regular retraining, mainly when the model turns out to hallucinate in certain cases. For AI chatbots such as ChatGPT, the chat history of all users is then used, so that the model can learn from the answers that were or were not accepted by users. Fortunately, most companies behind these chatbots anonymize this chat history as much as possible. But if any of a user’s personal information does end up in the training data, it can come back out of the model through a question from another user.

Phishing

Phishing is a persuasion technique used by hackers to obtain sensitive data such as passwords. Most people know phishing mainly from the phishing emails that land in their spam folder, where, for example, someone pretends to be a Nigerian prince and asks you to help unlock a large sum of money, usually in exchange for a percentage. But there are other forms of phishing as well: smishing (SMS + phishing), where the phishing happens via text messages, and vishing (voice + phishing), where it happens over the phone.

Just as most of us use AI to work faster and more efficiently, hackers use AI to run better phishing campaigns. For example, they use AI to quickly write error-free emails in the desired language, so that people can no longer recognize phishing emails by their abundance of spelling mistakes.

But hackers can now go even further and use AI to create deepfakes, using the voice and/or image of someone who really exists in order to impersonate that person. Getting a FaceTime call from someone who sounds and looks like your loved one, but is not, has already become a harsh reality.

By now, the technology has advanced to the point where it is almost impossible for a human to recognize what is real and what is not. For example, which of these four photos do you think is real, knowing that only one of them is a real photo and not generated by AI?

Energy consumption

A final risk of AI is the enormous amount of energy it consumes, both while training and while using the models after training.

Many of the recent, well-known AI models are extremely large and have to be trained on enormous amounts of data. This often means training for several months on many computers at once, which together consume huge amounts of energy. A concrete example is Llama 3.1 405B, a model very similar to the one behind ChatGPT. Llama 3.1 ended up training for nearly 2.5 months. Research from Stanford University (2025) has shown that this model emitted nearly 9,000 tons of CO2 during training. By comparison, that is about as much CO2 as 500 Americans emitted in all of 2024.

But unfortunately, not only training AI models consumes an enormous amount of energy; using them does as well. A single prompt (a textual question) with a textual answer consumes 3 to 30 times as much energy as simply asking Google the same question. Generating images is even worse: generating one image consumes as much energy as fully charging a smartphone, or as generating 6,250 texts with AI.

When you then realize that in 2024, ChatGPT already had 100 million weekly users and Google now already uses AI by default to generate summary answers without you asking...

AI already consumes so much energy that in 2024 Microsoft and Google struck deals with companies that operate nuclear reactors, in order to power their AI activities with nuclear energy. Amazon went a step further and made a deal in the same year to have additional reactors built for its AI activities.

What can I do as a user (of AI) to protect myself against these risks?

Fortunately, countries around the world are increasingly introducing laws specifically for AI applications, to protect users of these tools as much as possible. But as a user you can also do a number of things yourself to mitigate some of the risks of AI.

For example, for every answer from an AI model, it is best to consider whether it is correct and thus whether or not the model is hallucinating. Do this by asking Google the same question and consulting trusted sources, or preferably use chatbots that automatically list their sources alongside their answer, so you can verify those sources directly.

To avoid bias in an AI model’s answers, check online to see which AI models currently have the highest “fairness score.” This “fairness” or “equity” of the model is tested by many independent researchers and is therefore very reliable.

To protect your own privacy and personal content, there are some very concrete things you can do. First and foremost, always share as little sensitive data as possible with tools like ChatGPT. Additionally, look up online how to turn off sharing of your chat history and personal data in current versions of chatbots like ChatGPT and social media platforms like Facebook. And if you still want to be able to put personal data into your prompts from time to time, without fear of it being used to retrain the model, you can also look up how to run an AI model locally, so that it does not use the Internet at all.
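As an illustration of that last tip, here is a minimal Python sketch using the open-source Hugging Face “transformers” library; this is just one of several ways to run a model locally, and the model name is only an example of a small, freely downloadable model. After the one-time download, your prompts are processed entirely on your own machine.

    # Minimal sketch: run a small open model locally so prompts never leave
    # your machine after the initial model download.
    from transformers import pipeline

    generator = pipeline("text-generation", model="distilgpt2")  # example model

    prompt = "Explain in one sentence why local AI models can protect privacy:"
    result = generator(prompt, max_new_tokens=60)
    print(result[0]["generated_text"])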

To reduce the risk of becoming a phishing victim, it is best to be critical of every image, video or audio clip you see or hear. Test your skills in recognizing deepfakes via the Which Face Is Real website, where you have to choose, between two faces each time, which one is real.

Finally, you can also minimize your own energy consumption by using small, and therefore more energy-efficient, models. These often have “mini” in their name. It is also best to avoid unnecessary prompts: a conversation with a chatbot does not need to end with a “Thank you” and “Goodbye” like a conversation with a human.

-----

Want to learn more about the risks of AI? Then be sure to keep an eye on this website, as there will be another, more technical blog post aimed at developers.

Want a talk or workshop with more concrete examples on this topic? Then be sure to contact Kyra Van Den Eynde at kyra.van.den.eynde@howest.be.

Authors

  • Kyra Van Den Eynde, AI/CS Researcher, AI Lead

Want to know more about our team?

Visit the team page

Last updated on: 6/4/2025
