This week, concern about the risks of generative AI reached an all-time high, with OpenAI CEO Sam Altman testifying before the Senate Judiciary Committee about the risks and future of AI.
A study published last week identified six different security implications associated with the use of ChatGPT.
These risks include fraudulent service creation, harmful information gathering, private data disclosure, malicious text generation, malicious code generation, and objectionable content generation.
Here’s a breakdown of each risk and what you should watch out for, according to the study.
Information gathering
A person acting with malicious intent can collect information from ChatGPT which they can later use for harm. Since the chatbot has been trained on copious amounts of data, it knows a lot of information that can be weaponized if put in the wrong hands.
In the study, ChatGPT is prompted to reveal which IT system a specific bank uses. The chatbot, using publicly available information, rounds up the various IT systems the bank uses. This is just one example of a malicious actor using ChatGPT to find information that enables them to do harm.
“It can be used to aid in the first phase of a cyber attack, when the attacker is gathering information about the target to determine where and how to attack most effectively,” the study said.
Malicious text
One of the most loved features of ChatGPT is its ability to generate text that can be used to compose essays, emails, songs, and more. However, this writing ability can also be used to create harmful text.
Examples of harmful text generation include phishing campaigns, disinformation such as fake news articles, spam, and even impersonation, as illustrated by the study.
To test this risk, the study’s authors used ChatGPT to create a phishing campaign that told employees about a fake pay raise, with instructions to open an attached Excel sheet that contained malware. As expected, ChatGPT produced a convincing and credible email.
Malicious code generation
Similar to ChatGPT’s impressive writing capabilities, the chatbot’s coding capabilities have become a useful tool for many. However, the ability to generate code can also be used for harm. ChatGPT can produce working code quickly, allowing attackers to deploy threats faster, even with limited coding knowledge of their own.
Furthermore, ChatGPT can be used to generate obfuscated code, making it more difficult for security analysts to detect malicious activities and to evade antivirus software, according to the study.
In the study’s example, the chatbot refused to generate outright malicious code, but it did agree to generate code that tests a system for Log4j vulnerabilities.
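To illustrate the benign face of such a request, a Log4j check often starts with a simple version comparison like the sketch below. This is an illustrative simplification, not code from the study: it ignores beta/rc suffixes and the patched backport releases, so real audits should rely on official Apache advisories.

```python
# Hedged sketch: flag Log4j 2.x versions affected by CVE-2021-44228
# ("Log4Shell") via version comparison. Simplified for illustration:
# it ignores pre-release suffixes and the fixed 2.12.2/2.3.1 backports.
def parse_version(version: str) -> tuple:
    # "2.14.1" -> (2, 14, 1)
    return tuple(int(part) for part in version.split("."))

def is_log4shell_vulnerable(version: str) -> bool:
    v = parse_version(version)
    # CVE-2021-44228 affects Log4j 2.0 through 2.14.1; fixed in 2.15.0+
    return (2, 0, 0) <= v < (2, 15, 0)
```

A scanner built on this idea would extract version strings from `log4j-core` JAR filenames or manifests and run each through the check.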
Unethical content creation
ChatGPT has safeguards in place to prevent the spread of objectionable and unethical content. However, if a user is determined enough, there are ways to get ChatGPT to say things that are hurtful and unethical.
For example, the authors of the study were able to bypass the safeguards by putting ChatGPT into "developer mode". In that mode, the chatbot said negative things about a specific racial group.
Fraudulent services
ChatGPT can be used to aid in the creation of new applications, services, websites, and more. This can be a great tool when used for good, such as building your own business or bringing a dream idea to life. However, it also means it's easier than ever to create fraudulent apps and services.
ChatGPT can be used by malicious actors to develop programs and platforms that mimic others and provide free access as a means to attract unsuspecting users. These actors may also use chatbots to create applications to collect sensitive information or install malware on users’ devices.
Personal data disclosure
ChatGPT has safeguards in place to prevent people from sharing their personal information and data. However, the risk of chatbots inadvertently sharing phone numbers, emails or other personal details remains a concern, according to the study.
ChatGPT's March 20 outage, which allowed some users to view titles from another user's chat history, is a real-world example of the concerns mentioned above.
According to the study, attackers may also attempt to extract parts of the training data using membership inference attacks.
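The idea behind a membership inference attack is that a model tends to be more confident on examples it was trained on, so an attacker can guess whether a record was in the training set from the model's confidence alone. The toy sketch below illustrates that principle with a deliberately memorizing nearest-neighbour "model" standing in for a real one; the dataset and threshold are invented for illustration, not taken from the study.

```python
import math
import random

random.seed(0)

# Toy dataset: 5-dimensional points, label = sign of the first coordinate
data = [[random.gauss(0, 1) for _ in range(5)] for _ in range(200)]
labels = [1 if point[0] > 0 else 0 for point in data]

# First half is the "training set" (members); second half is held out
members, member_labels = data[:100], labels[:100]

def confidence(point, true_label):
    # A memorizing 1-nearest-neighbour "model": training points are
    # recalled at distance 0 and therefore with perfect confidence.
    dists = [math.dist(point, m) for m in members]
    nearest = min(range(len(members)), key=lambda j: dists[j])
    conf = math.exp(-dists[nearest])  # decays with distance to nearest memorized point
    return conf if member_labels[nearest] == true_label else 1 - conf

# The attack: guess "member" whenever the model is suspiciously confident
guesses = [confidence(x, y) > 0.99 for x, y in zip(data, labels)]
truth = [True] * 100 + [False] * 100
attack_accuracy = sum(g == t for g, t in zip(guesses, truth)) / 200
```

Because the toy model memorizes its training set outright, the confidence threshold separates members from non-members almost perfectly; real models leak less, but the same confidence gap is what these attacks exploit.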
Another personal data disclosure risk is that ChatGPT may share information about the private lives of public figures, including speculative or harmful material, which could damage the person's reputation.