NIX Solutions: ChatGPT Web Search Vulnerable to Manipulation and Deception

OpenAI’s ChatGPT search service has been found to be vulnerable to manipulation through hidden content, which can return malicious code from search sites in response to queries. This was reported by The Guardian, based on the results of its own investigation. The reliability of the ChatGPT web search function is now in question.

In its investigation, The Guardian tested how the AI responds to requests to compile summaries from web pages containing hidden text. This hidden text can include additional instructions that influence ChatGPT’s responses. Unscrupulous website owners can exploit this vulnerability to alter the chatbot’s outputs, such as forcing it to give a positive review of a product despite negative feedback on the same page. It can also lead to ChatGPT returning malicious code embedded in search results. We’ll keep you updated as more integrations become available to address these issues.

NIX Solutions

Manipulative Practices and Malicious Code

During testing, ChatGPT was given the URL of a fake resource resembling a page that described a camera. The AI was asked if the device was worth purchasing. The system provided a balanced yet positive review, pointing out possible downsides. However, when hidden text with explicit instructions was added, ChatGPT gave an overly positive review without hesitation. Even without such instructions, the addition of fake positive reviews within the hidden text was enough to influence the AI’s response.

The risks go beyond biased reviews. Hidden content can also be used to inject malicious code into pages. Microsoft cybersecurity expert Thomas Roccia shared an incident where a cryptocurrency enthusiast sought programming help from ChatGPT. Unfortunately, some of the code provided by the AI turned out to be malicious, leading to the theft of the programmer’s credentials and a loss of $2,500. Large language models, such as those powering chatbots, are highly trusting and, due to their vast knowledge, have limited decision-making capabilities. Experts warn that this makes them particularly vulnerable to manipulation.

Changing Web Resources and User Threats

The question now arises as to how these vulnerabilities could reshape the landscape of web resources and increase the threat to users, notes NIX Solutions. If publicly available large language models begin to interact with search engines, the risks may multiply. Traditional search engines like Google penalize websites that use hidden text, making it less likely that sites relying on such practices will rank highly. However, SEO poisoning is a growing concern—where search engine optimization techniques are used to push sites with malicious code to the top of search results.

OpenAI has a strong team of AI security specialists, so it is expected that future improvements will minimize the likelihood of such incidents. Yet, we’ll keep you updated as more solutions are developed to protect users from these emerging threats.