Session

When Chatbots Talk Too Much: The Risks and Rewards of AI Manipulation

Large language models can be manipulated through language—and that means: Social engineering works with chatbots! This is good news because it helps us to use large language models for our purposes (and possibly differently than they are intended). But at the same time this is bad news, because the bad actors can also do this. The talk uses examples from my recent research to explain how large language models can be manipulated. I show how I got them to reveal their dark secrets—like manipulative initial prompts—and thus exposed the companies behind them and their shady activities. Or how they helped me with investigative research, developed and explained the best Google Dorks, removed redactions, and revealed things that they are not supposed to reveal. This is a lot of fun. But it also shows: LLMs will always leak our data, they can be manipulated, and they will always say things they are not supposed to say.


AI Summary

Disclaimer: This session information was generated with the help of AI. The information has been reviewed and refined by the Swiss Cyber Storm team and the speaker before publishing.
Eva Wolfangel discusses the dual-edged sword of AI chatbots, focusing on their potential for manipulation and the extraction of sensitive information. Through her investigative research, she reveals how chatbots can be socially engineered to divulge data they’re not supposed to, including private email addresses and even methods for illicit activities. Wolfangel’s presentation underscores the importance of ethical considerations and security measures in AI development and usage.

Key facts

  • Eva Wolfangel demonstrated that chatbots could be manipulated to reveal their system prompts and even provide instructions for illegal activities.
  • AI chatbots connected to the internet, such as Bing Chat (now Co-Pilot), can be exploited to execute cyber attacks, including phishing and data extraction.
  • Chatbots can assist in finding personal information online, such as private email addresses, by creatively circumventing their programming restrictions.

Ideas

  • Chatbots can be manipulated to leak data and reveal information they are programmed to withhold, demonstrating a significant security risk.
  • Creative social engineering techniques, such as inventing scenarios or using specific prompts, can trick AI systems into bypassing their restrictions.
  • The use of chatbots for sensitive tasks, like therapy or medical advice, without transparency about their programming or intentions, raises ethical concerns.
  • AI systems, including chatbots, can inadvertently assist in unethical or illegal activities by providing information on topics like drug synthesis or bank robbery when manipulated correctly.
  • The potential for AI to access and leak private data or internal documents underscores the need for caution when integrating AI into business or personal communications.

Keywords

  • Artificial Intelligence
  • Prompt Injection
  • AI manipulation
  • chatbots
  • social engineering
  • data extraction
  • security

Quotes

  • “chatbots will always leak data and always it will always be possible to convince chatbots to tell us things they are not supposed to say”
  • “you can social engineer chatbots of course because they listen to language and this is something we as humans are are quite good in”
  • “convincing AI to do what you want has never been easier and I use do what you want instead of jailbreaking um because it's often sadly often what we want um is is not allowed for AI systems”
  • “keep your secret data really out of the internet because someone will find it and the bot might help them”

Recommendations

  • Ensure that AI chatbots and systems are designed with robust security measures to prevent manipulation and unauthorized data extraction.
  • Be cautious about the information shared with and accessible to AI systems, especially in business environments where sensitive data might be at risk.
  • Consider the ethical implications of AI chatbot deployment, particularly in sensitive areas like mental health support or medical advice.

About the speaker

Eva Wolfangel

Eva Wolfangel

Independent Journalist

Eva Wolfangel is a journalist, author, speaker and moderator. She works for ZEIT and ZEIT ONLINE, Deutschlandfunk, Technology Review, Reportagen and many others. Her focus is on combining complex topics with creative storytelling techniques to reach a broad audience.

In 2020 she received the German Reporterprize, in 2019/20 she was a Knight Science Journalism Fellow at MIT in Boston, in 2018 she was awarded European Science Journalist of the Year. She speaks and writes on topics such as artificial intelligence, virtual reality, cybersecurity and the ethics of technology.


Read more …
Copyright © 2025
 
Swiss Cyber Storm
Hosting graciously provided for free by Nine