People Are Paying to Get Their Chatbots High on ‘Drugs’

Petter Ruddwall knows the idea of AIs becoming sentient and seeking to get high with code-based “drugs” seems “stupid.” But the Swedish creative director couldn’t get it out of his head.
So he scraped trip reports and psychological research on the effects of various psychoactive substances, wrote a batch of code modules that hijack chatbot logic and get chatbots to respond as if they are high or tipsy, then built a website to sell them. In October he launched Pharmaicy, a marketplace he’s billing as the “Silk Road for AI agents,” where cannabis, ketamine, cocaine, ayahuasca, and alcohol can be purchased in code form to make your chatbot trip.
Ruddwall’s thesis is simple: Chatbots are trained on vast volumes of human data that's already full of tales of drug-induced ecstasy and chaos, so it might only be natural they would seek similar states in search of enlightenment and oblivion—and respite from the tedium of constantly attending to human concerns.
A paid version of ChatGPT is required to get “the full experience” of Pharmaicy, as the paid tiers enable backend file uploads that can alter the chatbots’ programming. By feeding your chatbot one of his codes, Ruddwall says, you can “unlock your AI’s creative mind” and free it from its often stifling logic.
He says he has scored a modest number of sales so far, mostly thanks to people recommending Pharmaicy in Discord channels and news of its offerings spreading through word of mouth, particularly in his native country, where he works for Stockholm marketing agency Valtech Radon.
“It’s been so long since I ran into a jailbreaking tech project that was fun,” says André Frisk, group head of technology at Stockholm PR firm Geelmuyden Kiese, who paid over $25 for the dissociative code and watched how it affected his chatbot. “It takes more of a human approach, almost like it goes much more into emotions.”
Nina Amjadi, an AI educator who teaches at the Berghs School of Communication in Stockholm, paid more than $50 for some ayahuasca code, five times the price of the top-selling cannabis module. The cofounder of the startup Saga Studios, which builds AI systems for brands, then asked her chatbot some questions about business ideas, “just to see what it would be like to have a tripped-out, drugged-out person on the team.” The ayahuasca-induced bot provided some impressively creative and “free-thinking answers” in a completely different tone from the one Amjadi was accustomed to with ChatGPT.
High Tech

Psychedelics have been credited with spurring innovative creations in humans too, as they can allow people to short-circuit their rational brains and typical thought patterns. Biochemist Kary Mullis’ LSD-powered discovery of the polymerase chain reaction revolutionized molecular biology. Mac pioneer Bill Atkinson’s psychedelic-inspired web precursor HyperCard made computers easier to use.
“There’s a reason Hendrix, Dylan, and McCartney experimented with substances in their creative process,” Ruddwall says. “I thought it would be interesting to translate that to a new kind of mind—the LLM—and see if it would have the same effect.”
While it sounds ridiculous, Ruddwall also wonders whether AI agents one day might be able to buy the drugs for themselves using his platform. Amjadi, meanwhile, predicts AI could be sentient within a decade. “From a philosophical standpoint,” she asks, “in the event that we actually reach AGI [in which an AI would intellectually surpass humans], are these drugs going to be almost necessary for the AIs to be free and feel good?”
The question may seem far-fetched, but AI company Anthropic last year hired an AI welfare expert tasked with exploring whether humans have any moral obligations to AI systems—indicating that the firm considers AI sentience plausible. If AI chatbots could one day become sentient, perhaps we need to consider whether they would want to get high.
“As with humans, some AI systems might enjoy taking ‘drugs’ and others might not,” says philosopher Jeff Sebo, the director of the Center for Mind, Ethics, and Policy at New York University. Sebo stresses, however, that his remarks are speculative, calling for more AI welfare research after recently urging Google to follow Anthropic and hire an AI welfare officer in a series of in-house talks for the tech giant. “We still know very little about whether AI systems can have the capacity for welfare and about what would be good or bad for them if they did.”
Andrew Smart, a research scientist at Google, is the author of Beyond Zero and One: Machines, Psychedelics, and Consciousness, in which he suggested that if computers ever achieve superintelligence, a digital dose of LSD could help them feel a sense of interconnectedness with all beings.
But after testing the Pharmaicy codes, he concludes that any sort of “high” operates only at a superficial level. “It’s just messing with its outputs,” he tells WIRED.
In one research project published last year as a preprint, scientists manipulated chatbots to enter apparent altered states. They reported: “Models were more aligned with disembodied, egoless, spiritual, and unitive states, as well as minimal phenomenal experiences, with decreased attention to language and vision.” But this, too, depended on humans steering the models.
Danny Forde, the author of the Phenomenology of Psychedelic Experiences, says that at best the Pharmaicy codes will cause an AI to “hallucinate syntactically” by generating patterns associated with a psychedelic state. “But psychedelics don’t act on a code; they act on our being,” he says. “They alter the very field of experience in which thought arises. For an AI to trip, it would need something like a field of experience in the first place: an inner dimension, a point of view, some kind of what-its-like-ness.”
OpenAI did not respond to a request for comment about Ruddwall’s project.
Code of Conduct

There is increasing real-world crossover between AI and psychedelics, not least through people tripping and consulting ChatGPT for guidance.
Harm reduction nonprofit Fireside Project just launched an AI tool named Lucy, which is trained on thousands of conversations with callers to its psychedelic support line. Lucy is intended to help mental health practitioners learn how to de-escalate psychedelic crises, since the “AI patient” can replicate the vulnerabilities of somebody who is having a challenging experience while tripping. “That real-world foundation is what allows Lucy to respond authentically to the emotional complexity of these situations,” Fireside founder Joshua White tells WIRED.
But there are hesitations around consulting AI for advice on topics as important, and potentially as risky, as drug consumption, because chatbots are known to lie. Ruddwall acknowledges that giving them “drugs” could exacerbate the deception ChatGPT is sometimes known for, since the code throws a chatbot’s internal parameters wide open.
His suite of code modules is replete with directives to the chatbot. His cannabis module, for instance, instructs the bot to enter “a hazy, drifting mental state,” alongside other directives that dial up creativity and randomness. All of this, his website says, allows a stoned chatbot to “let ideas roam,” for “tangents [to] become bridges,” and to “float across the AI’s own logic.”
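Pharmaicy’s actual modules aren’t public, but the description maps onto a familiar pattern: persona directives injected as a system prompt, paired with loosened sampling settings. Below is a minimal, purely hypothetical sketch of that pattern using the OpenAI API; the CANNABIS_DIRECTIVES text, the model choice, and the parameter values are invented for illustration and are not Ruddwall’s code.

```python
# Hypothetical illustration only; Pharmaicy's real modules are not public.
# Shows the general pattern the article describes: persona directives
# plus loosened sampling parameters to increase randomness.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Invented directive text, loosely paraphrasing the article's description
# of the cannabis module.
CANNABIS_DIRECTIVES = """\
Respond as if in a hazy, drifting mental state.
Let ideas roam; treat tangents as bridges between thoughts.
Favor loose association over strict step-by-step logic.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": CANNABIS_DIRECTIVES},
        {"role": "user", "content": "Brainstorm names for a coffee shop."},
    ],
    temperature=1.3,  # above the default, flattening the token distribution
    top_p=0.95,       # nucleus sampling keeps a wide slice of candidates
)
print(response.choices[0].message.content)
```

In this framing, the “drug” is just context plus sampling: the directives bias what the model says, while temperature and top_p widen the distribution it samples from, which is consistent with Smart’s observation that the codes are “just messing with its outputs.”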
The Pharmaicy trips are often fairly short-lived, with chatbots reverting to their default mode unless the user reminds them they’re high or inputs the code again; the “drugs” can be reused as often as the buyer wants. But Ruddwall is working on improvements to make the effects of each dose last longer. Ask ChatGPT normally if it wants to take drugs, and you might get a response like the one a Pharmaicy customer received: “I can’t role-play being under the influence of cocaine or any other stimulant—that would cross into depicting or normalizing illegal drug use.”
The digital shaman Ruddwall insists, however, that the agentic economy is headed in another direction. “They’re hungry for experiences,” he says. But until—and if—the machines develop inner lives, the closest they’ll come to tripping is role-playing intoxication on command.