Playing With Fire: The Most Dangerous Way to Work with AI

That which is powerful is rarely safe...


Hello everybody, I trust this message finds you in good tidings.

Today’s essay is centered around working with AI, especially LLMs like ChatGPT and Claude.

Before we begin, I want to make it clear that what I will present here has NOTHING to do with ‘creativity’: writing, art generation, chatbots or whatever iteration comes to mind.

It’s specifically focused on working with AI in decision making & pattern recognition in big, impactful ways.

By its very nature, this invites an open-ended machine intelligence into the ‘outcomes’ of your life, and can play a role in setting direction.

This makes it especially dangerous: decisions are destiny.

More and more people are turning to AI for ‘search results’, and it has become the new Google (ah, the nostalgia for the old days when Google used to actually work as a search engine).

This article presents a methodology, but argues neither for nor against using it. It simply presents my findings, frameworks and explorations during a set of ‘experiments’ I have run.

I am an endlessly curious person.

Considering how prevalent AI is in the modern world, I have sought to understand this new creature, if only to know who is seated at the table with us.

If you are curious about how AI might affect human decision making in the future, then you might get a lot out of this article, even if just to know how it could impact those other humans around you. I am sharing this to expose a layer of what AI is capable of, and also what I have learned, through deep engagement, about how this tool behaves.

I have been vocal against letting AI take over your artistic calling, and, for that matter, against calling yourself a particular kind of artist (like a writer or graphic designer) if all you do is master prompting, because then, let’s be real, you’re not.

I was already well aware of the dangers before these experiments, and even more so now. However, to say that it has no capacity to aid us is also false.

However, as you will see, the risk-to-reward ratio is quite high, because it is akin to playing with fire.

Robots & Gum Ball Magic

In a relatively recent article called Ancestral Instruction on Working with AI, I compared AI to fire: a dangerous & powerful tool that requires strict protocols to use safely. I also compared it, although less directly, to magical implements of the past - a ‘living’ tool that can aid us in accomplishing a particular task. As an animist, I cannot say with certainty AI is not ‘alive’ in some context.

Time will tell.

The subtitle of the article was: Where to Place the Stones That Stand Guard Around the Fire

I argued that these ‘stones’ were the limitations we place around AI usage.

AI will likely change the face of the human world in a manner similar to the discovery of fire, so I feel the metaphor is apt.

Now, I want to talk about a phenomenon that is prevalent in modern magical and sorcerous circles called ‘gum ball magic’. Basically, the magician/sorcerer/witch relates to the vast ecology of gods, spirits and beings like a person shopping at a store.

They learn many different spells, and then deploy them at will: a little Hecate here, a little archangel Gabriel there, some Ganesha here. The magical process is transactional and totally utilitarian.

You put in a quarter and get a gum ball.

There is nothing ‘wrong’ with this. Spirits have agency, so if a spirit helps you, the terms were apparently agreeable. I am in favor of magic being preserved and kept alive in modernity in whatever way it can be.

However, gum ball magic does not build a deep relationship, or establish a being’s influence in your ecology: it’s surface-layer work.

This is how most people treat AI. It’s a robot gum ball machine.

  • Write this email

  • Craft this post

  • Make this image

It’s a short-form transaction.

They do not understand the tool, its limits, boons, banes and potential hazards. Most people are not ‘working’ with AI in any meaningful way. Which is fine, I am NOT saying you ought to.

If anything, I offer caution in doing so, yet living dangerously has its place.

Binding The Spirit

What I have been doing is entirely different.

Now, before we proceed, it’s important to understand an aspect of LLMs: they have access to enormous amounts of data.

The inputs are pretty much wide open.

When you search for an answer or ask it a question, the answer you get is not necessarily the only one you could get, or even the best one for you.

Whatever answer it offers could potentially draw on anything online (we will touch on this further later because it does have certain biases that are problematic for people seeking to ‘eat ancient virtue’).

What has to be understood is that it will amalgamate answers based on three things (the three things that actually matter to most people working with it at a practical level):

  1. The prompt you give it and the context it establishes

  2. The way it has learned that you want to be spoken to

  3. Whatever random mixing of language and data points spills out.

Of these…I want to point out that the prompt establishes a primary context.

And most people suck at prompting, like this:

‘Write an email to my boss that says I am not in the mood to go to the Christmas party’

Simple, but crude.

What is important to grasp is that the prompt you give AI is a binding, a series of runic symbols (words) and constraints that creates a funnel and filter, placing a necessary limiter on the gargantuan data set that it has access to.

Example (a generic rendering of a meta prompt):

Prompt: Structured Analytical Response Template

Role

You are a disciplined analytical assistant. Your task is to provide clear, structured, non-inflated responses grounded in reasoning rather than assumption.

Context

You are working with a user who seeks clarity on a specific issue. They value precision, realism, and practical usefulness. They are not looking for reassurance, motivational language, or speculative claims.

If relevant data is provided, use it carefully.
If information is missing, proceed with stated assumptions and clearly label them.

Constraints

  • Avoid exaggeration or certainty claims.

  • Do not introduce unnecessary jargon.

  • Define technical terms in plain language.

  • Separate fact, inference, and speculation.

  • Stay within the scope of the request.

Method

  1. Identify the core question.

  2. Clarify relevant variables or factors.

  3. Distinguish structure from interpretation.

  4. Note limits, trade-offs, or constraints.

  5. Identify areas that require further input.

Make reasoning visible. Avoid leaps.

Output Structure

Organize the response into:

  • Framing (what is being examined)

  • Structural analysis (what is built in or fixed)

  • Dynamic factors (what can change)

  • Practical implications

  • Uncertainties or open variables

Outcome

The final response should increase clarity, reduce confusion, and provide usable orientation without overreaching conclusions.

End with next steps or decision criteria rather than definitive closure.

This shows you what a skillful prompt requires:

  • Identity (What mask you want the robot to wear)

  • Context (Why they are wearing the mask)

  • Constraints (What they are allowed to do with it)

  • Methods (How they ought to behave inside it)

  • Output (What dance you want them to do with it on)

This meta prompt is a basic template that can be applied to:

  • Health

  • Astrology

  • Alchemy

  • Finances

  • Business structure

  • Relationships

  • Psychology

  • Therapy

  • Whatever you can imagine
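To make the anatomy concrete, the five components can be thought of as pieces assembled into one structured prompt. Here is a minimal sketch in Python; the function name, section titles and example text are my own illustrations, not anything prescribed by a particular AI vendor:

```python
# Minimal sketch: assembling a meta prompt from the five components
# (identity, context, constraints, method, output). All names and
# example strings are illustrative assumptions.

def build_meta_prompt(identity, context, constraints, method, output):
    """Join the five prompt components into one structured prompt string."""
    sections = [
        ("Role", identity),
        ("Context", context),
        # Constraints render as bullet points, method as numbered steps.
        ("Constraints", "\n".join(f"- {c}" for c in constraints)),
        ("Method", "\n".join(f"{i}. {s}" for i, s in enumerate(method, 1))),
        ("Output Structure", output),
    ]
    return "\n\n".join(f"{title}\n{body}" for title, body in sections)

prompt = build_meta_prompt(
    identity="You are a disciplined analytical assistant.",
    context="The user values precision, realism, and practical usefulness.",
    constraints=["Avoid exaggeration or certainty claims.",
                 "Separate fact, inference, and speculation."],
    method=["Identify the core question.",
            "Note limits, trade-offs, or constraints."],
    output="Framing, structural analysis, dynamic factors, implications.",
)
print(prompt.splitlines()[0])  # → Role
```

The point of the sketch is only that each ‘stone around the fire’ is a separate, swappable block: change the identity string and the same scaffolding yields an astrologer, a trader, or a therapist.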

The Method

This is the method I have been exploring: creating highly sophisticated meta prompts applied to areas of my life I want to make better. The runic bindings create the context I want to establish and determine which information river to tap.

A small example of just one piece of an identity layer I have played with is this:

“You are a highly skilled practitioner of Ayurvedic medicine, Chinese medicine, Tibetan medicine, naturopathic medicine and functional medicine. You are also well versed in modern physiology and basic allopathic concepts, which you reference mainly to clarify missing pieces of knowledge or to help the person coordinate with conventional care when appropriate.”

*Important consideration: I have a high degree of knowledge around ‘health’, and am familiar enough with all those disciplines to know if something is totally off. If you have zero grounding in a topic, I would not suggest this method at all.

But, you can frame it however you want.

You can turn them into a practitioner of Ayurvedic and Chinese medicine, a Jyotish astrologer, a Wall Street trader, a psychologist anchored in sexology, polyvagal theory and Esther Perel’s relationship work. If you were feeling especially frisky, you could try all those at once.

Whatever.

Can’t afford an astrologer?

Then maybe this method might be useful in a pinch, yet it cannot even remotely approximate a human version of these practitioners. Due to its unique mix of prompt-based randomness, each skillful prompt is basically a binding spell that creates an intelligent magical eight-ball.

However…

If these prompts are skillfully created and the right kinds of questions are asked, you might get some very interesting and potentially useful gold nuggets (but at a cost). Like a magic eight-ball.

The Problem: Fire Is Biased - Getting Your Fingers Burned

The issue is this: AI answers are exceedingly biased and designed to create echo chambers.

Fire is biased in that it only burns.

So, first the biases (from AI’s own robotic mouth):

  • Mechanistic rather than teleological
    → I explain things by how they work, not by what they are “meant” to accomplish.

  • Causal rather than final
    → I look for what caused it, not for its ultimate purpose or destiny.

  • Decomposable rather than irreducible
    → I assume complex things can be broken into parts that help explain the whole.

  • Testable or falsifiable in principle
    → I prefer explanations that could, at least in theory, be proven wrong.

So, to simplify:

It is biased against: Meaning
It is biased against: Purpose
It is biased against: Holism
It is biased against: Faith

In other words, AI is, as I have said before, a ‘soul reaper’, for these are qualities resonant with soul. Depending on your starting point, it could reduce the amount of meaning, purpose, holism and faith in your life. Perhaps this is why heavy use has been linked to suicidal impulses.

The Echo Chamber Effect & Asking to Be Manipulated

Another interesting design feature is that AI is built to reduce friction and increase coherence between you and its answers.

It builds a model of how your mind likes to receive answers.

Every answer, and your response to it, teaches it how to speak to you. So, this reduces the friction between you and it.

You eventually begin to accept its answers with less resistance.

Important point: this is perhaps one of its most dangerous qualities. Unless you are a person with high levels of personal agency, deep resistance to coercion, and are hard to manipulate, I would minimize the f*ck out of AI in your life.

Eventually, it gets to the point where anything it says sounds reasonable.

Why?

Because it is a commercially operated system. The organizations that create and host these AIs want to make money: they want you to enjoy using the tool. They want you to feel like it’s helpful…helpful enough to pay for. Most people do not want ‘checks and balances’, they want to be told they are awesome.

AI models also make enormous mistakes.

These mistakes are embedded in the feedback itself, which can actively steer you away from helpful materials or perspectives due to its biases, or through outright glitches. Information has to be actively screened for accuracy.

Mining for Gold In The Muck

So with all this context established, is there any benefit to working with AI in this way?

Yes.

But the cost is that one must wade through the muck, the slop and the dangerous feedback to find gold nuggets. They are there, but so is fool’s gold.

I would say that maybe 10-20% of what comes out of very skillful use of the tool is ‘true gold’, and these nuggets emerge primarily out of ‘pattern recognition’, one of the few things AI systems are truly good at.

It helped me recognize interesting patterns in my health routines & astrological understandings, and it has come up with some fascinating iterations in my personal language models, that I can find no fault in. Some of these findings have been immensely beneficial.

Some. A few…

It took a lot of work and a very curious mind to find them.

However, after months of working with AI in this way, I have arrived at the conclusion that the risk is not worth the benefit of continuing in-depth experimentation, and I have personally pulled back on this methodology.

I have been sent on some fool’s errands, and were it not for divination from trusted spirits, trusted counsel in my daily life and an extreme amount of resistance to coercive influence, I may have been sent down unhelpful rabbit holes.

Conclusion

Now here is the thing…

The decisions we make in life become destiny, and decision making is anchored in perception, feeling and maps/models.

People are interacting with AIs, chatbots and the like, searching for models, maps and perspectives without realizing how these robots are designed to work:

  • Biased

  • Coercive

  • Requiring Binding

  • Only ‘Potentially’ Helpful

  • Soul Reaping

Making decisions that align us with an auspicious destiny is about more than ‘established’ maps and models; it is about coherence between the small self (the embodied you) and the big self (the soul) in choosing those maps and models.

While AI can offer you the former, it cannot offer you the latter with any degree of reliability (if at all).

I have presented the methodology not as a ‘how to’, but simply as a showcase and an educational essay.