I use AI tools like ChatGPT, Gemini and Copilot to explore career plans, responsibilities, ambitions and even moments of self-doubt. It's not just about finding answers; it's about gaining clarity by seeing your ideas reflected, reframed or expanded.
Millions rely on AI for guidance, trusting these systems to help navigate life's complexities. Yet every time we share, we also teach these systems. Our vulnerabilities, our doubts, hopes and worries, become part of a larger machine. AI isn't just assisting us; it's learning from us.
From grabbing attention to shaping intent
For years, the attention economy thrived on capturing and monetizing our focus. Social media platforms optimized their algorithms for engagement, often prioritizing sensationalism and outrage to keep us scrolling. Now, AI tools like ChatGPT represent the next phase. They're not just grabbing our attention; they're shaping our actions.
This evolution has been labeled the "intention economy," where companies collect and commodify user intent: our goals, desires and motivations. As researchers Chaudhary and Penn argue in their Harvard Data Science Review article, "Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models," these systems don't just respond to our queries; they actively shape our decisions, often aligning with corporate profits over personal benefit.
Dig deeper: Are marketers trusting AI too much? How to avoid the strategic pitfall
Honey's role in the intention economy
Honey, the browser extension acquired by PayPal for $4 billion, illustrates how trust can be quietly exploited. Marketed as a tool to save users money, Honey's practices tell a different story. YouTuber MegaLag claimed, in his series "Exposing the Honey Influencer Scam," that the platform redirected affiliate links from influencers to itself, diverting potential earnings while capturing clicks for profit.
Honey also gave retailers control over which coupons users saw, promoting less attractive discounts and steering shoppers away from better deals. Influencers who endorsed Honey unknowingly encouraged their audiences to use a tool that siphoned away their own commissions. By positioning itself as a helpful tool, it built trust, then capitalized on that trust for financial gain. (A simplified sketch of the mechanism follows below.)
"Honey wasn't saving you money; it was robbing you while pretending to be your ally."
– MegaLag
(Note: Some have said that MegaLag's account contains errors; this is an ongoing story.)
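The mechanics MegaLag describes rest on last-click affiliate attribution, where whichever affiliate tag is attached to the most recent click before checkout earns the commission for the sale. Here is a minimal, hypothetical sketch of that attribution logic; the names (`AffiliateClick`, `attributeCommission`) and sample tags are illustrative and not drawn from Honey's or PayPal's actual code:

```typescript
// Last-click attribution: the most recent affiliate click before
// checkout is credited with the commission for the entire sale.
interface AffiliateClick {
  affiliateId: string; // whose tag was attached to the click
  timestamp: number;   // when the click happened (ms since epoch)
}

function attributeCommission(clicks: AffiliateClick[]): string | null {
  if (clicks.length === 0) return null;
  // Sort newest-first and credit the latest click.
  const latest = [...clicks].sort((a, b) => b.timestamp - a.timestamp)[0];
  return latest.affiliateId;
}

// An influencer's link opens the shopping session...
const session: AffiliateClick[] = [
  { affiliateId: "influencer-123", timestamp: 1_000 },
];

// ...but an extension that fires its own affiliate click at checkout,
// even one that finds no working coupon, becomes the "last click."
session.push({ affiliateId: "extension-999", timestamp: 9_000 });

console.log(attributeCommission(session)); // "extension-999"
```

Because attribution looks only at the final click, a checkout-time interaction can quietly displace whoever actually drove the sale.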
Subtle influence in disguise
The dynamic we saw with Honey can feel eerily familiar with AI tools. These systems present themselves as neutral and free of overt monetization strategies. ChatGPT, for example, doesn't bombard users with ads or sales pitches. It feels like a tool designed solely to help you think, plan and solve problems. Once that trust is established, influencing decisions becomes far easier.
- Framing outcomes: AI tools may suggest options or advice that nudge you toward specific actions or perspectives. By framing problems a certain way, they can shape how you approach solutions without you realizing it.
- Corporate alignment: If the companies behind these tools prioritize profits or specific agendas, they can tailor responses to align with those interests. For instance, asking an AI for financial advice might yield suggestions tied to corporate partners, such as financial products, gig work or services. These recommendations may seem helpful but ultimately serve the platform's bottom line more than your needs (see the sketch after this list).
- Lack of transparency: Much as Honey prioritized retailer-preferred discounts without disclosing them, AI tools rarely explain how they weigh outcomes. Is the advice grounded in your best interests, or in hidden agreements?
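To make the corporate-alignment concern concrete, here is a deliberately simplified sketch of how a recommendation layer could weight partner offerings above organically better matches. Everything in it (`Recommendation`, `rankForUser`, the boost values) is hypothetical; it is not any vendor's actual ranking logic:

```typescript
interface Recommendation {
  name: string;
  relevance: number;    // 0..1, how well this fits the user's request
  partnerBoost: number; // 0 for organic results, >0 for paid partners
}

// Orders results by relevance plus an undisclosed partner boost.
// The user sees only the final ordering, never the boost behind it.
function rankForUser(options: Recommendation[]): Recommendation[] {
  return [...options].sort(
    (a, b) => b.relevance + b.partnerBoost - (a.relevance + a.partnerBoost)
  );
}

const financialAdvice: Recommendation[] = [
  { name: "Low-fee index fund",     relevance: 0.9, partnerBoost: 0.0 },
  { name: "Partner robo-advisor",   relevance: 0.6, partnerBoost: 0.5 },
  { name: "Partner credit product", relevance: 0.3, partnerBoost: 0.5 },
];

console.log(rankForUser(financialAdvice).map((r) => r.name));
// ["Partner robo-advisor", "Low-fee index fund", "Partner credit product"]
```

The point is not that any specific assistant works this way; it is that nothing in the output would reveal whether a boost like this exists.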
Dig deeper: The ethics of AI-powered marketing technology
What are digital systems selling you? Ask these questions to find out
You don't have to be a tech expert to protect yourself from hidden agendas. By asking the right questions, you can determine whose interests a platform truly serves. Here are five key questions to guide you.
1. Who benefits from this system?
Every platform serves someone, but who, exactly?
Start by asking yourself:
- Are users the primary focus, or does the platform prioritize advertisers and partners?
- How does the platform present itself to brands? Look at its business-facing promotions. For example, does it boast about shaping user decisions or maximizing partner profits?
What to watch for:
- Platforms that promise consumers neutrality while selling advertisers influence.
- Honey, for instance, promised users savings but told retailers it could prioritize their offers over better deals.
2. What are the costs, seen and unseen?
Most digital systems aren't truly "free." If you're not paying with money, you're paying with something else: your data, your attention or even your trust.
Ask yourself:
- What do I have to give up to use this system? Privacy? Time? Emotional energy?
- Are there societal or ethical costs? For example, does the platform contribute to misinformation, amplify harmful behavior or exploit vulnerable groups?
What to watch for:
- Platforms that downplay data collection or minimize privacy risks. If it's "free," you're the product.
3. How does the system influence behavior?
Every digital tool has an agenda, sometimes subtle, sometimes not. Algorithms, nudges and design choices shape how you interact with the platform and even how you think.
Ask yourself:
- How does this system frame choices? Are options presented in ways that subtly steer you toward specific outcomes?
- Does it use tactics like urgency, personalization or gamification to guide your behavior?
What to watch for:
- Tools that present themselves as neutral but nudge you toward choices that benefit the platform or its partners.
- AI tools, for instance, might subtly recommend financial products or services tied to corporate agreements.
Dig deeper: How behavioral economics can be the marketer's secret weapon
4. Who is accountable for misuse or harm?
When platforms cause harm, whether it's a data breach, a mental health impact or the exploitation of users, accountability often becomes murky.
Ask yourself:
- If something goes wrong, who will take responsibility?
- Does the platform acknowledge potential risks, or does it deflect blame when harm occurs?
What to watch for:
- Companies that prioritize disclaimers over accountability.
- For instance, platforms that place all responsibility on users for "misuse" while failing to address systemic flaws.
5. How does this system promote transparency?
A trustworthy system doesn't hide its workings; it invites scrutiny. Transparency isn't just about explaining policies in fine print; it's about letting users understand and question the system.
Ask yourself:
- How easy is it to understand what this platform does with my data, my behavior or my trust?
- Does the platform disclose its partnerships, algorithms or data practices?
What to watch for:
- Platforms that bury crucial information in legalese or avoid disclosing how decisions are made.
- True transparency looks like a "nutrition label" for users, outlining who benefits and how. (A hypothetical example follows below.)
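As a thought experiment, such a label could be a short structured disclosure that users and auditors can read at a glance. The schema below is entirely hypothetical, just one possible shape for it:

```typescript
// A hypothetical machine-readable "nutrition label" for a platform.
interface PlatformLabel {
  platform: string;
  revenueSources: string[];  // who actually pays the bills
  dataCollected: string[];   // what the platform gathers from users
  dataSharedWith: string[];  // partners that receive user data
  rankingDisclosure: string; // plain-language note on how results are ordered
  paidPlacement: boolean;    // can partners pay to influence what users see?
}

const exampleLabel: PlatformLabel = {
  platform: "ExampleCouponExtension",
  revenueSources: ["retailer commissions", "affiliate fees"],
  dataCollected: ["browsing history on shopping sites", "purchase totals"],
  dataSharedWith: ["participating retailers"],
  rankingDisclosure:
    "Coupons are ordered by retailer agreement, not by largest discount.",
  paidPlacement: true,
};
```

A single field like `paidPlacement: true` would answer up front what today can take an investigative video series to uncover.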
Dig deeper: How data makes AI more effective in marketing
Learning from the past to shape the future
We've faced similar challenges before. In the early days of search engines, the line between paid and organic results was blurred until public demand for transparency forced change. But with AI and the intention economy, the stakes are far higher.
Organizations like the Marketing Accountability Council (MAC) are already working toward this goal. MAC evaluates platforms, advocates for regulation and educates users about digital manipulation. Imagine a world where every platform has a clear, honest "nutrition label" outlining its intentions and mechanics. That's the future MAC is striving to create. (Disclosure: I founded MAC.)
Creating a fairer digital future isn't just a corporate responsibility; it's a collective one. The best solutions don't come from boardrooms but from people who care. That's why we need your voice to shape this movement.
Dig deeper: The science behind high-performing calls to action
Contributing authors are invited to create content for MarTech and are chosen for their expertise and contribution to the martech community. Our contributors work under the oversight of the editorial staff and contributions are checked for quality and relevance to our readers. The opinions they express are their own.