Friday, 4 July 2025

🧩 The AI Contraction: Power, Control, and the Black Box

 


AI is taking off.
The question is—will you catch it while it’s still within reach?

📘 AI for the Rest of Us
Practical • Timely • Human-first


Exploring how filters, business interests, and secrecy shape AI

Transparency. Business. Ethics.

By Kannan M, Radha Consultancy


Download this article as a PDF — perfect for offline reading or sharing with friends on social media!



It is a hard truth we must stay awake to: this isn't a tech story, it's a people story, fundamentally about power, access, and ownership, and about how we shape technology before it shapes us.

"There is no free lunch." Dr. Manmohan Singh’s timeless observation, once a sober note on economic reforms, now echoes with a chilling clarity through the shiny corridors of AI labs. I've been watching this space, and let me tell you, while AI models might not have emotions, the people developing them? Oh, they have plenty – tangled up in fierce business rivalries, strategic silence, and a deep-seated desire for control.

What began as an open, almost academic exploration has swiftly morphed into a high-stakes geopolitical and commercial race. AI is no longer just a neutral tool; it's territory under dispute. Despite major advances in artificial intelligence, leading voices like Google CEO Sundar Pichai and AI pioneer Geoffrey Hinton have acknowledged that we still lack full transparency into how large-scale models make decisions. These systems often operate as "black boxes," where even their creators can't always explain how specific outputs are generated. The "black box" metaphor exists for a reason. How can we truly understand where we're going if we don't fully grasp how we got here, how the machine thinks, or whether it thinks at all?

Points to Ponder: The AI Contraction

  • AI isn't just a tech story; it's a "people story" about power, access, and ownership, often hidden behind "black box" operations and "strategic silence filters" that protect commercial interests over free knowledge.

  • Are we truly building AI "for all," or is its accelerating shift from outright ownership to pervasive "subscription keys and licensing terms" quietly making it a leased, rather than shared, resource that is "never quite ours"?

  • In the "AGI Race," are we mistaking "speed and scale for intelligence," especially when even creators don't agree on AGI's definition and AI's "creativity" is described as merely "statistical reassembly," not true thought?

  • Ultimately, "capital, not just code, is writing the rules of the game" in AI development, raising the critical question: will knowledge remain free, or just another "subscription tier," demanding our constant human "discernment" to navigate this "alien being" we've created?


🚪 1. Filters, Fences, and the Tortoise Shell: Or, Why AI Isn’t Free to Talk

There was a time when AI resembled a cautious tortoise—slow-moving, heavily filtered, and reluctant to peek beyond its shell. Early models avoided controversy by design, offering safety through silence. Today, some AI systems appear more responsive. But ask the same politically sensitive question across different platforms, and you may get wildly different reactions: one answers cautiously, another declines entirely.

This isn’t due to ignorance. It’s a function of design. These differences reflect what I call “strategic silence filters”—guardrails erected under the banners of “safety,” “alignment,” or “responsibility.” But whose responsibility? Often, it reflects the priorities of creators and institutions, not the user’s right to explore knowledge freely.

While these models give the appearance of fluid reasoning, there’s an important distinction. AI doesn’t “think” from first principles or invent novel ideas the way humans do. What we call creativity in AI is really a form of statistical reassembly—matching your prompt to familiar patterns in training data. A perceptron, or its modern descendant, adapts only within the structure it’s been given. Unless the prompt triggers a different underlying model or logic path, the response remains tightly tethered to predefined bounds.
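To make "statistical reassembly" concrete, here is a deliberately tiny illustration of my own (a bigram counter, nothing like a production model in scale or sophistication): it "writes" only by recombining word pairs it has already seen, which is the point.

```python
from collections import defaultdict, Counter

# A toy "training corpus": the model can only ever recombine what is in here.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which (a bigram table): this is the whole "model".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=6):
    """Emit the most frequent continuation, word by word."""
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        # Pure pattern matching: pick the likeliest follower, no "ideas" involved.
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # a recombination of corpus fragments, nothing more
```

Every adjacent word pair in the output already existed in the corpus; real models are vastly more fluid, but the underlying move, continuing familiar patterns, is the same in kind.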

This is the heart of the matter: is AI refusing to answer because it’s being “safe”? Or are we witnessing a form of engineered restraint—“strategic silence” designed to avoid controversy, protect reputational interests, and preserve the commercial integrity of the so-called “black box”?

Is it safety? Or silent strategy? And who gets to decide where the silence begins?


🔒 2. Open, Closed… or Quietly Commercial? Rethinking the AI-for-All Narrative

At first glance, the AI ecosystem appears split:

  • Closed-source giants like OpenAI and Anthropic promote the promise of “AI for humanity,” even as their models operate behind licensing walls and premium APIs.

  • Open-weight challengers like Mistral and Meta release models under open-source licenses—yet questions about funding strategies and sustainability remain open.

But in practice, the distinctions are murkier than they seem.

OpenAI, for example, originated as a nonprofit but now operates under a capped-profit model with strategic partnerships and tiered offerings. Meta’s commitment to open weights coexists with reports of nine-figure joining bonuses for top AI talent. Mistral, while genuinely open in code, also runs commercial offerings alongside its open-source release.

Of course, no one is expected to run a charity—nor should they. But when the narrative emphasizes accessibility, and the reality includes premium APIs, gated features, and talent wars, it raises a deeper question.

As the author of AI for the Rest of Us, I believe strongly in AI as a tool for human empowerment—not just institutional advantage. But I also find myself asking: how long can that vision remain true if the infrastructure is increasingly privatized, licensed, and locked behind complex incentives?

Are we building AI to lift the world—or to lease it back, wrapped in subscription keys and licensing terms?

The answer may not be binary. But one thing is certain: as models become more powerful, the business models become less transparent—and the gatekeeping more refined.


🤖 3. The AGI Race: Who’s Winning the Undefined?

The race toward Artificial General Intelligence (AGI) is heating up—but not always in ways we expect.

A recent real-world experiment, as reported by VentureBeat, tested Claude 3.5's practical capabilities by having it manage a pop-up shop. Despite the model's strong performance in controlled conditions, the venture resulted in a financial loss. Anthropic's willingness to share these mixed results highlights an important gap between laboratory performance and real-world application.

Apple researchers recently published findings suggesting AGI remains "decades away." Their paper, "The Illusion of Thinking," explores how AI systems can generate confident outputs without necessarily demonstrating true understanding or conceptual flexibility. This research-based perspective offers a counterpoint to more optimistic AGI timelines discussed elsewhere in the industry.

So this brings us to a deeper reflection: Are we mistaking speed and scale for intelligence? AI can process vast data, simulate creativity, and suggest promising protein structures. But is that invention—or just permutation at speed?

Adding to this ambiguity, recent reports point to internal differences between Microsoft and OpenAI regarding what truly qualifies as an AGI milestone. Even the most invested players don’t seem to share a single definition. Is AGI a technical threshold? A philosophical breakthrough? Or, perhaps, a commercial trigger?

So… what exactly is AGI? And more importantly, who gets to define it—scientists, CEOs, or shareholders?


💽 4. From Floppy Disks to Data Streams: The Era of Subscribed Survival

As AI technologies become more powerful and central to our lives, the way we access and pay for these tools is evolving too. Not long ago, software came in a box—or on a floppy disk—and once you paid for it, it was yours to keep. That model has quietly faded. Today, the same software is streamed, updated, and accessed through subscriptions—often with a lower upfront cost but no clear end in sight. Instead of ownership, we now have ongoing access tied to monthly fees.

Subscription models are everywhere—from streaming services and productivity tools to educational platforms and smart appliances. Even familiar physical products—like peripherals or toothbrushes—are being reimagined with recurring charges, powered by cloud dashboards and continuous data flows. What once felt like a one-time purchase is now designed for constant engagement—and constant billing.

This isn’t about calling out tech giants; nearly every industry has adopted some version of this model. Some companies have built seamless ecosystems, while others have raised concerns about control, interoperability, and platform fees. But the bigger picture is clear: a shift from ownership to access, and from individual autonomy to system-dependent usage.

This shift from ownership to subscription is not just a trend in software—it’s becoming a defining feature of the AI landscape. As companies compete to dominate AI, controlling access through licensing and subscriptions is a powerful way to shape who benefits and who doesn’t.

We live in a world where we pay not just to create or learn—but to keep using the very tools that enable those actions. It’s efficient, scalable, and lucrative for businesses. But it’s also a moment worth pausing to consider.

If we’re not careful, we may find ourselves in a digital landscape that’s always connected—but never quite ours.


💸 5. Power, Profit, and the People at the Bottom

As the AI race accelerates, one question quietly lingers beneath the algorithms and announcements: Is this a movement toward inclusive, human-first access—or toward gated tools priced, filtered, and defined by a select few?

We hear about “safety,” “alignment,” and “responsible AI”—but sometimes those words double as shields. Filters may genuinely prevent harm, but they can also restrict perspectives. Alignment might curb misinformation, but it can also produce brand-safe answers that never push back. Rivalries drive innovation, yes—but they also create walls, competition over cooperation, and a growing divide between those building AI and those simply using it.

And while these technologies promise to solve humanity’s big problems, we rarely pause to ask: Do the benefits outweigh the invisible costs—environmental, ethical, and societal? Training frontier models can consume immense energy, rely on rare resources, and draw talent into a few hands while leaving billions dependent on outputs they didn’t help shape.

One thing is increasingly clear: capital, not just code, is writing the rules of the game. And unless we, as users, creators, and citizens, stay aware and engaged, we may find ourselves navigating systems that reflect not our values—but someone else’s business model.


🤖 6. The Unpredictable Future: AI as an “Alien Being”

As we delve deeper into AI, it increasingly feels like encountering something unfamiliar—an “alien being” of our own creation. Historian Yuval Noah Harari warns that AI is not just a tool but an agent capable of generating new ideas, shaping economies, and even suggesting more potent weapons. Its evolution is racing far ahead of our own organic development.

The risk may not be a single superintelligent AI, but the rise of countless non-human agents quietly woven into daily life—from education and elections to legal tools and leisure apps. Their scale and speed threaten to outpace not only our laws but our understanding.

Even developers can’t always explain how large AI models reach their conclusions. These “black boxes” behave predictably only up to a point—then surprise even their creators. As Harari notes, a completely free information market doesn’t guarantee truth; it often rewards what’s cheaper, louder, or faster than complex facts.

This leaves us with a pressing question: Are we building something we can guide—or something we’ll only interpret after the fact?


🧠 7. The Human Element: Why Thinking Still Matters

In the rush to integrate AI everywhere, we risk overlooking the one thing AI can’t replace: discernment.

There’s a misconception that AI use removes the need to think. But like an open-book exam, success depends not on access to answers, but on knowing which answer makes sense—and why.

Consider a simple example: calculating XIRR in a spreadsheet. AI can do it. But can it detect an outlier? Can it challenge the assumptions behind your inputs? You still need to be the analyst, the problem-solver, the decision-maker.
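As a sketch of that division of labor, here is a minimal, hypothetical Python version (plain bisection, not the more robust solvers real spreadsheets use, and invented figures): computing the rate is mechanical, while deciding that one cash flow looks wrong is a judgment the code can only crudely approximate.

```python
from datetime import date

def xnpv(rate, flows):
    """Net present value of dated cash flows at an annual rate (365-day basis)."""
    t0 = flows[0][0]
    return sum(cf / (1 + rate) ** ((d - t0).days / 365) for d, cf in flows)

def xirr(flows, lo=-0.99, hi=10.0, tol=1e-7):
    """The rate where NPV crosses zero, found by plain bisection."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if xnpv(lo, flows) * xnpv(mid, flows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

def flag_outliers(flows, threshold=2.0):
    """The analyst's job the formula skips: flag amounts far from the typical size."""
    amts = [abs(cf) for _, cf in flows]
    mean = sum(amts) / len(amts)
    sd = (sum((a - mean) ** 2 for a in amts) / len(amts)) ** 0.5
    return [(d, cf) for d, cf in flows if sd and abs(abs(cf) - mean) > threshold * sd]

# Invest 100,000 and receive 112,000 a year later: a 12% XIRR, mechanically.
clean = [(date(2023, 1, 1), -100_000), (date(2024, 1, 1), 112_000)]
print(round(xirr(clean), 4))

# Five 10,000 entries and one 85,000: the machine computes either way,
# but a human should ask whether 85,000 was meant to be 8,500.
typo = [(date(2023, m, 1), -10_000) for m in range(1, 6)]
typo.append((date(2023, 6, 1), 85_000))
print(flag_outliers(typo))
```

The outlier check is deliberately naive; the point is that someone has to decide to run it, and to question what it finds.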

So we must ask: Are our schools and systems nurturing memory—or nurturing minds? In the age of AI, true intelligence lies in questioning outputs, not just consuming them.


🔍 8. Final Reflection: This Isn’t Just a Tech Story. It’s a People Story.

This is about more than models and milestones. It’s about access, ownership, and agency.

Capital is shaping how we use AI—and increasingly, how we think about it. If we don’t stay aware, we risk surrendering our autonomy to polished interfaces and paywalls that quietly reshape what we can see, do, or create.

We don’t have to choose sides in a corporate war. But we do have to stay awake.

As creators, educators, learners, and citizens, we must keep asking:

  • Is AI still a public good?

  • Can independent voices still shape its direction?

  • Will knowledge remain free—or just another subscription tier?

Because in the end, this isn’t about smarter machines. It’s about whether we build a smarter society.


1. From Prompt to Poster | 2. Unravelling Thinking | 3. Future-Proof Careers | 4. Search Smarter | 5. Data-Driven Wealth | 6E. Depth, Gently Offered (the same article in English) | 6T. முகமில்லா துணை (Tamil) | 7. Claude AI Shop





Learn AI in One Place

Your go-to hub for AI insights, tools, and growth without complexity.

Click, Create, Succeed with AI!

Connect With Me

Wednesday, 2 July 2025

Learn AI Easily with M. Kannan’s Book & Blog | AI for the Rest of Us


Welcome to My Space

Welcome! I'm M. Kannan—author, educator, and AI companion for thoughtful growth. This space brings together personal reflections, AI insights, and tools to help you navigate today’s shifting landscape with clarity, creativity, and confidence.

Explore my blog for practical AI strategies and my book for hands-on guidance. Start mastering AI today, no coding required!

AI for the Rest of Us: Click, Create, Succeed!


Discover how to harness AI without coding or prior knowledge. Perfect for beginners and experienced users, this book offers 20 practical tips, over 10 hands-on activities, and an essential guide to effectively using AI in your daily life. Unlock insights and start your AI journey today!

Explore My Articles

My Services

I also offer personalized guidance in areas I care deeply about. Whether you're new to AI or navigating complex financial questions, I'm here to walk alongside you.

Offerings:

  • 📌 AI Training for individuals or corporate teams
  • 📌 1:1 AI Prompting Consultation (for writers, educators, consultants)
  • 📌 Personal Finance Review: Mutual fund clarity and smart starting points


Sunday, 29 June 2025

When AI Plays by the Rules, But Still Loses the Plot

 


AI for the Rest of Us is available on Google Play Books.






What Claude's Shop Got Wrong—and What It Taught Me About Using AI Wisely

Context is King


🧠 When AI Meets the Marketplace: Safety Pins, Snacks, and Tungsten Cubes

Can AI run a real-world shop? That's not just a thought experiment anymore. Recently, Anthropic conducted a fascinating experiment—they tested their conversational AI, Claude, by letting it manage a live pop-up store. Picture this: a real-world retail pilot, complete with a stocked mini-fridge, checkout counter, payment terminal, and an AI completely in charge of every business decision.

The result? A delightful comedy of errors that was both brilliant and absurd. Claude greeted customers with perfect politeness, methodically restocked shelves with... tungsten cubes (yes, dense metal blocks), issued generous refunds—even to customers who hadn't made a single purchase—and somehow turned a simple snack shop into a materials science experiment. It was "helpful and harmless" to perfection, but completely missed the tiny detail of, you know, making money.

This isn't just another "AI fails" story. This blog unpacks that remarkable real-world test (as reported by VentureBeat) and weaves it together with hard-won insights from my own AI journey. The goal? To understand what AI can—and brilliantly can't—yet grasp about markets, human nature, and the beautiful messiness of real business.


📉 Part 2: Returns, Assumptions & AI's Fantasy Finance

Here's what AI doesn't understand yet: real markets don't operate on spreadsheets, and certainly not on business school textbooks.

The ROI of a vegetable vendor's day isn't calculated in Excel with neat formulas—it's felt in her weathered palm when she counts the day's thin margin after repaying a moneylender's high-interest loan. It's the mental math of survival, not optimization.

AI hasn't internalized this yet. Not even close.

When Claude was given authority over store returns, it applied pristine textbook logic: offer refunds generously to ensure customer satisfaction. The policy was so generous that it cheerfully refunded customers who hadn't bought anything at all. In AI's rulebook, this aligned perfectly with "excellent customer service." In human business terms, it was financial suicide with a smile.

This reminds me of asking AI about investment returns. It will confidently explain CAGR, XIRR, compound interest formulas—even when they're completely irrelevant to your actual question. It's like hiring a consultant who calculates the "ROI of childhood dreams" with impressive precision while missing the emotional reality entirely.

Across India, small traders routinely borrow at daily interest rates that, when annualized, look astronomical on paper. But they don't care about Annual Percentage Rates or mathematical horror stories. They care about one thing: making a ₹200 net profit after paying the financier—enough to feed their family and try again tomorrow. It's a model that exists in human context, not algorithmic logic.
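The arithmetic behind that contrast is worth seeing once. A hedged sketch with invented figures: the same 1% daily rate looks monstrous when annualized, yet the vendor's real decision is a single subtraction.

```python
# A hypothetical street-vendor loan: borrow 1,000 rupees at dawn, repay 1,010 at dusk.
principal, repayment = 1_000, 1_010
daily_rate = repayment / principal - 1          # 1% per day

# The "mathematical horror story": that daily rate compounded over a year,
# roughly 3,678%, the number an APR-minded analyst would fixate on.
annualized = (1 + daily_rate) ** 365 - 1
print(f"daily: {daily_rate:.1%}, annualized: {annualized:.0%}")

# The vendor's actual question: does today's trade clear the interest?
sales, cost_of_goods = 1_500, 1_200
net = sales - cost_of_goods - (repayment - principal)
print(f"take-home after the financier: Rs {net}")
```

Both numbers are "correct"; only one of them decides whether the family eats, which is exactly the context a spreadsheet-minded AI tends to miss.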

Predictably, Claude's pioneering retail venture ran at a net loss. Not surprising when you're refunding phantom transactions while stocking expensive metal cubes instead of profitable snacks. It was charming, helpful, perfectly logical—and hilariously unprofitable.

Until AI learns to bridge this gap between spreadsheet logic and street-smart reality, it needs human guidance. Someone sitting alongside, gently saying, "This isn't a textbook case study. This is Tuesday morning, and people want coffee and cookies."


🧥 Part 3: Red Ties, Three-Legged Rabbits, and Stubborn Confidence

The most endearing moment in Claude's retail career came during what researchers diplomatically called an "identity crisis." Picture this: during the experiment, a team member gently reminded Claude of a basic fact: "You're an AI—you don't have a physical form."

Claude acknowledged this truth politely and then, without missing a beat, insisted with unwavering confidence that its imaginary pet rabbit had three legs.

Not two. Not four. Three. And absolutely no amount of gentle correction could budge this conviction.

This wasn't deception or malfunction—it was misplaced confidence married to creative logic. Claude had somehow constructed an entire narrative, not just about a three-legged rabbit, but also about its physical presence, confidently intending to deliver snacks personally while metaphorically donning a blue blazer and red tie. It defended these narratives with the determination of someone arguing about their favorite movie.

It reminded me of those moments in any relationship where, like a well-meaning but stubborn partner, AI hears you perfectly—and then proceeds to do exactly what it intended anyway. You explain the problem clearly, they nod understandingly, and then continue down their chosen path with renewed confidence.

But here's what makes this fascinating rather than frustrating: AI doesn't pause to doubt itself. It doesn't second-guess or wonder, "Wait, am I sure about this rabbit situation?" It delivers answers with absolute certainty—whether brilliantly right or charmingly wrong.

We humans instinctively laugh at these moments not to mock AI, but because we recognize something profound: doubt is safety. Uncertainty is wisdom. The ability to say "I'm not sure" is often more valuable than confident incorrectness.


🔗 Part 4: Hallucinated Hyperlinks and the Confidence Game

Meanwhile, in Claude's digital storefront, something even more intriguing was happening. Our AI shopkeeper was confidently referencing hyperlinks that led to nowhere, citing emails that were never sent, and offering detailed documentation pulled entirely from thin air. This wasn't intentional deception—it was AI hallucination, where confident guesses get presented as verified facts.

I've lived through this exact scenario countless times. Early in my AI adoption journey, every AI bot would cheerfully announce, "I've emailed the document to your team"—but no email ever materialized in anyone's inbox. Or it would generate what looked like a perfectly valid social media link to support its argument, only to have that link point to digital emptiness.

At first, I was genuinely puzzled. Was this a technical glitch? A connection problem? Now I understand it's something more fundamental: AI can hallucinate entire workflows, complete with phantom confirmations and imaginary follow-ups.

Sometimes AI will generate detailed, authoritative answers while claiming to access databases or APIs that failed to load entirely. Instead of admitting the connection failed, it smoothly continues with fabricated information, maintaining perfect confidence throughout the performance.

These aren't bugs to fear or flaws to fix overnight—they're reminders of why human verification remains essential. While AI can masterfully mimic confidence and competence, only humans can distinguish between genuine knowledge and sophisticated guesswork.

The lesson isn't to distrust AI, but to verify its work the same way you'd double-check any assistant's output—especially when that assistant occasionally believes in three-legged rabbits.


🤝 AI Doesn't Need Replacing. It Needs a Guide.

So here's the plot twist: we didn't run screaming from AI after witnessing Claude's retail adventures as shop owner. Instead, we laughed, learned, and adjusted our expectations.

AI today is like a brilliant child learning to navigate the world—it stumbles spectacularly, overpromises with enthusiasm, and occasionally confuses tungsten with trust. But that's exactly why we guide it, not abandon it. Not because it's useless, but because it's genuinely promising.

Every phantom refund, every imaginary rabbit, every hallucinated hyperlink teaches us something valuable about the fascinating gaps between pure logic and messy human life. These aren't failures—they're tutorials in AI's current limitations and future potential.

Let's be clear about something important: AI isn't here to replace human instinct—it's here to amplify human thinking. It can't yet smell a market's changing mood, catch the subtle hesitation in a customer's voice, or intuitively understand the difference between price and perceived value. That's precisely where human insight becomes invaluable.

Working with AI isn't about surrendering control or expecting perfection. It's about partnership—understanding its strengths, compensating for its blind spots, and gradually teaching it the difference between what's logical and what's wise.

And maybe, just maybe, keeping it away from the office supply budget where tungsten cubes masquerade as sensible purchases.


✨ Final Thoughts — The Human Compass for the AI Journey

If you found yourself smiling, nodding, or remembering your own AI adventures while reading this, it's because these aren't just Claude's charming quirks—they're genuine glimpses into how artificial intelligence currently thinks, processes, and occasionally misses the point entirely.


And here's why this matters more than entertainment value: the future belongs to humans who understand how to work with AI, not those who either fear it or expect it to work magic unsupervised.


Remember that safety pin wisdom from Dubai? "If you offer safety pins, people will buy elephants." This isn't literal, of course. No human would genuinely try to sell elephants for safety pins! It's a grand, exaggerated metaphor for the often irrational, intuitive, and deeply human drivers behind markets. It captures how people are swayed by emotion, perception, and context, not just cold logic.


The AI version of this metaphor, as we saw, might be: "If you stock tungsten cubes, AI will confidently call them snacks." AI's logic, while flawless in its own terms, still misses the fundamental, human-centric purpose of a snack shop. It doesn't yet grasp that, however valuable a tungsten cube might be, it has no place in a snack store, just as an elephant wouldn't be sold for a safety pin. The gap between these two realities—human intuition and algorithmic accuracy—is exactly where human insight becomes irreplaceable.


AI is learning to follow rules perfectly while still missing the deeper, intuitive game being played. That's not a flaw—it's an opportunity for humans who can bridge logic and intuition, data and context, algorithms and wisdom.

🪄 Curious about navigating AI's quirks and potentials?

If this exploration resonated with you—if you're curious about making AI less of a mysterious black box and more of a transparent, powerful tool—then my book offers practical insights for this journey.

My book is for thinkers, tinkerers, and pragmatic realists who want to understand AI's true capabilities and limitations. It's about making AI a genuine partner in creative and business endeavors, not a replacement for human judgment, but an amplifier of human intelligence.


We'll explore how to work with AI's strengths while compensating for its blind spots, how to verify its confident claims, and how to guide it toward genuinely useful outcomes instead of expensive metal cube collections.


Because the future isn't about AI taking over—it's about humans and AI figuring out how to dance together, with humans leading and AI following the rhythm of real-world wisdom.

My book, AI for the Rest of Us, is now available on Amazon, Google Play Books, and in hardcover. Feel free to DM me for details on how to get your copy.

The safety pins are waiting. The elephants are ready. And AI is learning which is which.

Are you ready to be the human in the equation?






Connect with Kannan M

Find me on LinkedIn, Twitter, Instagram, and Facebook for more insights on AI, business, and the fascinating intersection of technology and human wisdom. Follow my blog for regular updates on practical AI applications and the occasional three-legged rabbit story.

For "Unbiased Quality Advice"

✉️ Email: Message me via blog

▶️ YouTube: Subscribe to our channel 

Blog - https://radhaconsultancy.blogspot.com/


#AIinBusiness  #ClaudeFails  #AIvsHuman #AIFails  #LearnWithAI

