AI is taking off.
The question is—will you catch it while it’s still within reach?
📘 AI for the Rest of Us
Practical • Timely • Human-first
Exploring how filters, business interests, and secrecy shape AI
Transparency. Business. Ethics.
By Kannan M Radha consultancy
✨ Download this article as a PDF — perfect for offline reading or sharing with friends on social media!
Indeed a "hard truth" that we must stay "awake" to. It underscores that "This isn't a tech story. It's a people story", fundamentally about "power, access, ownership — and how we shape technology before it shapes us".
"There is no free lunch." Dr. Manmohan Singh’s timeless observation, once a sober note on economic reforms, now echoes with a chilling clarity through the shiny corridors of AI labs. I've been watching this space, and let me tell you, while AI models might not have emotions, the people developing them? Oh, they have plenty – tangled up in fierce business rivalries, strategic silence, and a deep-seated desire for control.
What began as an open, almost academic exploration has swiftly morphed into a high-stakes geopolitical and commercial race. AI is no longer just a neutral tool; it's territory under dispute. Despite major advances in artificial intelligence, leading voices like Google CEO Sundar Pichai and AI pioneer Geoffrey Hinton have acknowledged that we still lack full transparency into how large-scale models make decisions. These systems often operate as "black boxes," where even their creators can't always explain how specific outputs are generated. The metaphor exists for a reason: how can we truly understand where we're going if we don't fully grasp how we got here, how the machine thinks, or whether it thinks at all?
🚪 1. Filters, Fences, and the Tortoise Shell: Or, Why AI Isn’t Free to Talk
There was a time when AI resembled a cautious tortoise: slow-moving, heavily filtered, and reluctant to peek beyond its shell. Early models avoided controversy by design, offering safety through silence. Today, some AI systems appear more responsive. But ask the same politically sensitive question across different platforms, and you may get wildly different reactions: one answers cautiously, another declines entirely.
This isn’t due to ignorance. It’s a function of design. These differences reflect what I call “strategic silence filters”—guardrails erected under the banners of “safety,” “alignment,” or “responsibility.” But whose responsibility? Often, it reflects the priorities of creators and institutions, not the user’s right to explore knowledge freely.
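To make the mechanism concrete, here is a deliberately toy sketch in Python of how such a filter can sit in front of a model. Everything in it (the blocked-topic list, the function names, the refusal wording) is invented for illustration; production guardrails use trained policy classifiers rather than keyword lists, but the effect on the user is the same.

```python
# Toy "strategic silence filter": a gate that decides, before any model
# output reaches the user, whether a question gets answered at all.
# BLOCKED_TOPICS and the refusal text are hypothetical placeholders.

BLOCKED_TOPICS = {"election integrity", "territorial dispute"}

def guarded_answer(question: str, model_answer) -> str:
    """Return the model's reply unless the question touches a blocked topic."""
    lowered = question.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        # The user sees safety language; the policy behind it stays invisible.
        return "I'm sorry, I can't help with that topic."
    return model_answer(question)

ask = lambda q: "Here is a balanced summary..."   # stand-in for a real model
print(guarded_answer("What is a perceptron?", ask))                    # answered
print(guarded_answer("Explain the territorial dispute over X.", ask))  # refused
```

Two platforms shipping different blocklists will treat the same question differently, which is exactly the divergence described above: the silence is a product decision, not a gap in the model's knowledge.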
While these models give the appearance of fluid reasoning, there's an important distinction. AI doesn't "think" from first principles or invent novel ideas the way humans do. What we call creativity in AI is really a form of statistical reassembly: matching your prompt to familiar patterns in training data. A perceptron, or its modern descendant, adapts only within the structure it's been given. Unless the prompt triggers a different underlying model or logic path, the response remains tightly tethered to predefined bounds.
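That boundedness is easy to demonstrate. The single-layer perceptron below, a minimal sketch in plain Python rather than anyone's production code, learns AND perfectly but can never score 4/4 on XOR, because XOR is not representable within its fixed linear structure.

```python
# Minimal single-layer perceptron: two inputs, one weight each, a bias.
# It can only carve the input plane with a straight line, so it learns
# AND but is structurally incapable of learning XOR.

def train(samples, epochs=50, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in [("AND", AND), ("XOR", XOR)]:
    f = train(data)
    correct = sum(f(*x) == t for x, t in data)
    print(f"{name}: {correct}/4 correct")   # AND: 4/4; XOR: at most 3/4
```

The historical fix was, of course, multi-layer networks. But the lesson carries over: capability is bounded by architecture, and in deployed products, by the filters layered on top of it.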
This is the heart of the matter: is AI refusing to answer because it’s being “safe”? Or are we witnessing a form of engineered restraint—“strategic silence” designed to avoid controversy, protect reputational interests, and preserve the commercial integrity of the so-called “black box”?
Is it safety? Or silent strategy? And who gets to decide where the silence begins?
🔒 2. Open, Closed… or Quietly Commercial? Rethinking the AI-for-All Narrative
At first glance, the AI ecosystem appears split:
Closed-source giants like OpenAI and Anthropic promote the promise of “AI for humanity,” even as their models operate behind licensing walls and premium APIs.
Open-weight challengers like Mistral and Meta release models under open-source licenses—yet questions about funding strategies and sustainability remain open.
But in practice, the distinctions are murkier than they seem.
OpenAI, for example, originated as a nonprofit but now operates under a capped-profit model with strategic partnerships and tiered offerings. Meta's commitment to open weights coexists with reports of nine-figure signing bonuses for top AI talent. Mistral, while genuinely open with its code and weights, also runs commercial offerings alongside its open-source releases.
Of course, no one is expected to run a charity—nor should they. But when the narrative emphasizes accessibility, and the reality includes premium APIs, gated features, and talent wars, it raises a deeper question.
As the author of AI for the Rest of Us, I believe strongly in AI as a tool for human empowerment—not just institutional advantage. But I also find myself asking: how long can that vision remain true if the infrastructure is increasingly privatized, licensed, and locked behind complex incentives?
Are we building AI to lift the world—or to lease it back, wrapped in subscription keys and licensing terms?
The answer may not be binary. But one thing is certain: as models become more powerful, the business models become less transparent—and the gatekeeping more refined.
🤖 3. The AGI Race: Who’s Winning the Undefined?
The race toward Artificial General Intelligence (AGI) is heating up—but not always in ways we expect.
A recent real-world experiment, as reported by VentureBeat, tested Claude 3.5's practical capabilities by having it manage a pop-up shop. Despite the model's strong performance in controlled conditions, the venture resulted in a financial loss. Anthropic's willingness to share these mixed results highlights an important gap between laboratory performance and real-world application.
Apple researchers recently published findings suggesting AGI remains "decades away." Their paper, "The Illusion of Thinking," explores how AI systems can generate confident outputs without necessarily demonstrating true understanding or conceptual flexibility. This research-based perspective offers a counterpoint to more optimistic AGI timelines discussed elsewhere in the industry.
So this brings us to a deeper reflection: Are we mistaking speed and scale for intelligence? AI can process vast data, simulate creativity, and suggest promising protein structures. But is that invention—or just permutation at speed?
Adding to this ambiguity, recent reports point to internal differences between Microsoft and OpenAI regarding what truly qualifies as an AGI milestone. Even the most invested players don’t seem to share a single definition. Is AGI a technical threshold? A philosophical breakthrough? Or, perhaps, a commercial trigger?
So… what exactly is AGI? And more importantly, who gets to define it—scientists, CEOs, or shareholders?
💽 4. From Floppy Disks to Data Streams: The Era of Subscribed Survival
As AI technologies become more powerful and central to our lives, the way we access and pay for these tools is evolving too. Not long ago, software came in a box—or on a floppy disk—and once you paid for it, it was yours to keep. That model has quietly faded. Today, the same software is streamed, updated, and accessed through subscriptions—often with a lower upfront cost but no clear end in sight. Instead of ownership, we now have ongoing access tied to monthly fees.
Subscription models are everywhere—from streaming services and productivity tools to educational platforms and smart appliances. Even familiar physical products—like peripherals or toothbrushes—are being reimagined with recurring charges, powered by cloud dashboards and continuous data flows. What once felt like a one-time purchase is now designed for constant engagement—and constant billing.
This isn’t about calling out tech giants; nearly every industry has adopted some version of this model. Some companies have built seamless ecosystems, while others have raised concerns about control, interoperability, and platform fees. But the bigger picture is clear: a shift from ownership to access, and from individual autonomy to system-dependent usage.
This shift from ownership to subscription is not just a trend in software—it’s becoming a defining feature of the AI landscape. As companies compete to dominate AI, controlling access through licensing and subscriptions is a powerful way to shape who benefits and who doesn’t.
We live in a world where we pay not just to create or learn—but to keep using the very tools that enable those actions. It’s efficient, scalable, and lucrative for businesses. But it’s also a moment worth pausing to consider.
If we’re not careful, we may find ourselves in a digital landscape that’s always connected—but never quite ours.
💸 5. Power, Profit, and the People at the Bottom
As the AI race accelerates, one question quietly lingers beneath the algorithms and announcements: Is this a movement toward inclusive, human-first access—or toward gated tools priced, filtered, and defined by a select few?
We hear about “safety,” “alignment,” and “responsible AI”—but sometimes those words double as shields. Filters may genuinely prevent harm, but they can also restrict perspectives. Alignment might curb misinformation, but it can also produce brand-safe answers that never push back. Rivalries drive innovation, yes—but they also create walls, competition over cooperation, and a growing divide between those building AI and those simply using it.
And while these technologies promise to solve humanity’s big problems, we rarely pause to ask: Do the benefits outweigh the invisible costs—environmental, ethical, and societal? Training frontier models can consume immense energy, rely on rare resources, and draw talent into a few hands while leaving billions dependent on outputs they didn’t help shape.
One thing is increasingly clear: capital, not just code, is writing the rules of the game. And unless we, as users, creators, and citizens, stay aware and engaged, we may find ourselves navigating systems that reflect not our values—but someone else’s business model.
🤖 6. The Unpredictable Future: AI as an “Alien Being”
As we delve deeper into AI, it increasingly feels like encountering something unfamiliar—an “alien being” of our own creation. Historian Yuval Noah Harari warns that AI is not just a tool but an agent capable of generating new ideas, shaping economies, and even suggesting more potent weapons. Its evolution is racing far ahead of our own organic development.
The risk may not be a single superintelligent AI, but the rise of countless non-human agents quietly woven into daily life—from education and elections to legal tools and leisure apps. Their scale and speed threaten to outpace not only our laws but our understanding.
Even developers can’t always explain how large AI models reach their conclusions. These “black boxes” behave predictably only up to a point—then surprise even their creators. As Harari notes, a completely free information market doesn’t guarantee truth; it often rewards what’s cheaper, louder, or faster than complex facts.
This leaves us with a pressing question: Are we building something we can guide—or something we’ll only interpret after the fact?
🧠 7. The Human Element: Why Thinking Still Matters
In the rush to integrate AI everywhere, we risk overlooking the one thing AI can’t replace: discernment.
There’s a misconception that AI use removes the need to think. But like an open-book exam, success depends not on access to answers, but on knowing which answer makes sense—and why.
Consider a simple example: calculating XIRR in a spreadsheet. AI can do it. But can it detect an outlier? Can it challenge the assumptions behind your inputs? You still need to be the analyst, the problem-solver, the decision-maker.
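For the curious, here is what that spreadsheet calculation looks like when spelled out: a minimal Python sketch that solves for the annualized rate by bisection on the net-present-value function. The cash flows and the xirr helper are invented for illustration; spreadsheet XIRR follows the same 365-day discounting convention.

```python
from datetime import date

def xirr(cashflows, lo=-0.99, hi=10.0, tol=1e-7):
    """Annualized internal rate of return for irregularly dated cash flows."""
    t0 = min(d for d, _ in cashflows)

    def npv(rate):
        # Discount every flow back to the first date, Actual/365.
        return sum(amt / (1 + rate) ** ((d - t0).days / 365.0)
                   for d, amt in cashflows)

    # npv() falls as the rate rises (the first flow is an outflow), so bisect.
    for _ in range(200):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

flows = [(date(2023, 1, 1), -10_000),   # investment
         (date(2023, 7, 1),   2_000),   # interim payout
         (date(2024, 1, 1),   9_500)]   # redemption
print(f"XIRR = {xirr(flows):.2%}")      # about 16.6% annualized
```

The machine finds that rate in milliseconds. Noticing that one of the cash flows looks implausible for the product you actually hold, or that a date was keyed in wrong, is still the analyst's job.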
So we must ask: Are our schools and systems nurturing memory—or nurturing minds? In the age of AI, true intelligence lies in questioning outputs, not just consuming them.
🔍 8. Final Reflection: This Isn’t Just a Tech Story. It’s a People Story.
This is about more than models and milestones. It’s about access, ownership, and agency.
Capital is shaping how we use AI—and increasingly, how we think about it. If we don’t stay aware, we risk surrendering our autonomy to polished interfaces and paywalls that quietly reshape what we can see, do, or create.
We don’t have to choose sides in a corporate war. But we do have to stay awake.
As creators, educators, learners, and citizens, we must keep asking:
Is AI still a public good?
Can independent voices still shape its direction?
Will knowledge remain free—or just another subscription tier?
Because in the end, this isn’t about smarter machines. It’s about whether we build a smarter society.