The idea of a humanoid robot is that you can drop the technology into existing environments. Factories, warehouses, and industrial clusters have been designed around human bodies for decades; retrofitting them for fixed automation is costly and slow, and any change in product mix or volume often requires expensive redesigns. A humanoid, by contrast, needs no new physical infrastructure to be put to work.

TINA ("there is no alternative") peddling: be highly skeptical of any AI tool rushed out through the nexus of monopolies spanning robotic media, power politics, techno-fascism, and high-net-worth predators. Do not be consumed by hype tools aggressively released to the market and forcefully installed or upgraded, often riddled with privacy and security vulnerabilities. Resist the urge to throw yourself at API tools or platforms promoted to address some hallucinated market need; they usually had no forethought about security or privacy and simply push an unreasonable burden of risk and fear onto end users' gadgets.
Addictive app features violate human rights. Features like infinite scrolling, auto-play videos, push notifications, and hyper-personalized recommendations make these AI apps highly addictive. The EU is effectively banning children under 15 from such platforms after finding that some social media giants did not sufficiently assess the impact of these addictive features on users' physical and mental health, particularly among minors and vulnerable adults.
The people (including governments) responsible for developing AI into its current form like to pretend that its answers are created out of whole cloth by an emergent intelligence. In truth, the only real innovation of AI is its ability to rapidly search an enormous pool of data and collate the results into something readable. ChatGPT and its brethren are only as good as their datasets and data centers.
No company could generate enough good writing on its own, so LLMs learn by crawling text made public on the internet. Apps, websites, blogs, forums, news articles, and other publicly available content are all fair game. The design of AI/LLM chatbots is mostly built on data theft, data hoarding, plagiarism, and deception, though some vendors occasionally pay for licensed content and do a little filtering for biased, fake, hyped, or harmful misinformation.
The Big Tech reasoning is two-pronged. First, if you put information online, you can't complain about what someone else does with it. Second, it's not plagiarism because the bots are just learning and processing information the same way a human does. Both arguments are nonsense. You are absolutely allowed to complain about what someone does with the words on your website, for instance through DMCA takedown mechanisms. As for the other argument, there is a huge difference between a human getting inspired by a blog post and a for-profit corporation using that blog post to improve its own product without compensating or crediting the original author.
The sad fact is that LLM creators are only getting away with this because the tech is too new to be regulated; if anything, policymakers are incentivizing them. Happily, there is also selective momentum in some governments to clamp down on data theft. Until new online privacy laws pass, though, everyone with content online is responsible for defending themselves. AI is currently in its Wild West development period. With no laws or standards of behavior, AI purveyors like Amazon, Apple, OpenAI, Meta, telecoms, ISPs, Google, Microsoft, the NSA, and others can shroud their actions in secrecy and face no consequences.
Until the law catches up, website owners and authors have limited ways to fight back. Start by updating your robots.txt (a minimal sketch follows below). The most important thing you can do is lobby your government officials to pass laws regulating AI. A great start would be a bill forcing all LLM operators to openly name their data-collecting agents, including the crawlers trawling universities, and to respect robots.txt requests to block them. Universal opt-out mechanisms for generative AI would make everyone's online material safer.
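As an illustration only, here is a minimal robots.txt sketch that asks a few known AI crawlers to stay away. The user-agent tokens shown (GPTBot for OpenAI, CCBot for Common Crawl, Google-Extended for Google's AI training) are real at the time of writing, but the list is far from complete, it changes often, and compliance is entirely voluntary on the crawler's side:

    # Ask known AI training crawlers not to fetch any page on this site.
    # These tokens are illustrative; check each vendor's documentation for
    # the current names, and remember that honouring this file is optional.
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

Because robots.txt is purely advisory, it only deters crawlers that choose to identify themselves and play by the rules; it is no substitute for the legal opt-out mechanisms discussed above.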
Bizarre spending on AI-related investments has become the new normal, to the point that it is now background noise. Even so, there is the occasional sonic boom loud enough to rattle the hallucinated market. Amazon announced that it would be spending $200 billion in 2026, or $50 billion more than predicted. Investors didn't like that, and the company's shares took a steep 9% nosedive, taking some of its friends along for the ride with a combined sell-off approaching $1 trillion.
Big Tech players are set to spend $660 billion on AI investments. Investors who were once very bullish on the AI race, not wanting to be left out, are reportedly starting to get cold feet. So business model hallucination is a big question for all of these AI companies, agents, and events. They claim to be making no profit, even while charging people thousands of dollars a month; economies of scale and the passage of time may bring costs down later. Ref: http://techrights.org