
The idea of a humanoid robot is that you can drop the technology into existing environments. Factories, warehouses, and industrial clusters have been designed around human bodies for decades. Retrofitting them for fixed automation is costly and slow, and any change in product mix or volume often requires expensive redesigns. With a humanoid, no new physical infrastructure is needed to use the tech.

Be highly skeptical of any AI tool rush. Do not be swept up by hyped tools that are aggressively released to market and installed or upgraded frequently, often riddled with security vulnerabilities. Resist the urge to throw yourself at tools or platforms that rushed to address a market need; they usually gave no forethought to security, or simply push an unreasonable amount of risk onto the end user.

Addictive app features violate human rights. Features such as infinite scrolling, auto-play videos, push notifications, and hyper-personalized recommendations make these AI-driven apps highly addictive. The EU is effectively banning children under 15 from such social media platforms after finding that some social media giants did not sufficiently assess the impact of these addictive features on users' physical and mental health, particularly among minors and vulnerable adults.

The people (including governments) responsible for developing AI into its current form like to pretend that its answers are created out of whole cloth by emergent intelligence. In truth, the only real innovation of AI is its ability to rapidly search an enormous pool of data and collate the results into something readable. ChatGPT and its brethren are only as good as their datasets and data centers.

No company could generate enough good writing on its own, so LLMs learn by crawling text made public on the internet. Apps, websites, blogs, forums, news articles, and other publicly available content are all fair game. Their makers occasionally pay for licensed content and do a little filtering for biased or harmful misinformation.

The techie reasoning is two-pronged. First, if you put information online, you can't complain about what someone else does with it. Second, it's not plagiarism because the bots are just learning and processing information the same way a human does. Both arguments are nonsense. You are absolutely allowed to complain about what someone does with the words on your website, viz. the DMCA. As for the other argument, there's a huge difference between a human getting inspired by a blog post and a for-profit corporation using that blog post to improve its own product without compensating or crediting the original author.

The sad fact is that LLM creators are only getting away with this because their technology is too new to be regulated yet. Happily, there does seem to be momentum in some governments to clamp down on AI data theft. Until new online privacy laws pass, though, everyone with content online is responsible for defending themselves. AI is currently in its Wild West period. With no laws or standards of behavior, AI purveyors like OpenAI, Meta, Google and Microsoft can shroud all their actions in secrecy and face no consequences.

Until the law catches up, website owners and authors have limited ways to fight back. Start by updating your robots.txt. The most important thing you can do is lobby your government officials to pass laws regulating AI. A great start would be a bill forcing all LLMs to openly name their data-collecting agents and to respect robots.txt requests to exclude them. A universal opt-out mechanism would make everyone's online material safer.
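As a sketch of that first step, here is what a robots.txt opt-out can look like. The user-agent tokens below are ones that several AI crawlers have publicly documented (OpenAI's GPTBot, Common Crawl's CCBot, Google's Google-Extended, Anthropic's ClaudeBot); the list is illustrative, not exhaustive, and compliance is voluntary, so this only deters crawlers that choose to honor the protocol.

```
# robots.txt: ask known AI data-collection crawlers to stay out
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Everyone else (e.g. ordinary search crawlers) remains allowed
User-agent: *
Allow: /
```

Place the file at the root of your site (e.g. https://example.com/robots.txt); crawlers that follow the Robots Exclusion Protocol check that path before fetching pages.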

Big spending on AI-related investments has become the new normal, to the point that it's now background noise. Even so, occasionally there's a sonic boom. Amazon announced that it would spend $200 billion in 2026, $50 billion more than predicted. Investors didn't like that, and the company's shares took a steep 9% nosedive, dragging other tech stocks along for the ride in a combined sell-off approaching $1 trillion.

Big Tech players are set to spend $660 billion on AI investments. Investors who were once very bullish on the AI race, not wanting to be left out, are reportedly starting to get cold feet. So the business model is a big question for all of these AI companies and events. They claim they are not making any profits, even when they charge people thousands a month. Later, economies of scale and the passage of time may bring costs down. Ref: http://techrights.org
