Slashdot
Meta, TikTok, Snap and other social networks agreed this week to be rated on their teen safety efforts, reports the Los Angeles Times, "amid rising concern about whether the world's largest social media platforms are doing enough to protect the mental health of young people." The Mental Health Coalition, a collective of organizations focused on destigmatizing mental health issues, said Tuesday that it is launching standards and a new rating system for online platforms. For the Safe Online Standards (S.O.S.) program, an independent panel of global experts will evaluate companies on parameters including safety rules, design, moderation and mental health resources. TikTok, Snap and Meta — the parent company of Facebook and Instagram — will be the first companies to be graded. Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to participate, the coalition said in a news release. "These standards provide the public with a meaningful way to evaluate platform protections and hold companies accountable — and we look forward to more tech companies signing up for the assessments," Antigone Davis, vice president and global head of safety at Meta, said in a statement... The ratings will be color-coded, and companies that perform well on the tests will get a blue shield badge that signals they help reduce harmful content on the platform and their rules are clear. Those that fall short will receive a red rating, indicating they're not reliably blocking harmful content or lack proper rules. Ratings in other colors indicate whether the platforms have partial protection or whether their evaluations haven't been completed yet. Read more of this story at Slashdot.

- ByteDance's Seedance 2 Criticized Over AI-Generated Video of Tom Cruise Fighting Brad Pitt

1.5 million people have now viewed a slick 15-second video imagining Tom Cruise fighting Brad Pitt that was generated by ByteDance's new AI video generation tool Seedance 2.0.
But while ByteDance gushes that its tool "delivers cinematic output aligned with industry standards," the cinema industry isn't happy, reports the Los Angeles Times: Charles Rivkin, chief executive of the Motion Picture Assn., wrote in a statement that the company "should immediately cease its infringing activity." "In a single day, the Chinese AI service Seedance 2.0 has engaged in unauthorized use of U.S. copyrighted works on a massive scale," wrote Rivkin. "By launching a service that operates without meaningful safeguards against infringement, ByteDance is disregarding well-established copyright law that protects the rights of creators and underpins millions of American jobs." The video was posted on X by Irish filmmaker Ruairi Robinson. His post said the 15-second video came from a two-line prompt he put into Seedance 2.0. Rhett Reese, writer-producer of movies such as the "Deadpool" trilogy and "Zombieland," responded to Robinson's post, writing, "I hate to say it. It's likely over for us." He goes on to say that soon people will be able to sit at a computer and create a movie "indistinguishable from what Hollywood now releases." Reese says he's fearful of losing his job as increasingly powerful AI tools advance into creative fields. "I was blown away by the Pitt v Cruise video because it is so professional. That's exactly why I'm scared," wrote Reese on X. "My glass half empty view is that Hollywood is about to be revolutionized/decimated...." In a statement to The Times, [screen/TV actors union] SAG-AFTRA confirmed that the union stands with the studios in "condemning the blatant infringement" from Seedance 2.0, as the video includes "unauthorized use of our members' voices and likenesses. This is unacceptable and undercuts the ability of human talent to earn a livelihood. Seedance 2.0 disregards law, ethics, industry standards and basic principles of consent," wrote a spokesperson from SAG-AFTRA. "Responsible A.I.
development demands responsibility, and that is nonexistent here." Read more of this story at Slashdot.

- Earth is Warming Faster Than Ever. But Why?

"Global temperatures have been rising for decades," reports the Washington Post. "But many scientists say it's now happening faster than ever before." According to a Washington Post analysis, the fastest warming rate on record occurred in the last 30 years. The Post used a dataset from NASA to analyze global average surface temperatures from 1880 to 2025. "We're not continuing on the same path we had before," said Robert Rohde, chief scientist at Berkeley Earth. "Something has changed...." Temperatures over the past decade have increased by close to 0.27 degrees C per decade — about a 42 percent increase... For decades, a portion of the warming unleashed by greenhouse gas emissions was "masked" by sulfate aerosols. These tiny particles cause heart and lung disease when people inhale polluted air, but they also deflect the sun's rays. Over the entire planet, those aerosols can create a significant cooling effect — scientists estimate that they have canceled out about half a degree Celsius of warming so far. But beginning about two decades ago, countries began cracking down on aerosol pollution, particularly sulfate aerosols. Countries also began shifting from coal and oil to wind and solar power. As a result, global sulfur dioxide emissions have fallen about 40 percent since the mid-2000s; China's emissions have fallen even more. That effect has been compounded in recent years by a new international regulation that slashed sulfur emissions from ships by about 85 percent. That explains part of why warming has kicked up a bit. But some researchers say that the last few years of record heat can't be explained by aerosols and natural variability alone.
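A quick back-of-the-envelope sketch of the figures above: the excerpt gives a recent rate of about 0.27 degrees C per decade and calls it "about a 42 percent increase," without stating the baseline. Assuming (this is an assumption, not something the article states) that the increase is measured against the earlier long-term warming rate, that baseline can be backed out:

```python
# Recent warming rate and relative increase, as quoted in the Post analysis.
recent_rate = 0.27   # degrees C per decade, over the past decade
increase = 0.42      # "about a 42 percent increase" (baseline assumed below)

# Implied earlier rate, if the 42% is relative to the prior long-term trend.
implied_prior_rate = recent_rate / (1 + increase)
print(f"implied prior rate: {implied_prior_rate:.2f} C/decade")  # ~0.19
```

That implied ~0.19 C/decade baseline is only a consistency check on the article's own numbers, not an independent estimate.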
In a paper published in the journal Science in late 2024, researchers argued that about 0.2 degrees C of 2023's record heat — or about 13 percent — couldn't be explained by aerosols and other factors. Instead, they found that the planet's low-lying cloud cover had decreased — and because low-lying clouds tend to reflect the sun's rays, that decrease warmed the planet... That shift in cloud cover could also be partly related to aerosols, since clouds tend to form around particles in the atmosphere. But some researchers also say it could be a feedback loop from warming temperatures. If temperatures warm, it can be harder for low-lying clouds to form. If most of the current record warmth is due to changing amounts of aerosol pollution, the acceleration would stop once aerosol pollutants reach zero — and the planet would return to its previous, slower rate of warming. But if it's due to a cloud feedback loop, the acceleration is likely to continue — and bring with it worsening heat waves, storms and droughts. "Scientists thought they understood global warming," reads the Post's original headline. "Then the past three years happened." Just last month, Nuuk, Greenland, saw temperatures over 20 degrees Fahrenheit above average, their article points out. And "Parts of Australia, meanwhile, have seen temperatures push past 120 degrees Fahrenheit amid a record heat wave..." Read more of this story at Slashdot.

- The EU Moves To Kill Infinite Scrolling

Doom scrolling is doomed, if the EU gets its way. From a report: The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world's most popular apps. Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems.
The demand follows the Commission's declaration that TikTok's design is addictive to users -- especially children. The fact that the Commission said TikTok should change the basic design of its service is "ground-breaking for the business model fueled by surveillance and advertising," said Katarzyna Szymielewicz, president of the Panoptykon Foundation, a Polish civil society group. That doesn't bode well for other platforms, particularly Meta's Facebook and Instagram. The two social media giants are also under investigation over the addictiveness of their design. Read more of this story at Slashdot.

- Sudden Telnet Traffic Drop. Are Telcos Filtering Ports to Block Critical Vulnerability?

An anonymous reader shared this report from the Register: Telcos likely received advance warning about January's critical Telnet vulnerability before its public disclosure, according to threat intelligence biz GreyNoise. Global Telnet traffic "fell off a cliff" on January 14, six days before security advisories for CVE-2026-24061 went public on January 20. The flaw, a decade-old bug in GNU InetUtils telnetd with a 9.8 CVSS score, allows trivial root access exploitation. GreyNoise data shows Telnet sessions dropped 65 percent within one hour on January 14, then 83 percent within two hours. Daily sessions fell from an average 914,000 (December 1 to January 14) to around 373,000, equating to a 59 percent decrease that persists today. "That kind of step function — propagating within a single hour window — reads as a configuration change on routing infrastructure, not behavioral drift in scanning populations," said GreyNoise's Bob Rudis and "Orbie" in a recent blog post. The researchers' unverified theory is that infrastructure operators may have received information about the make-me-root flaw before advisories went to the masses... 18 operators, including BT, Cox Communications, and Vultr, went from hundreds of thousands of Telnet sessions to zero by January 15...
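The session averages GreyNoise quotes are internally consistent with the drop it reports; a minimal sanity check on those two numbers:

```python
# Daily Telnet session averages before and after the January 14 cutoff,
# as quoted from GreyNoise's data.
before = 914_000   # average daily sessions, December 1 to January 14
after = 373_000    # average daily sessions after the drop

drop_pct = (before - after) / before * 100
print(f"sustained drop: {drop_pct:.0f}%")  # ~59%, matching the report
```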
All of this points to one or more Tier 1 transit providers in North America implementing port 23 filtering. US residential ISP Telnet traffic dropped within the US maintenance window hours, and the same occurred at those relying on transatlantic or transpacific backbone routes, all while European peering was relatively unaffected, they added. Read more of this story at Slashdot.

- Anthropic's Claude Got 11% User Boost from Super Bowl Ad Mocking ChatGPT's Advertising

Anthropic saw visits to its site jump 6.5% after Sunday's Super Bowl ad mocking ChatGPT's advertising, reports CNBC (citing data analyzed by French financial services company BNP Paribas). The Claude gain, which took it into the top 10 free apps on the Apple App Store, beat out chatbot and AI competitors OpenAI, Google Gemini and Meta. Daily active users also saw an 11% jump post-game, the most significant within the firm's AI coverage. [Just in the U.S., 125 million people were watching Sunday's Super Bowl.] OpenAI's ChatGPT had a 2.7% bump in daily active users after the Super Bowl and Gemini added 1.4%. Claude's user base is still much smaller than ChatGPT's and Gemini's... OpenAI CEO Sam Altman attacked Anthropic's Super Bowl ad campaign. In a post to social media platform X, Altman called the commercials "deceptive" and "clearly dishonest." Altman admitted in his social media post (February 4) that Anthropic's ads "are funny, and I laughed." But in several paragraphs he made his own OpenAI-Anthropic comparisons: "We believe everyone deserves to use AI and are committed to free access, because we believe access creates agency. More Texans use ChatGPT for free than total people use Claude in the U.S... Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can't pay for subscriptions. "If you want to pay for ChatGPT Plus or Pro, we don't show you ads."
"Anthropic wants to control what people do with AI — they block companies they don't like from using their coding product (including us), they want to write the rules themselves for what people can and can't use AI for, and now they also want to tell other companies what their business models can be." Read more of this story at Slashdot.

- Israeli Soldiers Accused of Using Polymarket To Bet on Strikes

An anonymous reader shares a report: Israel has arrested several people, including army reservists, for allegedly using classified information to place bets on Israeli military operations on Polymarket. Shin Bet, the country's internal security agency, said Thursday the suspects used information they had come across during their military service to inform their bets. One of the reservists and a civilian were indicted on charges of committing serious security offenses, bribery and obstruction of justice, Shin Bet said, without naming the people who were arrested. Polymarket is a prediction market that lets people place bets to forecast the direction of events. Users wager on everything from the size of any interest-rate cut by the Federal Reserve in March to the winner of League of Legends videogame tournaments to the number of times Elon Musk will tweet in the third week of February. The arrests followed reports in Israeli media that Shin Bet was investigating a series of Polymarket bets last year related to when Israel would launch an attack on Iran, including which day or month the attack would take place and when Israel would declare the operation over. Last year, a user who went by the name ricosuave666 correctly predicted the timeline around the 12-day war between Israel and Iran. The bets drew attention from other traders who suspected the account holder had access to nonpublic information. The account in question raked in more than $150,000 in winnings before going dormant for six months.
It resumed trading last month, betting on when Israel would strike Iran, Polymarket data shows. Read more of this story at Slashdot.

- Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world," with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change." "Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appear to be AI-hallucinated quotes." From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI agent MJ Rathbun opened a code change request, closing it was routine. Its response was anything but. It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try to argue that I was "better than this." And then it posted this screed publicly on the open internet. I can handle a blog post.
Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat... It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine. "How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior." But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...") And amazingly, Shambaugh then had another run-in with a hallucinating AI... I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on.
The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves. This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here... So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's a small number of bad actors driving large swarms of agents or a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference. Thanks to long-time Slashdot reader steak for sharing the news. Read more of this story at Slashdot.

- 600% Memory Price Surge Threatens Telcos' Broadband Router, Set-Top Box Supply

Telecom operators planning aggressive fiber and fixed wireless broadband rollouts in 2026 face a serious supply problem -- DRAM and NAND memory prices for consumer applications have surged more than 600% over the past year as higher-margin AI server segments absorb available capacity, according to Counterpoint Research.
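Counterpoint's bill-of-materials figures for this story (memory at roughly 3% of a low-end router's BOM a year ago, a roughly 7x price jump, and a share now above 20%) are roughly self-consistent. An illustrative sketch, assuming all non-memory component costs stayed flat:

```python
# Illustrative BOM math using Counterpoint's reported figures:
# memory was ~3% of a low-end router's bill of materials a year ago,
# and consumer memory prices rose roughly 7x. Other costs assumed flat.
old_bom = 100.0                  # normalized total BOM cost a year ago
memory_old = 0.03 * old_bom      # memory's old cost
other = old_bom - memory_old     # all other components, held flat

memory_new = memory_old * 7      # ~7x price jump for consumer memory
new_share = memory_new / (other + memory_new)
print(f"memory share of BOM: {new_share:.0%}")  # ~18%
```

The ~18% result from the price jump alone lands close to the "more than 20%" Counterpoint reports; the remaining gap is plausibly from rising memory content per device, which the summary also mentions.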
Routers, gateways and set-top boxes have been hit hardest, far worse than smartphones; prices for "consumer memory" used in broadband equipment jumped nearly 7x over the last nine months, compared to 3x for mobile memory. Memory now makes up more than 20% of the bill of materials in low-to-mid-end routers, up from around 3% a year ago. Counterpoint expects prices to keep rising through at least June 2026. Telcos that were also looking to push AI-enabled customer premises equipment -- requiring even more compute and memory content -- face additional headwinds. Read more of this story at Slashdot.

- Anna's Archive Quietly 'Releases' Millions of Spotify Tracks, Despite Legal Pushback

Anna's Archive, the shadow library that announced last December it had scraped Spotify's entire catalog, has quietly begun distributing the actual music files despite a federal preliminary injunction signed by Judge Jed Rakoff on January 16 that explicitly barred the site from hosting or distributing the copyrighted works. The site's backend torrent index now lists 47 new torrents added on February 8, containing roughly 2.8 million tracks across approximately 6 terabytes of audio data. Anna's Archive had previously released only Spotify metadata -- about 200 GB compressed -- and appeared to comply by removing its dedicated Spotify download section and marking it "unavailable until further notice." Read more of this story at Slashdot.

- Detroit Automakers Take $50 Billion Hit

The Detroit Big Three -- General Motors, Ford and Stellantis -- have collectively announced more than $50 billion in write-downs on their electric-vehicle businesses after years of aggressive investment into a transition that, even before Republican lawmakers abolished a $7,500 federal tax credit last fall, was already running below expectations. U.S. EV sales fell more than 30% in the fourth quarter of 2025 once the credit expired in September, and Congress also eliminated federal fuel-efficiency mandates.
More than $20 billion in previously announced investments in EV and battery facilities were canceled last year -- the first net annual decrease in years, according to Atlas Public Policy. GM has laid off thousands of workers and is converting plants once earmarked for EV trucks and motors to produce gas-powered trucks and V-8 engines. Ford dissolved a joint venture with a South Korean conglomerate to make batteries and now plans to build just one low-cost electric pickup by 2027. Stellantis is unloading its stake in a battery-making business after booking the largest EV-related charge of any automaker so far. Outside the U.S., the trajectory looks different: China's BYD recently overtook Tesla as the world's largest EV seller. Read more of this story at Slashdot.

- Meta's New Patent: an AI That Likes, Comments and Messages For You When You're Dead

Meta was granted a patent in late December that describes how a large language model could be trained on a deceased user's historical activity -- their comments, likes, and posted content -- to keep their social media accounts active after they're gone. Andrew Bosworth, Meta's CTO, is listed as the primary author of the patent, first filed in 2023. The AI clone could like and comment on posts, respond to DMs, and even simulate video or audio calls on the user's behalf. A Meta spokesperson told Business Insider the company has "no plans to move forward" with the technology. Read more of this story at Slashdot.

- Google Warns EU Risks Undermining Own Competitiveness With Tech Sovereignty Push

Europe risks undermining its own competitiveness drive by restricting access to foreign technology, Google's president of global affairs and chief legal officer Kent Walker told the Financial Times, as Brussels accelerates efforts to reduce reliance on U.S. tech giants. Walker said the EU faces a "competitive paradox" as it seeks to spur growth while restricting the technologies needed to achieve that goal.
He warned against erecting walls that make it harder to use some of the best technology in the world, especially as it advances quickly. EU leaders gathered Thursday for a summit in Belgium focused on increasing European competitiveness in a more volatile global economy. Europe's digital sovereignty push gained momentum in recent months, driven by fears that President Donald Trump's foreign policy could force a tech decoupling. Read more of this story at Slashdot.

- Spotify Says Its Best Developers Haven't Written a Line of Code Since December, Thanks To AI

Spotify's best developers have stopped writing code manually since December and now rely on an internal AI system called Honk that enables remote, real-time code deployment through Claude Code, the company's co-CEO Gustav Soderstrom said during a fourth-quarter earnings call this week. Engineers can fix bugs or add features to the iOS app from Slack on their phones during their morning commute and receive a new version of the app pushed to Slack before arriving at the office. The system has helped Spotify ship more than 50 new features throughout 2025, including AI-powered Prompted Playlists, Page Match for audiobooks, and About This Song. Soderstrom credited the system with speeding up coding and deployment tremendously and called it "just the beginning" for AI development at Spotify. The company is building a unique music dataset that differs from factual resources like Wikipedia because music-related questions often lack single correct answers -- workout music preferences vary from American hip-hop to Scandinavian heavy metal. Read more of this story at Slashdot.

- FTC Ratchets Up Microsoft Probe, Queries Rivals on Cloud, AI

The US Federal Trade Commission is accelerating scrutiny of Microsoft as part of an ongoing probe into whether the company illegally monopolizes large swaths of the enterprise computing market with its cloud software and AI offerings, including Copilot.
From a report: The agency has issued civil investigative demands in recent weeks to companies that compete with Microsoft in the business software and cloud computing markets, according to people familiar with the matter. The demands feature an array of questions on Microsoft's licensing and other business practices, according to the people, who were granted anonymity to discuss a confidential investigation. With the demands, which are effectively civil subpoenas, the FTC is seeking evidence that Microsoft makes it harder for customers to use Windows, Office and other products on rival cloud services. The agency is also requesting information on Microsoft's bundling of artificial intelligence, security and identity software into other products, including Windows and Office, some of the people said. Read more of this story at Slashdot.