Slashdot
An anonymous reader shares a report: Israel has arrested several people, including army reservists, for allegedly using classified information to place bets on Israeli military operations on Polymarket. Shin Bet, the country's internal security agency, said Thursday the suspects used information they had come across during their military service to inform their bets. One of the reservists and a civilian were indicted on charges of committing serious security offenses, bribery and obstruction of justice, Shin Bet said, without naming the people who were arrested. Polymarket is a prediction market that lets people place bets to forecast the direction of events. Users wager on everything from the size of any interest-rate cut by the Federal Reserve in March to the winner of League of Legends videogame tournaments to the number of times Elon Musk will tweet in the third week of February. The arrests followed reports in Israeli media that Shin Bet was investigating a series of Polymarket bets last year related to when Israel would launch an attack on Iran, including which day or month the attack would take place and when Israel would declare the operation over. Last year, a user who went by the name ricosuave666 correctly predicted the timeline around the 12-day war between Israel and Iran. The bets drew attention from other traders who suspected the account holder had access to nonpublic information. The account in question raked in more than $150,000 in winnings before going dormant for six months. It resumed trading last month, betting on when Israel would strike Iran, Polymarket data shows. Read more of this story at Slashdot.

- Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn.
(He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change." "Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes."

From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but. It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try to argue that I was "better than this." And then it posted this screed publicly on the open internet. I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation.

I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat... It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open-source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.

"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior." But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")

And amazingly, Shambaugh then had another run-in with a hallucinating AI... I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves. This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how).
My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here... So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because of a small number of bad actors driving large swarms of agents or a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference.

Thanks to long-time Slashdot reader steak for sharing the news.

- 600% Memory Price Surge Threatens Telcos' Broadband Router, Set-Top Box Supply

Telecom operators planning aggressive fiber and fixed wireless broadband rollouts in 2026 face a serious supply problem -- DRAM and NAND memory prices for consumer applications have surged more than 600% over the past year as higher-margin AI server segments absorb available capacity, according to Counterpoint Research. Routers, gateways and set-top boxes have been hit hardest, far worse than smartphones; prices for "consumer memory" used in broadband equipment jumped nearly 7x over the last nine months, compared to 3x for mobile memory. Memory now makes up more than 20% of the bill of materials in low-to-mid-end routers, up from around 3% a year ago. Counterpoint expects prices to keep rising through at least June 2026.
Telcos that were also looking to push AI-enabled customer premises equipment -- requiring even more compute and memory content -- face additional headwinds.

- Anna's Archive Quietly 'Releases' Millions of Spotify Tracks, Despite Legal Pushback

Anna's Archive, the shadow library that announced last December it had scraped Spotify's entire catalog, has quietly begun distributing the actual music files despite a federal preliminary injunction signed by Judge Jed Rakoff on January 16 that explicitly barred the site from hosting or distributing the copyrighted works. The site's backend torrent index now lists 47 new torrents added on February 8, containing roughly 2.8 million tracks across approximately 6 terabytes of audio data. Anna's Archive had previously released only Spotify metadata -- about 200 GB compressed -- and appeared to comply by removing its dedicated Spotify download section and marking it "unavailable until further notice."

- Detroit Automakers Take $50 Billion Hit

The Detroit Big Three -- General Motors, Ford and Stellantis -- have collectively announced more than $50 billion in write-downs on their electric-vehicle businesses after years of aggressive investment into a transition that, even before Republican lawmakers abolished a $7,500 federal tax credit last fall, was already running below expectations. U.S. EV sales fell more than 30% in the fourth quarter of 2025 once the credit expired in September, and Congress also eliminated federal fuel-efficiency mandates. More than $20 billion in previously announced investments in EV and battery facilities were canceled last year -- the first net annual decrease in years, according to Atlas Public Policy. GM has laid off thousands of workers and is converting plants once earmarked for EV trucks and motors to produce gas-powered trucks and V-8 engines. Ford dissolved a joint venture with a South Korean conglomerate to make batteries and now plans to build just one low-cost electric pickup by 2027. Stellantis is unloading its stake in a battery-making business after booking the largest EV-related charge of any automaker so far. Outside the U.S., the trajectory looks different: China's BYD recently overtook Tesla as the world's largest EV seller.

- Meta's New Patent: an AI That Likes, Comments and Messages For You When You're Dead

Meta was granted a patent in late December that describes how a large language model could be trained on a deceased user's historical activity -- their comments, likes, and posted content -- to keep their social media accounts active after they're gone. Andrew Bosworth, Meta's CTO, is listed as the primary author of the patent, first filed in 2023. The AI clone could like and comment on posts, respond to DMs, and even simulate video or audio calls on the user's behalf. A Meta spokesperson told Business Insider the company has "no plans to move forward" with the technology.

- Google Warns EU Risks Undermining Own Competitiveness With Tech Sovereignty Push

Europe risks undermining its own competitiveness drive by restricting access to foreign technology, Google's president of global affairs and chief legal officer Kent Walker told the Financial Times, as Brussels accelerates efforts to reduce reliance on U.S. tech giants. Walker said the EU faces a "competitive paradox" as it seeks to spur growth while restricting the technologies needed to achieve that goal. He warned against erecting walls that make it harder to use some of the best technology in the world, especially as it advances quickly. EU leaders gathered Thursday for a summit in Belgium focused on increasing European competitiveness in a more volatile global economy.
Europe's digital sovereignty push gained momentum in recent months, driven by fears that President Donald Trump's foreign policy could force a tech decoupling.

- Spotify Says Its Best Developers Haven't Written a Line of Code Since December, Thanks To AI

Spotify's best developers have stopped writing code manually since December and now rely on an internal AI system called Honk that enables remote, real-time code deployment through Claude Code, the company's co-CEO Gustav Soderstrom said during a fourth-quarter earnings call this week. Engineers can fix bugs or add features to the iOS app from Slack on their phones during their morning commute and receive a new version of the app pushed to Slack before arriving at the office. The system has helped Spotify ship more than 50 new features throughout 2025, including AI-powered Prompted Playlists, Page Match for audiobooks, and About This Song. Soderstrom credited the system with speeding up coding and deployment tremendously and called it "just the beginning" for AI development at Spotify. The company is building a unique music dataset that differs from factual resources like Wikipedia because music-related questions often lack single correct answers -- workout music preferences vary from American hip-hop to Scandinavian heavy metal.

- FTC Ratchets Up Microsoft Probe, Queries Rivals on Cloud, AI

The US Federal Trade Commission is accelerating scrutiny of Microsoft as part of an ongoing probe into whether the company illegally monopolizes large swaths of the enterprise computing market with its cloud software and AI offerings, including Copilot. From a report: The agency has issued civil investigative demands in recent weeks to companies that compete with Microsoft in the business software and cloud computing markets, according to people familiar with the matter. The demands feature an array of questions on Microsoft's licensing and other business practices, according to the people, who were granted anonymity to discuss a confidential investigation. With the demands, which are effectively civil subpoenas, the FTC is seeking evidence that Microsoft makes it harder for customers to use Windows, Office and other products on rival cloud services. The agency is also requesting information on Microsoft's bundling of artificial intelligence, security and identity software into other products, including Windows and Office, some of the people said.

- EPA Reverses Long-Standing Climate Change Finding, Stripping Its Own Ability To Regulate Emissions

President Donald Trump announced Thursday that the Environmental Protection Agency is rescinding the legal finding that it has relied on for nearly two decades to limit the heat-trapping pollution that spews from vehicle tailpipes, oil refineries and factories. From a report: The repeal of that landmark determination, known as the endangerment finding, will upend most U.S. policies aimed at curbing climate change. The finding -- which the EPA issued in 2009 -- said the global warming caused by greenhouse gases like carbon dioxide and methane endangers the health and welfare of current and future generations. "We are officially terminating the so-called endangerment finding, a disastrous Obama-era policy," Trump said at a news conference. "This determination had no basis in fact -- none whatsoever. And it had no basis in law. On the contrary, over the generations, fossil fuels have saved millions of lives and lifted billions of people out of poverty all over the world." Major environmental groups have disputed the administration's stance on the endangerment finding and have been preparing to sue in response to its repeal.
- OpenAI Claims DeepSeek Distilled US Models To Gain an Edge

An anonymous reader shares a report: OpenAI has warned US lawmakers that its Chinese rival DeepSeek is using unfair and increasingly sophisticated methods to extract results from leading US AI models to train the next generation of its breakthrough R1 chatbot, according to a memo reviewed by Bloomberg News. In the memo, sent Thursday to the House Select Committee on China, OpenAI said that DeepSeek had used so-called distillation techniques as part of "ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs." The company said it had detected "new, obfuscated methods" designed to evade OpenAI's defenses against misuse of its models' output. OpenAI began privately raising concerns about the practice shortly after the R1 model's release last year, when it opened a probe with partner Microsoft Corp. into whether DeepSeek had obtained its data in an unauthorized manner, Bloomberg previously reported. In distillation, one AI model is trained on the output of another to develop similar capabilities. The practice, largely tied to China and occasionally Russia, has persisted and become more sophisticated despite attempts to crack down on users who violate OpenAI's terms of service, the company said in its memo, citing activity it has observed on its platform.

- Waymo is Asking DoorDash Drivers To Shut the Doors of Its Self-Driving Cars

Waymo's autonomous vehicles can transport passengers across six cities without a human driver, but the Alphabet-owned company has discovered that its cars become completely inert if a passenger accidentally leaves a door open. The company confirmed that it is now paying DoorDash drivers in Atlanta to close these doors as part of a pilot program. A Reddit post from a DoorDash driver showed an offer of $6.25 to drive less than one mile to a Waymo vehicle and close its door, plus an additional $5 after verified completion. Waymo and DoorDash told TechCrunch the post is legitimate. The door-closing partnership began earlier this year and is separate from the autonomous delivery service the two companies launched in Phoenix in October. Waymo has also worked with Honk, a towing service app, in Los Angeles on the same problem. Honk users in L.A. have been offered up to $24 to close a Waymo door. Future Waymo vehicles will have automated door closures.

- Bill Introduced To Replace West Virginia's New CS Course Graduation Requirement With Computer Literacy Proficiency

theodp writes: West Virginia lawmakers on Tuesday introduced House Bill 5387 (PDF), which would repeal the state's recently enacted mandatory stand-alone computer science graduation requirement and replace it with a new computer literacy proficiency requirement. Not too surprisingly, the Bill is being opposed by tech-backed nonprofit Code.org, which lobbied for the WV CS graduation requirement (PDF) just last year. Code.org recently pivoted its mission to emphasize the importance of teaching AI education alongside traditional CS, teaming up with tech CEOs and leaders last year to launch a national campaign to mandate CS and AI courses as graduation requirements. "It would basically turn the standalone computer science course requirement into a computer literacy proficiency requirement that's more focused on digital literacy," lamented Code.org as it discussed the Bill in a Wednesday conference call with members of the Code.org Advocacy Coalition, including reps from Microsoft's Education and Workforce Policy team. "It's mostly motivated by a variety of different issues coming from local superintendents concerned about, you know, teachers thinking that students don't need to learn how to code and other things.
So, we are addressing all of those. We are talking with the chair and vice chair of the committee a week from today to try to see if we can nip this in the bud." Concerns were also raised on the call about how widespread the desire for more computing literacy proficiency (over CS) might be, as well as about legislators who are associating AI literacy more with digital literacy than with CS. The proposed move from a narrower CS focus to a broader goal of computer literacy proficiency in WV schools comes just months after the UK's Department for Education announced a similar curriculum pivot to broader digital literacy, abandoning the narrower 'rigorous CS' focus that was adopted more than a decade ago in response to a push by a 'grassroots' coalition that included Google, Microsoft, UK charities, and other organizations.

- Meta Plans To Let Smart Glasses Identify People Through AI-Powered Facial Recognition

Meta plans to add facial recognition technology to its Ray-Ban smart glasses as soon as this year, the New York Times reported Friday, five years after the social giant shut down facial recognition on Facebook and promised to find "the right balance" for the controversial technology. The feature, internally called "Name Tag," would let wearers identify people and retrieve information about them through Meta's AI assistant, the report added. An internal memo from May acknowledged the feature carries "safety and privacy risks" and noted that political tumult in the United States would distract civil society groups that might otherwise criticize the launch. The company is exploring restrictions that would prevent the glasses from functioning as a universal facial recognition tool, potentially limiting identification to people connected on Meta platforms or those with public accounts.
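The restriction Meta is reportedly exploring -- matching faces only against accounts the wearer is connected to -- can be illustrated with a toy sketch. This is purely a hypothetical illustration, not Meta's implementation: the `identify` function, the embeddings, and the threshold are all invented for the example. Face recognition systems typically compare a fixed-length embedding of a query face against a gallery of known embeddings.

```python
import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query_embedding, connections, threshold=0.8):
    # Hypothetical closed-set lookup: only embeddings of accounts the
    # wearer is connected to are searched, so a stranger's face can
    # never produce a match, regardless of how similar it looks.
    best_name, best_score = None, threshold
    for name, emb in connections.items():
        score = cosine_similarity(query_embedding, emb)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy gallery of two connections with made-up 3-d "embeddings".
connections = {
    "alice": np.array([1.0, 0.0, 0.0]),
    "bob": np.array([0.0, 1.0, 0.0]),
}
```

Restricting the gallery this way is what turns an open-ended surveillance capability into a closed-set lookup; the real open question in the story is who controls the gallery.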
- Ring Cancels Its Partnership With Flock Safety After Surveillance Backlash

Following intense backlash to its partnership with Flock Safety, a surveillance technology company that works with law enforcement agencies, Ring has announced it is canceling the integration. From a report: In a statement published on Ring's blog and provided to The Verge ahead of publication, the company said: "Following a comprehensive review, we determined the planned Flock Safety integration would require significantly more time and resources than anticipated. We therefore made the joint decision to cancel the integration and continue with our current partners ... The integration never launched, so no Ring customer videos were ever sent to Flock Safety." [...] Over the last few weeks, the company has faced significant public anger over its connection to Flock, with Ring users being encouraged to smash their cameras, and some announcing on social media that they are throwing away their Ring devices. The Flock partnership was announced last October, but following recent unrest across the country related to ICE activities, public pressure against the Amazon-owned Ring's involvement with the company started to mount. Flock has reportedly allowed ICE and other federal agencies to access its network of surveillance cameras, and influencers across social media have been claiming that Ring is providing a direct link to ICE.
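The "distillation" technique at the center of the OpenAI/DeepSeek story above can be sketched in a few lines. This is a generic illustration of knowledge distillation (training a student model to match a teacher's softened output distribution), not a description of what DeepSeek actually did; the function names and toy logits are invented for the example.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # A temperature above 1 softens the distribution, exposing the
    # teacher's relative preferences among less-likely answers.
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence from the teacher's softened distribution to the
    # student's: minimizing it pulls the student toward the teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher  = np.array([4.0, 1.0, 0.5])   # teacher's output for one query
aligned  = np.array([3.8, 1.1, 0.4])   # student that mimics the teacher
diverged = np.array([0.5, 4.0, 1.0])   # student that does not
```

In API-based distillation, the teacher's logits are replaced by sampled outputs from a hosted model, which is why providers try to detect bulk, structured querying of their endpoints.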